Jan Sadlak, President, IREG Observatory on Academic Ranking and Excellence. Symposium on Identifying Excellence and Diversity in International Higher Education: Rankings and Beyond. Ecole Normale Superieure, 18 May 2011, Paris, France.
“Changing landscape” of HE
University rankings need to be seen in the context of the “changing landscape” of higher education:
- Massification of student enrollment in higher education;
The new landscape of higher education creates demand for greater accountability and transparency, based on a shared basis for evaluation as well as information about the quality and performance of HE institutions and their activities that is readable by a large and diversified body of stakeholders.
Main instruments used for this purpose are:
a process during which an HE institution or study programme is subject to evaluation by a competent body in order to establish whether the given institution or programme meets a set of [minimal] standards with regard to teaching, research and services. This is why it is sometimes interpreted as a kind of ‘social contract’, particularly in the case of institutional accreditation.
three types of accreditation: institutional, study-programme and professional.
Multiple and professional accreditations as quality labels:
- in engineering: EUR-ACE/European Accreditation of Engineering Programs; U.S. ABET Inc./Accreditation Board for Engineering and Technology;
- in business education: EQUIS/European Quality Improvement System of the European Foundation for Management Development (EFMD); AMBA of the Association of MBAs; AACSB International Accreditation of the Association to Advance Collegiate Schools of Business [the so-called “triple crown”].
In the US: a non-governmental, peer-review and usually voluntary process.
In Europe: since the adoption of the Bologna Process regime it is mostly compulsory and increasingly internationalized, as reflected in the creation of the European Quality Assurance Register (EQAR).
[An internal organizational process which aims to improve the organization’s performance by learning about possible improvements of its primary and/or support processes through examining those processes in other, better-performing organizations.] It is a standard/reference point reflecting best practice in a given domain (ESMU).
It can be established externally or internally in order to:
serve as a diagnostic instrument to understand the process;
provide comparison with the competition in order to improve your institution’s position;
facilitate learning from other members of the “benchmarking club” [learning from peers];
influence setting up of the system-wide standards.
It relies heavily on indicators. It does not produce a ranking.
“Ranking” is an evaluation approach whose purpose is to assess and display the comparative standing of whole institutions, or of certain domains of their performance, on the basis of relevant data and information collected according to a pre-established methodology and procedures.
There are two types of ranking:
- One-dimensional, whose goal is to assess the performance of all institutions included in the ranking according to one set of indicators, with an identical weight attached to each indicator [expressed as a percentage of the total]. The consolidated result of such an exercise is presented in ordinal form.
- Multi-dimensional, which also uses one set of indicators to construct an overall list of performing institutions, but whose methodology enables users to weight the indicators according to their own criteria and preferences [it could be called “à la carte” ranking].
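The distinction between the two types can be illustrated with a minimal sketch (all institution names, indicator scores and weights below are hypothetical, not drawn from any real ranking): a one-dimensional ranking applies one fixed set of weights to every user, while a multi-dimensional ranking recomputes the ordering from weights the user supplies.

```python
# Hypothetical normalized indicator scores (0-100) per institution.
indicators = {
    "Univ A": {"teaching": 80, "research": 95, "internationalization": 60},
    "Univ B": {"teaching": 90, "research": 70, "internationalization": 85},
    "Univ C": {"teaching": 75, "research": 85, "internationalization": 90},
}

def rank(weights):
    """Return institutions ordered by weighted score (weights sum to 1)."""
    scores = {
        name: sum(vals[k] * weights[k] for k in weights)
        for name, vals in indicators.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

# One-dimensional: the ranking provider fixes the weights for everyone,
# each expressed as a share of the total.
fixed = {"teaching": 0.3, "research": 0.5, "internationalization": 0.2}
print(rank(fixed))   # research-heavy weights favour Univ A

# Multi-dimensional ("à la carte"): a user who cares mostly about teaching
# supplies their own weights and obtains a different ordering.
user = {"teaching": 0.7, "research": 0.1, "internationalization": 0.2}
print(rank(user))    # teaching-heavy weights favour Univ B
```

The same data thus yields different ordinal lists depending on who sets the weights, which is precisely why weighting scales are a recurrent point of methodological criticism.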
to provide a basis “to make informed choices” about the standing of higher education institutions for individual or group decision-making, e.g. by students, parents, politicians, foundations, funding agencies, research councils, employers and international organizations;
to foster healthy competition among HE institutions [it can be a useful tool against low-quality HE institutions and “degree mills”];
to stimulate the emergence of centres of excellence;
to provide an additional rationale for the allocation of funds;
rankings are a convenient tool for marketing and public relations.
Rankings, in a way, speak directly to a wider audience over the head of the corporatively-oriented academe [faculty and students alike].
“Rankings have become an inevitable part of public life because universities
have moved to centre stage in all modern societies.”
Simon Marginson, Professor of Higher Education, University of Melbourne
The number and types of ranking providers are quite diversified:
independent professional organizations
Questionable methodologies [with regard to weighting scales] and questionable validity of data [especially when collected from surveys];
Bias towards research productivity, encouraging convergence towards a research-dominated model of HE institutions and reducing system diversity [academic drift];
Introduces hierarchies among HE institutions and members of the academic community [not in line with the ethos of “academic corporatism”];
“Coming off as well as possible in a ranking” can become an obsession, adversely affecting governance and administration;
Hinders open-to-everyone academic cooperation and encourages a cluster mentality;
Tends to focus on certain academic fields rather than the comprehensive performance of an entire institution.
Recognize the diversity of institutions and take the different missions and goals of institutions into account;
Be transparent regarding the methodology used for creating the rankings;
Measure outcomes in preference to inputs whenever possible;
Use audited and verifiable data whenever possible;
Provide consumers with a clear understanding of all of the factors used to develop a ranking, and offer them a choice in how rankings are displayed.
The adoption by IREG in May 2006 of the Berlin Principles: 16 principles that articulate standards of good practice.
It organizes biennial conferences:
IREG-5 – The Academic Rankings: From Popularity to Reliability and Relevance, 6-8 October 2010, Berlin
IREG-6 – Academic Rankings and Advancement of Higher Education: Lessons from Asia and Other Regions, 18-22 April 2012, Taipei.
There are specific initiatives:
- IREG Ranking Audit, whose purpose is to evaluate the conformity of university rankings to basic standards and quality criteria [based on the Berlin Principles on Ranking of HE Institutions] in order to:
- enhance the transparency about rankings;
- give users of rankings a tool to identify trustworthy rankings;
- improve the quality of rankings.
Rankings which obtain a positive evaluation will be entitled to use the quality label “IREG approved”.
Mapping of Academic Excellence – International Scientific Awards
[project to be completed in 2012].
In a field as error-strewn as statistical evidence of academic quality, caution is always wise. It needs to be repeatedly said that ranking can only be a proxy reflection of the complex work of the higher education enterprise.
- do not subject the mission of the university, or even more so that of the higher education system, to the “tyranny of seeking status”;
- there is also the possibility of “great scholars and not so great universities”; try to find them;
- it is irresponsible to take any personal or institutional decision based solely on rankings;
Institutions and their leaders need to come to terms with the new landscape of HE, in which competition is more evident than before. If they are to compete [nationally or internationally], they should be inspired, but not captivated, by their position in rankings.