University rankings as component of quality challenge in a changing landscape of higher education
Jan Sadlak, President, IREG Observatory on Academic Ranking and Excellence

Symposium on Identifying Excellence and Diversity in International Higher Education: Rankings and Beyond

Ecole Normale Superieure, 18 May 2011, Paris, France


University Rankings as Component of Quality Challenge in a Changing Landscape of Higher Education

“Changing landscape” of HE

University rankings need to be seen in the context of the “changing landscape” of higher education:

  • Massification of student enrollment in higher education;

  • Growth in the number of HE institutions and their “diversification” (types of institutions, study programs, modes of learning and teaching);

  • Changes in the relation between the State [acting on behalf of a variety of stakeholders] and higher education: a shift from academic self-governance to “accountable autonomy”;

  • Passing from a manufacture-based to an industrial mode of production and dissemination of research [the “knowledge-based economy” and creativity];

  • Impact of new information technologies (NIT);

  • From “internationalization” to “globalization”;

  • The “quality challenge”: a shift from self-declaration to external verification of quality.


Quality Challenge

The new landscape of higher education creates a demand for greater accountability and transparency, based on a shared basis for evaluation as well as on information about the quality and performance of HE institutions and their activities that is readable by a large and diversified group of stakeholders.

The main instruments used for this purpose are:

  • Accreditation;

  • Benchmarking;

  • Rankings/league tables.


Accreditation: a process during which an HE institution or study program is subject to evaluation by a competent body in order to establish whether the given institution or program meets a set of [minimal] standards with regard to teaching, research and services. This is why it is sometimes interpreted as a kind of ‘social contract’, particularly in the case of institutional accreditation.

There are three types of accreditation: institutional, study-program and professional.


Multiple and professional accreditations as a quality label:

  • in engineering: EUR-ACE/European Accreditation of Engineering Programs; U.S. ABET Inc./Accreditation Board for Engineering and Technology;

  • in business education: EQUIS/European Quality Improvement System of the European Foundation for Management Development (EFMD); AMBA of the Association of MBAs; AACSB International Accreditation of the Association to Advance Collegiate Schools of Business [the so-called “triple crown” accreditation].

In the US: a non-governmental, peer-review-based and usually voluntary process.

In Europe: since the adoption of the Bologna Process regime it is mostly compulsory and increasingly internationalized, as reflected in the creation of the European Quality Assurance Register (EQAR).


Benchmarking: [an internal organizational process which aims to improve the organization’s performance by learning about possible improvements to its primary and/or support processes by looking at those processes in other, better-performing organizations]. It is a standard/reference point reflecting best practice in a given domain (ESMU).

It can be established externally or internally in order to:

  • serve as a diagnostic instrument to understand the process;

  • provide a comparison with the competition in order to improve your institution’s position;

  • facilitate learning from other members of the “benchmarking club” [learning from peers];

  • influence the setting of system-wide standards.

It relies heavily on indicators. It does not produce a ranking.


“Ranking” is an evaluation approach whose purpose is to assess and display the comparative standing of whole institutions, or of certain domains of their performance, on the basis of relevant data and information collected according to a pre-established methodology and procedures.

There are two types of ranking:

- One-dimensional, whose goal is to assess the performance of all institutions included in the ranking according to one set of indicators, with an identical weight attached to each indicator [expressed as a percentage of the total]. The consolidated result of such an exercise is presented in ordinal form.

- Multi-dimensional, which also uses one set of indicators to construct an overall list of performing institutions, but whose methodology enables users to weight the indicators according to their own criteria and preferences [it could be called “à la carte” ranking].

Rankings: why they are done [and why they are popular]

to provide a basis “to make informed choices” about the standing of higher education institutions for individual or group decision-making, i.e. by students, parents, politicians, foundations, funding agencies, research councils, employers and international organizations;

to foster healthy competition among HE institutions [it can be a useful tool against low-quality HE institutions and “degree mills”];

to stimulate the emergence of centres of excellence;

to provide an additional rationale for the allocation of funds;

rankings are a convenient tool for marketing and public relations.

Rankings, in a way, speak directly to a wider audience over the heads of the corporatist-oriented academe [faculty and students alike].

“Rankings have become an inevitable part of public life because universities have moved to centre stage in all modern societies.”

Simon Marginson, Professor of Higher Education, University of Melbourne

Who does rankings:

The number and types of ranking providers are quite diversified:

governmental agencies

independent professional organizations

accrediting bodies

funding organizations

individual/group initiatives

academics themselves

international organizations

media themselves.

Criticisms of rankings

Questionable methodologies [with regard to weighting scales] and questionable validity of data [especially when collected from surveys];

Bias towards research productivity, which encourages convergence towards a research-dominated model of HE institutions and reduces system diversity [“academic drift”];

Introduction of hierarchies among HE institutions and members of the academic community [not in line with the ethos of “academic corporatism”];

“Coming off as well as possible in a ranking” can become an obsession, adversely affecting governance and administration;

Rankings hinder open-to-everyone academic cooperation and encourage a cluster mentality;

Rankings tend to focus on certain academic fields rather than on the comprehensive performance of the entire institution.

“University rankings smarten up”

Recognize the diversity of institutions and take the different missions and goals of institutions into account;

Be transparent regarding the methodology used for creating the rankings;

Measure outcomes in preference to inputs whenever possible;

Use audited and verifiable data whenever possible;

Provide consumers with a clear understanding of all of the factors used to develop a ranking, and offer them a choice in how rankings are displayed.

IREG Observatory

The adoption by IREG in May 2006 of the Berlin Principles: 16 principles articulating standards of good practice.


It organizes biennial conferences:

IREG-5 – The Academic Rankings: From Popularity to Reliability and Relevance, 6-8 October 2010, Berlin

IREG-6 - Academic Rankings and Advancement of Higher Education: Lessons from Asia and other Regions, 18 - 22 April 2012, Taipei.

There are also specific initiatives:

- The IREG Ranking Audit, whose purpose is to evaluate the conformity of university rankings with basic standards and quality criteria [based on the Berlin Principles on Ranking of HE Institutions] in order to:

- enhance the transparency about rankings;

- give users of rankings a tool to identify trustworthy rankings;

- improve the quality of rankings.

Rankings which obtain a positive evaluation will be entitled to use the quality label “IREG approved”.

- Mapping of Academic Excellence – International Scientific Awards [project to be completed in 2012].


A few pieces of advice

In a field as error-strewn as statistical evidence of academic quality, caution is always wise. It needs to be repeatedly said that a ranking can only be a proxy reflection of the complex work of the higher education enterprise.


- do not subject the mission of the university, or even more so that of the higher education system, to the “tyranny of seeking status”;

- there is also the possibility of “great scholars at not-so-great universities”: try to find them;

- it is irresponsible to take any personal or institutional decision solely on the basis of rankings.

Institutions and their leaders need to come to terms with the new landscape of HE, in which competition is more evident than before; but if they are to compete [nationally or internationally], they should be inspired, not captivated, by their position in rankings.