
Presentation Transcript


  1. Baltic Sea Region University Network Marketing and Networking for Internationalization Seminar at Vilnius University, 25 November 2011

  2. Purpose and principles of review • Addresses the most popular global university rankings • Provides universities with an analysis of the methodologies; does not judge or rank the rankings themselves • Only publicly available and freely accessible information was used • Efforts were made to discover • what is actually measured, • how the scores for indicators are calculated, • how the final scores are calculated, and • what the results actually mean.

  3. Selection of rankings • ShanghaiRanking (ARWU) • Times Higher Education – QS (until 2009) • Times Higher Education – Thomson Reuters • US News & World Report/QS • Reitor (Рейтор) ------------------ • Leiden Ranking • Taiwan Ranking (HEEACT) • University Research Assessment --------------------- • CHE/die Zeit • U-Map classification • U-Multirank • AHELO ------------- • Webometrics

  4. Global rankings cover no more than 3-5% of the world’s universities

  5. Decrease of scores within the Top 400 universities. How big can the scores of the remaining 16,600 universities be?

  6. Indicators covering elite research universities only • “Quality of faculty” = staff winning Nobel prizes (ARWU, Reitor) • “Highly Cited” = belonging to the world’s Top 200 in 21 areas, i.e. 4,200 researchers altogether (ARWU) • “Peer review” = nominating the 30 best universities from a pre-selected list (THE-QS and other QS-based rankings) • Reputation survey(s) = nominating the 30 best (THE-QS, USN&WR, THE-TR) • Universities considered: a selection from an elite group of universities (ARWU, THE, Reitor, Leiden)

  7. Indicator scores are usually not the indicator values themselves • Each indicator has a dimension or denominator, e.g. article counts, staff numbers, citations per academic • To make indicator scores dimensionless, either: • values are expressed as a percentage of the result of the “best” university, or • a Z-score is used (the difference between the current value and the mean, divided by the standard deviation)
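A minimal sketch of these two normalisation options, using invented citations-per-academic values for hypothetical universities (not data from any actual ranking):

```python
# Sketch only: two common ways to turn raw indicator values into
# dimensionless scores, using made-up citations-per-academic values.
from statistics import mean, pstdev

values = {"Uni A": 12.0, "Uni B": 9.5, "Uni C": 4.2, "Uni D": 2.1, "Uni E": 1.3}

# Option 1: express each value as a percentage of the "best" university.
best = max(values.values())
percent_of_best = {u: 100 * v / best for u, v in values.items()}

# Option 2: Z-score - distance from the mean in units of standard deviation.
mu, sigma = mean(values.values()), pstdev(values.values())
z_scores = {u: (v - mu) / sigma for u, v in values.items()}

print(percent_of_best)  # Uni A scores 100; the rest are scaled against it
print(z_scores)         # Uni A is about 1.5 sigma above the mean, Uni E about 1 sigma below
```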

  8. Composite scores always contain rankers’ subjective view of quality • In all cases where a composite score is calculated from several indicators, ranking providers assign weights to each indicator in the overall score. • This means that the ranking provider’s subjective judgement determines which indicators are more important (e.g. citations – 10%, reputation – 40%) • In other words, the composite score reflects the ranking provider’s concept of quality.
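To make the point concrete, here is a small illustration with invented indicator scores and weights (not those of any actual ranking): the same two hypothetical universities swap places when the ranker’s weights, i.e. the ranker’s concept of quality, change.

```python
# Sketch only: how the choice of weights decides which university "wins".
scores = {
    "Uni A": {"citations": 90.0, "reputation": 50.0},
    "Uni B": {"citations": 55.0, "reputation": 85.0},
}

def composite(indicator_scores, weights):
    # Weighted sum of indicator scores; the weights are chosen by the ranker.
    return sum(indicator_scores[k] * w for k, w in weights.items())

research_heavy = {"citations": 0.7, "reputation": 0.3}    # Uni A wins: 78.0 vs 64.0
reputation_heavy = {"citations": 0.2, "reputation": 0.8}  # Uni B wins: 79.0 vs 58.0

for name, s in scores.items():
    print(name, composite(s, research_heavy), composite(s, reputation_heavy))
```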

  9. Choosing between simple counts or relative values is not neutral • Using absolute values favours large universities • Using relative values allows small but efficient universities to compete with large ones • ARWU (Shanghai) and Webometrics, for example, predominantly use absolute numbers • HEEACT (Taiwan), THE-QS and THE-TR mainly use relative values (except for reputation surveys) • Leiden University offers both size-dependent and size-independent rankings/indicators
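A tiny illustration with invented numbers: an absolute publication count favours the large university, while a per-staff (relative) count lets the small but efficient one compete.

```python
# Sketch only: size-dependent vs size-independent view of the same data.
universities = {
    #            (publications, academic staff)
    "Large Uni": (8000, 4000),
    "Small Uni": (1500, 500),
}

for name, (pubs, staff) in universities.items():
    print(name, "absolute:", pubs, "per staff member:", round(pubs / staff, 2))
# Large Uni wins on the absolute count (8000 vs 1500),
# Small Uni wins on the relative count (3.0 vs 2.0 publications per staff member).
```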

  10. Rankings and the research mission of universities: indicators • Publication count in SCI & SSCI, Scopus – production • Publication count in Nature & Science – excellence • Publications per staff member – staff research productivity • Citations (count) – overall strength of the HEI • Citations per paper or per staff member – impact • Citations to articles in the top impact journals – excellence • Research income (by competition or direct allocation) • Research reputation surveys

  11. Rankings and the teaching mission of HEIs: indicators • Alumni that have been awarded a Nobel Prize • Staff/student ratio • Teaching reputation surveys • Teaching income • Dropout rate • Time to degree • PhD/undergraduate ratio • All of the above are distant proxies, some strongly questionable • Learning outcomes – are we there yet?

  12. BIASES AND FLAWS: natural sciences and medicine vs. social sciences bias • Bibliometric indicators primarily cover journal publications, BUT • while natural and life scientists primarily publish in journals, • engineering scientists publish in conference proceedings and prototypes, • and social scientists and humanists publish in books/monographs

  13. Several indicators count by the 21 broad ISI areas:

  14. Different publication and citation cultures in different fields • Table from Cheng’s presentation at the IREG 2010 conference in Berlin

  15. Field normalisation – solutions and issues • Field-normalised citations per publication indicator (Leiden ‘Crown indicator’): the ratio of sums Σ c_i / Σ e_i, where c_i is the number of citations of publication i and e_i is the expected number of citations of publication i given its field and year • Criticisms: prefers older publications, blurs the picture

  16. Mean normalisation – solutions and issues • New attempt (2010): the mean-normalised citation score (MNCS), the mean of the per-publication ratios c_i / e_i • Good idea, but now the results are unstable for the very newest publications (their e_i values change rapidly) • To avoid the new flaw, a modified MNCS2 indicator is used which leaves out publications of the last year • But after all, this just improves the mathematics, not the issue that WoS and Scopus insufficiently cover books
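The contrast between the two indicators can be sketched with invented publication data, using the definitions above (crown indicator = sum of citations over sum of expected citations; MNCS = mean of the per-publication ratios):

```python
# Sketch only: crown indicator vs MNCS on three made-up publications.
# Each entry is (citations c_i, expected citations e_i for the field and year).
publications = [(10, 5.0), (2, 4.0), (0, 0.2)]  # the last one is a very new paper

crown = sum(c for c, _ in publications) / sum(e for _, e in publications)
mncs = sum(c / e for c, e in publications) / len(publications)

print(round(crown, 2))  # 1.30 - ratio of sums, dominated by older, well-cited papers
print(round(mncs, 2))   # 0.83 - mean of ratios; a single citation to the new paper
                        # (e_i = 0.2) would add 5.0/3 to the score, hence MNCS2
                        # leaves out last-year publications
```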

  17. ‘Peer review’ biases and flaws • Why call reputation surveys “peer reviews”? • ‘Peers’ are influenced by the previous reputation of the institution (including positions in other rankings) – just try nominating the 30 universities you know to be best in teaching in your subject… • Limiting the number of universities nominated (THE, QS-based rankings) makes the approach elitist – and strengthens the dependence on previous reputation • Using pre-selected lists rather than allowing peers’ free choice leaves out huge numbers of institutions • Is a 5% response rate a sufficient result?

  18. Risks of overdoing • Even keeping the current position in a ranking requires great effort (the ‘Red Queen effect’, J. Salmi, 2010) • Rankings encourage universities to improve scores • Universities are tempted to improve performance specifically in the areas measured in rankings • There are risks that universities will concentrate funds and efforts on scores and pay less attention to issues that are not rewarded in ranking scores, such as quality of teaching, regional involvement, widening access, lifelong learning, social issues of students and staff, etc.

  19. Rankings and reforms in the EHEA: you will not be rewarded in rankings for • improving access to the next cycle, • establishing an internal quality culture in universities and implementing the ESG, • linking credits and programmes with learning outcomes, • establishing qualifications frameworks, • improving recognition of qualifications and credits, • establishing flexible learning paths for LLL, • establishing recognition of non-formal and informal learning, • improving the social conditions of students, • making HE more accessible

  20. Direct abuses • merging universities just to get onto league tables • standardised test scores of applicants • number of academic staff • student/staff ratio – using different definitions of staff and students, the ratio can be anywhere between 6:1 and 39:1 • faculty salary – just plan when you should pay • reputation survey by students – tell students to lie • Nobel laureates – hire them • More citations? – fund medicine, not humanities • Want to move a university 52 positions up in the table? • Want to use completely different indicators than announced? Go ahead…

  21. Can rankings be improved? • There will be no improvement from extending 5 distant proxies to 25 – they will still remain proxies... • Improve coverage of teaching – most probably through measuring learning outcomes • Lift the biases and eradicate the flaws of bibliometric indicators (field, language, regional), but first of all address non-journal publications properly! • Change rankings so that they actually help students to make their choices • Although they address the elite only, ranking results impact the life of all universities – it is time to produce rankings that cover all universities!

  22. Informing student choices – CHE university rankings

  23. U-Map profile dimensions: teaching and learning, student profile, research, knowledge exchange, international orientation, regional engagement

  24. The new developments: U-Map • U-Map has two visualisation tools that allow users to classify HEIs and to make detailed comparisons of selected HEIs. Source: U-Map

  25. The new developments: U-Multirank • U-Multirank is a multidimensional ranking covering all aspects of an HEI’s work – education, research, knowledge exchange and regional involvement. • No composite score is produced. It remains to be seen: • how well self-reported and student-satisfaction data will work in an international context, • whether other parties will turn U-Multirank into a league table and what the consequences will be

  26. The new developments: AHELO • OECD’s AHELO project is an attempt to compare HEIs internationally on the basis of actual learning outcomes. • Three testing instruments will be developed: • one for measuring generic skills and • two for discipline-specific skills, in economics and engineering. • The question yet to be answered is whether it is possible to develop instruments that capture learning outcomes and are perceived as valid in diverse national and institutional contexts.

  27. Main conclusions 1. Since the arrival of global rankings, universities cannot avoid national and international comparisons, and this has caused changes in the way universities function. 2. Criteria that are appropriate for top research universities only are applied to judging all universities. 3. Rankings so far cover only some of the university missions. 4. Rankings, it is claimed, make universities more ‘transparent’. However, the methodologies, especially those of the most popular league tables, still lack transparency themselves.

  28. Each in his own opinion Exceeding stiff and strong, Though each was partly in the right, And all were in the wrong! by John Godfrey Saxe (1816–1887)

  29. Thanks for your attention
