
RESEARCH EVALUATION: chasing indicators of excellence and impact


Presentation Transcript


  1. RESEARCH EVALUATION: chasing indicators of excellence and impact. Evidence, Thomson Reuters. UHMLG Preparing for research assessment, Royal Society of Medicine. JONATHAN ADAMS, Director, Research Evaluation. 07 MARCH 2011

  2. WE HAVE TO RESPOND TO GLOBAL CHALLENGES

  3. RESEARCH ASSESSMENT PROVIDES US WITH INFORMATION TO DO THAT
  • Global challenges and dynamism
  • Economic turbulence and threats to public resourcing in all sectors
  • Scarce resources
    • must be distributed selectively
    • in a manner that is equitable
    • and maintains academic confidence
  • But what are our criteria?
    • What is research quality?
    • What is excellence?
    • What is impact?

  4. WE CANNOT DIRECTLY ASSESS WHAT WE WANT TO KNOW Conventionally, this problem is addressed by expert and experienced peer review. Peer review is not without its problems. Peer review of academic research tends to focus on academic impact, so other forms of impact require merit review. Expert review may be opaque to other stakeholders. Objectivity is addressed by introducing quantitative indicators.

  5. INDICATORS, NOT METRICS. It’s like taking bearings from your yacht. A single indicator is not enough. Good combinations of indicators take distinctive bearings, or differing perspectives across the research landscape. They are unlikely to agree completely, which gives us an estimate of our uncertainty.
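The "bearings" idea can be made concrete: express each indicator relative to its own benchmark and treat the disagreement between indicators as a rough uncertainty estimate. The following is a minimal illustrative sketch; the indicator names and values are assumptions, not data from the presentation.

```python
# A minimal sketch of combining several indicators, each expressed relative to its
# own benchmark, and using their spread as a rough estimate of uncertainty.
# All names and numbers are illustrative assumptions.
from statistics import mean, stdev

# Hypothetical indicator values for one unit, each divided by a sector/world
# benchmark, so 1.0 means "at benchmark".
indicators = {
    "normalised_citation_impact": 1.35,  # citations vs field/year world average
    "income_per_fte_vs_sector":   1.10,  # research income relative to sector norm
    "phd_completions_vs_sector":  0.92,  # doctoral completions relative to sector norm
}

values = list(indicators.values())
central = mean(values)
spread = stdev(values)  # disagreement between the "bearings"

print(f"combined position ~ {central:.2f} (relative to benchmark)")
print(f"uncertainty from indicator disagreement ~ +/- {spread:.2f}")
```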

  6. PRINCIPLES OF QUANTITATIVE RESEARCH EVALUATION First, note that there are no absolutes; it’s all relative. Impact may be local, national or international; we need benchmarks to make any sense of a number. Are the proposed data relevant to the question? Can the available data address the question? What data do we have that we can use ... ?

  7. RESEARCH PERFORMANCE INDICATORS COME FROM THE RESEARCH PROCESS

  8. WE CAN EXTEND THIS OVER THE WHOLE CYCLE (activities are then not synchronous)

  9. WE HAVE A WIDE RANGE OF DATA AND POTENTIAL INDICATORS Note that all these data points are characterised by: Location – where the activity took place; Time – when the activity took place; Discipline – the subject matter of the activity. All these should be taken into account in evaluation.

  10. PRINCIPLES OF QUANTITATIVE RESEARCH EVALUATION
  • Are the proposed data relevant to the question?
  • Can the available data address the question?
  • Are we comparing ‘like-with-like’?
  • Can we test outcomes by using multiple indicators?
  • Have we ‘normalised’ our data?
    • Consider relative values, not absolute values
  • Do we understand the characteristics of the data?
    • Are there artefacts in the data that require editing?
  • Do the results appear reasonable?

  11. HOW CAN WE JUDGE POSSIBLE INDICATORS?
  • Relevant and appropriate
    • Are indicators correlated with other performance estimates?
    • Do indicators really distinguish ‘excellence’ as we see it?
    • Are these the indicators the researchers would use?
  • Cost effective
    • Data accessibility, coverage, cost and validation
  • Transparent, equitable and stable
    • Are the characteristics and dynamics of the indicators clear?
    • Are all institutions, staff and subjects treated equitably?
    • How do people respond? Can they manipulate indicator outcomes?
    • “Once an indicator is made a target for policy, it starts to lose the information content that initially qualified it to play such a role” (Goodhart’s Law)

  12. COMMUNITY BEHAVIOUR HAS RESPONDED TO EVALUATION

  13. WHY BIBLIOMETRICS ARE A POPULAR SOURCE OF RESEARCH INDICATORS
  • Publication is a universal characteristic of academic research and provides a standard ‘currency’
  • Citations are a natural part of academic behaviour
    • Citation counts are associated with academic ‘impact’
    • Impact is arguably a proxy for quality
  • Data are accessible, affordable and increasingly international – though there is subject imbalance
  • Data characteristics are well understood and widely explored
    • Citation counts grow over time
    • Citation behaviour is a cultural characteristic, which varies between fields
    • Citation behaviour may vary between countries

  14. CITATION COUNTS GROW OVER TIME AND RATES VARY BETWEEN FIELDS

  15. PAPERS ARE MORE LIKELY TO BE CITED OVER TIME

  16. RAW CITATION COUNTS MUST BE ADJUSTED USING A BENCHMARK
  • First, we need to separate articles and reviews
  • Then ‘normalise’ the raw count by using a global reference benchmark (see the sketch below)
    • Take year of publication into account
    • Take field into account
  • But how do we define field?
    • Projects funded by a Research Council
    • Departments which host a group of researchers
    • Journal set linked by citations
  • Granularity
    • Physiology – Life science – Natural sciences
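A minimal sketch of the normalisation step described on this slide: divide the raw citation count by a world baseline for the same document type, field and publication year. The baseline figures and field labels below are invented for illustration; they are not Thomson Reuters data.

```python
# Hypothetical world-average citations per paper, by (field, publication year),
# for articles only (reviews would need their own baseline).
WORLD_BASELINE = {
    ("Physiology", 2008): 9.4,
    ("Physiology", 2009): 6.1,
    ("Chemistry",  2008): 8.7,
}

def normalised_impact(citations: int, field: str, year: int) -> float:
    """Raw count divided by the world average for the same field and year.
    1.0 means cited at the world average; 2.0 means twice the world average."""
    baseline = WORLD_BASELINE[(field, year)]
    return citations / baseline

# Example: 14 citations for a 2008 Physiology article.
print(f"{normalised_impact(14, 'Physiology', 2008):.2f}x world average")
```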

  17. NORMALISED CITATION IMPACT CORRELATES WITH PEER REVIEW (Chemistry data). Methodology affects the detail but not the sense of the outcome.
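The correlation claimed here can be checked with a rank correlation between normalised citation impact and peer-review grades. Below is an illustrative, standard-library Spearman sketch on made-up department scores; it is not the Chemistry dataset behind the slide.

```python
from statistics import mean

def ranks(values):
    """Average ranks (1-based), handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank for a tied block
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rho = Pearson correlation of the rank-transformed values."""
    rx, ry = ranks(x), ranks(y)
    mx, my = mean(rx), mean(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical departments: normalised citation impact vs a 1-4 peer-review grade.
impact = [0.8, 1.1, 1.4, 2.0, 2.6]
grade  = [2,   2,   3,   3,   4]
print(f"Spearman rho = {spearman(impact, grade):.2f}")
```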

  18. THIS IS MOSTLY ABOUT EXCELLENCE: WHAT IS IMPACT?
  • Research excellence might be termed ‘academic impact’
  • Other forms of impact for which we legitimately may seek evaluation are
    • Economic impact
    • Social impact
  • Quantitative research evaluation traces its origins back to the 1980s
  • The DTI spent much money in the 1990s failing to index economic impact
    • It is difficult to track many research innovations through to a new product or process, or vice versa
    • Links are many-to-many and time-delayed
  • Social impact is difficult to define or capture

  19. CHASING IMPACT
  • Eugene Garfield originally talked about citation counts as an index of ‘impact’, fifty years ago
  • Current focus on economic and social impact should be seen as a serious engagement with other modes of recognising and demonstrating the value of original and applied research
  • Of course
    • The objectives are undefined, which undermines any evaluation
    • It is easier to do this in some subjects than others
    • Much of the current material is anecdotal
    • It is difficult to validate without indicators
  • But a start has been made
    • The principles should follow those of research evaluation
    • There must be ownership by the disciplinary communities

  20. RESEARCH EVALUATION: chasing indicators of excellence and impact. Evidence, Thomson Reuters. UHMLG Preparing for research assessment, Royal Society of Medicine. JONATHAN ADAMS, Director, Research Evaluation. 07 MARCH 2011
