Enhancing the PBRF: Options for Post 2012

Presentation Transcript


  1. Enhancing the PBRF: Options for Post 2012. Jonathan Boston, Institute of Policy Studies, School of Government, Victoria University of Wellington. Presentation to Forum on “Measuring Research Performance: What are the Options?”, 18 September 2008, Wellington

  2. Outline • The background to the PBRF • The reasons for selecting a mixed-model • Why consider other performance-based funding options post-2012? • Criteria for evaluating post-2012 options • Post-2012 options • Conclusions

  3. Background • The PBRF is a ‘mixed model’ in the sense that it combines both peer review and performance indicators for assessment and funding purposes • The government’s decision in 2002 to embrace a mixed model was based on the recommendations of TEAC in 2001 • These recommendations were, in turn, the product of detailed consultation with the sector and evaluation of overseas models, especially the performance indicator approaches in Australia and Israel and the peer review systems in the UK and Hong Kong (see Shaping the Funding Framework, ch. 10) • TEAC acknowledged that both approaches had strengths and weaknesses

  4. Background Indicator models – some key issues • Which indicators to select? Many options: • External research income (competitive and non-competitive) • Research degree completions (doctoral and non-doctoral) • Research outputs/publications (type, volume, citations, etc.) • What weightings to place on the selected indicators for funding and reporting purposes? (cf. Israeli model v Australian IGS and RQ) • Israel 55/30/15; Australia 60/30/10 • Note the problem of using such weightings in NZ • What proportion of total public funding of tertiary institutions should be allocated via a PBF system? (small % in Australia, relatively large % in Israel)
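
To make the weighting question concrete, the following is a minimal sketch of how a pure indicator model might divide a funding pool across institutions using component weightings of the kind cited above (Israel 55/30/15, Australia 60/30/10). The institution names and figures are hypothetical, chosen only to illustrate the mechanics.

```python
# Illustrative sketch only: splitting a funding pool under a pure indicator
# model with component weightings like those quoted above (here 60/30/10).
# Institutions and their figures are hypothetical.

WEIGHTS = {"external_income": 0.60, "degree_completions": 0.30, "publications": 0.10}

institutions = {
    "Uni A": {"external_income": 12.0, "degree_completions": 150, "publications": 900},
    "Uni B": {"external_income": 4.0,  "degree_completions": 60,  "publications": 400},
}

def allocate(pool, institutions, weights):
    """Give each institution its sector share of each indicator, weighted and summed."""
    totals = {k: sum(inst[k] for inst in institutions.values()) for k in weights}
    return {
        name: pool * sum(weights[k] * inst[k] / totals[k] for k in weights)
        for name, inst in institutions.items()
    }

print(allocate(100.0, institutions, WEIGHTS))  # shares of a $100m pool
```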

  5. Background Peer review models – key issues: • Number (and nature) of expert panels – UK: a large number of discipline-specific panels v HK: 12 multi-disciplinary panels (reflecting the very different size and nature of these tertiary systems) • Unit of assessment: academic unit (UK) v individuals (HK) • Range and type of evidence supplied to panels (influenced by the unit of assessment) • Frequency and cost of assessments • Staff eligibility criteria, & voluntary v compulsory participation • Reporting of results: rating categories v profiles, confidentiality issues, etc. • Funding formula (degree of selectivity/gradient, cut-offs, etc.) and volumes

  6. Reasons for a mixed model • TEAC rejected a pure indicator model because of concerns about: • the distributional implications of a model that relied heavily on ERI measures • the reliability of using research output and impact measures as proxies for research quality • the potential distortionary impacts of using output and impact measures (e.g. disciplinary differences, etc.) • TEAC rejected a pure peer review model because of concerns about: • The practical difficulties, and costs, of replicating a UK-type model in a small tertiary system. This led to a focus on the scaled-down HK model, using individual staff as the unit of assessment; but this led, in turn, to the idea of a mixed model incorporating both peer review and performance indicators, because it would not be possible to link ERI and RDC directly to individual staff • The positive case for a mixed model: • Minimise the risks (to institutions, disciplines and researchers) associated with relying on a single assessment and funding approach • Incorporating indicators separately from periodic peer reviews would provide ongoing incentives to lift research performance and other specific behaviours • Against this, a mixed model implied potentially higher transaction costs

  7. The rationale for considering other policy options for the post-2012 period • No PBF/PBRF policy framework is perfect: it is thus desirable to have ongoing monitoring, review, critical evaluation and fresh thinking • There will be various changes to the current PBRF model for the 2012 round, but there is a case for considering more significant changes after this: • PBRF systems all have the potential to create distortions and generate gaming; regular adjustments may be necessary to mitigate these risks (but note the problems associated with constant changes of the rules) • NZ is part of a wider international research community; we need to give proper attention to overseas policy developments and lessons – there may also be possibilities for collaboration and joint arrangements • There is continuing concern about using individuals as the unit of assessment

  8. Recent overseas developments • UK: Research Excellence Framework (REF) • March and December 2006: announcements of proposals to introduce a new unified framework for assessing and funding university research after the 2008 RAE funding cycle, based primarily on metrics with a ‘light touch’ of peer review (what is ‘light’?) • The REF will be based on a mix of bibliometric indicators (e.g. average citation counts per unit, department or institution, etc.) and other metrics (especially research funding and research postgraduate training), moderated by expert subject panels, varied by disciplinary considerations • A new funding formula will allocate funds to institutions (via a block grant) • The REF is being tested and piloted (in 20+ institutions), with the aim of being phased in from 2011-12, becoming fully operational in 2014 • Much debate over the REF: • The nature, number and weighting of the various metrics • Managing the large differences in disciplinary characteristics • The role of expert panels (in policy design and assessment) • The funding formula and distributional impacts
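
For readers unfamiliar with what an "average citation count per unit" involves, the sketch below gives a small illustrative calculation. The outputs, citation counts and field baselines are invented, and the field-normalisation step shown is simply one common way such an indicator might be adjusted for disciplinary differences; it is not a description of the REF's actual method.

```python
# Rough sketch of a unit-level citation indicator of the kind the REF proposals
# describe: an average citation count per output, optionally normalised by a
# (hypothetical) discipline baseline so fields with different citation
# practices can be compared.

from statistics import mean

unit_outputs = [  # (citations, discipline) for each output returned by the unit
    (12, "physics"), (3, "physics"), (7, "history"), (0, "history"), (25, "physics"),
]

field_baseline = {"physics": 10.0, "history": 2.5}  # assumed world averages

raw_average = mean(c for c, _ in unit_outputs)
normalised_average = mean(c / field_baseline[f] for c, f in unit_outputs)

print(f"raw average citations per output: {raw_average:.2f}")
print(f"field-normalised average:         {normalised_average:.2f}")
```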

  9. Recent overseas developments • Australia: Excellence in Research for Australia (ERA) Initiative • Long debate over how to fund research in Australian universities and, in particular, how to replace the Institutional Grants Scheme (IGS); vigorous debate over the Research Quality Framework (RQF) • June 2008: ERA Consultation Paper released by ARC • Features of ERA: • Covers research in all HEIs • Broad definition of ‘research’ and many publication types • 8 disciplinary clusters • Evaluations based on combination of indicators, with expert panels • 3 categories of indicators proposed for consideration (18+ in all): • Measures of research activity and intensity (8+) • Indicators of research quality (4+) • Indicators of excellent applied research and translation of research outcomes (6+) • Advice being sought from the Indicators Development Group

  10. Recent overseas developments • Australia: Excellence in Research for Australia (ERA) Initiative • Reporting – emphasis on intensity and quality profiles (e.g. for units, institutions, disciplines, etc.), rather than single indicators (but separate profiles are likely to be aggregated in practice to create overall rankings) • ERA will eventually inform the allocation of research block grants, and provide quality assurance (but how?) • Trial of 2 disciplinary clusters in 2009 (physics, chemistry and earth sciences; humanities and creative arts)

  11. Recent overseas developments • Germany: Universities Excellence Initiative • Federal initiative – Ministry of Education and German Research Foundation • Aims: • Improve research quality • Support top-level university research and improve the international standing of German universities • Encourage cooperation between disciplines and institutions, etc. • 3 funding streams (2 billion euros over 5 years – only 20% of RAE): • Graduate schools for fostering young researchers (40) • Clusters of excellence for promoting high-quality research (30) • Institutional strategies for advancing high quality university research (9 universities designated as “excellent”) • Selections based on advice of expert panels, moderated by political (e.g. geographic) considerations

  12. Criteria for evaluating post-2012 options • Any new approach should, ideally, be superior (in net terms) to the current PBRF model • Relevant considerations include: • Cost-effectiveness (lower administrative and compliance costs) • Stronger incentives for research excellence (validity, credibility, etc.) • Equitable treatment of different disciplines • Lower net distortions and unintended negative impacts (including issues relating to staff morale and privacy, new and emerging researchers, applied research, technology transfer, NZ-oriented research, etc.) • Comprehensive, integrated model rather than a pot-pourri of separate models for different disciplines • Better performance information (for various users) – timeliness, relevance, etc. • Enhanced funding predictability for institutions • High stakeholder support (especially from academic staff) • Ideally, enabling comparisons of performance over time and across jurisdictions (for benchmarking purposes, discerning trends, etc. – but note all the problems – FTE denominator) • Any new scheme may fare better on some criteria than on others

  13. Post-2012 options • Retain the PBRF in its current form (with only minor adjustments) • Retain mixed model but increase the number (and weighting?) of metrics/indicators • Replace the mixed model: • Rely on metrics (with light-handed moderation by a peer review process) • Move to a full peer review model, but incorporate metrics into the peer review process (implies using academic/disciplinary groupings as the unit of assessment rather than individuals)

  14. Post-2012 options: Metrics (3A) General points: • Other contributors to this forum will cover the relevant issues in much more detail • Complex area – many broader policy issues and specific design issues – but note that many of these arise for PBRF systems irrespective of their type • The devil is in the detail – there are problems with all the available bibliometric and non-bibliometric performance indicators, but some indicators are better proxies for research quality than others (see work of Evidence Ltd, etc.) • The small size of many NZ tertiary institutions and disciplinary groups within these institutions poses certain challenges • Note Goodhart’s Law – once an indicator or other surrogate measure is made a target for the purpose of conducting social or economic policy, it tends to lose the information content that originally qualified it to play such a role • Best to proceed with caution

  15. Moving to Metrics: Some Design Issues • What particular metrics/indicators should be used to assess performance (generic and disciplinary specific)? • For broad indicators, like external research income and publication types, how many sub-categories should there be? • What time period(s) should be used for data collection, and how should this vary across the various metrics? • How should staff eligibility be determined (critical for normalisation of data)? • How should publications/outputs be attributed – given multiple authorship, movement of staff between institutions, etc.? • How should overall ratings be produced – e.g. how should the various bibliometric indicators be weighted, how should these and other indicators be weighted, and how should these weightings vary across disciplines? • How should the various metrics be translated into an overall quality rating, and how should such ratings inform funding allocations (i.e. What formula should be used? What kind of peer moderation should there be? What funding gradations should there be?) • How should the various results be reported? • A personal note re. citations
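
As an illustration of the attribution question raised above, the sketch below applies one possible rule: fractional counting of multi-authored outputs, normalised by eligible FTE. The counting rule, the institutions and the figures are assumptions chosen for illustration, not a description of any existing scheme.

```python
# Minimal sketch of fractional attribution of co-authored outputs across
# institutions, normalised by (assumed) PBRF-eligible FTE. All values are
# hypothetical.

outputs = [
    {"institutions": ["VUW", "Otago"]},              # co-authored across two institutions
    {"institutions": ["VUW"]},                       # single-institution output
    {"institutions": ["VUW", "Otago", "Auckland"]},  # three-way collaboration
]

eligible_fte = {"VUW": 450.0, "Otago": 520.0, "Auckland": 800.0}

counts = {name: 0.0 for name in eligible_fte}
for out in outputs:
    share = 1.0 / len(out["institutions"])   # split each output equally
    for name in out["institutions"]:
        counts[name] += share

per_fte = {name: counts[name] / eligible_fte[name] for name in eligible_fte}
print(counts)    # fractional output counts per institution
print(per_fte)   # counts normalised per eligible FTE
```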

  16. Design Issues For every issue, there are many sub-issues and a range of options, all with slightly different impacts: • If journals are to be ranked, how many categories should there be? What journals should be included? What databases should be used (note large variation in ISI coverage of publications across disciplines)? • What use, if any, should be made of web-based metrics (‘Google Scholar’, etc.)?

  17. Design Issues Funding formula and allocations: • Current PBRF component weightings: 60% (QE), 25% (RDC), 15% (ERI) • Would a weighting as high as 60% for an index based on bibliometric (and related) indicators be acceptable? If not, what should the component weightings be? Note the implications of increasing the current ERI weighting • What would the distributional impact of moving from the current QE arrangements to a bibliometric-based index be? And how would this affect behaviour across the tertiary education sector?
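
The worked example below shows how component weightings of the kind quoted above (60% QE, 25% RDC, 15% ERI) combine an institution's share of each component into an overall share of the funding pool, and how sensitive that share is to a re-weighting. The institutional shares and the variant weighting are hypothetical.

```python
# Worked illustration of combining PBRF-style component weightings. The point
# is simply how a shift in weightings redistributes the pool; all institutional
# shares and the variant weighting are hypothetical.

def funding_share(shares, weights):
    """Weighted sum of an institution's share of each component (all as fractions)."""
    return sum(weights[c] * shares[c] for c in weights)

current = {"QE": 0.60, "RDC": 0.25, "ERI": 0.15}
variant = {"QE": 0.50, "RDC": 0.25, "ERI": 0.25}   # hypothetical re-weighting

uni = {"QE": 0.18, "RDC": 0.12, "ERI": 0.30}       # hypothetical sector shares

print(f"current weightings: {funding_share(uni, current):.3f} of the pool")
print(f"variant weightings: {funding_share(uni, variant):.3f} of the pool")
```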

  18. Conclusions • Important to explore alternatives to the current PBRF mixed model, but need to proceed cautiously (evidence-based) • There will be costs in changing to a new system • Metric-based systems appear to have both advantages and disadvantages relative to current arrangements • NZ has at least 5-6 years to learn from the experience of the UK REF and the Australian ERA before seriously considering whether to embrace a new model
