
Testing the validity of indicators in the field of education The experience of CRELL


Presentation Transcript


  1. Testing the validity of indicators in the field of education. The experience of CRELL. Rome, October 3-5, 2012, Improving Education through Accountability and Evaluation: Lessons from Around the World. Andrea Saltelli, European Commission, Joint Research Centre

  2. CRELL: Centre for Research on Lifelong Learning based on indicators and benchmarks. DG Education and Culture + Joint Research Centre, since 2005. http://crell.jrc.ec.europa.eu/

  3. Focus. Foci of econometric research at the JRC, Ispra: • CRELL • Trajectories towards achieving the EU 2020 objectives • Employability and other benchmarks (mobility, multilingualism) • Labour market outcomes

  4. Focus. Foci of econometric research at the JRC, Ispra (continued): • Counterfactual analysis and other impact assessment methodologies • Regional studies (competitiveness, innovation, well-being) • Composite indicators and social choice

  5. Indicators

  6. Context: knowledge in support of policy; evaluation and impact assessment, but also advocacy. Caveat: validity = plausibility, defensibility … and not ‘proof of truth’

  7. Sensitivity analysis. When testing the evidence, some reasonable people (and guidelines) suggest that ‘sensitivity analysis would help’. The JRC has fostered the development and uptake of sensitivity analysis (20 years of papers, schools and books). Today we call it sensitivity auditing and teach it within the impact assessment syllabus run by the Secretariat-General (SEC GEN).

  8. Sensitivity analysis. How coupled stairs are shaken in most of the available literature, versus how coupled stairs should be shaken.
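
The figure behind this slide is not reproduced in the transcript; the point such ‘coupled stairs’ contrasts usually make is that perturbing one factor at a time, as much of the literature does, misses what happens when the factors move together. A minimal, hypothetical Python illustration (the test function is invented for this example):

    import numpy as np

    rng = np.random.default_rng(1)

    def model(x1, x2):
        # purely interactive response: flat along each axis through the origin
        return x1 * x2

    # one-at-a-time (OAT): vary each factor alone around the nominal point (0, 0)
    oat = [model(d, 0.0) for d in (-1.0, 1.0)] + [model(0.0, d) for d in (-1.0, 1.0)]
    print("OAT outputs        :", oat)            # all zero -> factors look irrelevant

    # global exploration: vary both factors simultaneously
    x = rng.uniform(-1, 1, size=(1000, 2))
    y = model(x[:, 0], x[:, 1])
    print("global output range:", round(y.min(), 2), "to", round(y.max(), 2))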

  9. Sensitivity analysis. Testing (composite) indicators: two approaches. Michaela Saisana, Andrea Saltelli and Stefano Tarantola (2005), Uncertainty and sensitivity analysis techniques as tools for the quality assessment of composite indicators, J. R. Statist. Soc. A, 168(2), 307–323. Paolo Paruolo, Michaela Saisana and Andrea Saltelli, Ratings and rankings: voodoo or science?, J. R. Statist. Soc. A, 176(2), 1–26.

  10. Sensitivity analysis. First: the invasive approach. Michaela Saisana, Béatrice d’Hombres and Andrea Saltelli, Rickety numbers: volatility of university rankings and policy implications, Research Policy (2011), 40, 165–177.
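
A minimal sketch of what an ‘invasive’ robustness test of this kind can look like: rebuild the composite score many times while perturbing the assumptions (here only the weights, by up to ±25%, a purely illustrative choice) and record where each unit lands. The universities, scores and weights below are invented; the actual analyses use the real SJTU/THES sub-indicator data and a wider set of assumptions (e.g. normalisation and aggregation choices).

    import numpy as np

    rng = np.random.default_rng(42)

    # toy normalised sub-indicator scores (rows: universities, columns: criteria)
    universities = ["Uni A", "Uni B", "Uni C", "Uni D"]
    scores = rng.uniform(0, 1, size=(4, 5))
    nominal_weights = np.array([0.3, 0.2, 0.2, 0.2, 0.1])

    n_sim = 5000
    ranks = np.empty((n_sim, len(universities)), dtype=int)
    for s in range(n_sim):
        # perturb each weight by up to +/-25% and renormalise (illustrative assumption)
        w = nominal_weights * rng.uniform(0.75, 1.25, size=nominal_weights.shape)
        w /= w.sum()
        composite = scores @ w                      # linear aggregation
        order = np.argsort(-composite)              # best score first
        ranks[s, order] = np.arange(1, len(universities) + 1)

    for i, name in enumerate(universities):
        lo, med, hi = np.percentile(ranks[:, i], [5, 50, 95])
        print(f"{name}: median rank {med:.0f}, 90% plausible range [{lo:.0f}, {hi:.0f}]")

The percentiles of the simulated ranks give ‘plausible ranges’ of the kind quoted on the next two slides.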

  11. Robustness analysis of SJTU and THES

  12. SJTU: simulated ranks – Top 20 • Harvard, Stanford, Berkeley, Cambridge, MIT: top 5 in more than 75% of our simulations. • Univ. of California, San Francisco: original rank 18th, but could be ranked anywhere between the 6th and the 100th position. • Impact of assumptions: much stronger for the middle-ranked universities.

  13. THES: simulated ranks – Top 20 • The impact of uncertainties on the university ranks is even more apparent. • MIT: ranked 9th, but confirmed only in 13% of simulations (plausible range [4, 35]). • Very high volatility also for universities ranked in the 10th–20th positions, e.g. Duke Univ., Johns Hopkins Univ., Cornell Univ.

  14. Sensitivity analysis. Second: the non-invasive approach. Comparing the weights as assigned by developers with ‘effective weights’ derived from sensitivity analysis.

  15. University rankings. Comparing the internal coherence of ARWU versus THES by testing the weights declared by developers against ‘effective’ importance measures.
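
One way to obtain such ‘effective’ importance measures, in the spirit of the Paruolo–Saisana–Saltelli paper cited earlier, is the first-order sensitivity index S_i = Var(E[y|x_i]) / Var(y) of the composite y with respect to each sub-indicator x_i. The sketch below estimates it with a crude binned conditional-mean approach on synthetic data; the published work uses more refined nonparametric regression, so treat this only as an illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 2000
    x = rng.uniform(0, 1, size=(n, 3))              # three synthetic sub-indicators
    declared_w = np.array([0.5, 0.3, 0.2])
    y = x @ declared_w                              # composite built with declared weights

    def first_order_index(xi, y, n_bins=20):
        """Crude binned estimate of Var(E[y|xi]) / Var(y)."""
        edges = np.quantile(xi, np.linspace(0, 1, n_bins + 1))
        idx = np.clip(np.digitize(xi, edges[1:-1]), 0, n_bins - 1)
        cond_means = np.array([y[idx == b].mean() for b in range(n_bins)])
        return cond_means.var() / y.var()

    s = np.array([first_order_index(x[:, i], y) for i in range(3)])
    print("declared weights            :", declared_w)
    print("normalised effective weights:", np.round(s / s.sum(), 3))

Even in this idealised linear case the normalised indices need not coincide with the declared weights (for independent sub-indicators Var(E[y|x_i]) = w_i² Var(x_i)), which is precisely the kind of gap between stated and effective importance that the non-invasive check exposes.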

  16. JRC fosters the development of good practices for the construction of aggregated statistical measures (indices, composite indicators). Partnerships with OECD, WEF, INSEAD, WIPO, UN-IFAD, FAO, Transparency International, World Justice Project, Harvard, Yale, Columbia … Sixty analyses (Michaela Saisana, JRC)

  17. Something worth advocating for (1): more use of social choice theory methods, both for building meaningful aggregated indicators … (a pity that methods already available between the end of the XIII and the XV century are neglected by most developers) … and for comparing options in the context of impact assessment studies. → Course at JRC Ispra, October 11-12.
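
As an illustration of the kind of social-choice aggregation meant here, a minimal Borda-count sketch (a rule anticipated by Nicholas of Cusa in the XV century, while Condorcet-style pairwise comparison goes back to Ramon Llull in the XIII): the options and criteria are invented, and each criterion contributes a ranking rather than a cardinal score.

    from collections import defaultdict

    # per-criterion rankings of four hypothetical options, best first
    rankings = {
        "criterion_1": ["A", "B", "C", "D"],
        "criterion_2": ["B", "A", "D", "C"],
        "criterion_3": ["A", "C", "B", "D"],
    }

    borda = defaultdict(int)
    for ranked in rankings.values():
        n = len(ranked)
        for position, option in enumerate(ranked):
            borda[option] += n - 1 - position       # n-1 points for 1st place, 0 for last

    for option, points in sorted(borda.items(), key=lambda kv: -kv[1]):
        print(option, points)

A Condorcet-style rule would instead count pairwise victories between options; either way, the aggregation works on rankings and avoids the full compensability of a weighted average of scores.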

  18. Useful links: • Econometrics and Applied Statistics Unit: http://ipsc.jrc.ec.europa.eu/?id=155 • Sensitivity analysis: http://sensitivity-analysis.jrc.ec.europa.eu/ • Sensitivity auditing: http://sensitivity-analysis.jrc.ec.europa.eu/Presentations/Saltelli-final-February-1-1.pdf • Quality of composite indicators: http://ipsc.jrc.ec.europa.eu/index.php?id=739
