
In-depth Evaluation of R&D Programs – how could it be accountable?

Symposium on International Comparison of the Budget Cycle in Research, Development and Innovation Policies, Madrid (Spain), 3-4 July 2008, OECD/GOV/PGC/SBO. Seung Jun Yoo, Ph.D., R&D Evaluation Center, KISTEP, Korea.


Presentation Transcript


  1. In-depth Evaluation of R&D Programs – how could it be accountable? Symposium on International Comparison of the Budget Cycle in Research, Development and Innovation Policies, Madrid (Spain), 3-4 July 2008, OECD/GOV/PGC/SBO. Seung Jun Yoo, Ph.D., R&D Evaluation Center, KISTEP, Korea

  2. Contents 1. Overview of Current Evaluation 2. Architecture of In-depth Evaluation 3. Procedure of In-depth Evaluation 4. Evaluation with Accountability 5. Challenges and Discussion

  3. Overview of Current Evaluation 1
  - All 191 R&D programs are evaluated every year!
    - specific evaluation (mainly using checklists): 27 programs
    - self/meta evaluation: 164 programs
    - in-depth evaluation (pilot run, 4 horizontal programs):
      - climate-change-related R&D programs
      - university centers of excellence R&D programs
      - infrastructure (facilities/equipment) R&D programs
      - genome research R&D programs

  4. Overview of Current Evaluation 2 - Efficiency & Effectiveness of Evaluation?
  - Evaluating 191 programs every year?
  - The efficiency & effectiveness of the evaluation itself is questionable, considering the characteristics of R&D programs
  - Too heavy an evaluation load on evaluators, program managers, researchers, etc.
  - Not enough time to prepare and perform evaluation for all R&D programs and to communicate with stakeholders (→ might yield poor accountability?)

  5. Architecture of In-depth Evaluation 1 - Main Players
  - Decision makers for R&D evaluation and budget allocation: MOSF, NSTC*
  - Evaluation supporting group (evaluators): KISTEP
  - R&D programs of each ministry (and its agencies): MIFAFF, MOE, MIK, MW, MEST, ...
  *NSTC: National Science & Technology Council

  6. Architecture of In-depth Evaluation 2 - Evaluation & Budget Allocation (cycle)
  - evaluation group formed
  - R&D budget survey/analysis
  - programs/projects implemented
  - in-depth evaluation → input for budget allocation
  - feedback to programs: to (re)plan and/or improve the program

  7. Architecture of In-depth Evaluation 3 - Budget Process
  - 5-year plan → ministry budget ceiling → program budget
  - 1st budget review (NSTC) → 2nd budget review with evaluation results (Ministry of Strategy & Finance, MOSF)
  - Budget Committee of the National Assembly (Dec.)

  8. Procedure of In-depth Evaluation 1 - 7-month schedule (suggested!)
  - Month 0: program(s) selected by the selection committee based on special issues, etc.
  - Month 1: form evaluation group, gather program data, study the target R&D program(s), find major evaluation points
  - Month 2: develop logic model (with system dynamics, etc.)
  - Months 3-4: perform in-depth analysis (relevance, efficiency, effectiveness, program design & delivery, etc.)

  9. Procedure of In-depth Evaluation 2
  - Month 5: interviews (researchers, program managers, etc.)
  - Month 6: report interim evaluation results (MOSF, department(s))
  - Month 7: report final evaluation results & recommendations
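The seven-month schedule above can be sketched as a simple data structure. This is a minimal illustration only; the names and the helper function below are hypothetical, not part of KISTEP's actual tooling.

```python
# Hypothetical sketch of the suggested 7-month in-depth evaluation
# schedule (slides 8-9); month 0 is the program-selection step.
SCHEDULE = {
    0: "selection committee picks program(s) based on special issues",
    1: "form evaluation group, gather data, study program, find evaluation points",
    2: "develop logic model (e.g. with system dynamics)",
    3: "in-depth analysis: relevance, efficiency, effectiveness, design & delivery",
    4: "in-depth analysis (continued)",
    5: "interviews with researchers, program managers, etc.",
    6: "report interim results to MOSF and departments",
    7: "report final results and recommendations",
}

def activities_through(month: int) -> list[str]:
    """Return all scheduled activities up to and including `month`."""
    return [task for m, task in sorted(SCHEDULE.items()) if m <= month]

print(len(activities_through(7)))  # prints 8 (months 0-7)
```

Laying the schedule out this way makes the point on slide 10 concrete: the process has eight distinct stages, which is why it is only feasible for a limited number of programs per year.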

  10. Evaluation with Accountability 1 - Responsibility 1
  - A balance between quantitative and qualitative evaluation is important
    - a systematic approach to qualitative evaluation is challenging
    - program goals vs. projects implemented vs. output
  - Enough time for evaluation is essential (7-month schedule)
    - to achieve the goal of evaluation with accountability
    - give program managers enough time to cope with the evaluation process
  - † suitable only for a limited number of programs

  11. Evaluation with Accountability 2 - Responsibility 2
  - Qualitative assessment is needed to achieve the purpose of evaluation
    - simply counting publications and patents?
    - publications: impact factor (1-2 yrs), citation index (more than 3 yrs)
    - patents: commercial purpose → technology value evaluation
  - Selected projects with excellent performance: consistent funding is required regardless of the program's evaluation!
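The age-dependent choice of publication metric on slide 11 can be illustrated with a small sketch. This is purely illustrative and not KISTEP's actual method; the function name and the 2-year cutoff are assumptions drawn from the slide's "1-2 yrs" vs. "more than 3 yrs" distinction.

```python
# Illustrative sketch (not KISTEP's actual method): choose a bibliometric
# signal by publication age, as slide 11 suggests -- journal impact factor
# for recent papers (1-2 yrs), citation counts once papers are old enough
# (3+ yrs) for citations to have accumulated.
def publication_signal(age_years: float, impact_factor: float,
                       citations: int) -> float:
    if age_years <= 2:
        # too young for citations to be meaningful: fall back to the
        # journal's impact factor as a proxy for quality
        return impact_factor
    # mature enough: the raw citation count is the more direct evidence
    return float(citations)

print(publication_signal(1.0, impact_factor=4.2, citations=0))   # 4.2
print(publication_signal(5.0, impact_factor=4.2, citations=37))  # 37.0
```

The point of the slide survives even in this toy form: a single count of publications is not comparable across projects unless the metric is matched to the publication's age.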

  12. Evaluation with Accountability 3 - Acceptability 1
  - Understand the program's characteristics well and share them with stakeholders
  - Performance indicators are useful tools for securing stakeholders' agreement
    - researchers, program managers, MOSF, etc.
    - to set up an evaluation strategy and evaluation points!
    - especially important for acceptability and for improving program delivery

  13. Evaluation with Accountability 4 - Acceptability 2 (Understand & Change!)
  - Communication with stakeholders
    - interviews with stakeholders are important to increase accountability: researchers, program managers, MOSF
    - the evaluation strategy is better shared at the beginning
  - The number of interviews is also important
    - a lack of understanding of the evaluation is a key inhibitor of accountability!
    - interviews at major steps: setting the evaluation strategy, surveying the program's weak/strong points, reporting interim evaluation results, etc.

  14. Challenges and Discussion 1 - Understand & Change & Improve!
  - Stakeholders should understand their program(s)
    - otherwise they become rigid and overly defensive, resisting any change
  - A systematic way to understand diverse aspects of programs: goals, contents, projects, design & delivery, etc.
  - Sharing program information, with all stakeholders, in order to change and improve

  15. Challenges and Discussion 2 - Scientific, Socio-economic Interest
  - Technology impact evaluation supports socio-economic understanding
  - Results of technology level evaluation are also useful

  16. Challenges and Discussion 3 - Communication → Consultation
  - Communication among stakeholders (ministries/agencies, researchers, MOSF, KISTEP, etc.)
  - For better evaluation practices, communication should evolve into consultation

  17. Muchas gracias! (Thank you very much!)
