This presentation examines the impact assessment culture of the international agricultural research centers (IARCs), covering the challenges of evaluating research, the methods the centers use, and the role of external reviews. It describes the goals and indicators of the performance monitoring system within the CGIAR system, along with the factors pushing against, and for, change in how agricultural research initiatives are evaluated. The conclusion emphasizes that complex systems facing demands from multiple sources need multiple, coordinated approaches to impact assessment.
Evaluating Scientific Research Initiatives
The Case of the International Agricultural Research Centers
Leslie J. Cooksy, Ph.D.
University of California – Davis
ljcooksy@ucdavis.edu
Development of an impact assessment culture
• Mid-1990s – Impact Assessment & Evaluation Group
• 2000 – Systemwide workshop on impact assessment
• 2004 – Impact assessment culture and impact assessment studies monitored in new performance monitoring system
Shifting the impact assessment culture – the cynical view
• Before: Impact assessment = ex ante return on investment
• After: Impact assessment = ex post return on investment
Concerns about IA in the IARCs
• Focus on demonstrating impact instead of questioning effectiveness
• Over-selection of successful cases
• Lack of attention to negative consequences of IARC research
• Tendency to attribute all benefits to center activities (limited focus on context and interactions)
• Disciplinary barriers to non-economic approaches to evaluation
General challenges in assessing the impact of research initiatives
• Dependence on intermediaries to achieve long-term outcomes
• The years that can pass between a research product and its impact
• The risk inherent in research
Challenges to impact assessment in the IARCs
• Multiple levels of evaluation: project, program, center, system
• Lack of coordination across levels
• Multiple external demands for evaluation
• Limited evaluation expertise in research centers
How do the IARCs evaluate impact?
• Research efficiency (cost/benefit; see the sketch after this list)
• Evidence of use/adoption of technologies
• Case studies
• External review panels
• Performance monitoring
• New methods (e.g., narratives)
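A minimal sketch of the arithmetic behind the cost/benefit measures (illustrative only; discount rates, time horizons, and benefit estimates vary by center and study): ex post economic studies discount the stream of estimated benefits B_t and research costs C_t at a rate r and report a net present value or benefit-cost ratio, with the internal rate of return being the value of r at which the NPV equals zero.

\[
\mathrm{NPV} = \sum_{t=0}^{T} \frac{B_t - C_t}{(1+r)^t},
\qquad
\mathrm{BCR} = \frac{\sum_{t=0}^{T} B_t\,(1+r)^{-t}}{\sum_{t=0}^{T} C_t\,(1+r)^{-t}}
\]

Ex ante assessments apply the same formulas to projected rather than realized benefits, which is the distinction behind the "cynical view" above.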
External reviews of the IARCs
• Panels of five or fewer internationally respected scientists
• Asked to assess:
  • Consistency of Center mission, strategies, and priorities with CGIAR's mission and priorities
  • Relevance and quality of science
  • Governance and management
  • Accomplishments and impact of Center research
External review issues
• Length of the review process (~2 years)
• Length of time between reviews (5-7 years)
• Limited pool of truly external researchers
• Lack of coordination with Center-commissioned studies
• Lack of explicit criteria → inconsistent application of criteria by different panels
Goals for performance monitoring by the IARC system (CGIAR)
• Links planning and evaluation
• Provides annual data
• Focuses on outputs and outcomes
• Feeds into external reviews
PM system indicators
• Potential to Perform indicators:
  • Quality and Relevance of Research
  • Institutional Health
  • Financial Health
• Results indicators:
  • Outputs
  • Outcomes
  • Impacts
PM indicator: Institutionalization of impact assessment
• Nature of the portfolio of IA studies
• Innovation and advancement of IA methods and processes
• Communication/dissemination and capacity enhancement
• Impact culture (internal feedback and learning)
PM indicator: Impact studies
• Two studies are submitted and ranked for:
  • Clear presentation
  • Reasonable and transparent assumptions
  • Reliable and representative data
  • A realistic counterfactual
  • Sound attribution of benefits to research
  • Distance down the impact pathway
  • External input
Performance monitoring issues
• Across centers:
  • Inconsistent definitions in the plans that are the basis for the assessment of outputs and outcomes
  • Differing opportunities and standards across the multiple disciplines represented in different IARCs
• In general:
  • Addressing the outcomes and impact of research activities
  • Distrust of the process
Addressing the challenges
• Role of program theory (the "impact pathway"):
  • Establish the program, not individual projects, as the unit of analysis
  • Explicate the role of intermediaries in uptake, adoption, and use
  • Explicate the link between use of technology and change in the condition of end users
  • Focus data collection on the causal links
Addressing the challenges
• Case studies – the "modus operandi method" (Scriven):
  • Detailed analysis of the configuration of the chain of events
  • Use of "tracers"
• Role of meta-analysis/evaluation synthesis (see the sketch after this list):
  • Include studies that show negative as well as positive outcomes
  • Plan a portfolio of studies that can be synthesized
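Where individual impact studies report comparable quantitative effect estimates with standard errors, one conventional way to synthesize them is inverse-variance weighting. This is an illustrative sketch of a generic fixed-effect pooling, not a method prescribed by the IARCs:

\[
\hat{\theta}_{\text{pooled}} = \frac{\sum_{i=1}^{k} w_i\,\hat{\theta}_i}{\sum_{i=1}^{k} w_i},
\qquad
w_i = \frac{1}{\widehat{\mathrm{Var}}(\hat{\theta}_i)}
\]

Planning the portfolio so that studies share outcome definitions and report their uncertainty is what makes this kind of quantitative synthesis possible; qualitative evaluation syntheses follow the same comparative logic without the pooling arithmetic.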
Factors Pushing Against Change
• Decreased funding for research centers
• Continued lack of coordination of expectations
• Insufficient involvement of research centers in planning change
• Tradition
Factors Pushing for Change
• Donors' demand for accountability
• Coordination at the system level
• Reinvigorated oversight/review body (Science Council)
• Cultural shift toward acceptance of M&E
• Institutional Learning & Change (ILAC) movement
Conclusion
• Complex systems need multiple approaches to impact assessment
• Impact assessment demands from multiple sources need to be coordinated
• Strengths of traditional approaches need to be recognized
• A "disputatious community of scholars" needs to be nurtured so that negative results are seen as opportunities to learn