
Methodological issues in assessing the impact of collaborative agricultural R&D


Presentation Transcript


  1. Methodological issues in assessing the impact of collaborative agricultural R&D Patricia Rogers, Royal Melbourne Institute of Technology Jamie Watts, Institutional Learning and Change Initiative International Workshop on Methodological Innovations in Impact Assessment of Agricultural Research Brasilia, Brazil November 12, 2008

  2. Overview • Rationale for collaborative agricultural research and development • Different aspects of interventions – simple, complicated, complex • Tasks in impact evaluation – deciding impacts, measuring/describing impacts, causal inference, using results • Particular issues and options for impact evaluation of complicated and complex interventions

  3. 1. Rationale for collaborative agricultural research Nature June 2008 Special issue on translational research

  4. Nature June 2008 Special issue on translational research [Nobel laureate Sydney] Brenner is one of many scientists challenging the idea that translational research is just about carrying results from bench to bedside, arguing that the importance of reversing that polarity has been overlooked. “I’m advocating it go the other way,” Brenner said.

  5. Visualising the connection between laboratory research and practice (Tabak, 2005, National Institute of Dental and Craniofacial Research, National Institutes of Health)

  6. Collaborative agricultural research Some reasons to engage intended end-users in agricultural research and development: • Increase researchers’ understanding of local issues • Improve the relevance of research to local conditions • Incorporate local knowledge into research • More effectively reach women and the poor • Increase uptake and appropriate adaptation

  7. 2. Different aspects of interventions, which may need different impact evaluation methods • Simple aspects that can be tightly specified and standardized and that work the same in all places • Complicated aspects that are part of a larger multi-component impact pathway • Complex aspects that are highly adaptive, responsive and emergent

  8. Simple, complicated, complex (diagram from Zimmerman 2003)
  Simple: following a recipe • The recipe is essential • Recipes are tested to assure replicability of later efforts • No particular expertise; knowing how to cook increases success • Recipes produce standard products • Certainty of same results every time
  Complicated: a rocket to the moon • Formulae are critical and necessary • Sending one rocket increases assurance that the next will be ok • High level of expertise in many specialized fields, plus coordination • Rockets similar in critical ways • High degree of certainty of outcome
  Complex: raising a child • Formulae have only a limited application • Raising one child gives no assurance of success with the next • Expertise can help but is not sufficient; relationships are key • Every child is unique • Uncertainty of outcome remains

  9. 3. Tasks in impact assessment A) DECIDE impacts to be included in the assessment B) MEASURE or describe impacts C) ANALYSE the causal contribution of the intervention and other factors D) SUPPORT USE. Each of these tasks requires appropriate methods and involves values and evidence

  10. Examples of increasing attention to impact assessment/evaluation in international development • Center for Global Development: producers of the 'When Will We Ever Learn?' report (WWWEL), which argued for more use of RCTs (randomised controlled trials) • NONIE, the Network of Networks on Impact Evaluation: all UN agencies, all multilateral development banks and all international aid agencies of OECD countries, supporting better-quality impact evaluation, including sharing information and producing guidelines for impact evaluation • 3ie, the International Initiative for Impact Evaluation: a new organisation funding and promoting rigorous impact evaluation • Poverty Action Lab: stated purpose is to advocate for the wider use of RCTs • European Evaluation Society: formal statement cautioning against inappropriate use of RCTs

  11. What is impact? "…the positive and negative, primary and secondary long-term effects produced by a development intervention, directly or indirectly, intended or unintended. These effects can be economic, socio-cultural, institutional, environmental, technological or of other types." (DAC definition)

  12. A. Decide impacts to include Need to: • Include different dimensions, e.g. not just income but livelihoods • Include the sustainability of these impacts, including environmental sustainability • Not focus only on stated objectives: also consider unintended outcomes (positive and negative) • Recognise the values of different stakeholders in terms of desirable and undesirable impacts, desirable and undesirable processes to achieve these impacts, and desirable and undesirable distribution of benefits • Identify the ways in which these impacts are understood to occur and what else needs to be included in the analysis

  13. Deciding on impacts to include in impact assessment of collaborative R&D • Collaborative R&D will likely create expectations of collaborative evaluation approaches (including deciding on impacts) • Power or capacity imbalances among collaborators should be leveled out to encourage active participation

  14. Decide impacts to include. Some approaches: • Program theory (impact pathway), possibly developing multiple models of the program (e.g. Soft Systems) and negotiating boundaries (e.g. Critical Systems Heuristics) • Participatory approaches to values clarification, e.g. Most Significant Change

  15. B. Gather evidence of impacts Need to: • Balance credibility (especially comprehensiveness) and feasibility (especially timeliness and cost) • Prioritise which impacts (and other variables) will be studied empirically, and to what extent • Deal with time lags before impacts are evident • Avoid accidental or systematic distortion of the level of impacts

  16. Gather evidence of impacts. Some approaches: • Program theory (impact pathway): identify short-term results that can indicate longer-term impacts • Participatory approaches: engaging the community in evidence gathering to increase reach and engagement • Real world evaluation: mixed methods, triangulation, making maximum use of existing data, strategic sampling, rapid data collection methods

  17. C. Analyse causal contribution or attribution Need to: • Avoid false negatives (erroneously concluding it doesn't work) and false positives (erroneously concluding it does work) • Systematically search for disconfirming evidence and analyse exceptions • Distinguish between theory failure and implementation failure • Understand the contribution of context: implementation environment, participant characteristics and other interventions

  18. Intervention is both necessary and sufficient to produce the impact: 'silver bullet' simple impacts. [Diagram: intervention → impact; no intervention → no impact]
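To make the 'silver bullet' logic concrete, here is a minimal simulation sketch (Python; the names and counts are illustrative, not from the presentation). Because the intervention is both necessary and sufficient, a simple treatment/control comparison recovers the full effect:

```python
def outcome(intervention: bool) -> bool:
    # 'Silver bullet' logic: impact occurs if and only if the
    # intervention is present (necessary and sufficient).
    return intervention

treated = [outcome(True) for _ in range(500)]   # intervention group
control = [outcome(False) for _ in range(500)]  # no-intervention group

print(f"Impact rate with intervention:    {sum(treated) / 500:.0%}")  # 100%
print(f"Impact rate without intervention: {sum(control) / 500:.0%}")  # 0%
```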

  19. Non-linear effects An intervention might: • Have positive impacts at some levels and negative at others (more is not always better) • Have effects only at certain thresholds
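As a hedged illustration of both points, the sketch below uses a hypothetical dose-response function (the functional form and numbers are invented, not from the slides): flat below a threshold, positive at moderate doses, negative at high ones.

```python
def impact(dose: float) -> float:
    # Hypothetical dose-response: no effect below a threshold,
    # positive at moderate doses, negative at high doses
    # ("more is not always better").
    if dose < 1.0:               # below threshold: no effect
        return 0.0
    return 4 * dose - dose ** 2  # peaks at dose 2, turns negative past 4

for dose in [0.5, 1.0, 2.0, 4.0, 5.0]:
    print(f"dose {dose:.1f} -> impact {impact(dose):+.1f}")
```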

  20. Example of non-linear effects

  21. Causal packages An intervention might be: • Not necessary: other pathways might lead to the same outcome • Not sufficient: other factors might need to be in place (including a favourable context: implementation environment or participant characteristics) Therefore differential impacts must be examined not as an optional extra but as an integral part of the analysis.

  22. Causal packages: 'jigsaw' complicated impacts. [Diagram: intervention plus favourable context together produce impacts]

  23. Intervention is necessary but not sufficient to produce the impact: 'jigsaw' complicated impacts. [Diagram: intervention in a favourable context → impact; intervention in an unfavourable context → no impact]

  24. Example of causal package FINDING: If two potted plants are randomly assigned to either a treatment group that receives daily water, or to a control that receives none, and both groups are placed in a dark cupboard, the treatment group does not have better outcomes than the control. CONCLUSION: Watering plants is ineffective in making them grow.
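The plant example can be written as a small simulation sketch (illustrative Python, assuming growth requires both water and light). A trial run entirely in the dark cupboard shows a zero effect of watering, while the same trial in a favourable context shows the full effect:

```python
def grows(watered: bool, has_light: bool) -> bool:
    # Causal package: water is necessary but not sufficient;
    # growth also needs the favourable context (light).
    return watered and has_light

def watering_effect(has_light: bool, n: int = 500) -> float:
    # Difference in growth rates between watered and unwatered plants.
    treated = sum(grows(True, has_light) for _ in range(n))
    control = sum(grows(False, has_light) for _ in range(n))
    return (treated - control) / n

print("Effect of watering in a dark cupboard:", watering_effect(False))  # 0.0
print("Effect of watering in the light:      ", watering_effect(True))   # 1.0
```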

  25. Limitations of RCTs for "jigsaws" When schools in Kenya were randomly assigned to either a treatment group that received flip chart teaching aids, or to a control that received none, the treatment group did not have better outcomes than the control. CONCLUSION: Flip charts are ineffective. Glewwe et al. (2004), 'Retrospective vs. prospective analyses of school inputs: the case of flip charts in Kenya', Journal of Development Economics, vol. 74, issue 1, pp. 251-268

  26. Better ways of building evidence about “jigsaws” British fatality rate corrected for miles driven and with seasonal variations removed. (Source: Ross, Campbell & Glass, 1970, in Glass, 1997)

  27. Change in rate of road fatalities on Fri and Sat nights Data for Fri night/Sat am and Sat night/Sun am (Source: Ross, Campbell & Glass, 1970, in Glass, 1997)
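The road-fatality analyses are interrupted time-series designs. Below is a minimal segmented-regression sketch in that spirit (Python with numpy/statsmodels; the monthly series and all coefficients are synthetic, invented purely for illustration):

```python
import numpy as np
import statsmodels.api as sm

# Synthetic monthly fatality series: a pre-existing downward trend,
# plus a drop when a (hypothetical) law takes effect at month 24.
rng = np.random.default_rng(0)
months = np.arange(48)
law_in_force = (months >= 24).astype(float)
fatalities = 100 - 0.2 * months - 15 * law_in_force + rng.normal(0, 3, 48)

# Segmented regression: outcome on trend, an intervention step,
# and a post-intervention change in slope.
X = sm.add_constant(np.column_stack([
    months,                        # pre-existing trend
    law_in_force,                  # immediate level change at the intervention
    law_in_force * (months - 24),  # change in slope afterwards
]))
fit = sm.OLS(fatalities, X).fit()
print(fit.params)  # the coefficient on the step term estimates the level drop
```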

  28. Intervention is sufficient but not necessary to produce the impact: 'parallel' complicated impacts. [Diagram: intervention → impact; no intervention plus an alternative activity → impact]

  29. Limitations of RCTs when multiple paths exist A US program to assist poor families through social service visits found that families receiving the program experienced improvements in their well-being, but so did the families that were randomly assigned to a control group that did not receive the visits (St. Pierre and Layzer 1999). "[As this case shows], a good study helps avoid spending funds on ineffective programs and redirects attention to improving designs or to more promising alternatives." (Center for Global Development, When Will We Ever Learn?) BUT IS THIS A VALID CONCLUSION FROM THE STUDY?

  30. Limitations of RCTs when multiple paths exist • Many control group families were able to obtain services on their own ('contamination'), information contained in the 1999 report of the evaluation but not in the WWWEL report • Therefore the lack of difference in outcomes between treatment and control groups does not mean the program was ineffective, but that there was an alternative path to the outcome • Good evaluation would compare these alternatives (see the sketch below)
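A small simulation sketch of this situation (illustrative Python; the 90% contamination rate and the mechanism are invented) shows how an alternative path can produce a near-null treatment/control comparison even though the services themselves work:

```python
import random

random.seed(1)

def improved(received_services: bool) -> bool:
    # Assumed mechanism: well-being improves when a family receives
    # services, regardless of who delivers them.
    return received_services

# All treatment families receive the program's visits; most control
# families obtain similar services on their own ('contamination').
treatment = [improved(True) for _ in range(500)]
control = [improved(random.random() < 0.9) for _ in range(500)]

print(f"Improved, treatment: {sum(treatment) / 500:.0%}")  # 100%
print(f"Improved, control:   {sum(control) / 500:.0%}")    # ~90%
# The near-equal rates do not show the services are ineffective;
# an alternative path led the control group to much the same outcome.
```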

  31. Analyse causal contribution or attribution. Some approaches: • Addressing through design, e.g. experimental designs (random assignment) and quasi-experimental designs (construction of a comparison group, e.g. propensity scores; see the sketch below) • Addressing through data collection, e.g. participatory Beneficiary Assessment, expert judgement • Addressing through iterative analysis and collection, e.g. Contribution Analysis, Multiple Levels and Lines of Evidence (MLLE), List of Possible Causes (LOPC) and General Elimination Methodology (GEM), systematic qualitative data analysis, realist analysis of testable hypotheses
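As one concrete example of the design-based approaches, here is a minimal propensity-score matching sketch on synthetic data (Python with scikit-learn; the variable names and data-generating process are assumptions for illustration only, and a real analysis would add balance checks and sensitivity analysis):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic observational data: selection into treatment depends on
# an observed covariate ('context'), which also affects the outcome.
rng = np.random.default_rng(0)
n = 1000
context = rng.normal(size=n)
treated = rng.random(n) < 1 / (1 + np.exp(-context))
outcome = 2.0 * treated + context + rng.normal(size=n)  # true effect = 2.0

# 1. Model the probability of treatment given the covariate.
model = LogisticRegression().fit(context.reshape(-1, 1), treated)
scores = model.predict_proba(context.reshape(-1, 1))[:, 1]

# 2. Match each treated unit to the control unit with the nearest score.
t_idx, c_idx = np.where(treated)[0], np.where(~treated)[0]
nearest = np.abs(scores[c_idx][None, :] - scores[t_idx][:, None]).argmin(axis=1)
matches = c_idx[nearest]

# 3. The mean outcome difference across matched pairs estimates the
#    effect on the treated, adjusting for the observed covariate.
print("Estimated effect:", round((outcome[t_idx] - outcome[matches]).mean(), 2))
```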

  32. D. Report synthesis and support use Need to: • Provide useful information to intended users • Provide a synthesis that summarises evidence and values • Balance overall pattern and detail • Assist uptake/translation of evidence

  33. Developmental evaluation: an emerging approach for complexity (M.Q. Patton, May 2008) • Evaluation processes support program, product, staff and/or organizational development • The evaluator is part of a team whose members collaborate to conceptualize, design and test new approaches in a long-term, ongoing process of continuous improvement, adaptation and intentional change • The evaluator's primary function in the team is to elucidate team discussions with evaluative questions, data and logic, and to facilitate data-based decision-making in the developmental process • Capacity to learn might be more relevant than specific results

  34. Report synthesis and support use. Some approaches: • Use focus: utilization-focused evaluation, with identification and involvement of intended users from the start • Synthesis: Qualitative Weight and Sum and other techniques to determine overall worth • Reporting: layered reports (1 page, 5 pages, 25 pages); scenarios showing different outcomes in different contexts; workshopping the report to support knowledge translation • Developmental evaluation • Translational research: an emerging approach
