
Impact Evaluation for Real Time Decision Making


Presentation Transcript


  1. Impact Evaluation for Real Time Decision Making. Arianna Legovini, Head, Development Impact Evaluation Initiative (DIME), World Bank

  2. FIRST What is a results chain?

  3. Example of a results chain (diagram)

  4. The results chain • A developmental hypothesis that helps you define: • what you are doing and for what purpose • what needs to be monitored • what needs to be evaluated

  5. What is monitoring? • Monitoring tracks indicators over time (in the treatment group) • It is a descriptive before-after analysis • It tells us whether things are moving in the right direction

  6. What is Impact Evaluation? • Impact evaluation tracks outcomes over time in the treatment group relative to a control group • It measures the effect of an intervention on outcomes relative to a counterfactual • what would have happened without it? • It identifies the causal output-outcome link • separately from the effect of other time-varying factors
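In potential-outcomes shorthand (notation assumed here, not from the slides), the slide's distinction can be written as one contrast: the impact compares the treated outcome to its unobserved counterfactual, while a before-after comparison also picks up every other time-varying factor.

```latex
% Notation assumed for illustration (not in the slides):
% Y_i(1), Y_i(0) = unit i's outcome with / without the intervention;
% t_0, t_1      = before / after the intervention.
\underbrace{\mathbb{E}\big[\,Y_i(1) - Y_i(0) \mid i \in \text{treated}\,\big]}_{\text{impact: needs a counterfactual for } Y_i(0)}
\;\neq\;
\underbrace{\mathbb{E}\big[\,Y_{i,t_1} - Y_{i,t_0} \mid i \in \text{treated}\,\big]}_{\text{before-after: impact plus all other time-varying factors}}
```

A randomized control group supplies an estimate of the missing Y_i(0) term; the toy simulation after slide 10 shows the gap between the two quantities numerically.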

  7. Monitoring & Impact Evaluation • Use impact evaluation to measure effectiveness (output-outcome) • Use monitoring to track implementation efficiency (input-output) (Diagram: INPUTS → OUTPUTS → OUTCOMES; on the supply side, monitoring tracks efficiency from inputs to outputs; on the demand-response side, impact evaluation measures effectiveness from outputs to outcomes.)

  8. Pick the right method to answer your questions

  9. Discuss among yourselves (5m) • What happens if you use monitoring to evaluate impact?

  10. Discuss among yourselves (5m) • What happens if you use monitoring to evaluate impact? • You get the wrong answer… …100% of the time. (Diagram: the outcome starts at B at t0 and rises to A at t1 after the intervention; the counterfactual level at t1 is C. The before-after change is A − B, but the true impact is only A − C.)
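To put numbers on the B/A/C diagram, here is a toy simulation (all values invented for illustration): a secular trend lifts everyone by 5 units, the program truly adds 2, and the before-after "monitoring" estimate reports about 7 while the comparison against a control group recovers the true 2.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000                       # units per group (hypothetical)

# Invented data-generating process: a secular trend lifts every
# outcome by 5 units between t0 and t1; the program adds a true
# impact of 2 units for the treated group only.
trend, true_impact = 5.0, 2.0

y0_treat   = 50 + rng.normal(0, 3, n)                               # treated at t0 (point B)
y1_treat   = y0_treat + trend + true_impact + rng.normal(0, 1, n)   # treated at t1 (point A)
y0_control = 50 + rng.normal(0, 3, n)                               # control at t0
y1_control = y0_control + trend + rng.normal(0, 1, n)               # control at t1 (point C)

# Monitoring-style before-after change in the treatment group alone:
before_after = y1_treat.mean() - y0_treat.mean()      # ~7: trend + impact

# Impact evaluation: change in treated minus change in control
# (a difference-in-differences), which nets out the trend.
impact = before_after - (y1_control.mean() - y0_control.mean())     # ~2

print(f"before-after estimate: {before_after:.2f}  (true impact: {true_impact})")
print(f"vs-control estimate:   {impact:.2f}")
```

Because the trend moves both groups, subtracting the control group's change removes it, which is exactly why before-after alone gets the wrong answer whenever anything else changes over time.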

  11. NEXT: Do we know ex ante… • On Community Driven Development, what • information will get communities to respond? • facilitation will result in high-quality proposals? • rules will increase inclusion in the decision-making process? • monitoring mechanisms and co-payments will improve local projects and their use of funds? • On Disarmament, Demobilization and Reintegration, • are community-based or targeted approaches more effective? • should we try to delink combatants from units or build on unit cohesion? • is including or excluding command structures more effective?

  12. Trial and error • We turn to our best judgment for guidance and pick an information campaign, a package of services, a structure of incentives. • Is there any other campaign, package, or incentive structure that will do better?

  13. The decision process is complex • A few big decisions are taken during design, but many more are taken during roll-out and implementation

  14. Pick up the ball: What is a results tree? • A results tree is a representation of the set of results chains that are considered viable during program design or program restructuring. • It is a set of competing policy and operational alternatives to reach a specific objective.

  15. Example of a decision tree for a combatant reintegration program

  16. How to select between plausible alternatives? • Establish which decisions will be taken upfront and which will be tested during roll-out • Experimentally test critical nodes: measure the impact of one option relative to another or to no intervention • Pick better and discard worse during implementation • Cannot learn everything at once • Select carefully what you want to test by involving all relevant partners
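A sketch of what "experimentally test critical nodes" might look like in code (the node, arm names, effect sizes, and sample sizes are all hypothetical): randomize units across the competing options at one node, then compare each option against the comparison group and keep the better one.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical critical node: two facilitation models vs. no intervention,
# with 100 communities randomly assigned to each arm.
arms = ["control", "facilitation_A", "facilitation_B"]
assignment = rng.permutation(np.repeat(arms, 100))

# Invented outcomes: proposal-quality scores, with made-up true effects.
true_effect = {"control": 0.0, "facilitation_A": 0.3, "facilitation_B": 0.8}
score = np.array([rng.normal(true_effect[a], 1.0) for a in assignment])

# Compare each alternative to control: pick better, discard worse.
control_scores = score[assignment == "control"]
for arm in arms[1:]:
    arm_scores = score[assignment == arm]
    diff = arm_scores.mean() - control_scores.mean()
    t_stat, p_val = stats.ttest_ind(arm_scores, control_scores)
    print(f"{arm}: estimated impact {diff:+.2f} (p = {p_val:.3f})")
```

The same logic repeats along the results tree during roll-out: test one node, adopt the winning arm, and move to the next node rather than trying to learn everything at once.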

  17. Walk along the decision tree to get more results out of a reintegration program

  18. How IE can support you

  19. Discuss among yourselves (5m) • How many times do you make changes to your program based on a feeling that something is not working right? • How useful would it be to know for sure which ways are best?

  20. Why evaluate? • Improve quality of programs • Test alternatives and inform design in real time • Increase program effectiveness • Answer the “so what” questions • Build government institutions for evidence-based policy-making • Plan for implementation of options, not solutions • Find out what alternatives work best • Adopt a better way of doing business and taking decisions

  21. The market for evidence (Diagram: PM/Presidency communicates to constituencies on campaign promises, demanding evidence on accountability and the effects of government programs; Treasury/Finance allocates the budget, demanding evidence on the cost-effectiveness of different programs; line ministries handle service delivery, delivering programs and negotiating budget, demanding evidence on the cost-effectiveness of alternatives and the effects of sector programs.)

  22. Shifting Program Paradigm From: • Program is a set of activities designed to deliver expected results • Program will either deliver or not To: • Program is a menu of alternatives with a learning strategy to find out which work best • Change programs over time to deliver more results

  23. Shifting Evaluation Paradigm • From retrospective, external, independent evaluation • Top down • Determine whether program worked or not • To prospective, internal, and operationally driven impact evaluation (externally validated) • Set the program learning agenda bottom up • Consider plausible implementation alternatives • Test scientifically and adopt the best • Just-in-time advice to improve the effectiveness of the program over time

  24. Retrospective (designed & evaluated ex-post) vs. Prospective (designed ex-ante and evaluated ex-post) • Retrospective impact evaluation: • Collecting data after the event, you don’t know how participants and nonparticipants compared before the program started • You have to try to disentangle, after the event, why the project was implemented where and when it was • Prospective impact evaluation: • Design the evaluation to answer the question you need to answer • Collect the data you will need later • Ensure analytical validity

  25. Is this a one-shot analytical product? • No: it is an evaluative process that provides useful (actionable) information at each step of the impact evaluation

  26. Discuss among yourselves (5m) • What are some of the entry points in policy-making cycles? • What are some of the answers you would like to have?

  27. Ethical considerations • It is not ethical to deny benefits that are available and that we know work • HIV medicine proven to prolong life • It is ethical to test interventions before scale-up if we don’t know whether they work and whether they have unforeseen consequences • Food aid may destroy local markets and create perverse incentives • Most times we use opportunities created by roll-out and budget constraints to evaluate, so as to minimize ethical concerns, AND • We can always, and should, test alternatives to maximize our results

  28. Questions? Comments?
