Quality Impact Evaluation: an introductory workshop

Presentation Transcript


  1. Quality Impact Evaluation: an introductory workshop Howard White International Initiative for Impact Evaluation

  2. PART I INTRODUCTION TO IMPACT EVALUATION

  3. What is impact and why does it matter? • Write down a definition of impact evaluation • Impact = the (outcome) indicator with the intervention compared to what it would have been in the absence of the intervention • Impact evaluation is the only meaningful way to measure results
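This definition is often formalized in the potential-outcomes notation used in the impact evaluation literature (the notation is an addition here, not taken from the slide), with Y(1) the outcome with the intervention and Y(0) the outcome the same units would have had without it:

```latex
% Impact (for the treated) = observed outcome with the intervention
% minus the counterfactual outcome without it; the second term is never observed.
\Delta = \mathbb{E}\big[\,Y(1) \mid T = 1\,\big] \;-\; \mathbb{E}\big[\,Y(0) \mid T = 1\,\big]
```

The second term is unobservable for the treated units, which is precisely the attribution problem addressed on the following slides.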

  4. Why results? • Results agenda from the early 1990s • USAID experience • Move away from outcome monitoring to impact evaluation to evidence-based development

  5. What is evidence-based development? Allocating resources to programs designed and implemented on the basis of evidence of what works and what doesn’t

  6. Why did the Bangladesh Integrated Nutrition Project (BINP) fail?

  7. Comparison of impact estimates

  8. Summary of theory

  9. The theory of change Right target group for nutritional counselling

  10. The theory of change Knowledge acquired and used

  11. The theory of change The right children are enrolled in the programme

  12. The theory of change Supplementary feeding is supplementary

  13. Participation rates

  14. The attribution problem: factual and counterfactual. Impact varies over time

  15. … and is it sustainable?

  16. What has been the impact of the French Revolution? “It is too early to say” – Zhou Enlai

  17. So where does the counterfactual come from? • Most usual is to use a comparison group of similar people / households / schools / firms…

  18. What do we need to measure impact? [Chart: girls’ secondary enrolment.] The majority of evaluations have just this information … which means we can say absolutely nothing about impact

  19. Before versus after: single difference comparison. Before versus after = 92 – 40 = 52: “scholarships have led to rising schooling of young girls in the project villages”. This ‘before versus after’ approach is outcome monitoring, which has become popular recently. Outcome monitoring has its place, but it is not impact evaluation

  20. Rates of completion of elementary school by male and female students in rural China’s poor areas [chart: share of rural children, 1993 vs 2008]

  21. Post-treatment comparison: single difference = 92 – 84 = 8. But we don’t know if the two groups were similar before… though there are ways of dealing with this (statistical matching = quasi-experimental approaches)

  22. Double difference = (92 – 40) – (84 – 26) = 52 – 58 = –6. Conclusion: longitudinal (panel) data, with a comparison group, allow for the strongest impact evaluation design (though matching is still needed). SO WE NEED BASELINE DATA FROM PROJECT AND COMPARISON AREAS
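A minimal Python sketch of the three estimates, using the hypothetical enrolment figures from the preceding slides (project villages 40 → 92, comparison villages 26 → 84); the numbers are the workshop’s worked example, not real data:

```python
# Hypothetical girls' enrolment rates from the workshop example.
project_before, project_after = 40, 92        # project (treatment) villages
comparison_before, comparison_after = 26, 84  # comparison villages

# Before versus after: outcome monitoring only, not impact evaluation.
before_after = project_after - project_before                      # 92 - 40 = 52

# Ex-post single difference: treatment vs comparison after the project.
single_difference = project_after - comparison_after               # 92 - 84 = 8

# Double difference: change in treatment minus change in comparison.
double_difference = (project_after - project_before) - (comparison_after - comparison_before)
# (92 - 40) - (84 - 26) = 52 - 58 = -6

print(before_after, single_difference, double_difference)
```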

  23. Exercise • What is the objective of your intervention? • Define up to three main outcome indicators for your intervention • Using hypothetical outcome data for one indicator, write down the before/after, comparison/treatment matrix and calculate the • ex-post single difference • before versus after (single difference) • double difference impact estimates

  24. Small n versus large n evaluation designs

  25. Main points so far • Analysis of impact implies a counterfactual comparison • Outcome monitoring is a factual analysis, and so cannot tell us about impact • The counterfactual is most commonly determined by using a comparison group. If you are going to do impact evaluation you need a credible counterfactual using a comparison group – VERY PREFERABLY WITH BASELINE DATA

  26. However… • This applies to ‘large n’ interventions, where there is a large number of units of intervention, e.g. children, households, firms, schools. • Examples of small n are policy reforms and many (but not all) capacity building projects. • Some reforms (e.g. health insurance) can be given large n designs. • ‘Small n’ interventions require either modelling (computable general equilibrium, CGE, models), e.g. for trade and fiscal policy, or qualitative approaches, e.g. the impact of impact assessments. • A theory-based large n study may have elements of small n analysis at some stages of the causal chain

  27. Thoughts on small n • Identify theory of change reflecting multiple players and channels of influence • Stakeholder mapping • Avoiding leading questions • Looking for footprints

  28. Example: channels for donor influence

  29. Exercise • Which elements of your intervention are amenable to a large n impact evaluation design, and which a small n design? • Are there any bits left?

  30. Problems in implementing rigorous impact evaluation: selecting a comparison group • Contagion: other interventions • Spillover effects: the comparison group is affected by the intervention • Selection bias: beneficiaries are different • Ethical and political considerations

  31. The problem of selection bias • Program participants are not chosen at random, but selected through • Program placement • Self-selection • This is a problem if the correlates of selection are also correlated with the outcomes of interest, since those participating would do better (or worse) than others regardless of the intervention

  32. Selection bias from program placement • A program of school improvements is targeted at the poorest schools • Since these schools are in poorer areas, it is likely that students have home and parental characteristics that are associated with lower learning outcomes (e.g. illiteracy, no electricity, child labour) • Hence learning outcomes in project schools will be lower than the average for other schools • The comparison group has to be drawn from a group of schools in similarly deprived areas

  33. Selection bias from self-selection • A community fund is available for community-identified projects • An intended outcome is to build social capital for future community development activities • But those communities with higher degrees of cohesion and social organization (i.e. social capital) are more likely to be able to make proposals for financing • Hence social capital is higher amongst beneficiary communities than non-beneficiaries regardless of the intervention, so a comparison between these two groups will overstate program impact

  34. Examples of selection bias • Hospital delivery in Bangladesh (0.115 vs 0.067) • Secondary education and teenage pregnancy in Zambia • Male circumcision and HIV/AIDS in Africa

  35. HIV/AIDS and circumcision: geographical overlay

  36. Main point There is ‘selection’ in who benefits from nearly all interventions, so we need a comparison group which has the same characteristics as those selected for the intervention.

  37. Discussion • Is selection bias likely for your intervention? Why and how will it affect the attempt to measure impact?

  38. Dealing with selection bias We need experimental or quasi-experimental methods to cope with this; this is what is meant by rigorous impact evaluation • Experimental: randomized control trials (RCTs), commonly used in agricultural research and medical trials, but more widely applicable • Quasi-experimental: propensity score matching, regression discontinuity, the pipeline approach, regressions (including instrumental variables)

  39. Randomization (RCTs) • Randomization addresses the problem of selection bias by the random allocation of the treatment • Randomization may not be at the same level as the unit of intervention • Randomize across schools but measure individual learning outcomes • Randomize across sub-districts but measure village-level outcomes • The fewer units over which you randomize, the higher your standard errors • But you need to randomize across a ‘reasonable number’ of units • At least 30 for a simple randomized design (though possible imbalance is considered a problem for n < 200) • Can be as few as 10 for matched pair randomization
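A minimal sketch of how the assignment itself might be done, randomizing across schools while outcomes would still be measured at the pupil level (the school IDs and the 50/50 split are illustrative assumptions, not taken from the slides):

```python
import random

random.seed(2024)  # fixed seed so the assignment is reproducible and auditable

# Hypothetical list of 30 eligible schools: the unit of assignment.
schools = [f"school_{i:02d}" for i in range(1, 31)]

# Simple randomization: shuffle and split half/half into treatment and control.
random.shuffle(schools)
treatment_schools = sorted(schools[:15])
control_schools = sorted(schools[15:])

print("Treatment:", treatment_schools)
print("Control:  ", control_schools)
```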

  40. Issues in randomization • Randomize across the eligible population, not the whole population • Can randomize across the pipeline • Randomization is no less ethical than any other method with a control group (perhaps more ethical), and any intervention which is not immediately universal in coverage has an untreated population to act as a potential control group • No more costly than other survey-based approaches

  41. Conducting an RCT • Has to be an ex-ante design • Has to be politically feasible, with confidence that program managers will maintain the integrity of the design • Perform a power calculation to determine sample size (and therefore cost) • Adopt a strict randomization protocol • Maintain information on how the randomization was done, on refusals and on ‘cross-overs’ • A, B and A+B designs (factorial designs) • Collect baseline data to: • test the quality of the match • conduct difference-in-differences analysis
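A minimal sketch of the power calculation step, assuming a two-arm, individually randomized design with a minimum detectable effect of 0.3 standard deviations, 5% significance and 80% power (all of these inputs are illustrative assumptions, and a clustered design would need a further design-effect adjustment):

```python
from statsmodels.stats.power import TTestIndPower

# Solve for the sample size per arm given the assumed design parameters.
n_per_arm = TTestIndPower().solve_power(
    effect_size=0.3,  # minimum detectable effect, in standard deviations
    alpha=0.05,       # significance level (two-sided)
    power=0.80,       # desired power
)
print(f"Required sample size per arm: {n_per_arm:.0f}")  # roughly 175 per arm
```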

  42. Exercise • Is any element of your intervention (or indirectly supported activities) amenable to randomization? • What are the unit of assignment and the unit of analysis? • Over how many units will you assign the treatment? • What is the treatment? What is the control?

  43. Quasi-experimental approaches • Possible methods • Propensity score matching • Regression discontinuity • Instrumental variables • Advantage: can be done ex post, and when random assignment not possible • Disadvantage: cannot be assured of absence of selection bias

  44. Propensity score matching • We need a comparison person with the same age, education, religion, etc. • But matching on a single number calculated as a weighted average of these characteristics gives the same result as matching individually on every characteristic – this is the basis of propensity score matching • The weights are given by the ‘participation equation’, that is, a probit equation of whether a person participates in the project or not

  45. Propensity score matching: what you need • Can be based on ex post single difference, though double difference is better • Need a common survey for the treatment and potential comparison groups, or surveys with common sections for the matching variables and outcomes
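A minimal sketch of the mechanics described on the last two slides, assuming a pandas DataFrame `df` with a 0/1 treatment column, the matching covariates, and an ex-post outcome (all column names are hypothetical); the participation equation is a probit, and each treated unit is matched to the nearest comparison unit on the estimated score:

```python
import pandas as pd
import statsmodels.api as sm
from sklearn.neighbors import NearestNeighbors

def psm_single_difference(df, covariates, treatment="treated", outcome="y"):
    """Nearest-neighbour propensity score matching, ex-post single difference."""
    # Participation equation: probit of treatment status on the matching covariates.
    X = sm.add_constant(df[covariates])
    pscore = pd.Series(sm.Probit(df[treatment], X).fit(disp=0).predict(X), index=df.index)

    treated = df[df[treatment] == 1]
    comparison = df[df[treatment] == 0]

    # Match each treated unit to the comparison unit with the closest propensity score.
    nn = NearestNeighbors(n_neighbors=1)
    nn.fit(pscore[comparison.index].values.reshape(-1, 1))
    _, idx = nn.kneighbors(pscore[treated.index].values.reshape(-1, 1))
    matched = comparison.iloc[idx.ravel()]

    # Impact estimate: mean outcome gap between treated units and their matches.
    return treated[outcome].mean() - matched[outcome].mean()
```

With baseline data, the same matching can feed a double difference rather than the ex-post single difference, as the slide recommends.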

  46. Propensity score matching: an example of matching – water supply in Nepal

  47. Regression discontinuity: an example – agricultural input supply program

  48. Naïve impact estimates • Total = income(treatment) – income(comparison) = 9.6 • Agricultural households only = 7.7 • But there is a clear link between net income and land holdings • And it turns out that the program targeted households with at least 1.5 ha of land (you can see this in the graph) • So selection bias is a real issue: the treatment group would have been better off in the absence of the program, so the single difference estimate is upward biased

  49. Regression discontinuity • Where there is a ‘threshold allocation rule’ for program participation, then we can estimate impact by comparing outcomes for those just above and below the threshold (as these groups are very similar) • We can do that by estimating a regression with a dummy for the threshold value (and possibly also a slope dummy) – see graph • In our case the impact estimate is 4.5, which is much less than that from the naïve estimates (less than half)
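A minimal sketch of the regression just described, assuming a pandas DataFrame `df` with hypothetical columns `land` (holdings in hectares) and `income` (net income), and using the 1.5 ha eligibility threshold from the example; the coefficient on the threshold dummy is the discontinuity (impact) estimate:

```python
import statsmodels.formula.api as smf

THRESHOLD_HA = 1.5  # eligibility cut-off in hectares, from the example above

def rd_estimate(df):
    """Sharp regression discontinuity with an intercept dummy and a slope dummy."""
    data = df.copy()
    data["above"] = (data["land"] >= THRESHOLD_HA).astype(int)  # threshold (treatment) dummy
    data["dist"] = data["land"] - THRESHOLD_HA                  # centred running variable
    # Net income on the running variable, the threshold dummy, and their interaction.
    model = smf.ols("income ~ dist + above + dist:above", data=data).fit()
    return model.params["above"]  # jump at the threshold = impact estimate
```

In practice the sample would also be restricted to a bandwidth around the threshold, since the comparison is only credible for households close to 1.5 ha.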

  50. Exercise • What design would you use to establish the impact of your intervention on the outcomes of interest?
