
An introduction to Impact Evaluation




Presentation Transcript

  1. An introduction to Impact Evaluation Markus Goldstein Poverty Reduction Group The World Bank

  2. My question is: Are we making an impact?

  3. 2 parts • Impact evaluation methods • Impact evaluation practicalities: IE and the project cycle • Use rural project examples

  4. Outline - methods • Monitoring and impact evaluation • Why do impact evaluation • Why we need a comparison group • Methods for constructing the comparison group • When to do an impact evaluation

  5. Monitoring and IE

  6. Monitoring and IE • IMPACTS: program impacts are confounded by local, national, and global effects; difficulty of showing causality • OUTCOMES: users meet service delivery • OUTPUTS: gov’t/program production function • INPUTS

  7. Impact evaluation • It goes by many names (e.g. Rossi et al. call it impact assessment), so know the concept, not just the label • Impact is the difference between outcomes with the program and without it • The goal of impact evaluation is to measure this difference in a way that attributes it to the program, and only the program

  8. Why it matters • We want to know if the program had an impact and the average size of that impact • Understand if policies work • Justification for program (big $$) • Scale up or not – did it work? • Meta-analyses – learning from others • (with cost data) understand the net benefits of the program • Understand the distribution of gains and losses

  9. What we need  The difference in outcomes with the program versus without the program – for the same unit of analysis (e.g. individual) • Problem: individuals only have one existence • Hence, we have a problem of a missing counterfactual – a problem of missing data

  10. Thinking about the counterfactual • Why not compare individuals before and after (the reflexive)? • The rest of the world moves on and you are not sure what was caused by the program and what by the rest of the world • We need a control/comparison group that will allow us to attribute any change in the “treatment” group to the program (causality)

  11. Comparison group issues • Two central problems: • Programs are targeted  program areas will differ in observable and unobservable ways precisely because the program intended this • Individual participation is (usually) voluntary  participants will differ from non-participants in observable and unobservable ways • Hence, a comparison of participants and an arbitrary group of non-participants can lead to heavily biased results

  12. Example: providing fertilizer to farmers • The intervention: provide fertilizer to farmers in a poor region of a country (call it region A) • Program targets poor areas • Farmers have to enroll at the local extension office to receive the fertilizer • Starts in 2002, ends in 2004, we have data on yields for farmers in the poor region and another region (region B) for both years • We observe that the farmers we provide fertilizer to have a decrease in yields from 2002 to 2004

  13. Did the program not work? • Further study reveals there was a national drought, and everyone’s yields went down (failure of the reflexive comparison) • We compare the farmers in the program region to those in another region. We find that our “treatment” farmers have a larger decline than those in region B. Did the program have a negative impact? • Not necessarily (program placement) • Farmers in region B have better quality soil (unobservable) • Farmers in the other region have more irrigation, which is key in this drought year (observable)

  14. OK, so let’s compare the farmers in region A • We compare “treatment” farmers with their neighbors. We think the soil is roughly the same. • Let’s say we observe that treatment farmers’ yields decline by less than comparison farmers. Did the program work? • Not necessarily. Farmers who went to register with the program may have more ability, and thus could manage the drought better than their neighbors, but the fertilizer was irrelevant. (individual unobservables) • Let’s say we observe no difference between the two groups. Did the program not work? • Not necessarily. What little rain there was caused the fertilizer to run off onto the neighbors’ fields. (spillover/contamination)

  15. The comparison group • In the end, with these naïve comparisons, we cannot tell whether the program had an impact  We need a comparison group that is as similar as possible, in observable and unobservable dimensions, to those receiving the program, and one that will not receive spillover benefits.

  16. How to construct a comparison group – building the counterfactual • Randomization • Matching • Difference-in-Difference • Instrumental variables • Regression discontinuity

  17. 1. Randomization • Individuals/communities/firms are randomly assigned into participation • Counterfactual: the randomized-out group • Advantages: • Often referred to as the “gold standard”: by design, selection bias is zero on average and the mean impact is revealed • Perceived as a fair process of allocation with limited resources • Disadvantages: • Ethical issues, political constraints • Internal validity (exogeneity): people might not comply with the assignment (selective non-compliance) • Unable to estimate entry effect • External validity (generalizability): controlled experiments are usually run as small-scale pilots, and it is difficult to extrapolate the results to a larger population

  18. Randomization in our example… • Simple answer: randomize farmers within a community to receive fertilizer... • Potential problems? • Run-off (contamination) so control for this • Take-up (what question are we answering)
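The random assignment described above can be sketched in a few lines. This is a minimal illustration with hypothetical farmer IDs and a made-up assignment procedure, not the program's actual enrollment process:

```python
import random

# Hypothetical farmer IDs -- in practice these would come from the
# program's enrollment list at the local extension office.
farmers = [f"farmer_{i}" for i in range(100)]

random.seed(42)  # fix the seed so the assignment is reproducible
random.shuffle(farmers)

# Randomly assign half the farmers to receive fertilizer (treatment)
# and half to the randomized-out comparison group (control).
treatment = set(farmers[:50])
control = set(farmers[50:])

def mean(xs):
    return sum(xs) / len(xs)

def impact(yields):
    """With random assignment, the simple difference in mean yields
    between treatment and control estimates the program's impact.
    `yields` maps farmer_id -> post-program yield."""
    return (mean([yields[f] for f in treatment])
            - mean([yields[f] for f in control]))
```

Because assignment is random, selection bias is zero on average and the difference in means is an unbiased impact estimate; the run-off and take-up caveats on this slide still apply.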

  19. 2. Matching • Match participants with non-participants from a larger survey • Counterfactual: matched comparison group • Each program participant is paired with one or more non-participants who are similar based on observable characteristics • Assumes that, conditional on the set of observables, there is no selection bias based on unobserved heterogeneity • When the set of variables to match on is large, often match on a summary statistic: the probability of participation as a function of the observables (the propensity score)

  20. 2. Matching • Advantages: • Does not require randomization, nor baseline (pre-intervention) data • Disadvantages: • Strong identification assumptions • Requires very good quality data: need to control for all factors that influence program placement • Requires a sufficiently large sample size to generate the comparison group

  21. Matching in our example… • Using statistical techniques, we match a group of non-participants with participants using variables like gender, household size, education, experience, land size (rainfall to control for drought), irrigation (as many observable characteristics not affected by the fertilizer as possible)

  22. Matching in our example… 2 scenarios • Scenario 1: We show up afterwards, and we can only match (within region) those who got fertilizer with those who did not. Problem? • Problem: selection on expected gains and/or ability (unobservable) • Scenario 2: The program is allocated based on historical crop choice and land size. We show up afterwards and match those eligible in region A with those in region B. Problem? • Problems: the same issues of individual unobservables, but lessened because we compare eligibles to potential eligibles • but unobservables now differ across regions
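The matching step itself can be sketched as nearest-neighbour matching on the propensity score. The scores would normally come from, e.g., a logit of participation on the observables listed on slide 21; the function and data layout here are illustrative, not a full matching estimator:

```python
def match_and_estimate(participants, nonparticipants):
    """Nearest-neighbour propensity-score matching.
    Each list holds (propensity_score, outcome) pairs; returns the
    average treatment effect on the treated (ATT)."""
    diffs = []
    for score, outcome in participants:
        # Pair each participant with the non-participant whose
        # propensity score is closest (matching with replacement).
        _, matched_outcome = min(nonparticipants,
                                 key=lambda c: abs(c[0] - score))
        diffs.append(outcome - matched_outcome)
    return sum(diffs) / len(diffs)
```

The identification assumption from slide 19 carries over: this only recovers the program's impact if, conditional on the observables behind the score, there is no remaining selection on unobservables.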

  23. An extension of matching: pipeline comparisons • Idea: compare those just about to get an intervention with those getting it now • Assumption: the stopping point of the intervention does not separate two fundamentally different populations • Example: extending irrigation networks

  24. 3. Difference-in-difference • Observations over time: compare observed changes in the outcomes for a sample of participants and non-participants • Identification assumption: the selection bias is time-invariant (‘parallel trends’ in the absence of the program) • Counterfactual: changes over time for the non-participants • Constraint: requires at least two cross-sections of data, pre-program and post-program, on participants and non-participants • Need to think about the evaluation ex-ante, before the program • Can in principle be combined with matching to adjust for pre-treatment differences that affect the growth rate

  25. Implementing difference-in-differences in our example… • Some arbitrary comparison group • Matched diff-in-diff • Randomized diff-in-diff • These are in order of more problems  fewer problems; think about this as we look at it graphically
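The double difference itself is simple arithmetic. A minimal sketch, with made-up yield figures rather than real program data:

```python
def diff_in_diff(t_pre, t_post, c_pre, c_post):
    """Difference-in-difference estimate of program impact.
    Each argument is a list of yields for one group in one period.
    Valid under the 'parallel trends' assumption: absent the program,
    both groups' yields would have moved by the same amount."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(t_post) - mean(t_pre)) - (mean(c_post) - mean(c_pre))
```

For instance, if treatment farmers' yields fall from 10 to 8 while comparison farmers' yields fall from 10 to 5, the estimated impact is +3 even though the drought lowered everyone's yields; the drought (an additive, time-invariant-bias-compatible shock common to both groups) is differenced out.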

  26. As long as the bias is additive and time-invariant, diff-in-diff will work ….

  27. What if the observed changes over time are affected?

  28. 4. Instrumental Variables • Identify variables that affect participation in the program, but not outcomes conditional on participation (exclusion restriction) • Counterfactual: the causal effect is identified from the exogenous variation of the instrument • Advantages: • Does not require the exogeneity assumption of matching • Disadvantages: • The estimated effect is local: IV identifies the effect of the program only for the sub-population induced to take up the program by the instrument • Therefore different instruments identify different parameters, and you end up with different magnitudes of the estimated effects • The validity of the instrument can be questioned, but cannot be tested

  29. IV in our example • It turns out that outreach was done randomly…so the time/intake of farmers into the program is essentially random. • We can use this as an instrument • Problems? • Is it really random? (roads, etc)
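With a binary instrument like random outreach, the IV estimate reduces to the Wald ratio: the difference in mean yields between reached and unreached farmers, divided by the difference in their take-up rates. A sketch with hypothetical data vectors:

```python
def wald_iv(z, d, y):
    """Wald (binary-instrument IV) estimator.
    z: instrument (1 = farmer reached by the random outreach),
    d: actual take-up of fertilizer (1 = enrolled),
    y: observed yield.
    Assumes the instrument shifts take-up (denominator nonzero).
    Returns the local average treatment effect (LATE) for compliers."""
    mean = lambda xs: sum(xs) / len(xs)
    y1 = mean([yi for zi, yi in zip(z, y) if zi == 1])
    y0 = mean([yi for zi, yi in zip(z, y) if zi == 0])
    d1 = mean([di for zi, di in zip(z, d) if zi == 1])
    d0 = mean([di for zi, di in zip(z, d) if zi == 0])
    return (y1 - y0) / (d1 - d0)
```

Note the "local" caveat from slide 28: this recovers the effect only for farmers whose enrollment was moved by the outreach, and it is only valid if outreach really was random.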

  30. 5. Regression discontinuity design • Exploit the rule generating assignment into a program, e.g. given only to individuals above a given threshold • Assumes a discontinuity in participation but not in counterfactual outcomes • Counterfactual: individuals just below the cut-off who did not participate • Advantages: • Identification built into the program design • Delivers the marginal gain from the program around the eligibility cut-off point – important for program expansion • Disadvantages: • The threshold has to be applied in practice, and individuals should not be able to manipulate the score used by the program to become eligible

  31. Example from Buddelmeyer and Skoufias, 2005

  32. RDD in our example… • Back to the eligibility criteria: land size and crop history • We use those right below the cut-off and compare them with those right above… • Problems: • How well enforced was the rule? • Can the rule be manipulated? • Local effect
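A sharp-RDD comparison can be sketched as a difference in mean yields within a bandwidth around the cut-off. The eligibility rule here (land size below a cut-off qualifies, matching a program targeted at poor farmers) and all numbers are assumptions for illustration; in practice one would also check for bunching of land sizes just below the cut-off (manipulation) and vary the bandwidth:

```python
def rdd_estimate(land_sizes, yields, cutoff, bandwidth):
    """Sharp regression-discontinuity sketch.
    Farmers with land size below `cutoff` are assumed eligible.
    Compares mean yields just below the cut-off (treated) with
    mean yields just above it (untreated counterfactual)."""
    mean = lambda xs: sum(xs) / len(xs)
    below = [y for x, y in zip(land_sizes, yields)
             if cutoff - bandwidth <= x < cutoff]
    above = [y for x, y in zip(land_sizes, yields)
             if cutoff <= x <= cutoff + bandwidth]
    return mean(below) - mean(above)
```

As the slide notes, the estimate is local: it speaks only to farmers near the eligibility threshold, not to the whole program population.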

  33. Discussion example: building a control group for irrigation • Scenario: we have a project to extend existing reaches and build some new canals • An initial analysis shows that farmers who are newly irrigated have increased yields… was the project a success? • What is the evaluation question? • What is a logical comparison group and method?

  34. Investment operation vs adjustment/budget support • Project • Maybe evaluate all, but unlikely • Pick subcomponents • Adjustment/budget support • Build a strong M&E unit • Impact evaluation designed by govt • Evaluate policy reform pilots • e.g. health insurance pilot, P4P, tariff changes • Anything economy wide ≠ impact evaluation

  35. Prioritizing for Impact Evaluation • It is not cheap – relative to monitoring • Possible prioritization criteria: • Don’t know if the policy is effective • e.g. conditional cash transfers • Politics • e.g. Argentina workfare program • It’s a lot of money • Note that 2 & 3 are variants of not “knowing” whether the policy works in this context

  36. Summing up: Methods • No clear “gold standard” in reality – do what works best in the context • Watch for unobservables, but don’t forget observables • Be flexible, be creative – use the context • IE requires good monitoring, and monitoring will help you understand the effect size

  37. Impact Evaluation and the Project Cycle

  38. Objective of this part of the presentation • Walk you through what it takes to do an impact evaluation for your project from Identification to ICR • Persuade you that impact evaluation will add value to your project

  39. We will talk about… • General Principles • In the context of 3 project periods: • Evaluation activities – the core issues for evaluation design and implementation, and • Housekeeping activities—procedural, administrative and financial management issues • Where to go for assistance

  40. Some general principles • Government ownership as a whole—what matters is institutional buy-in so that the results get used • Relevance and applicability—asking the right questions • Flexibility and adaptability • Horizon matters

  41. Ownership • IE can provide one avenue to build institutional capacity and a culture of managing-by-results – so the IE should be as widely owned within gov’t as possible • Agree on a dissemination plan to maximize use of results for policy development. • Identify entry points in project and policy cycles • midpoint and closing, for project; • sector reporting, CGs, MTEF, budget, for WB • Budget cycles, policy reviews for gov’t • Use partnerships with local academics to build local capacity for impact evaluation.

  42. Relevance and Applicability • For an evaluation to be relevant, it must be designed to respond to the policy questions that are of importance. • Clarifying early what will be learned, and designing the evaluation to that end, will go some way to ensure that the recommendations of the evaluation feed into policy making. • Make sure to think about unintended consequences (e.g. export crop promotion shifting the intrahousehold allocation of power, or S. Africa pensions) – qualitative and interdisciplinary perspectives are key here

  43. Flexibility and adaptability • The evaluation must be tailored to the specific project and adapted to the specific institutional context. • The project design must be flexible to secure our ability to learn in a structured manner, feed evaluation results back into the project, and change the project mid-course to improve project end results. • This can be broad project redesign or a push in new directions, e.g. feeding into nutritional targeting design. This is an important point: in the past, projects have been penalized for effecting mid-course changes in project design. Now we want to make change part of the project design.

  44. Horizon matters • The time it takes to achieve results is an important consideration for timing the evaluation. Conversely, the timing of the evaluation will determine which outcomes should be focused on. • Early evaluations should focus on outcomes that are quick to show change • For long-term outcomes, evaluations may need to span beyond the project cycle, e.g. the Indonesia school building project • Think through how things are expected to change over time and focus on what is within the time horizon for the evaluation • Do not confuse the importance of an outcome with the time it takes for it to change—some important outcomes are obtained instantaneously! But don’t be afraid to look at intermediate outcomes either

  45. Stage 1: Identification to PCN

  46. Get an Early Start How do you get started? • Get help and access to resources: contact the person in your region or sector responsible for impact evaluation and/or the Thematic Group on Impact Evaluation • Define the timing for the various steps of the evaluation to ensure you have enough lead time for preparatory activities (e.g. the baseline goes to the field before program activities start) • The evaluation will require support from a range of policy-makers: start building and maintaining a constituency, dialogue with relevant actors in government, build a broad base of support, and include stakeholders

  47. Build the Team • Select impact evaluation team and define responsibilities of: • program managers (government), • WB project team, and other donors, • lead evaluator (impact evaluation specialist), • local research/evaluation team, and • data collection agency or firm Selection of lead evaluator is critical for ensuring quality of product, and so is the capacity of the data collection agency • Partner with local researchers and research institutes to build local capacity

  48. Shift Paradigm • From a project design based on “we know what’s best” • To project design based on the notion that “we can learn what’s best in this context, and adapt to new knowledge as needed” Work iteratively: • Discuss what the team knows and what it needs to learn–the questions for the evaluation—to deliver on project objectives • Discuss translating this into a feasible project design • Figure out what questions can feasibly be addressed • Housekeeping: Include these first thoughts in a paragraph in the PCN • e.g. ARV evaluation – funding constraints shifted radically, quickly – design changed, and changed again

  49. Stage 2: Preparation through appraisal

  50. Define project development objectives and results framework • This activity • clarifies the results chain (logic of impacts) for the project, • identifies the outcomes of interest and the indicators best suited to measure changes in those outcomes, and • establishes the expected time horizon for changes in those outcomes. • This will provide the lead evaluator with the project-specific variables that must be included in the survey questionnaire and a notion of timing for scheduling data collection.
