
Implementing an impact evaluation under constraints

Presentation Transcript


  1. Implementing an impact evaluation under constraints Emanuela Galasso (DECRG) PREM Learning Week, May 2nd, 2006

  2. Background: Defining the question Back to terminology: • Evaluation methods reviewed in the morning: the main methods to estimate the counterfactual (what would have happened in the absence of the program). • The objective is to find out whether a program works or not (in order to do a cost-benefit analysis) • In a targeted program, we are likely to be interested in the effect of the program on participants (average treatment effect, e.g. a targeted cash transfer) or in the effect of the availability of the program (intention to treat, e.g. making a service available) • But you might be interested in other parameters (different modes of delivery, who gains and who loses), so clarify the question first
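
A note added here (not in the original slide): in standard potential-outcomes notation, with Y_1 and Y_0 the outcomes with and without the program and D = 1 indicating participation, the two parameters can be written as

```latex
\text{effect on participants (ATT)} = E[\,Y_1 - Y_0 \mid D = 1\,],
\qquad
\text{intention to treat (ITT)} = E[\,Y \mid \text{offered}\,] - E[\,Y \mid \text{not offered}\,]
```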

  3. Background: Defining the question • When individuals self-select into programs (choice) or programs are purposively placed, a simple comparison between participants and non-participants will be biased (selection bias). • The evaluation design aims at obtaining a credible estimate of the counterfactual outcomes in any given context (the problem of attribution, i.e. internal validity)
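
The selection-bias point can be made precise with the same notation (a standard decomposition, added here for reference): the naive participant/non-participant comparison equals the effect on participants plus a bias term that vanishes only if, without the program, participants and non-participants would have had the same outcomes.

```latex
E[Y_1 \mid D=1] - E[Y_0 \mid D=0]
  = \underbrace{E[Y_1 - Y_0 \mid D=1]}_{\text{effect on participants}}
  + \underbrace{E[Y_0 \mid D=1] - E[Y_0 \mid D=0]}_{\text{selection bias}}
```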

  4. Randomization as a tool • Powerful tool for assessing impact: • solves the internal validity problem (assignment is random: all units have the same ex-ante chance of receiving the program) • Easy way to identify impact • Results can be easily communicated • Ideal for pilot programs (does a given intervention work or not?) • Recommended for programs where the role of unobservables is large (e.g. CDD programs, which depend on demand from the community) • Issues of external validity: a small controlled experiment or pilot may not be the same as the program that gets implemented at scale
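
A minimal simulation sketch (illustrative only, with invented numbers) of why random assignment identifies impact: treatment is independent of everything else, so a simple difference in means recovers the true effect.

```python
import numpy as np

# Hypothetical data: true program effect of 5 on an outcome also driven by
# an unobserved characteristic ("ability"); assignment is purely random.
rng = np.random.default_rng(0)
n = 2000
ability = rng.normal(size=n)
treat = rng.integers(0, 2, size=n)          # every unit has the same ex-ante chance
outcome = 50 + 5 * treat + 3 * ability + rng.normal(scale=10, size=n)

# Difference in means and its standard error.
impact = outcome[treat == 1].mean() - outcome[treat == 0].mean()
se = np.sqrt(outcome[treat == 1].var(ddof=1) / (treat == 1).sum()
             + outcome[treat == 0].var(ddof=1) / (treat == 0).sum())
print(f"estimated impact: {impact:.2f} (s.e. {se:.2f})")
```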

  5. Background: What if randomization is not available/feasible? • In order to design an evaluation, it is key to understand how the program was placed and assigned (the assignment mechanism) • Knowing the features of program design and implementation well provides important leads on how to design the evaluation • No single empirical method dominates: it depends on the program/context. Often combining methods helps

  6. Looking inside the “assignment mechanism”: • In designing an evaluation and deciding the most suitable method, think first about the design features of the program being studied, how the program is assigned • Look for sources of exogenous variation: • Eligibility: at the household, community, geographic level. • Geographic phase-in (eligibility and timing) • Possible thresholds of eligibility used for participation

  7. Time constraints • Plan for the evaluation as early as possible • … Still, the program might have to start without allowing time for a baseline survey • Not strictly necessary for a randomized design • Desirable for non-experimental settings (helps clean out time-invariant unobservable differences) • Can still exploit the fact that the program does not start all at once (phase-in) • The project cycle ends before one can expect to measure the impact of the program • Think in advance about intermediate indicators (channels of intervention)
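
Why a baseline helps in non-experimental settings can be written as a difference-in-differences (a standard formula, added here): any time-invariant unobserved gap between the treated (T) and comparison (C) groups cancels out when changes, rather than levels, are compared.

```latex
\widehat{\Delta}_{DiD}
  = \big(\bar{Y}^{\,T}_{\text{follow-up}} - \bar{Y}^{\,T}_{\text{baseline}}\big)
  - \big(\bar{Y}^{\,C}_{\text{follow-up}} - \bar{Y}^{\,C}_{\text{baseline}}\big)
```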

  8. Exploit geographic phase-in over time • Use sequential phase-in of the program over time across areas (‘pipeline comparisons’): • Randomized phase-in: PROGRESA (rural CCT component): 1/3 of the eligible communities had entry delayed by 18 months, 2/3 received the program, and both groups were followed over time • Exploit the expansion along a given route for infrastructure projects (water, transport, communication networks): compare eligible participants on different sides of the project frontier

  9. Exploit phase-in over time • Use sequential phase-in of the program over time across individuals • ‘Pipeline’ comparison between participants and applicants (Jefes program in Argentina) • With the program expanding fast, compare participants to those who had applied but had not yet received the program • Caveat: this relies on assumptions about the timing of treatment
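
A rough sketch of such a pipeline comparison (invented data, not the Jefes evaluation itself): approved applicants who have not yet been phased in serve as the comparison group, under the assumption that the timing of entry is unrelated to outcomes.

```python
import numpy as np

# Invented example: outcomes for current participants vs. approved applicants
# who have not yet entered the program; the true effect is set to 4.
rng = np.random.default_rng(1)
n = 1500
status = rng.choice(["participant", "not_yet_treated"], size=n)
outcome = 30 + 4 * (status == "participant") + rng.normal(scale=8, size=n)

impact = (outcome[status == "participant"].mean()
          - outcome[status == "not_yet_treated"].mean())
print(f"pipeline estimate of the impact: {impact:.2f}")
```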

  10. Exploit exogenous variation around eligibility • Regression discontinuity design • Can draw from a large existing survey… • … or design a carefully selected sample (sample or oversample people/communities just below/above the eligibility threshold, i.e. the near-eligible) • Non-experimental: exogenous variation in eligibility to the program across different geographic areas (Chile Solidario) • RD results closely replicate the experimental results in PROGRESA
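
A simple regression-discontinuity sketch (simulated data; not the PROGRESA or Chile Solidario analysis): units just below the eligibility cutoff get the program, and a local linear regression on each side estimates the jump in outcomes at the threshold.

```python
import numpy as np
import statsmodels.api as sm

# Simulated eligibility score with a cutoff at 0; those below the cutoff
# are eligible, and the true effect at the threshold is 6.
rng = np.random.default_rng(2)
score = rng.uniform(-10, 10, size=3000)
eligible = (score < 0).astype(float)
outcome = 20 + 6 * eligible + 0.5 * score + rng.normal(scale=5, size=3000)

# Local linear regression within a bandwidth around the cutoff,
# with separate slopes on each side.
h = 3.0
w = np.abs(score) < h
X = sm.add_constant(np.column_stack([eligible[w], score[w], eligible[w] * score[w]]))
fit = sm.OLS(outcome[w], X).fit()
print(f"RD estimate at the cutoff: {fit.params[1]:.2f}")
```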

  11. Using existing data (1) • Example: expansion of a large cash transfer (old-age pension) in South Africa (Duflo 2003) • Initially means-tested, it became a universal program in the 1990s; by 1993 it was fully operational in all areas • Question: effect of the pension on child health • Evaluation design: use individual eligibility • Used the existing 1993 LSMS: the key is to have participation questions in the questionnaire • Pension receipt exhibits a discontinuity at age 60 for women and age 65 for men. Use eligibility as an instrument for receipt
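
A stylized version of the eligibility-as-instrument idea (invented numbers; the actual Duflo 2003 analysis is much richer): the jump in the outcome at the eligibility age, divided by the jump in pension receipt, gives a Wald/IV estimate of the effect of receipt.

```python
import numpy as np

# Invented data: eligibility at age 60, imperfect take-up, and a child-health
# index that responds to receipt with a true effect of 0.3.
rng = np.random.default_rng(3)
n = 5000
age = rng.integers(50, 71, size=n)
eligible = (age >= 60).astype(float)
receipt = (rng.uniform(size=n) < 0.1 + 0.7 * eligible).astype(float)
health = 0.2 + 0.3 * receipt + rng.normal(scale=0.5, size=n)

# Wald / IV estimate: reduced form divided by first stage.
reduced_form = health[eligible == 1].mean() - health[eligible == 0].mean()
first_stage = receipt[eligible == 1].mean() - receipt[eligible == 0].mean()
print(f"IV estimate of the effect of receipt: {reduced_form / first_stage:.2f}")
```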

  12. Piggybacking on a planned survey • Lesson from SA: it pays to have indicators of program participation in a national household survey. Plan ahead to include questions in a forthcoming survey • Possibly oversampling individual participants or participating areas if needed (more later on sampling)

  13. Using existing data (2) • Even in the absence of a participation question, a large, high-quality survey can be used to construct a comparison group (Trabajar, Argentina): • Administer the same questionnaire to participants drawn from the administrative data • Use the large survey to draw a matched non-participant sample • The key is the quality of the data: it could proxy for a large set of covariates, including local connections
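
A rough matching sketch (hypothetical variable names and data, not the Trabajar analysis): estimate a propensity score from the covariates available in the large survey and draw, for each participant, the nearest non-participant.

```python
import numpy as np
import statsmodels.api as sm

# Invented data: three covariates, participation driven by the first one,
# and an outcome with a true participation effect of 2.
rng = np.random.default_rng(4)
n = 4000
x = rng.normal(size=(n, 3))
participant = (rng.uniform(size=n) < 1 / (1 + np.exp(-x[:, 0]))).astype(int)
outcome = 10 + 2 * participant + x @ np.array([1.5, 0.5, 0.3]) + rng.normal(size=n)

# Propensity score from a logit, then one-to-one nearest-neighbour matching.
pscore = sm.Logit(participant, sm.add_constant(x)).fit(disp=0).predict(sm.add_constant(x))
treated = np.where(participant == 1)[0]
controls = np.where(participant == 0)[0]
matches = controls[np.argmin(np.abs(pscore[treated, None] - pscore[None, controls]), axis=1)]
att = (outcome[treated] - outcome[matches]).mean()
print(f"matched estimate of the effect on participants: {att:.2f}")
```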

  14. Budget constraints: How large should a sample be? • Important: you do not necessarily need a nationally representative household survey, but a carefully designed sample • Build into the sample only comparable participants and non-participants • Use power calculations to set the sample size (see the sketch below)
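
A back-of-the-envelope power calculation (a sketch only; real designs should also account for clustering and take-up): the approximate sample size per group needed to detect an effect of a given size, expressed in standard deviations of the outcome.

```python
from scipy.stats import norm

def n_per_group(effect_sd, alpha=0.05, power=0.80):
    """Approximate n per arm for a two-sided difference-in-means test."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return 2 * (z_alpha + z_beta) ** 2 / effect_sd ** 2

print(round(n_per_group(0.20)))   # roughly 390-400 per group for a 0.2 s.d. effect
```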

  15. Budget constraints: How large should a sample be? • Two non-experimental examples: • Chile Solidario (geographic variation in eligibility): use participants and comparable non-participants and follow only a subset over time • Oportunidades in Mexico (urban component of Progresa): sample non-intervention communities/blocks that look as similar as possible to the participant ones based on census data

  16. Budget constraints: Sub-groups? • What are the key dimensions along which one would like to disaggregate the analysis? • Rural/urban • Rich/poor • Levels of education… • For each comparison one needs a sufficient sample size in each subgroup so that one can test differences in the estimated effect across subgroups.
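
Continuing the hypothetical n_per_group sketch above: each subgroup needs enough sample on its own, so disaggregating the analysis multiplies the required survey size roughly by the number of subgroups (and testing differences in effects across subgroups is more demanding still).

```python
# Each rural/urban (or rich/poor, etc.) cell needs its own adequately
# powered sample, so two subgroups roughly double the total sample.
for effect_sd in (0.20, 0.30):
    print(f"{effect_sd} s.d. effect: ~{round(n_per_group(effect_sd))} per group, per subgroup")
```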
