
Impact Evaluation Methods Randomization and Causal Inference



  1. Impact Evaluation Methods: Randomization and Causal Inference. Slides by Paul J. Gertler & Sebastian Martinez

  2. Motivation
  • “Traditional” M&E:
    • Is the program being implemented as designed?
    • Could the operations be more efficient?
    • Are the benefits getting to those intended?
  • Monitoring trends:
    • Are indicators moving in the right direction?
    • → NO inherent causality
  • Impact evaluation:
    • What was the effect of the program on outcomes?
    • Because of the program, are people better off?
    • What would happen if we changed the program?
    • → Causality

  3. Monitoring vs. Impact Evaluation

  | Policy Intervention | Monitoring | Impact Evaluation |
  |---|---|---|
  | Increase access and quality in early child education (construction, feeding, quality) | New classrooms; SES of students; # of meals; use of curriculum | Increased attendance; health/growth; cognitive development |
  | Improve learning in science and math in high school (upgrade science laboratories, training of instructors) | # equipped labs; # trained instructors; lab attendance & use | Learning; labor market; university enrollment |
  | Improve quality of instruction in higher education (teacher training, online courses) | # of training sessions; # of internet terminals | Learning; attendance/drop-out; labor market |

  4. Motivation
  • The objective in evaluation is to estimate the CAUSAL effect of intervention X on outcome Y
    • What is the effect of a cash transfer on household consumption?
  • For causal inference we must understand the data generation process
  • For impact evaluation, this means understanding the behavioral process that generates the data
    • i.e., how benefits are assigned

  5. Causation versus Correlation
  • Recall: correlation is NOT causation
    • Correlation is a necessary but not sufficient condition for causation
  • Correlation: X and Y are related
    • A change in X is related to a change in Y
    • And… a change in Y is related to a change in X
  • Causation: if we change X, how much does Y change?
    • A change in X produces a change in Y
    • Not necessarily the other way around

  6. Causation versus Correlation
  • Three criteria for causation:
    • The independent variable precedes the dependent variable
    • The independent variable is related to the dependent variable
    • There are no third variables that could explain why the independent variable is related to the dependent variable
  • External validity
    • Generalizability: causal inferences generalize outside the sample population or setting

  7. Motivation
  • The word “cause” is not in the vocabulary of standard probability theory
  • Probability theory: two events are mutually correlated, or dependent → if we find one, we can expect to encounter the other
    • Example: age and income
  • For impact evaluation, we supplement the language of probability with a vocabulary for causality

  8. Statistical Analysis & Impact Evaluation
  • Statistical analysis: typically involves inferring the causal relationship between X and Y from observational data
    • Many challenges & complex statistics
  • Impact evaluation:
    • Retrospective: same challenges as statistical analysis
    • Prospective: we generate the data ourselves through the program’s design → evaluation design
      • This makes things much easier!

  9. How to assess impact
  • What is the effect of a cash transfer on household consumption?
  • Formally, program impact is:
    • α = (Y | P=1) - (Y | P=0)
  • Compare the same individual with & without the program at the same point in time
  • So what’s the problem?
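To make the α = (Y | P=1) - (Y | P=0) notation concrete, here is a minimal Python sketch; it is not from the slides, and all numbers and variable names are illustrative. It shows why the formula cannot be computed directly: only one of the two potential outcomes is ever observed for a given household.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical potential outcomes: consumption per capita for each
# household without the program (Y | P=0) and with it (Y | P=1).
y0 = rng.normal(loc=150, scale=30, size=n)
y1 = y0 + 25  # built-in true impact of 25

# True program impact: alpha = (Y | P=1) - (Y | P=0), averaged over households.
print(f"true average impact: {(y1 - y0).mean():.1f}")

# The evaluation problem: each household is either in the program or not,
# so we observe y1 OR y0 for any given household, never both.
in_program = rng.random(n) < 0.5
y_observed = np.where(in_program, y1, y0)  # the only data we ever see
```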

  10. Solving the evaluation problem
  • Problem: we never observe the same individual with and without the program at the same point in time
  • We need to estimate what would have happened to the beneficiary if he or she had not received benefits
  • Counterfactual: what would have happened without the program
  • The difference between the treated observation and the counterfactual is the estimated impact

  11. Estimate effect of X on Y
  • Compare the same individual with & without treatment at the same point in time (counterfactual)
  • Program impact is outcome with program minus outcome without program
    • Example: with treatment, sick 2 days; without treatment, sick 10 days
    • Impact = 2 - 10 = -8 days sick!

  12. Finding a good counterfactual
  • The treated observation and the counterfactual:
    • have identical factors/characteristics, except for benefiting from the intervention
    • admit no other explanations for differences in outcomes between them
  • The only reason for the difference in outcomes is the intervention

  13. Measuring Impact
  • Tool belt of impact evaluation design options:
    • Randomized experiments
    • Quasi-experiments:
      • Regression discontinuity
      • Difference-in-differences (panel data)
      • Other (instrumental variables, matching, etc.)
  • In all cases, these will involve knowing the rule for assigning treatment

  14. Choosing your design
  • For impact evaluation, we will identify the “best” possible design given the operational context
  • The best possible design is the one that has the fewest risks of contamination:
    • Omitted variables (biased estimates)
    • Selection (results not generalizable)

  15. Case Study: Effect of cash transfers on consumption
  • Estimate the impact of a cash transfer on consumption per capita
  • Make sure:
    • The cash transfer comes before the change in consumption
    • The cash transfer is correlated with consumption
    • The cash transfer is the only thing changing consumption
  • Example based on Oportunidades

  16. Oportunidades
  • National anti-poverty program in Mexico (1997)
    • Cash transfers and in-kind benefits, conditional on school attendance and health care visits
    • Transfer given preferably to the mother of beneficiary children
  • Large program: 5 million beneficiary households in 2004
  • Large transfers, capped at:
    • $95 USD for households with children through junior high
    • $159 USD for households with children in high school

  17. Oportunidades Evaluation
  • Phasing in of the intervention:
    • 50,000 eligible rural communities
    • Random sample of 506 eligible communities in 7 states = the evaluation sample
  • Random assignment of benefits by community:
    • 320 treatment communities (14,446 households): first transfers distributed April 1998
    • 186 control communities (9,630 households): first transfers November 1999

  18. Oportunidades Example

  19. Common Counterfeit Counterfactuals
  • 1. Before and after:
    • 2005: sick 2 days; 2007: sick 15 days
    • “Impact” = 15 - 2 = 13 more days sick?
  • 2. Enrolled / not enrolled:
    • Enrolled: sick 2 days; not enrolled: sick 1 day
    • “Impact” = 2 - 1 = +1 day sick?

  20. “Counterfeit” Counterfactual No. 1: Before and After
  • Assume we have data on:
    • Treatment households before the cash transfer
    • Treatment households after the cash transfer
  • Estimate the “impact” of the cash transfer on household consumption:
    • Compare consumption per capita before the intervention to consumption per capita after the intervention
    • The difference in consumption per capita between the two periods is the estimated “treatment” effect

  21. Case 1: Before and After
  • Compare CPC (consumption per capita) before and after the intervention:
    • α_i = (CPC_{i,t} | T=1) - (CPC_{i,t-1} | T=0)
  • Estimate of the counterfactual:
    • (CPC_{i,t} | T=0) = (CPC_{i,t-1} | T=0)
  • “Impact” = A - B
  • [Figure: CPC over time, with B the observed level before the program at t-1 and A the observed level after at t]

  22. Case 1: Before and After

  23. Case 1: Before and After
  • Compare CPC before and after the intervention:
    • α_i = (CPC_{i,t} | T=1) - (CPC_{i,t-1} | T=0)
  • Estimate of the counterfactual:
    • (CPC_{i,t} | T=0) = (CPC_{i,t-1} | T=0)
  • “Impact” = A - B
  • Does not control for time-varying factors:
    • Recession: impact = A - C
    • Boom: impact = A - D
  • [Figure: CPC over time; B is the observed level at t-1, A the observed level at t, and C and D the counterfactual levels at t under a recession or a boom]
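The pitfall is easy to see in a small simulation. This is a hedged sketch, not slide material: the consumption levels, program effect, and recession shock below are all made up.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

# Hypothetical panel of consumption per capita (CPC) for treated households.
cpc_before = rng.normal(150, 30, size=n)  # CPC_{i,t-1}, point B
true_impact = 25                          # what the program actually adds
recession = -15                           # time-varying shock between t-1 and t
cpc_after = cpc_before + true_impact + recession + rng.normal(0, 5, size=n)  # point A

# Before-after "impact" estimate: A - B in the slide's notation.
estimate = cpc_after.mean() - cpc_before.mean()
print(f"before-after estimate: {estimate:.1f}  (true impact: {true_impact})")
# Prints ~10, not 25: the recession is wrongly folded into the "impact".
```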

  24. “Counterfeit” Counterfactual No. 2: Enrolled / Not Enrolled
  • Voluntary inscription into the program
  • Assume we have a cross-section of post-intervention data on:
    • Households that did not enroll
    • Households that enrolled
  • Estimate the “impact” of the cash transfer on household consumption:
    • Compare consumption per capita of those who did not enroll to consumption per capita of those who enrolled
    • The difference in consumption per capita between the two groups is the estimated “treatment” effect

  25. Case 2: Enrolled/Not Enrolled

  26. Those who did not enroll….
  • Impact estimate: α_i = (Y_{i,t} | P=1) - (Y_{j,t} | P=0)
  • Counterfactual: (Y_{j,t} | P=0) ≠ (Y_{i,t} | P=0)
  • Examples of such comparison groups:
    • Those who chose not to enroll in the program
    • Those who were not offered the program
    • e.g., in a conditional cash transfer or a job training program
  • We cannot control for all the reasons why some chose to sign up and others didn’t
    • Those reasons could be correlated with outcomes
    • We can control for observables…
    • …but we are still left with the unobservables
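A short simulation of this selection problem; the unobservable ("motivation"), the coefficients, and the enrollment rule below are all hypothetical, chosen only to show the direction of the bias.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000

# Unobservable that raises consumption AND makes enrollment more likely.
motivation = rng.normal(0, 1, size=n)
enroll_prob = 1 / (1 + np.exp(-motivation))  # self-selection into the program
enrolled = rng.random(n) < enroll_prob

true_impact = 25
y = 150 + 20 * motivation + true_impact * enrolled + rng.normal(0, 10, size=n)

# Enrolled vs. not-enrolled comparison mixes the impact with selection bias.
naive = y[enrolled].mean() - y[~enrolled].mean()
print(f"enrolled/not-enrolled estimate: {naive:.1f}  (true impact: {true_impact})")
# Overstates the impact: the enrolled were better off to begin with.
```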

  27. Impact Evaluation Example: Two Counterfeit Counterfactuals
  • What is going on? Which of these do we believe?
  • Problem with before-after:
    • Cannot control for other time-varying factors
  • Problem with enrolled/not enrolled:
    • Do not know why the treated are treated and the others are not

  28. Solution to the Counterfeit Counterfactual
  • Observe Y with treatment: sick 2 days
  • ESTIMATE Y without treatment from a comparison group: sick 10 days
  • Impact = 2 - 10 = -8 days sick!
  • On AVERAGE, a randomly assigned comparison group is a good counterfactual for the treated group
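A minimal sketch of why this works, using the same kind of made-up potential outcomes as before: when treatment is assigned by lottery, the control group's average outcome is a valid estimate of the treated group's missing counterfactual.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000

y0 = rng.normal(150, 30, size=n)  # outcome without the program
y1 = y0 + 25                      # outcome with the program (true impact 25)

# Lottery: treatment is independent of the potential outcomes.
treated = rng.permutation(n) < n // 2

# The difference in observed means recovers the true impact on average,
# because E[y0 | control] = E[y0 | treated] under randomization.
impact = y1[treated].mean() - y0[~treated].mean()
print(f"estimated impact: {impact:.1f}  (true impact: 25)")
```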

  29. Possible Solutions…
  • We need to understand the data generation process:
    • How beneficiaries are selected and how benefits are assigned
  • Guarantee comparability of treatment and control groups, so the ONLY difference between them is the intervention

  30. Measuring Impact
  • Experimental design / randomization
  • Quasi-experiments:
    • Regression discontinuity
    • Double differences (diff-in-diff)
    • Other options

  31. Choosing the methodology…
  • Choose the most robust strategy that fits the operational context
  • Use program budget and capacity constraints to choose a design, e.g. a pipeline:
    • The universe of eligible individuals is typically larger than the resources available at a single point in time
    • The fairest and most transparent way to assign benefits may be to give everyone an equal chance of participating → randomization

  32. Randomization
  • The “gold standard” in impact evaluation
  • Give each eligible unit the same chance of receiving treatment:
    • Lottery for who receives the benefit
    • Lottery for who receives the benefit first

  33. [Diagram: two stages of randomization]
  • Randomization from the population into the evaluation sample → External validity (sample)
  • Randomization of the sample into treatment and control → Internal validity (identification)

  34. External & Internal Validity
  • The purpose of the first stage is to ensure that the results in the sample represent the results in the population, within a defined level of sampling error (external validity)
  • The purpose of the second stage is to ensure that the observed effect on the dependent variable is due to the treatment rather than to other confounding factors (internal validity)
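The two stages can be sketched in a few lines of Python. The community counts match the Oportunidades example (50,000 eligible, 506 sampled, 320 treatment / 186 control), but the slides do not describe the actual sampling procedure used, so treat this purely as an illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Universe of eligible communities (IDs are placeholders).
universe = np.arange(50_000)

# Stage 1 - random SAMPLE into the evaluation (external validity):
sample = rng.choice(universe, size=506, replace=False)

# Stage 2 - random ASSIGNMENT of sampled communities to treatment
# and control (internal validity):
shuffled = rng.permutation(sample)
treatment, control = shuffled[:320], shuffled[320:]
print(len(treatment), len(control))  # 320 treatment, 186 control
```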

  35. Case 3: Randomization
  • Randomized treatment/controls
  • Community-level randomization:
    • 320 treatment communities
    • 186 control communities
  • Pre-intervention characteristics well balanced
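One standard way to check balance is to compare baseline characteristics across arms, for example with a two-sample t-test. The sketch below uses simulated baseline data; the real Oportunidades baseline values are not reproduced here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Simulated baseline characteristic (e.g. pre-program consumption per capita)
# for the 320 treatment and 186 control communities.
treat_baseline = rng.normal(150, 30, size=320)
ctrl_baseline = rng.normal(150, 30, size=186)

# Under randomization the groups differ only by chance, so the t-test
# should usually fail to reject equal means at baseline.
t, p = stats.ttest_ind(treat_baseline, ctrl_baseline, equal_var=False)
diff = treat_baseline.mean() - ctrl_baseline.mean()
print(f"difference in means: {diff:.2f}, p-value: {p:.2f}")
```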

  36. Baseline characteristics

  37. Case 3: Randomization

  38. Impact Evaluation Example: No Design vs. Randomization
