Sebastian Martinez

Impact Evaluation Cluster, AFTRL

Impact Evaluation Methods: Causal Inference

Slides by Paul J. Gertler & Sebastian Martinez


Motivation

  • “Traditional” M&E:
      • Is the program being implemented as designed?
      • Could the operations be more efficient?
      • Are the benefits getting to those intended?
    • Monitoring trends
      • Are indicators moving in the right direction?
    • → NO inherent Causality
  • Impact Evaluation:
    • What was the effect of the program on outcomes?
    • Because of the program, are people better off?
    • What would happen if we changed the program?
    • → Causality
Policy → Intervention → Monitoring → Impact Evaluation

Increase Access and Quality in Early Child Education
  • Intervention: Construction, Feeding, Quality
  • Monitoring: New classrooms, SES of students, # of Meals, Use of curriculum
  • Impact Evaluation: Increased attendance, Health/growth, Cognitive Development

Improve learning in Science and Math in high school
  • Intervention: Upgrade science laboratories, Training of instructors
  • Monitoring: # equipped labs, # trained instructors, Lab attendance & use
  • Impact Evaluation: Learning, Labor market, University enrollment

Improve quality of instruction in higher education
  • Intervention: Teacher training, Online courses
  • Monitoring: # of training sessions, # of internet terminals
  • Impact Evaluation: Learning, Attendance/drop-out, Labor market
Motivation
  • Objective in evaluation is to estimate the CAUSAL effect of intervention X on outcome Y
    • What is the effect of a cash transfer on household consumption?
  • For causal inference we must understand the data generation process
    • For impact evaluation, this means understanding the behavioral process that generates the data
      • how benefits are assigned
Causation versus Correlation
  • Recall: correlation is NOT causation
    • Necessary but not sufficient condition
    • Correlation: X and Y are related
      • Change in X is related to a change in Y
      • And….
      • A change in Y is related to a change in X
    • Causation – if we change X, how much does Y change?
      • A change in X is related to a change in Y
      • Not necessarily the other way around
Causation versus Correlation
  • Three criteria for causation:
    • Independent variable precedes the dependent variable.
    • Independent variable is related to the dependent variable.
    • There are no third variables that could explain why the independent variable is related to the dependent variable
  • External validity
    • Generalizability: causal inference to generalize outside the sample population or setting
Motivation
  • The word cause is not in the vocabulary of standard probability theory.
    • Probability theory: two events are mutually correlated, or dependent → if we find one, we can expect to encounter the other.
  • Example: age and income
  • For impact evaluation, we supplement the language of probability with a vocabulary for causality.
Statistical Analysis & Impact Evaluation
  • Statistical analysis: Typically involves inferring the causal relationship between X and Y from observational data
    • Many challenges & complex statistics
  • Impact Evaluation:
    • Retrospectively:
      • same challenges as statistical analysis
    • Prospectively:
      • we generate the data ourselves through the program’s design → evaluation design
      • makes things much easier!
How to assess impact
  • What is the effect of a cash transfer on household consumption?
  • Formally, program impact is:

α = (Y | P=1) - (Y | P=0)

  • Compare the same individual with and without the program at the same point in time
  • So what’s the Problem?
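To make the problem concrete, here is a minimal sketch in Python (simulated, purely illustrative numbers; this is not the Oportunidades data): each household reveals only one of its two potential outcomes, so α can never be computed household by household.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical potential outcomes: consumption per capita without (y0) and with (y1) a transfer.
y0 = rng.normal(230, 30, size=n)
true_effect = 35                      # assumed effect, for illustration only
y1 = y0 + true_effect

p = rng.integers(0, 2, size=n)        # program status; here assigned by a coin flip
y_obs = np.where(p == 1, y1, y0)      # only one potential outcome is ever observed per household

print("true average impact:", (y1 - y0).mean())                              # knowable only in a simulation
print("difference in means:", y_obs[p == 1].mean() - y_obs[p == 0].mean())
```

The difference in means recovers the true impact here only because program status was assigned by a coin flip; the rest of the deck is about what happens when it is not.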
Solving the evaluation problem
  • Problem: we never observe the same individual with and without program at same point in time
  • Need to estimate what would have happened to the beneficiary if he or she had not received benefits
  • Counterfactual: what would have happened without the program
  • Difference between treated observation and counterfactual is the estimated impact
Estimate effect of X on Y
  • Compare same individual with & without treatment at same point in time (counterfactual):
  • Program impact is outcome with program minus outcome without program

Without the program: sick 10 days. With the program: sick 2 days.

Impact = 2 - 10 = - 8 days sick!

Finding a good counterfactual
  • The treated observation and the counterfactual:
    • have identical factors/characteristics, except for benefiting from the intervention
    • No other explanations for differences in outcomes between the treated observation and counterfactual
  • The only reason for the difference in outcomes is the intervention
Measuring Impact

Tool belt of Impact Evaluation Design Options:

  • Randomized Experiments
  • Quasi-experiments
    • Regression Discontinuity
    • Difference in difference – panel data
    • Other (using Instrumental Variables, matching, etc.)
  • In all cases, these will involve knowing the rule for assigning treatment
Choosing your design
  • For impact evaluation, we will identify the “best” possible design given the operational context
  • Best possible design is the one that has the fewest risks for contamination
    • Omitted Variables (biased estimates)
    • Selection (results not generalizable)
Case Study
  • Effect of cash transfers on consumption
  • Estimate impact of cash transfer on consumption per capita
    • Make sure:
      • Cash transfer comes before change in consumption
      • Cash transfer is correlated with consumption
      • Cash transfer is the only thing changing consumption
  • Example based on Oportunidades
Oportunidades
  • National anti-poverty program in Mexico (1997)
  • Cash transfers and in-kind benefits conditional on school attendance and health care visits.
  • Transfer given preferably to mother of beneficiary children.
  • Large program with large transfers:
    • 5 million beneficiary households in 2004
    • Large transfers, capped at:
      • $95 USD for HH with children through junior high
      • $159 USD for HH with children in high school
Oportunidades Evaluation
  • Phasing in of intervention
    • 50,000 eligible rural communities
    • Random sample of 506 eligible communities in 7 states - evaluation sample
  • Random assignment of benefits by community:
    • 320 treatment communities (14,446 households)
      • First transfers distributed April 1998
    • 186 control communities (9,630 households)
      • First transfers November 1999
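A small illustration of why this kind of random assignment buys a valid counterfactual: on average, randomly assigned groups have the same pre-program characteristics. The sketch below uses simulated community data (the variable names and numbers are hypothetical, not the evaluation data).

```python
import numpy as np

rng = np.random.default_rng(1)
n_communities = 506

# Hypothetical pre-program characteristics for 506 communities.
baseline_cpc = rng.normal(230, 40, size=n_communities)
marginality_index = rng.uniform(0, 1, size=n_communities)

# Randomly pick 320 treatment communities, leaving 186 controls.
treatment = np.zeros(n_communities, dtype=bool)
treatment[rng.choice(n_communities, size=320, replace=False)] = True

for name, x in [("baseline CPC", baseline_cpc), ("marginality index", marginality_index)]:
    gap = x[treatment].mean() - x[~treatment].mean()
    print(f"{name}: treatment minus control = {gap:.3f}")   # close to zero, by chance alone
```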
Common Counterfeit Counterfactuals

1. Before and After: the same household, observed in 2005 (sick 2 days) and in 2007 (sick 15 days).

   "Impact" = 15 - 2 = 13 more days sick?

2. Enrolled / Not Enrolled: an enrolled household (sick 2 days) compared with a household that did not enroll (sick 1 day).

   "Impact" = 2 - 1 = + 1 day sick?

“Counterfeit” Counterfactual Number 1
  • Before and after:
    • Assume we have data on
      • Treatment households before the cash transfer
      • Treatment households after the cash transfer
    • Estimate “impact” of cash transfer on household consumption:
      • Compare consumption per capita before the intervention to consumption per capita after the intervention
      • Difference in consumption per capita between the two periods is “treatment”
Case 1: Before and After
  • Compare Y before and after intervention

α_i = (CPC_i,t | T=1) - (CPC_i,t-1 | T=0)

  • Estimate of counterfactual

(CPC_i,t | T=0) = (CPC_i,t-1 | T=0)

  • “Impact” = A-B

[Figure: CPC over time. B = CPC observed before the intervention (t-1); A = CPC observed after (t). The before-after "impact" is A - B.]

Case 1 - Before and After

                           Before (control)    After (treatment)    t-stat
Mean CPC                   233.48              268.75               16.3

                           Linear Regression   Multivariate Linear Regression
Estimated Impact on CPC    35.27**             34.28**
                           (2.16)              (2.11)

** Significant at 1% level
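For the mechanics, here is a sketch of how a before-after estimate like the one in the table could be produced: stack the pre- and post-period observations for treated households and regress CPC on a post-period dummy. The data below are simulated and the package choice (statsmodels) is an assumption; only the method mirrors the table.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 5_000

# Simulated treated households observed before (t-1) and after (t) the transfer.
cpc_before = rng.normal(233, 60, size=n)
# The observed change mixes the program impact with anything else happening over time.
cpc_after = cpc_before + 35 + rng.normal(0, 30, size=n)

cpc = np.concatenate([cpc_before, cpc_after])
post = np.concatenate([np.zeros(n), np.ones(n)])      # 1 = after the intervention

fit = sm.OLS(cpc, sm.add_constant(post)).fit()
print(fit.params[1], fit.bse[1])   # before-after "impact" on CPC and its standard error
```

Note that the coefficient attributes the entire change over time to the program, which is exactly the weakness discussed on the next slide.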
Case 1: Before and After
  • Compare Y before and after intervention

α_i = (CPC_i,t | T=1) - (CPC_i,t-1 | T=0)

  • Estimate of counterfactual

(CPC_i,t | T=0) = (CPC_i,t-1 | T=0)

  • “Impact” = A-B
  • Does not control for time varying factors
    • Recession: Impact = A-C
    • Boom: Impact = A-D

[Figure: CPC before (B, at t-1) and after (A, at t). If the economy had been in recession, the true counterfactual at t would be C (impact = A - C); in a boom it would be D (impact = A - D).]

“Counterfeit” Counterfactual Number 2
  • Enrolled/Not Enrolled
    • Voluntary enrollment in the program
    • Assume we have a cross-section of post-intervention data on:
      • Households that did not enroll
      • Households that enrolled
    • Estimate “impact” of cash transfer on household consumption:
      • Compare consumption per capita of those who did not enroll to consumption per capita of those who enrolled
      • Difference in consumption per capita between the two groups is “treatment”
Case 2 - Enrolled/Not Enrolled

                           Not Enrolled        Enrolled             t-stat
Mean CPC                   290.16              268.75               5.6

                           Linear Regression   Multivariate Linear Regression
Estimated Impact on CPC    -22.7**             -4.15
                           (3.78)              (4.05)

** Significant at 1% level
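A sketch of the enrolled/not-enrolled comparison behind this table: a post-intervention cross-section regressed on an enrollment dummy, with and without a household covariate. The data are simulated (a hypothetical "poverty" variable drives both enrollment and consumption) and statsmodels is assumed; the point is the pattern, not the numbers.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 10_000

# Hypothetical: poorer households are more likely to enroll, and poverty also lowers consumption.
poverty = rng.normal(0, 1, size=n)
enrolled = (poverty + rng.normal(0, 1, size=n) > 0).astype(float)
cpc = 290.0 - 25 * poverty + 20 * enrolled + rng.normal(0, 40, size=n)   # true effect set to +20

simple = sm.OLS(cpc, sm.add_constant(enrolled)).fit()                                # "Linear Regression"
multi = sm.OLS(cpc, sm.add_constant(np.column_stack([enrolled, poverty]))).fit()     # "Multivariate"

print(simple.params[1])   # biased: mixes the transfer with who chooses to enroll
print(multi.params[1])    # near +20 here only because the driver of enrollment is observed
```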
Those who did not enroll….
  • Impact estimate: α_i = (Y_i,t | P=1) - (Y_j,t | P=0)
  • Counterfactual: (Y_j,t | P=0) ≠ (Y_i,t | P=0)
  • Examples:
    • Those who choose not to enroll in program
    • Those who were not offered the program
      • Conditional Cash Transfer
      • Job Training program
  • Cannot control for all the reasons why some chose to sign up and others didn’t
  • Those reasons could be correlated with outcomes
  • We can control for observables…
  • But we are still left with the unobservables (see the sketch below)
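The last two bullets can be shown with a variation of the same simulation: if the variable driving enrollment is unobserved, controlling for the observables we do have does not remove the bias. Everything below is hypothetical and illustrative.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 10_000

# 'motivation' drives enrollment and lowers consumption, but is never observed by the analyst.
motivation = rng.normal(0, 1, size=n)
observed_x = rng.normal(0, 1, size=n)
enrolled = (motivation + rng.normal(0, 1, size=n) > 0).astype(float)
cpc = 290.0 + 20 * enrolled - 25 * motivation + 10 * observed_x + rng.normal(0, 40, size=n)

# Controlling for the observable characteristic leaves the selection bias intact.
fit = sm.OLS(cpc, sm.add_constant(np.column_stack([enrolled, observed_x]))).fit()
print(fit.params[1])   # far below the true +20 (it can even turn negative)
```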
Impact Evaluation Example: Two Counterfeit Counterfactuals

                           Case 1 - Before and After               Case 2 - Enrolled/Not Enrolled
                           Linear          Multivariate Linear     Linear          Multivariate Linear
                           Regression      Regression              Regression      Regression
Estimated Impact on CPC    35.27**         34.28**                 -22.7**         -4.15
                           (2.16)          (2.11)                  (3.78)          (4.05)

** Significant at 1% level
  • What is going on??
  • Which of these do we believe?
  • Problem with Before-After:
    • Cannot control for other time-varying factors
  • Problem with Enrolled/Not Enrolled:
    • Do not know why the treated are treated and the others are not
Solution to the Counterfeit Counterfactual

Observe Y with treatment: sick 2 days. Estimate Y without treatment: sick 10 days.

Impact = 2 - 10 = - 8 days sick!

On AVERAGE, the comparison (control) group is a good counterfactual for the treated group.

Possible Solutions…
  • We need to understand the data generation process
    • How beneficiaries are selected and how benefits are assigned
  • Guarantee comparability of treatment and control groups, so ONLY difference is the intervention
Measuring Impact
  • Experimental design/randomization
  • Quasi-experiments
    • Regression Discontinuity
    • Double differences (diff in diff) – see the sketch below
    • Other options
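As a closing pointer to the quasi-experimental options above, here is a minimal difference-in-differences sketch on simulated two-period data. It assumes statsmodels and ignores details a real evaluation would need (for example, clustering standard errors by community).

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 4_000

treat = rng.integers(0, 2, size=n).astype(float)     # treatment vs comparison households
hh = rng.normal(0, 30, size=n)                       # fixed household differences

cpc_before = 230 + 10 * treat + hh + rng.normal(0, 20, size=n)
cpc_after = 245 + 10 * treat + 35 * treat + hh + rng.normal(0, 20, size=n)   # +35 = program impact

# Stack both periods and regress CPC on treat, post, and their interaction.
y = np.concatenate([cpc_before, cpc_after])
post = np.concatenate([np.zeros(n), np.ones(n)])
d = np.concatenate([treat, treat])
fit = sm.OLS(y, sm.add_constant(np.column_stack([d, post, d * post]))).fit()
print(fit.params[3])   # interaction coefficient: the diff-in-diff estimate, close to 35
```

The double difference removes both the fixed gap between groups and the common change over time, which is why it avoids the before-after and enrolled/not-enrolled problems shown earlier.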