
Experimental Methods


Presentation Transcript


  1. Experimental Methods Muna Meky, Economist, Africa Impact Evaluation Initiative

  2. Motivation • Objective in evaluation is to estimate the CAUSAL effect of intervention X on outcome Y • What is the effect of a housing upgrade on household income? • For causal inference, we need to understand exactly how benefits are distributed • Assigned / targeted • Take-up

  3. Causation versus Correlation • Correlation is NOT causation • Necessary but not sufficient condition • Correlation: X and Y are related • A change in X is related to a change in Y • And… • A change in Y is related to a change in X • Example: age and income • Causation: if we change X, how much does Y change? • A change in X is related to a change in Y • Not necessarily the other way around
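
A minimal simulation sketch (not part of the original slides; the variables and numbers are purely illustrative) makes the point concrete: a third variable Z that drives both X and Y produces a strong correlation even though X has no causal effect on Y.

```python
# Hypothetical simulation: Z is an unobserved common cause of X and Y.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

z = rng.normal(size=n)            # confounder
x = 2.0 * z + rng.normal(size=n)  # X is driven by Z only
y = 3.0 * z + rng.normal(size=n)  # Y is driven by Z only; X has zero causal effect on Y

print("corr(X, Y) =", round(float(np.corrcoef(x, y)[0, 1]), 2))
# X and Y are strongly correlated, yet intervening on X would leave Y unchanged,
# because Y is generated from Z alone: correlation without causation.
```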

  4. Causation versus Correlation Three criteria for causation: • Independent variable precedes the dependent variable. • Independent variable is related to the dependent variable. • There are no third variables that could explain why the independent variable is related to the dependent variable.

  5. Statistical Analysis & Impact Evaluation • Statistical analysis: Typically involves inferring the causal relationship between X and Y from observational data • Many challenges & complex statistics • We never know if we’re measuring the true impact • Impact Evaluation: • Retrospectively: • same challenges as statistical analysis • Prospectively: • we generate the data ourselves through the program’s design → evaluation design • makes things much easier!

  6. How to assess impact • What is the effect of a housing upgrade on household income? • Ideally, compare same individual with & without programs at same point in time • What’s the problem? • The need for a good counterfactual • What are the requirements?

  7. Case study: housing upgrade • Informal settlement of 15,000 households • Goal: upgrade housing of residents • Evaluation question: What is the impact of upgrading housing on household income? on employment? • Counterfeit counterfactuals

  8. Gold standard: Experimental design • Only method that ensures balance in unobserved (and observed) characteristics • Only difference is treatment • Equal chance of assignment into treatment and control for everyone • With large sample, all characteristics average out • Experimental design = Randomized evaluation
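
A short simulation sketch of the balance property claimed here (hypothetical household data of my own; 'ability' is a stand-in for an unobserved characteristic): after random assignment with a large sample, treatment and control means line up for every characteristic.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000

baseline_income = rng.lognormal(mean=7, sigma=0.5, size=n)
ability = rng.normal(size=n)

# Random assignment: exactly half the sample to treatment, half to control.
treat = rng.permutation(np.r_[np.ones(n // 2), np.zeros(n // 2)]).astype(bool)

for name, x in [("baseline income", baseline_income), ("unobserved ability", ability)]:
    print(f"{name}: treatment mean = {x[treat].mean():.2f}, "
          f"control mean = {x[~treat].mean():.2f}")
# With a large sample the two means are close for every characteristic,
# so a later difference in outcomes can be attributed to the treatment.
```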

  9. “Random” • What does the term “random” mean here? • Equal chance of participation for everyone • How could one really randomize in the case of housing upgrading? • Options • Lottery • Lottery among the qualified • Phase-in • Encouragement • Randomize across treatments

  10. Kinds of randomization • Random selection: external validity • Ensure that the results in the sample represent the results in the population • What does this program tell us that we can apply to the whole country? • Random assignment: internal validity • Ensure that the observed effect on the outcome is due to the treatment rather than other factors • Does not inform scale-up without assumptions • Example: Housing upgrade in Western Cape vs Sample from across country
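
The two steps can be sketched separately (a hypothetical illustration; the frame of 15,000 households and the sample of 2,000 are assumptions, not figures from the slides):

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical sampling frame of household IDs.
frame = np.arange(15_000)

# Random selection: draw the evaluation sample from the population (external validity).
sample = rng.choice(frame, size=2_000, replace=False)

# Random assignment: split that sample into treatment and control (internal validity).
shuffled = rng.permutation(sample)
treatment, control = shuffled[:1_000], shuffled[1_000:]

print(len(treatment), "treated;", len(control), "control")
# Skipping the selection step (e.g., working only in the Western Cape) keeps
# internal validity but limits what the estimate says about the whole country.
```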

  11. External vs Internal [Diagram: randomization at the sampling stage → External Validity (sample); randomization of assignment → Internal Validity (identification)]

  12. Example of Randomization • What is the impact of providing free books to students on test scores? • Randomly assign a group of school children to either: – Treatment Group: receives free books – Control Group: does not receive free books

  13. Randomization [Diagram: random assignment into treatment and control groups]
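
A minimal analysis sketch for this example (simulated scores with a built-in effect of +5 points, purely illustrative, not real results): under random assignment, the impact estimate is simply the difference in mean test scores between the two groups.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2_000

# Random assignment of children to free books (True) or control (False).
books = rng.permutation(np.r_[np.ones(n // 2), np.zeros(n // 2)]).astype(bool)

# Simulated test scores with an assumed true effect of +5 points.
scores = 60 + 5 * books + rng.normal(0, 15, size=n)

# Impact estimate: difference in means, with a simple two-sample standard error.
effect = scores[books].mean() - scores[~books].mean()
se = np.sqrt(scores[books].var(ddof=1) / books.sum()
             + scores[~books].var(ddof=1) / (~books).sum())
print(f"estimated impact: {effect:.1f} points (SE {se:.1f})")
```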

  14. How Do You Randomize? • At what level? • Individual • Group • School • Community • District

  15. When would you use randomization? • Universe of eligible individuals typically larger than available resources at a single point in time • Fair and transparent way to assign benefits • Gives an equal chance to everyone in the sample • Good times to randomize: • Pilot programs • Programs with budget/capacity constraints • Phase-in programs

  16. Basic Setup of an Experimental Evaluation [Flow diagram, based on Orr (1999): all informal settlement dwellers → communities that might participate or a targeted sub-group → select those you want to work with right now]

  17. Examples…

  18. Beyond simple random assignment • Assigning to multiple treatment groups • Treatment 1, Treatment 2, Control • Upgrade housing in situ, relocation to better housing, control • What do we learn? • Assigning to units other than individuals or households • Health Centers (bed net distribution) • Schools (teacher absenteeism project) • Local Governments (corruption project) • Villages (Community-driven development projects)
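
A sketch of multi-arm assignment for the housing example (hypothetical household IDs and arm sizes, not from the slides): one shuffle splits the sample across in-situ upgrade, relocation, and control.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical household IDs and equal arm sizes.
households = np.arange(1_500)
arms = np.repeat(["in-situ upgrade", "relocation", "control"], len(households) // 3)

# One shuffle assigns every household to exactly one arm.
assignment = dict(zip(households, rng.permutation(arms)))

for arm in ["in-situ upgrade", "relocation", "control"]:
    print(arm, ":", sum(v == arm for v in assignment.values()), "households")
# Each treatment arm vs. control gives the impact of that variant; the two
# treatment arms vs. each other show which design works better.
```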

  19. Unit of randomization • Individual or household randomization is lowest cost option • Randomizing at higher levels requires much bigger samples: within-group correlation • Political challenges to unequal treatment within a community • But look for creative solutions: e.g., uniforms in Kenya • Some programs can only be implemented at a higher level • e.g., strengthening school committees
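
The within-group correlation point can be quantified with the standard design-effect formula, DEFF = 1 + (m − 1) × ICC; the formula is standard, while the cluster sizes, ICC values, and baseline sample size below are hypothetical.

```python
# Design effect: how much the sample must grow when randomizing whole clusters
# (m = households per cluster, ICC = within-cluster correlation of the outcome).
def design_effect(cluster_size: int, icc: float) -> float:
    return 1 + (cluster_size - 1) * icc

n_individual = 1_000  # hypothetical sample needed under household-level randomization
for m, icc in [(20, 0.05), (50, 0.05), (50, 0.10)]:
    deff = design_effect(m, icc)
    print(f"clusters of {m}, ICC = {icc}: roughly {int(n_individual * deff)} households needed")
```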

  20. Efficacy & Effectiveness • Efficacy • Proof of Concept • Pilot under ideal conditions • Effectiveness • At scale • Normal circumstances & capabilities • Lower or higher impact? • Higher or lower costs?

  21. Advantages of experiments • Clear causal impact • Relative to other studies • Much easier to analyze • Cheaper! (smaller sample sizes) • Easier to convey • More convincing to policymakers • Not methodologically controversial

  22. What if randomization isn’t possible? It probably is… • Budget constraints: randomize among the needy • Roll-out capacity: randomize who receives first • Randomly promote the program to some

  23. When is it really not possible? • The treatment has already been assigned and announced and no possibility for expansion of treatment • The program is over (retrospective) • Universal eligibility and universal access • Example: free education, exchange rate regime
