
What is Randomized Evaluation? Why Randomize?


Presentation Transcript


  1. What is Randomized Evaluation? Why Randomize? J-PAL South Asia, April 29, 2011

  2. Life Cycle of an Evaluation • Diagram: Needs assessment, followed by Process Evaluation and Impact Evaluation spanning the life of the program • Other types of evaluations: Review / Cost-benefit analysis / Cost-effectiveness analysis

  3. Balsakhi Program

  4. What was the issue? • Class sizes are large • Many children in 3rd and 4th standard are not even at the 1st standard level of competency • Social distance between teacher and some of the students is large

  5. Proposed solution • Hire local women (balsakhis) from the community • Train them to teach remedial competencies • Basic literacy, numeracy • Identify lowest performing 3rd and 4th standard students • Take these students out of class (2 hours/day) • Balsakhi teaches them basic competencies

  6. Possible outcomes • Pros: reduced class size; teaching at appropriate level; reduced social distance; improved learning for lower-performing students; improved learning for higher-performers • Cons: less qualified instructor; teacher resentment; reduced interaction with higher-performing peers

  7. Study design • We want to look at impact • What do we do?

  8. What is the impact of the balsakhi? • What would have happened without the balsakhi program?

  9. Pre-post (Before vs. After) • Look at the average change in test scores over the school year for the balsakhi children

  10. What was the impact of the balsakhi program? • Chart: average test scores before (2002) and after (2003) for the balsakhi children • Apparent impact = 26.42 points?
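
To make the pre-post calculation concrete, here is a minimal Python sketch. The scores below are invented for illustration; only the 26.42-point figure on the slide comes from the actual study.

    # Minimal sketch of a pre-post (before vs. after) estimate.
    # The scores are invented for illustration; they are not the study's data.
    import numpy as np

    pre_scores = np.array([24.0, 31.0, 18.0, 27.0, 22.0])   # 2002 baseline scores, balsakhi children
    post_scores = np.array([51.0, 58.0, 44.0, 55.0, 47.0])  # 2003 endline scores, same children

    pre_post_estimate = post_scores.mean() - pre_scores.mean()
    print(f"Pre-post 'impact' estimate: {pre_post_estimate:.2f} points")

    # Caveat from the slides: this attributes the entire change over the school
    # year to the balsakhi, including learning that would have happened anyway.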

  11. Impact: What is it? • Diagram: primary outcome plotted over time, with the point of intervention marked

  12. Impact: What is it? • Diagram: the counterfactual (the outcome path had there been no intervention) added to the same plot

  13. Impact: What is it? • Diagram: impact shown as the gap between the observed primary outcome and the counterfactual

  14. How to measure impact? Impact is defined as a comparison between: • the outcome some time after the program has been introduced • the outcome at that same point in time had the program not been introduced (the "counterfactual")
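
A numerical illustration of this definition, with hypothetical numbers (in practice the counterfactual is never observed for the same children at the same time; an evaluation method's job is to approximate it):

    # Impact = outcome with the program minus the counterfactual outcome.
    # Both numbers here are hypothetical; the counterfactual is unobservable.
    outcome_with_program = 51.0     # mean score some time after the program is introduced
    outcome_counterfactual = 41.0   # mean score at that same time had the program not been introduced

    impact = outcome_with_program - outcome_counterfactual
    print(f"Impact = {impact:.1f} points")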

  15. Impact evaluation methods 1. Randomized Experiments • Also known as: • Random Assignment Studies • Randomized Field Trials • Social Experiments • Randomized Controlled Trials (RCTs) • Randomized Controlled Experiments

  16. Impact evaluation methods 2. Non- or Quasi-Experimental Methods a. Pre-Post b. Simple Difference c. Differences-in-Differences d. Multivariate Regression e. Statistical Matching f. Interrupted Time Series g. Instrumental Variables h. Regression Discontinuity

  17. Other Methods • There are more sophisticated non-experimental methods to estimate program impacts: • Multivariable Regression • Matching • Instrumental Variables • Regression Discontinuity • These methods rely on being able to “mimic” the counterfactual under certain assumptions • Problem: Assumptions are not testable

  18. Methods to estimate impacts • We constructed results from the different ways of estimating the impacts using the data from the schools that got a balsakhi • Pre – Post (Before vs. After) • Simple difference • Difference-in-difference • Other non-experimental methods • Randomized Experiment
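
As a rough illustration of why the choice of method matters, the sketch below computes three of these estimators from the same four made-up cell means (treatment/comparison by before/after); the numbers are not the study's data.

    # Three simple estimators computed from the same four (invented) cell means.
    treat_before, treat_after = 25.0, 51.0   # mean scores in balsakhi schools
    comp_before, comp_after = 27.0, 45.0     # mean scores in comparison schools

    pre_post = treat_after - treat_before                                   # before vs. after
    simple_difference = treat_after - comp_after                            # treated vs. untreated, endline only
    diff_in_diff = (treat_after - treat_before) - (comp_after - comp_before)

    print(f"Pre-post:                  {pre_post:+.1f} points")
    print(f"Simple difference:         {simple_difference:+.1f} points")
    print(f"Difference-in-differences: {diff_in_diff:+.1f} points")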

  19. II – What is a randomized experiment? The Balsakhi Example povertyactionlab.org

  20. The basics Start with a simple case: • Take a sample of schools • Randomly assign them to either: • Treatment Group – is offered treatment • Control Group – not allowed to receive treatment (during the evaluation period)
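
A minimal sketch of this assignment step, assuming a hypothetical list of school IDs that is split at random into two equal groups:

    # Randomly assign a (hypothetical) sample of schools to treatment or control.
    import numpy as np

    rng = np.random.default_rng(seed=2011)               # fixed seed so the assignment is reproducible
    schools = [f"school_{i:03d}" for i in range(1, 99)]  # 98 hypothetical school IDs

    shuffled = rng.permutation(schools)
    treatment = sorted(shuffled[: len(shuffled) // 2])   # offered the balsakhi program
    control = sorted(shuffled[len(shuffled) // 2 :])     # not offered it during the evaluation period

    print(f"{len(treatment)} treatment schools, {len(control)} control schools")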

  21. 6 – Randomized Experiment • Suppose we evaluated the balsakhi program using a randomized experiment • QUESTION #1: What would this entail? How would we do it? • QUESTION #2: What would be the advantage of using this method to evaluate the impact of the balsakhi program? Source: www.theoryofchange.org

  22. Random assignment in Vadodara • Chart: 2006 baseline test scores for treatment and comparison schools, with both groups scoring about 28 points on average

  23. Key advantage of experiments Because members of the groups (treatment and control) do not differ systematically at the outset of the experiment, any difference that subsequently arises between them can be attributed to the program rather than to other factors.
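
One common way to verify this in practice is a baseline balance check: compare pre-program outcomes across the two groups and confirm that any difference is small and statistically insignificant. The sketch below runs such a check on simulated baseline scores (all values hypothetical).

    # Balance check on simulated baseline test scores (hypothetical data).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(seed=0)
    baseline_treat = rng.normal(loc=28, scale=10, size=49)    # simulated baseline scores, treatment schools
    baseline_control = rng.normal(loc=28, scale=10, size=49)  # simulated baseline scores, control schools

    t_stat, p_value = stats.ttest_ind(baseline_treat, baseline_control)
    print(f"Treatment mean: {baseline_treat.mean():.1f}")
    print(f"Control mean:   {baseline_control.mean():.1f}")
    print(f"t = {t_stat:.2f}, p = {p_value:.2f}")   # a large p-value is consistent with balance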

  24. Constraints of design • Pratham had received funding to introduce balsakhis in all schools • Principals wanted balsakhis in their school • Teachers wanted balsakhis in their class • If denied a balsakhi, they may block data collection efforts

  25. Random assignment

  26. Rotation design • Round 1: Red schools – Std. 3, Blue schools – Std. 4 • Round 2: schools that treated Std. 3 in Round 1 treat Std. 4 in Round 2, and schools that treated Std. 4 in Round 1 treat Std. 3 in Round 2

  27. Rotation design • Diagram: comparison shown for Std. 3 and Std. 4 – Red schools: Treatment, Blue schools: Control (Round 1 – Red: Std. 3, Blue: Std. 4)
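
A hedged sketch of how such a rotation assignment could be encoded, with hypothetical school labels: each school receives a balsakhi for one standard in Round 1 and for the other standard in Round 2, so that for each standard there is always a contemporaneous comparison group.

    # Rotation design sketch: each (hypothetical) school treats one standard in
    # Round 1 and the other standard in Round 2.
    import numpy as np

    rng = np.random.default_rng(seed=3)
    schools = [f"school_{i:02d}" for i in range(1, 21)]

    round1 = {s: str(rng.choice(["Std 3", "Std 4"])) for s in schools}                   # standard treated in Round 1
    round2 = {s: ("Std 4" if std == "Std 3" else "Std 3") for s, std in round1.items()}  # swapped in Round 2

    for s in schools[:3]:
        print(f"{s}: Round 1 balsakhi for {round1[s]}, Round 2 balsakhi for {round2[s]}")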

  28. The counterfactual:

  29. Key steps in conducting an experiment 1. Design the study carefully 2. Collect baseline data 3. Randomly assign people to treatment or control 4. Verify that assignment looks random 5. Monitor process so that integrity of experiment is not compromised

  30. Key steps in conducting an experiment (cont.) 6. Collect follow-up data for both the treatment and control groups 7. Estimate program impacts by comparing mean outcomes of treatment group vs. mean outcomes of control group. 8. Assess whether program impacts are statistically significant and practically significant.
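
Steps 7 and 8 amount to a difference in mean follow-up outcomes plus a significance test; a minimal sketch on simulated endline scores (all values hypothetical):

    # Steps 7-8 on simulated follow-up data: estimate the impact as the difference
    # in mean endline scores, then check statistical significance.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(seed=1)
    endline_treat = rng.normal(loc=54, scale=12, size=49)    # simulated endline scores, treatment schools
    endline_control = rng.normal(loc=48, scale=12, size=49)  # simulated endline scores, control schools

    impact_estimate = endline_treat.mean() - endline_control.mean()
    t_stat, p_value = stats.ttest_ind(endline_treat, endline_control)

    print(f"Estimated impact: {impact_estimate:.1f} points")
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
    # Practical significance is a separate judgment: is the gain large enough
    # to matter given the program's cost?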

  31. Impact of Balsakhi - Summary *: Statistically significant at the 5% level

  32. Impact of Balsakhi - Summary *: Statistically significant at the 5% level

  33. Impact of Balsakhi - Summary *: Statistically significant at the 5% level Bottom Line: Which method we use matters!

  34. Results • The program improved average test scores by nearly 10% • The lowest-performing kids in balsakhi schools improved the most (relative to the control group) • Highly cost-effective: the intervention cost less than Rs. 100 per child, and the gains far outweigh the cost • Can be scaled up, especially in light of the RTE Act (remedial education)
