
Day 2: How to Randomize



Presentation Transcript


  1. Day 2: How to Randomize Lecture #2 Prof. Karlan

  2. Outline
  • Timeline of randomization
  • Convincing practitioners
  • Examples
    • Schools in Kenya
    • Balsakhi program
    • Credit with education
    • Credit expansion/scoring

  3. Randomization Timeline
  • Planning
    • Identify problem and proposed solution
    • Identify key players
    • Design randomization strategy
    • Define data collection plan
    • Define timeline
  • Pilot
  • Implementation
    • Collect data needed for randomization
    • Identify “target” population (sample frame)
    • Randomize
    • Implement intervention to treatment group
    • Measure outcomes
    • Continue to measure outcomes (?)

  4. Randomization Timeline: Planning
  • Identify problem and proposed solution
    • Define the problem (which should lead to the key hypotheses)
    • Define the intervention (sometimes already established)
    • Define sub-questions (would program implementation strategy X work better than strategy Y?)
      • E.g.: Balsakhi & Kenya ETP projects
      • E.g.: Credit with Education
  • Identify key players
    • Top management
    • Field staff
    • Donors

  5. Randomization Timeline: Planning
  • Design randomization strategy
    • Basic strategy
    • Sample frame
    • Unit of randomization
    • Stratification
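The elements above (sample frame, unit of randomization, stratification) can be sketched in code. A minimal illustration, assuming the units and the stratification field are hypothetical stand-ins for whatever the study actually uses:

```python
import random

def stratified_randomize(units, stratum_key, treat_share=0.5, seed=42):
    """Assign units to treatment/control within each stratum.

    units: list of dicts, each with an 'id' and a stratum field.
    stratum_key: field to stratify on (e.g., district or baseline-score bin).
    """
    rng = random.Random(seed)  # fixed seed -> reproducible, auditable draw
    strata = {}
    for u in units:
        strata.setdefault(u[stratum_key], []).append(u)

    assignment = {}
    for members in strata.values():
        rng.shuffle(members)
        n_treat = round(len(members) * treat_share)  # balance within stratum
        for i, u in enumerate(members):
            assignment[u["id"]] = "treatment" if i < n_treat else "control"
    return assignment

# Usage: villages are the unit of randomization, stratified by district
units = [{"id": i, "district": "A" if i < 6 else "B"} for i in range(12)]
assign = stratified_randomize(units, "district")
```

Randomizing within strata guarantees that treatment and control are balanced on the stratifying variable, which is the point of collecting stratification data before the draw.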

  6. Randomization Timeline: Planning
  • Define data collection plan
    • Should we collect baseline data? When?
      • Stratify effectively
      • Identify target participants, so that the treatment group & control group include only target participants. (Think Balsakhi versus ETP in Kenya.)
      • Learn about impact across the distribution of baseline status (does the program help on average, versus does it help the poorest more than the less poor?)
    • Recall questions? What risks do recall questions have?
    • Data needed for stratification
    • Baseline measures (?)
    • Outcome measures

  7. Randomization Timeline: Planning
  • Define data collection plan (continued)
  • Timing: two logistical issues dominate here:
    • How long will it take to complete data entry and cleaning so that the data can be used for stratification?
    • Can you collect all baseline information at once?
      • Perhaps collect stratification data separately from the full baseline?
  • Examples
    • Slow process: Philippines, expansion of credit program
    • Fast process: Kenya ETP, Balsakhi, Credit with Education

  8. Randomization Timeline: Pilot
  • Pilots vary in size & rigor
  • Common timeline (in microcredit):
    1. Identify problem & propose solution
    2. Conduct pilot on a non-random group
    3. Observe administrative outcomes & qualitative satisfaction of clients
    4. Implement for the entire institution
  • We are inserting a (huge) step between steps 3 and 4 above.
  • Pilots & qualitative steps are important. (Why?)

  9. Randomization Timeline: Implementation
  • Collect data needed for stratification & implementation
    • Full baseline?
    • Administrative data?
  • Identify “target” individuals (why is this important?)
    • E.g.: The Kenya ETP program will do what the Balsakhi program did not: have teachers identify which students are lagging.
    • E.g.: Philippines credit expansion. A market-research meeting identifies likely participants; THEN, randomize into treatment and control.
    • E.g.: Credit “encouragement” design…
  • Randomize
  • Implement intervention to treatment group
  • Measure outcomes
  • Continue to measure outcomes (?)

  10. Randomization Timeline: Implementation
  • Collect data needed for stratification & implementation
  • Identify “target” individuals
  • Randomize
    • Real-time randomization versus hard-coded randomization
    • Real-time is harder for stratification, but sometimes necessary
  • Implement intervention to treatment group
    • Internal controls are critical… nothing is worse than doing all this work but losing control in the field!
  • Measure impact after the delay necessary for impact to occur
    • Common question: “How long should we wait?”
    • The answer is like the sample-size question: “as long as we can,” from the evaluator & public-good perspective. Operational considerations must be traded off.
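The real-time versus hard-coded distinction can be made concrete. A hedged sketch (the IDs and helper names are hypothetical): hard-coded randomization pre-generates an assignment table from the full sample frame, which makes stratification easy; real-time randomization must assign each unit as it appears, so exact stratum balance cannot be guaranteed, but a deterministic hash of the unit's ID keeps the draw reproducible and auditable:

```python
import hashlib

# Hard-coded: assignments drawn once from the full sample frame,
# then fixed in a lookup table that field staff consult.
hard_coded = {"HH-001": "treatment", "HH-002": "control"}  # illustrative

def real_time_assign(unit_id, treat_share=0.5):
    """Assign a unit on the spot, without seeing the full sample frame.

    Hashing the ID makes the draw deterministic (re-running gives the
    same answer), but future arrivals are unknown, so strata cannot be
    balanced exactly -- the trade-off noted on the slide.
    """
    h = int(hashlib.sha256(unit_id.encode()).hexdigest(), 16)
    return "treatment" if (h % 10000) / 10000 < treat_share else "control"
```

The hash approach doubles as an internal control: field staff cannot quietly re-roll an assignment, since anyone can recompute it from the ID.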

  11. Convincing Practitioners
  • “But I already know the answer…”
    • (Between the lines): “But I do not want to risk learning that we do not have impact.”
  • Finding the right & willing partner
    • How are “willing” partners different?

  12. Convincing Practitioners
  • Three critical items:
    • Listen, listen, listen
    • Know the identity of key players and their perspectives
    • Know the business culture
      • Do they say yes now, but no in private?

  13. Convincing Practitioners
  • Typical specific concerns:
    • Gossip
    • Fairness
    • Cost
    • Human resource policies
    • Length of study
    • Politics
    • Experimenting ethics

  14. Convincing Practitioners
  • Typical concerns: Gossip (contamination vs. spillovers)
    • Will be discussed in more detail on Day 4.
    • In some settings, contamination is good. If the intervention is about information (e.g., allocation of school resources, school performance information, education on child nutrition or breastfeeding, de-worming, etc.), then you want gossip. But you want to set up the experiment so that you can measure the total impact on those treated, both directly by you and indirectly through the gossip.

  15. Convincing Practitioners
  • Typical concerns: Fairness
    • Rare is the case of unlimited resources.
    • Given limited resources, “fairness” is entirely in the framing:
      • Provide everyone a 50% probability of receiving treatment, or
      • Provide those who show up first a 100% probability of receiving treatment, and those who do not a 0% chance.
    • Randomization allows one to learn more about the ideal participants. If a non-random targeting method is used, one might be making a mistake and hence not maximizing impact.
    • Example: Credit with Education.
      • FINCA Peru wanted to provide the education add-on as a reward to its best clients.
      • One can imagine similar policies done as rewards (less frequent repayments, e.g., is a typical microcredit example).
      • Putting aside the incentive effects, which might make this a good strategy (worthy of testing!), why is this flawed?
      • What if the BIGGEST impact of the business training is on the worst clients? Maybe what they need to become good clients is some business training!

  16. Convincing Practitioners
  • Typical concerns: Cost
    • Compare to ex-post evaluation:
      • Which costs more for survey work?
      • Which has more downside risk in terms of yielding imprecise or biased results?
      • Which costs more in terms of consultant/researcher compensation?
    • The key question is not ex-post versus randomized trial, but evaluation versus no evaluation. When to evaluate?

  17. Convincing Practitioners
  • Typical concerns: Human resource policies
    • HR issues often arise if salary is conditional on performance: if the treatment influences performance, problems arise. Or the treatment could present an alternative that threatens staff jobs!
      • Kenya: resentment from government teachers, who fear that if the contract teachers do well, they will lose their own jobs.
      • Microcredit: incentive pay is quite typical (interesting in its own right!)
    • Calibrate the incentive system so that mean compensation is the same.
    • If employees perceive that the treatment is good (or bad) for them, they may put forth extra effort to see it work (or not). Monitoring employee behavior can provide information on whether this is happening.

  18. Convincing Practitioners
  • Typical concerns: Length of study
  • Typical concerns: Politics
    • Not always possible.
    • Monitoring Corruption project (Olken, 2005)
      • Probability of audit raised from 4% to 100%
      • Theft falls by 8% of expenditures

  19. Convincing Practitioners
  • Typical concerns: Experimenting ethics
    • Typical things heard:
      • “It is wrong to use people as guinea pigs.”
      • “If you think it works, then it is wrong not to treat everyone.”
    • Answers:
      • Have you ever taken a prescription drug?
      • All limited initiatives are in some way an ‘experiment’; the difference here is that we are controlling the experiment in order to learn.
      • Resources are limited, so allocation has to happen somehow. Why not learn while we are doing it, and produce a public good for future generations?

  20. Kenya Extra Teacher Program (ETP)
  • Comparison to the Balsakhi program:
    • Much higher student-teacher ratios
    • Prior & overlapping research solves the exclusionary problem
    • Tests an extra teacher versus a remedial teacher

  21. Kenya Extra Teacher Program (ETP)
  • Experimental design:
    • 330 schools: 220 treatment, 110 comparison
    • All schools: students sorted into two ability levels, high and low
    • 50% of treatment schools: students & teachers assigned randomly (e.g., ordered alphabetically and then every other one) to either Stream A or Stream B
    • 50% of treatment schools: students of low ability more likely to be in Stream B; teachers assigned randomly to streams. Hence, Stream A > Stream B in prior performance.
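The two within-school designs above can be sketched side by side. A simplified illustration (function names are hypothetical, and the tracked version uses a strict split by prior score rather than the "more likely" probabilistic assignment the slide describes):

```python
import random

def split_random(students, seed=0):
    """Non-tracking treatment schools: students assigned to streams at random
    (analogous to ordering alphabetically and taking every other one)."""
    rng = random.Random(seed)
    s = students[:]
    rng.shuffle(s)
    half = len(s) // 2
    return {"A": s[:half], "B": s[half:]}

def split_tracked(students, score):
    """Tracking treatment schools: split by prior performance, with the
    higher-scoring half in Stream A (so Stream A > Stream B at baseline)."""
    s = sorted(students, key=score, reverse=True)
    half = len(s) // 2
    return {"A": s[:half], "B": s[half:]}
```

Running both splits on the same student roster is what lets the design separate class-size effects from the peer effects of tracking, since stream sizes are identical under the two schemes.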

  22. Kenya Extra Teacher Program (ETP)

  23. Kenya Extra Teacher Program (ETP)
  • The program could generate impact in three ways:
    • Impact of reduced class size
    • Impact of working with the extra (contract) teacher, which depends on her performance
    • Peer effects: tracking students by prior performance could be good (or bad)

  24. Credit with Education
  • Key questions:
    • If you train credit officers to provide health & business education, do outcomes for the institution and the client improve?
    • Sub-question: Is the impact because you trained the credit officer, or because the credit officer provided the lectures to the clients?
  • Randomization level:
    • By center?
    • By credit officer?
    • Stratify by credit officer?

  25. ETP Questions
  • At what level should the randomization occur?
  • Is it necessary to use a phase-in model or an original randomization design as in Balsakhi, or will a simple lottery suffice?
  • Is it necessary to stratify? If so, by what characteristics?
  • In addition to simply providing smaller classes, what can be done about the need to divide students into two smaller classes in the schools that are treated? (Hint: consider a randomized experiment within treatment schools.)
  • What problems may arise if local teachers (or government teachers) always teach a certain class type? Is there a fairness issue in assigning teachers?
  • If students are divided strictly by ability, might there be parental opposition? How else could you divide students?
  • When should teachers be asked to divide the class by ability? Which teachers in the sample should be asked to do this exercise?
