Learn the essential guidelines for conducting randomized evaluations effectively to produce reliable results in policy research. The outline covers design, implementation, analysis, measurement, monitoring, attrition, and more. Avoid common pitfalls such as underpowered evaluations, biased measurement, treatment integrity issues, and insufficient statistical analyses. Enhance the quality of your research with practical tips and insights from a case study in Peru.
Randomized Evaluation: Dos and Don'ts • An example from Peru • Tania Alfonso, Training Director, IPA
Outline • Design • Implementation • Analysis
Outline • Design • Research question • Power • Randomization • Sampling • Implementation • Analysis
Research question • Do make sure the research question is policy relevant • Do make sure your indicators are answering your research question
Power • Don’t conduct an under-powered evaluation • What does it mean to be under-powered? • Sample size and power
Power • Do power calculations first • Effect size • Sample size • Getting data • (What will take-up be?)
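A minimal sketch of this kind of power calculation in Python, using statsmodels; the effect size, power target, and take-up rate below are illustrative assumptions, not figures from the Peru study:

```python
# Sketch: sample size needed to detect a given effect, using statsmodels.
# All numbers are illustrative.
from statsmodels.stats.power import TTestIndPower

effect_size = 0.2   # standardized effect (Cohen's d) we hope to detect
alpha = 0.05        # significance level
power = 0.80        # desired power

analysis = TTestIndPower()
n_per_arm = analysis.solve_power(effect_size=effect_size, alpha=alpha, power=power)
print(f"Required sample per arm: {n_per_arm:.0f}")

# Imperfect take-up dilutes the effect: if only 50% of the treatment group
# actually takes up the program, the detectable effect on the assigned group
# shrinks proportionally, so the required sample grows.
take_up = 0.5
diluted = effect_size * take_up
n_with_takeup = analysis.solve_power(effect_size=diluted, alpha=alpha, power=power)
print(f"Required sample per arm with {take_up:.0%} take-up: {n_with_takeup:.0f}")
```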
Power • Do cluster your standard errors when doing power calculations • Bad example: randomizing two districts covering 10,000 people gives only two independent clusters, not 10,000 observations
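To see why clustering matters, the standard design-effect adjustment can be worked through in a few lines; the intra-cluster correlation below is an assumed value for illustration:

```python
# Sketch: design effect for clustered randomization.
# DEFF = 1 + (m - 1) * ICC, where m is the cluster size and ICC is the
# intra-cluster correlation. Numbers here are illustrative.
m = 5_000     # people per cluster (e.g., 10,000 people in 2 districts)
icc = 0.05    # assumed intra-cluster correlation

deff = 1 + (m - 1) * icc
n_total = 10_000
effective_n = n_total / deff
print(f"Design effect: {deff:.1f}")
print(f"Effective sample size: {effective_n:.1f}")  # ~40, despite 10,000 people
```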
Randomization • Do ensure balance • Stratification • Re-randomizing • Costs and benefits
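A minimal sketch of stratified assignment in Python; the sampling frame and stratum variable are hypothetical:

```python
# Sketch: stratified random assignment to improve balance.
# Within each stratum (here, region), exactly half are assigned to treatment.
import pandas as pd

# Hypothetical sampling frame
df = pd.DataFrame({
    "id": range(1, 9),
    "region": ["north"] * 4 + ["south"] * 4,
})

parts = []
for _, group in df.groupby("region"):
    shuffled = group.sample(frac=1, random_state=42).copy()  # reproducible shuffle
    half = len(shuffled) // 2
    shuffled["treatment"] = [1] * half + [0] * (len(shuffled) - half)
    parts.append(shuffled)

df = pd.concat(parts).sort_values("id")
print(df)
```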
Sampling • Do make sure your sampling frame is as close to your target population as possible • Effect size
Outline • Design • Implementation • Measurement • Monitoring • Attrition • Analysis
Measurement • Don’t collect data differently for treatment and control groups • Introducing bias
Measurement • Don’t use as your primary indicator something that may change with the intervention even when the outcome does not (e.g., self-reports that shift because the program changes awareness or reporting, not the underlying outcome)
Monitoring • Do monitor your intervention to ensure the treatment groups are receiving the treatment, and control groups are not • Contamination
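One simple monitoring check is to cross-tabulate assigned status against treatment actually received; a sketch with hypothetical data:

```python
# Sketch: cross-tabulate assignment vs. actual receipt to flag contamination.
import pandas as pd

# Hypothetical monitoring data: 'assigned' is the randomized status,
# 'received' is what monitors observed in the field.
df = pd.DataFrame({
    "assigned": [1, 1, 1, 1, 0, 0, 0, 0],
    "received": [1, 1, 1, 0, 0, 1, 0, 0],
})

print(pd.crosstab(df["assigned"], df["received"], normalize="index"))
# Non-zero receipt among assigned == 0 signals contamination of the control
# group; receipt below 100% among assigned == 1 signals imperfect compliance.
```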
Monitoring • Do collect process indicators to unpack the black box
Attrition • Do whatever it takes to minimize attrition • Attrition bias
Outline • Design • Implementation • Analysis • Treatment integrity • Attrition • Final outcomes • Subgroup analyses • Covariates
Integrity of design: “Once in treatment, always in treatment” • Don’t switch treatment or control status based on compliance • Intention to Treat (ITT) • Treatment on Treated (ToT)
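A sketch of both estimators on simulated data: ITT compares means by original assignment, and ToT rescales ITT by the difference in take-up between arms (the Wald/IV estimator); all numbers are simulated for illustration:

```python
# Sketch: ITT vs. ToT with imperfect compliance, on simulated data.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
assigned = rng.integers(0, 2, n)                     # randomized assignment
complied = (rng.random(n) < 0.6) & (assigned == 1)   # 60% take-up, no crossover
true_effect = 2.0
outcome = 1.0 + true_effect * complied + rng.normal(0, 1, n)

# ITT: difference in means by original assignment (never by compliance)
itt = outcome[assigned == 1].mean() - outcome[assigned == 0].mean()

# ToT (Wald/IV): ITT scaled by the difference in take-up between arms
take_up_diff = complied[assigned == 1].mean() - complied[assigned == 0].mean()
tot = itt / take_up_diff

print(f"ITT estimate: {itt:.2f}")   # ~ 0.6 * 2.0 = 1.2
print(f"ToT estimate: {tot:.2f}")   # ~ 2.0
```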
Attrition: “Once in sample, always in sample” • Do not ignore “attritors” • Attrition bias
Attrition • Don’t relax just because rates of attrition are the same in treatment and control groups • How do we test for differential attrition? • How do we know whether attritors differ from those who stayed?
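One way to probe this, sketched on simulated data: regress attrition on treatment, a baseline covariate, and their interaction, so that selective attrition shows up even when overall rates match (variable names are hypothetical):

```python
# Sketch: test for selective attrition, not just equal attrition rates.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2_000
df = pd.DataFrame({
    "treatment": rng.integers(0, 2, n),
    "baseline_score": rng.normal(0, 1, n),
})
# Simulated attrition: similar overall rates in both arms, but in the
# treatment arm low scorers drop out more and high scorers less.
p = 0.2 + 0.1 * df["treatment"] * np.where(df["baseline_score"] < 0, 1, -1)
df["attrited"] = (rng.random(n) < p).astype(int)

# A significant interaction term means attritors differ across arms
# even though the overall attrition rates look the same.
model = smf.ols("attrited ~ treatment * baseline_score", data=df).fit()
print(model.summary().tables[1])
```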
Final outcomes • Don’t run regressions on 20 different outcomes and only report on 1 or 2 “significant impacts” • Do report on all outcomes
Sub-group analysis • Don’t run regressions on 20 different subgroups and only report on 1 or 2 “significant impacts”
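Why these two don'ts matter: with 20 independent true-null tests at the 5% level, the chance of at least one spurious “significant impact” is 1 − 0.95^20 ≈ 64%. A simulation sketch, with a Bonferroni correction as one possible fix:

```python
# Sketch: testing 20 true-null outcomes yields spurious "significant impacts".
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(2)
n_per_arm, n_outcomes = 500, 20

pvals = []
for _ in range(n_outcomes):
    treat = rng.normal(0, 1, n_per_arm)      # no true effect on any outcome
    control = rng.normal(0, 1, n_per_arm)
    pvals.append(stats.ttest_ind(treat, control).pvalue)

print("Naively 'significant' outcomes:", sum(p < 0.05 for p in pvals))

# Bonferroni-adjusted decisions control the family-wise error rate.
reject, _, _, _ = multipletests(pvals, alpha=0.05, method="bonferroni")
print("Significant after Bonferroni:", reject.sum())
```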
Covariates • Do specify the regression(s) you plan to run beforehand • Do include covariates that you stratified on and those helpful for absorbing variance.
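A sketch of what such a pre-specified regression might look like, with dummies for the stratification variable and a baseline covariate; variable names and data are hypothetical:

```python
# Sketch: pre-specified impact regression with strata fixed effects
# and a baseline covariate that absorbs residual variance.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 1_000
df = pd.DataFrame({
    "treatment": rng.integers(0, 2, n),
    "stratum": rng.choice(["north", "south", "east"], n),  # stratified on this
    "baseline_score": rng.normal(0, 1, n),
})
df["outcome"] = (0.5 * df["treatment"] + 0.8 * df["baseline_score"]
                 + rng.normal(0, 1, n))

# C(stratum) adds dummies for the stratification variable; baseline_score
# soaks up variance and tightens the standard error on 'treatment'.
model = smf.ols("outcome ~ treatment + C(stratum) + baseline_score", data=df).fit()
print(model.params["treatment"], model.bse["treatment"])
```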
External Validity • Do be modest about the external validity of your results • Consider the context (needs assessment) • Consider the process (process evaluation)
Cost effectiveness • Do draw on Iqbal’s lecture for cost-effectiveness analysis