
Data Collection in the Context of RCTs


Presentation Transcript


  1. Data Collection in the Context of RCTs James J. Kemple Executive Director, The Research Alliance for New York City Schools New York University Prepared for the 2009 Meeting of the American Educational Research Association April 14, 2009 San Diego, CA

  2. Topics • What makes an RCT special for data collection? • Design and start-up challenges • Goals for measurement and data collection in RCTs • Data sources: Benefits and limitations • Analysis issues

  3. What makes an RCT special? • Outcomes vs. impacts • Outcome = measure of behavior, experiences, perceptions, attitudes • Impact = the effect of an intervention on outcomes • Outcome measurement goals are the same for experimental and non-experimental impact studies • Key differences: • Non-experimental impact analyses are much more data-intensive because of the need to model selection or construct statistical counterfactuals • The central challenge for measurement and data collection in an RCT is to avoid introducing selection bias
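The outcome-vs.-impact distinction above lends itself to a minimal numerical sketch: with random assignment, the impact estimate is simply the difference in mean outcomes between the treatment and control groups. All data and names below are hypothetical, not from the presentation.

```python
# Minimal sketch of an experimental impact estimate (hypothetical data):
# under random assignment, the control group's mean outcome stands in for
# the counterfactual, so no selection model is needed.
from statistics import mean

treatment_outcomes = [72, 68, 75, 80, 71, 77]  # e.g., test scores
control_outcomes = [70, 65, 69, 74, 68, 72]

impact = mean(treatment_outcomes) - mean(control_outcomes)
print(f"Estimated impact: {impact:.2f}")
```

The same difference computed on a non-experimental comparison group would require modeling who selected into treatment, which is what makes those designs far more data-intensive.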

  4. Design and Start-Up Challenges • Critical guide to measurement and data collection: • Making the intervention’s theory of action explicit • Understanding the likely contrast with the counterfactual • Consensus-building and getting buy-in from participants and stakeholders • What are the mutual obligations and responsibilities? • What do the participants get out of it? • Informed Consent • IRB and human subjects protection • OMB clearance

  5. Goals for Measurement and Data Collection in RCTs • Baseline data • Describe the sample identified for the intervention • Test for chance differences across treatment and control groups • Add to the precision of impact estimates • Identify subgroups • Support non-response analysis for follow-up data • Provide a framework for potential generalizability of findings
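The baseline-balance goal above (testing for chance differences across groups) is often summarized as a standardized difference in baseline means; a minimal sketch under that convention, with a hypothetical helper name and made-up scores:

```python
# Hypothetical balance check: standardized difference in baseline means.
# Values near zero suggest randomization produced comparable groups.
from math import sqrt
from statistics import mean, stdev

def baseline_balance(treat, control):
    """(Mean_T - Mean_C) divided by the pooled standard deviation."""
    pooled_sd = sqrt((stdev(treat) ** 2 + stdev(control) ** 2) / 2)
    return (mean(treat) - mean(control)) / pooled_sd

# Hypothetical baseline test scores for the two groups
print(baseline_balance([70, 72, 74, 68], [71, 73, 69, 70]))
```

In practice such checks are run across many baseline covariates, and the same baseline measures are then carried into the impact model to improve precision.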

  6. Goals for Measurement and Data Collection in RCTs (cont.) • Measuring implementation fidelity and context • Although not part of impact framework, still important to understand the nature of the intervention and its context • However, this is only half of the picture!! • Measuring treatment contrast • Dosage or exposure to treatment (and treatment-like services) by treatment and control groups. • Measuring experiences and behaviors that should be in close proximity to the intervention components for treatment and control groups. • Measuring Costs • Cost of intervention • Net cost relative to business-as-usual

  7. Goals for Measurement and Data Collection in RCTs (cont.) • Measuring outcomes • Striking a balance among: • Outcomes aligned with intervention theory of action • Outcomes that reflect contrast between treatment and counterfactual • Outcomes that are policy- and practice-relevant • Short term outcomes • Interim indications of effects (or lack of effects) • Mediators of longer-term effects • Longer-term outcomes • Bottom line • Cost-effectiveness framework

  8. Data Sources: Benefits and Limitations • Administrative records: Benefits • Generally equivalent coverage of treatment and control groups • Can be cost-effective • High-stakes measures and reporting often attached to accountability systems • Likely to reflect policy-relevant (or policy-driven) outcomes • Administrative records: Limitations • Measures may not be well aligned with the goals of the intervention • May not capture implementation, dosage, and mediating outcomes • Lack of common measures across districts and states

  9. Data Sources: Benefits and Limitations (cont.) • Evaluation-administered assessments and surveys: Benefits • Common measures • Potential to align measures with the goals of the intervention • Evaluation control over data collection protocols • Evaluation-administered assessments and surveys: Limitations • Potential for bias in non-response, administration conditions, or data quality • Respondent burden • Expensive • Low stakes and little incentive for respondents to take them seriously

  10. Data Sources: Benefits and Limitations (cont.) • Classroom observations: Benefits • Common measures • Direct measurement of individual and group behavior • Alignment with goals of intervention • Classroom observations: Limitations • Potential for bias due to observer knowledge of intervention • Expensive • Potential for observer influence on classroom dynamics • Inter- and intra-rater reliability

  11. Analytic Issues • Minimum detectable effect sizes (MDES) • Smallest impact that can be detected at a given level of statistical precision: a function of sample size and outcome variation • What is a policy- or practice-relevant impact? • Statistical power for subgroups • Focus data collection on outcomes and samples that yield an appropriate MDES and policy-relevant effects
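The MDES idea above can be sketched with the standard approximation for an individually randomized trial: MDES ≈ M · sqrt((1 − R²) / (P(1 − P)N)), where M ≈ 2.8 for 80% power at a two-tailed α of .05, P is the fraction assigned to treatment, and R² is the outcome variance explained by baseline covariates. The function name and defaults below are illustrative, not from the slides:

```python
from math import sqrt

def mdes(n, p_treat=0.5, r2=0.0, multiplier=2.8):
    """Approximate minimum detectable effect size (in standard deviation
    units) for an individually randomized trial.

    multiplier ~ 2.8 corresponds to 80% power, two-tailed alpha = .05;
    r2 is the share of outcome variance explained by baseline covariates.
    """
    return multiplier * sqrt((1 - r2) / (p_treat * (1 - p_treat) * n))

# Hypothetical example: 400 students, half assigned to treatment
print(round(mdes(400), 2))          # no baseline covariates
print(round(mdes(400, r2=0.5), 2))  # covariates shrink the MDES
```

This is why collecting good baseline measures (slide 5) is also an analytic decision: a larger R² buys a smaller detectable effect for the same sample.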

  12. Analytic Issues (cont.) • Multiple hypothesis testing • Large number of outcomes increases risk of Type I error (false positives) • Parsimonious list of outcomes for confirmatory hypotheses • Expanded list for exploratory hypotheses • Non-response bias • Respondents vs. non-respondents • Treatment vs. control among respondents
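A parsimonious confirmatory list is usually paired with a multiplicity correction; the slides do not prescribe a particular method, so the Bonferroni sketch below (with a hypothetical function name) is just one simple, conservative option:

```python
def bonferroni(p_values, alpha=0.05):
    """Flag which confirmatory outcomes remain significant after a
    Bonferroni correction: each p-value is compared against alpha
    divided by the number of tests, guarding against false positives
    that accumulate when many outcomes are tested."""
    threshold = alpha / len(p_values)
    return [p <= threshold for p in p_values]

# Hypothetical p-values for three confirmatory outcomes
print(bonferroni([0.001, 0.04, 0.20]))  # → [True, False, False]
```

Exploratory outcomes are typically reported without this penalty but framed as hypothesis-generating rather than confirmatory.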

  13. Contact Information James Kemple Executive Director Research Alliance for New York City Schools New York University Email: james.kemple@nyu.edu Website: http://steinhardt.nyu.edu/ranycs/
