Methodological Issues in Systematic Reviews in Education




## Presentation Transcript
1. Methodological Issues in Systematic Reviews in Education
Robert E. Slavin, Institute for Effective Education, University of York

2. Sample Size and Effect Size
- Negative correlation noted in other fields
- Reasons:
  - Underpowered studies with null results disappear
  - Small studies of lower methodological quality
  - Superrealization bias
  - Measures aligned with treatments

3. Present Studies: Best Evidence Encyclopedia
- Elementary and secondary math
- 185 qualifying studies
- Studies with treatment-inherent measures, brief durations, or large pretest differences excluded

4. Table 1: Total Sample Size

| Recode | Range        | Number of Studies |
|--------|--------------|-------------------|
| 1      | Up to 50     | 10                |
| 2      | 51-100       | 36                |
| 3      | 101-150      | 18                |
| 4      | 151-250      | 31                |
| 5      | 251-400      | 14                |
| 6      | 401-1000     | 41                |
| 7      | 1001-2000    | 12                |
| 8      | 2001 or more | 23                |
| TOTAL  |              | 185               |

5. Notable Findings
- Overall correlation between sample size and effect size: -.28, p < .001
- Sample sizes ≤ 100: ES = +0.40
- Sample sizes > 2000: ES = +0.09
- Randomized experiments: ES = +0.24
- Randomized quasi-experiments: ES = +0.29
- Matched studies: ES = +0.17
- The difference between designs disappears when sample size is taken into account

6. Possible Solutions
- Weight by sample size
- Require a minimum sample size for high ratings
- The Best Evidence Encyclopedia (BEE) requires 500 students across 2+ studies
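The first solution above, weighting by sample size, can be sketched as follows. The function name and the example numbers are illustrative only (the effect sizes echo the findings slide, but the combination shown is not a result from the review).

```python
def weighted_mean_effect_size(effect_sizes, sample_sizes):
    """Sample-size-weighted mean effect size: each study's d is weighted
    by its total N, so large studies dominate the pooled estimate."""
    if len(effect_sizes) != len(sample_sizes):
        raise ValueError("one sample size per effect size required")
    total_n = sum(sample_sizes)
    return sum(d * n for d, n in zip(effect_sizes, sample_sizes)) / total_n

# Illustrative: a small study with a large effect and a large study with
# a modest effect. Weighting pulls the pooled estimate toward the large
# study, to roughly +0.10.
print(weighted_mean_effect_size([0.40, 0.09], [100, 2000]))
```

In practice meta-analysts more often weight by inverse variance, but sample-size weighting behaves similarly here: the pooled estimate stays close to what the large studies report.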

7. Implications
- Results from large studies should be preferred, all else being equal
- Such results tend to be modest: we should expect outcomes of +0.20 to +0.30, at best

8. Treatment-Inherent Measures
- Experimenter-made
- Assess outcomes emphasized in the experimental group but not the control group

9. Treatment-Independent Measures
- Usually standardized tests
- May be experimenter-made if experimental and control groups received the same content

10. How Do Program Effectiveness Reviews Treat Inherent Measures?
- The What Works Clearinghouse includes them
- The Best Evidence Encyclopedia excludes them

11. Curriculum vs. Instruction
- There is a legitimate need to measure and report outcomes emphasized in the experimental group
- But potential bias is introduced if inherent measures are averaged with independent measures
- How much bias?

12. Implications
- Treatment-inherent measures must be excluded from reviews, or at least reported separately
- A clear distinction between inherent and independent measures can be made

13. Implications in Light of Findings on Sample Size and Duration
- Random assignment cannot be the only criterion of evaluation excellence
- Effect sizes from large, extended studies of school and classroom interventions with independent measures are modest (+0.20 to +0.30 at best). These are the effects we should be looking for.