Systematic Reviews: The Potential of Meta-analysis


  1. Systematic Reviews: The Potential of Meta-analysis ESRC Research Methods Festival Oxford 5th July, 2012 Professor Steven Higgins Durham University s.e.higgins@durham.ac.uk

  2. What is meta-analysis? • A way of combining the results of quantitative research • To accumulate evidence from smaller studies • To compare results of similar studies - consistency • To investigate patterns of association in the findings of different studies – explaining variation • ‘Surveys’ research studies

  3. Key points • Understanding ‘effect-size’ as a common measure • Why do we need meta-analysis? • What are its limitations? • What is its potential?

  4. What is an “effect size”? • A standardised way of quantifying a difference • Different methods of calculation • Binary (risk difference, odds ratio, risk ratio) • Continuous • Correlational (Pearson’s r) • Standardised mean difference (d, g, Δ) • The difference between the intervention and control groups as a proportion of the dispersion of scores • (intervention group mean – control group mean) / standard deviation of scores
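In symbols, the last bullet is the standardised mean difference. A minimal sketch assuming the pooled standard deviation is used as the denominator (Cohen's d; the slide does not specify which SD, and Glass's Δ would use the control group SD instead):

```latex
d = \frac{\bar{X}_{\text{intervention}} - \bar{X}_{\text{control}}}{SD_{\text{pooled}}},
\qquad
SD_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}
```

Hedges' g is the same quantity with a small-sample bias correction applied.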

  5. Examples of effect sizes: ES = 0.2 • “Equivalent to the difference in heights between 15 and 16 year old girls” • 58% of the control group is below the mean of the experimental group • Probability you could guess which group a person was in = 0.54 • Change in the proportion above a given threshold: from 50% to 58%, or from 75% to 81%

  6. ES = 0.8 • “Equivalent to the difference in heights between 13 and 18 year old girls” • 79% of the control group is below the mean of the experimental group • Probability you could guess which group a person was in = 0.66 • Change in the proportion above a given threshold: from 50% to 79%, or from 75% to 93%
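Assuming normally distributed scores with a common standard deviation in both groups, the figures on these two slides can be reproduced from the standard normal CDF Φ. The "guess" probabilities match Φ(d/2), which is inferred here from the numbers rather than stated on the slides:

```latex
\Phi(0.2) \approx 0.58, \qquad \Phi(0.8) \approx 0.79
\quad \text{(proportion of control group below the experimental mean)}

\Phi(0.2/2) \approx 0.54, \qquad \Phi(0.8/2) \approx 0.66
\quad \text{(probability of correctly guessing a person's group)}
```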

  7. The rationale for using effect sizes • Traditional quantitative reviews focus on statistical significance testing • Highly dependent on sample size • A null finding does not carry the same “weight” as a significant finding • Meta-analysis focuses on the direction and magnitude of the effects across studies • From “Is there a difference?” to “How big is the difference?” and “How consistent is the difference?” • Direction and magnitude are represented by the “effect size”

  8. Meta-analysis • Synthesis of quantitative data • Cumulative • Comparative • Correlational • “Surveys” educational research (Lipsey and Wilson, 2001)

  9. Forest plots • Effective way of presenting results • Studies, effect sizes, confidence intervals • Provides an overview of consistency of effects • Summarises an overall effect (with confidence interval) • Useful visual model of a meta-analysis

  10. Anatomy of a forest plot… [Figure: an annotated forest plot, labelling the studies, each study’s effect size with its confidence interval, the weighting of each study in the meta-analysis, the line of no effect, and the pooled effect size with its confidence interval]

  11. Issues and challenges in meta-analysis • Conceptual • Reductionist - the answer is .42 • Comparability - apples and oranges • Atheoretical - ‘flat-earth’ • Technical • Heterogeneity • Publication bias • Methodological quality

  12. Some recent findings from meta-analysis in education • Klauer & Phye 2008: 74 studies, 3,600 children - training in inductive reasoning improves academic performance (0.69) more than intelligence test performance (0.52). • Gersten et al. 2009: maths interventions for low attainers - 42 studies, effect sizes ranging from 0.21 to 1.56; teaching heuristics and explicit instruction particularly beneficial. • Domino 2010: 31 studies, 5,288 pupils - those who used manipulatives during mathematics instruction had higher mathematics achievement than students taught by traditional methods - effect size 0.50 (CI 0.34 to 0.65)

  13. Methodological heterogeneity • Study design • Sample characteristics • Assessment (measures, timing)

  14. Educational heterogeneity • ‘Clinical’ or ‘pedagogical’ heterogeneity • Systematic variation in response to the intervention • Teacher level effects • Pupil level effects

  15. Statistical heterogeneity • Due to chance • Unexplainable

  16. Statistical methods to identify heterogeneity • Presence • Q statistic (Cooper & Hedges, 1994), compared against a χ² distribution • Significance level (p-value) • τ² (the between-study variance) • Extent • I² (Higgins & Thompson, 2002) • If I² exceeds 50%, it may be advisable not to combine the studies • All have low power with a small number of studies (Huedo-Medina et al. 2006)
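For reference, the two statistics in standard notation, with k studies, effect sizes d_i and inverse-variance weights w_i:

```latex
Q = \sum_{i=1}^{k} w_i \left(d_i - \bar{d}\right)^2,
\qquad
I^2 = \max\!\left(0,\ \frac{Q - (k-1)}{Q}\right) \times 100\%
```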

  17. Exploring heterogeneity • In a meta-analysis, exploring heterogeneity of effect can be as important as, or even more important than, reporting averages • Exploring to what extent the variation can be explained by factors in the coding of studies (age, gender, duration of intervention, etc.) through regression • Forming sub-groups with greater homogeneity • Identifying the extent of the variation through further analysis

  18. Coding for exploration • Factors which may relate to variation • The intervention • E.g. duration, intensity, design, implementation • The sample • E.g. age, gender, ethnicity, particular needs • The research • E.g. design (RCT, quasi-experimental), quality, tests/outcomes, comparison group

  19. Pooling the results • In a meta-analysis, the effects found across studies are combined or ‘pooled’ to produce a weighted average effect of all the studies - the summary effect. • Each study is weighted according to some measure of its importance. • In most meta-analyses, this is achieved by giving a weight to each study in inverse proportion to the variance of its effect.
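A sketch of this weighting in symbols, where v_i is the sampling variance of study i's effect size d_i:

```latex
w_i = \frac{1}{v_i},
\qquad
\bar{d} = \frac{\sum_{i} w_i d_i}{\sum_{i} w_i},
\qquad
SE(\bar{d}) = \sqrt{\frac{1}{\sum_{i} w_i}}
```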

  20. Fixed effect model • The difference between the studies is due to chance • Observed study effect = Fixed effect + error

  21. Fixed effect model Each study is seen as being a sample from a distribution of studies, all estimating the same overall effect, but differing due to random error

  22. Random effects model Assumes there are two components of variation • Due to differences within the studies (e.g. different design, different populations, variations in the intervention, different implementation, etc.) • Due to sampling error

  23. Random effects model Each study is seen as representing the mean of a distribution of studies There is still a resultant overall effect size

  24. Which model? • The “random effects” model assumes a different underlying effect for each study. • This model gives relatively more weight to smaller studies, and produces wider confidence intervals, than the fixed effect model. • The use of this model is recommended if there is heterogeneity between study results. • Also recommended as it provides a more conservative estimate of the pooled effect.
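A minimal runnable sketch of both pooling models. It uses the DerSimonian-Laird estimator for the between-study variance τ²; the slides name no estimator, so this choice, and the five effect sizes and variances, are illustrative assumptions:

```python
import numpy as np

# Invented example data: five study effect sizes (standardised mean
# differences) and their sampling variances.
d = np.array([0.10, 0.35, 0.48, 0.22, 0.61])
v = np.array([0.04, 0.01, 0.02, 0.03, 0.05])

# Fixed effect model: inverse-variance weights.
w = 1 / v
d_fixed = np.sum(w * d) / np.sum(w)

# Heterogeneity: Cochran's Q and I-squared.
Q = np.sum(w * (d - d_fixed) ** 2)
df = len(d) - 1
I2 = max(0.0, (Q - df) / Q) * 100

# DerSimonian-Laird estimate of the between-study variance tau^2.
C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - df) / C)

# Random effects model: add tau^2 to each study's variance before weighting,
# which pulls the weights towards equality (more weight to small studies).
w_star = 1 / (v + tau2)
d_random = np.sum(w_star * d) / np.sum(w_star)
se = np.sqrt(1 / np.sum(w_star))

print(f"fixed effect:   {d_fixed:.3f}")
print(f"random effects: {d_random:.3f} "
      f"(95% CI {d_random - 1.96 * se:.3f} to {d_random + 1.96 * se:.3f})")
print(f"Q = {Q:.2f}, I^2 = {I2:.0f}%, tau^2 = {tau2:.3f}")
```

Note how the random-effects confidence interval is wider than the fixed-effect one whenever τ² > 0, which is the conservatism the slide refers to.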

  25. Exploring heterogeneity • Conceptual: are the studies sufficiently similar in terms of the intervention or treatment? • Statistical: greater variation than would be predicted

  26. Sensitivity analysis • Provides feedback about whether assumptions and decisions made during the meta-analysis have had a major effect on the results • Repeats the analysis using different assumptions (as a quality check to make sure results are consistent), e.g. • Including and excluding low quality studies • Excluding and including outliers • Undertaking fixed effect and random effects analyses
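One common mechanical form of sensitivity analysis is leave-one-out re-pooling. A small sketch (again with invented data) showing how much each single study moves the fixed-effect estimate:

```python
import numpy as np

# Invented example data, as in the earlier sketch.
d = np.array([0.10, 0.35, 0.48, 0.22, 0.61])
v = np.array([0.04, 0.01, 0.02, 0.03, 0.05])

def pooled(d, v):
    """Fixed-effect (inverse-variance weighted) pooled effect size."""
    w = 1 / v
    return np.sum(w * d) / np.sum(w)

print(f"all studies:     {pooled(d, v):.3f}")
for i in range(len(d)):
    keep = np.arange(len(d)) != i        # drop study i, re-pool the rest
    print(f"without study {i + 1}: {pooled(d[keep], v[keep]):.3f}")
```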

  27. Meta-regression • Examines the impact of moderator variables on the pooled effect size using regression-based techniques. • Estimates the extent to which covariates (e.g. age, intervention length) can explain between-study heterogeneity • If a covariate is not associated with heterogeneity, it will not be significant in the regression
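In symbols, a mixed-effects meta-regression with a single covariate x_i models each study's effect as a function of the covariate plus two error terms, one for sampling error and one for residual between-study variation (a sketch; a fixed-effect meta-regression simply drops the u_i term):

```latex
d_i = \beta_0 + \beta_1 x_i + u_i + \varepsilon_i,
\qquad
u_i \sim N(0, \tau^2),
\qquad
\varepsilon_i \sim N(0, v_i)
```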

  28. Interpreting review findings • The standardised mean difference represents the amount of a standard deviation by which the two groups differ • It can therefore be converted back to a more ‘user-friendly’ metric. For example • fruit and vegetable consumption was found to have increased by a standardised mean difference of 0.65 • If, at baseline, fruit and vegetable consumption was measured as 2.4 portions per day with a standard deviation of 0.9, we can say that the intervention increased consumption by 0.585 portions, or from 2.4 to nearly 3 portions per day
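The back-conversion is just a rescaling by the baseline standard deviation:

```latex
\Delta_{\text{raw}} = d \times SD_{\text{baseline}} = 0.65 \times 0.9 = 0.585 \ \text{portions per day}
```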

  29. Cumulative meta-analysis Meta-analysis can have powerful applications, e.g. detecting changes in paradigms. Nykänen H & Koricheva J (2004) Damage-induced changes in woody plants and their effects on insect herbivore performance: a meta-analysis. Oikos, 104, 247-268. They measured the responses of woody plants to natural or simulated damage. “Cumulative meta-analyses revealed dramatic temporal changes in the magnitude and direction of the plant and herbivore responses reported during the last two decades.” Not a change in plant behaviour: a change in human understanding of the plants (and thus a change in measurement practices).

  30. Comparative meta-analysis • Theory testing • Practical value

  31. Summary • Meta-analysis is only as good as the systematic review in which it is located • Systematic bias in search strategy can lead to invalid results • Sensitivity analyses are essential in order to explore the robustness of the findings • Heterogeneity must be examined • A statistical method for combining the quantitative results of primary studies • Cumulative • Comparative • Meta-analysis overcomes a lack of statistical power in small primary studies • Offers a more precise estimate of effect • Offers a way to explore systematic variation • Can settle controversies from apparently conflicting studies or generate new hypotheses

  32. References, further readings and information
Books and articles
Borenstein, M., Hedges, L.V., Higgins, J.P.T. & Rothstein, H.R. (2009) Introduction to Meta-Analysis (Statistics in Practice). Oxford: Wiley-Blackwell.
Chambers, E.A. (2004) An introduction to meta-analysis with articles from the Journal of Educational Research (1992-2002). Journal of Educational Research, 98, pp. 35-44.
Cooper, H.M. (1982) Scientific guidelines for conducting integrative research reviews. Review of Educational Research, 52, 291.
*Cooper, H.M. (2009) Research Synthesis and Meta-Analysis: A Step-by-Step Approach (4th edition). London: SAGE Publications.
Cronbach, L.J., Ambron, S.R., Dornbusch, S.M., Hess, R.O., Hornik, R.C., Phillips, D.C., Walker, D.F. & Weiner, S.S. (1980) Toward Reform of Program Evaluation: Aims, Methods, and Institutional Arrangements. San Francisco, CA: Jossey-Bass.
Glass, G.V. (2000) Meta-analysis at 25. Available at: http://glass.ed.asu.edu/gene/papers/meta25.html (accessed 9/9/08).
Lipsey, M.W. & Wilson, D.B. (2001) Practical Meta-Analysis. Applied Social Research Methods Series (Vol. 49). Thousand Oaks, CA: SAGE Publications.
Torgerson, C. (2003) Systematic Reviews and Meta-Analysis (Continuum Research Methods). London: Continuum Press.
Websites
What is an effect size?, by Rob Coe: http://www.cemcentre.org/evidence-based-education/effect-size-resources
The meta-analysis of research studies: http://echo.edres.org:8080/meta/
The Meta-Analysis Unit, University of Murcia: http://www.um.es/metaanalysis/
The PsychWiki: Meta-analysis: http://www.psychwiki.com/wiki/Meta-analysis
Meta-Analysis in Educational Research: http://www.dur.ac.uk/education/meta-ed/

  33. Acknowledgements • This presentation is an outcome of the work of the ESRC-funded Researcher Development Initiative “Training in the Quantitative Synthesis of Intervention Research Findings in Education and Social Sciences”, which ran from 2008-2011. • The training was designed by Steve Higgins and Rob Coe (Durham University), Carole Torgerson (Birmingham University) and Mark Newman and James Thomas (Institute of Education, London University). • The team acknowledges the support of Mark Lipsey, David Wilson and Herb Marsh in the preparation of some of the materials, particularly Lipsey and Wilson’s (2001) “Practical Meta-analysis” and David Wilson’s slides at: http://mason.gmu.edu/~dwilsonb/ma.html (accessed 9/3/11). • The materials are offered to the wider academic and educational community under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported licence. • You should only use the materials for educational, not-for-profit use, and you should acknowledge the source in any use.
