
Undertaking a Quantitative Synthesis


Presentation Transcript


  1. Undertaking a Quantitative Synthesis Steve Higgins, Durham University Robert Coe, Durham University Mark Newman, EPPI Centre, IoE, London University James Thomas, EPPI Centre, IoE, London University Carole Torgerson, IEE, York University

  2. Acknowledgements • This presentation is an outcome of the work of the ESRC-funded Researcher Development Initiative: “Training in the Quantitative Synthesis of Intervention Research Findings in Education and Social Sciences”, which ran from 2008 to 2011. • The training was designed by Steve Higgins and Rob Coe (Durham University), Carole Torgerson (Birmingham University) and Mark Newman and James Thomas, Institute of Education, London University. • The team acknowledges the support of Mark Lipsey, David Wilson and Herb Marsh in the preparation of some of the materials, particularly Lipsey and Wilson’s (2001) “Practical Meta-analysis” and David Wilson’s slides at: http://mason.gmu.edu/~dwilsonb/ma.html (accessed 9/3/11). • The materials are offered to the wider academic and educational community under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported licence. • You should only use the materials for educational, not-for-profit use and you should acknowledge the source in any use.

  3. Background • Training funded by the ESRC’s “Researcher Development Initiative” • Collaboration between the Universities of Durham, York and the Institute of Education, University of London

  4. National initiative • Levels 1 and 2 • Round 1: Durham, Edinburgh, London (2008-9) • Round 2: Belfast, York, Cardiff (2009-10) • Level 3 • Mark Lipsey, Edinburgh, 16th March 2010 • Larry Hedges, London, 7th June 2010 • Workshop at RCT Conference, York, September • Doctoral support • British Educational Research Association (BERA) student conferences • Website and resource materials

  5. Overall aims • To support understanding of meta-analysis of intervention research findings in education and social sciences more broadly; • To develop understanding of reviewing quantitative research literature; • To describe the techniques and principles of meta-analysis involved to support understanding of its benefits and limitations; • To provide references and examples to support further work.

  6. Learning outcomes for Level 1 • To understand the role of research synthesis in identifying messages about ‘what works’ from intervention research findings • To understand the concept of effect size as a metric for comparing intervention research findings • To be able to read and understand a forest plot of the results • To be able to read a meta-analysis of intervention research findings, interpret the results, and draw conclusions.

  7. Learning outcomes for level 2 • To be able to identify and select relevant quantitative data from a published report which can be used to calculate effect sizes for meta-analysis; • To be able to calculate an effect size from commonly found continuous (and clustered) data; • To recognise when it is appropriate to combine individual effect sizes; • To identify possible solutions to cope with heterogeneity; • To be able to display and interpret the results of a meta-analysis.

  8. Overview of the day 10.00 Arrival/ Registration/ Coffee 10.15 Introduction and overview Identifying data for synthesis Calculating effect sizes 12.30 Lunch 1.30 Combining effect sizes Assessing and coping with heterogeneity 3.00 Break 3.30 Overview of software for meta-analysis Summary, discussion and evaluation 4.00 Finish

  9. Introductions • Introduce yourself to those next to you • What is your interest in meta-analysis? • What experience have you in this area?

  10. Meta-analysis as synthesis • Quantitative data from • Experimental research studies • Correlational research studies • Based on a systematic review • Methodological assumptions from quantitative approaches (both epistemological and mathematical) • Hypothesis testing & exploration

  11. Key issues about reviews and evidence • Applicability of the evidence to the question • Breadth • Scope • Scale • Robustness of the evidence • Research quality

  12. Session key assumption • We have found a group of studies that meet our inclusion criteria; that is, they evaluate the effectiveness of a similar intervention and measure outcome(s) • How do we combine the results?

  13. Stages of synthesis • What is the question? Theories and assumptions in the review question • What data are available? Addressing the review question according to the conceptual framework • What are the patterns in the data? Including study, intervention, outcome and participant characteristics • How does integrating the data answer the question? Addressing the question (including theory testing or development); can the conceptual framework be developed? What new research questions emerge? • How robust is the synthesis? For quality, sensitivity, coherence & relevance • What is the result? • What does the result mean? (conclusions) Cooper, H.M. (1982) Scientific Guidelines for Conducting Integrative Research Reviews. Review of Educational Research, 52, 291. See also: Popay et al. (2006) Guidance on the Conduct of Narrative Synthesis in Systematic Reviews. Lancaster: Institute for Health Research, Lancaster University. http://www.lancs.ac.uk/fass/projects/nssr/research.htm

  14. Calculating effect sizes • The difference between the two means, expressed as a proportion of the standard deviation: ES = (Me – Mc) / SD, where Me is the intervention (experimental) group mean and Mc the control group mean • Cohen's d • Glass's Δ • Hedges' g
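As a minimal sketch (not part of the original training materials), the basic calculation in Python, pooling the two groups' standard deviations as in Cohen's d; the function names and values are illustrative only:

```python
import math

def pooled_sd(sd_e, n_e, sd_c, n_c):
    """Pooled standard deviation of the two groups (as used in Cohen's d)."""
    return math.sqrt(((n_e - 1) * sd_e ** 2 + (n_c - 1) * sd_c ** 2)
                     / (n_e + n_c - 2))

def effect_size(mean_e, mean_c, sd):
    """Standardised mean difference: (Me - Mc) / SD."""
    return (mean_e - mean_c) / sd

# Illustrative values only:
sd = pooled_sd(sd_e=4.1, n_e=30, sd_c=3.9, n_c=32)
print(round(effect_size(mean_e=28.0, mean_c=26.0, sd=sd), 2))
```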

  15. Practical 1 Calculating effect sizes based on standardised mean differences • Basic calculation • Extracting data and using a web-based tool • Investigating the effect size • Identifying data from a paper • Converting other data

  16. 1a) Calculating an effect size The intervention group’s average score was 28.5, the control group’s was 26.5, the pooled standard deviation was 4. What was the effect size?
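(For checking: ES = (28.5 – 26.5) / 4 = 2 / 4 = 0.5.)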

  17. 1b) ‘Early Steps’ Log in to: http://eppi.ioe.ac.uk/EPPIReviewer4/EppiReviewer4TestPage.html • Username: meta • Password: analysis

  18. 1. To be able to identify and select relevant quantitative data from a published report that can be used to calculate effect sizes for meta-analysis Which effect? Which one is appropriate for your meta-analysis? Greaney, K., Tunmer, W., & Chapman, J. (1997). The effects of rime-based orthographic analogy training on the word recognition skills of children with reading disability. Journal of Educational Psychology 89, 645-651.

  19. 2. To be able to calculate an effect size from commonly found continuous (and clustered) data 1c) Skim through the Greaney et al. (1997) paper. Imagine you are conducting a systematic review of the impact of interventions on reading. Work out the effect size which you think best shows whether the rime-based training is effective.

  20. Calculating Effect Sizes Methods fall into three groups (algebraically equivalent, approximate, and estimated): • Direct calculation based on means and standard deviations • Algebraically equivalent formulas (t-test, SE) • Exact probability value for a t-test • Approximations based on continuous data (correlation coefficient) • Estimates of the mean difference (adjusted means, regression B weight, gain score means) • Estimates of the pooled standard deviation (gain score standard deviation, one-way ANOVA with 3 or more groups, ANCOVA) • Approximations based on dichotomous data

  21. Using other data • Converting Standard Error to Standard Deviation SD = SE × √n So if the sample size (n) is 64 and the SE is 0.2, what is the SD?
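(For checking: SD = 0.2 × √64 = 0.2 × 8 = 1.6.)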

  22. Conversion • Key issue – is the source data comparable? • Lipsey and Wilson (2001) formulae • Meta-analysis software • Spreadsheet on data stick • Open the spreadsheet ES_converter.xls

  23. Lunch

  24. Recap of outcomes • Identify and select relevant quantitative data to calculate effect sizes for meta-analysis • Calculate an effect size from commonly found continuous (and clustered) data • Recognise when it is appropriate to combine individual effect sizes • Identify possible solutions to cope with heterogeneity • Display and interpret the results of a meta-analysis

  25. Running and exploring a meta-analysis • Practical 2a: Running a meta-analysis

  26. Identifying and exploring heterogeneity • Key issues • Statistical • Educational • Role of quality • Lumpers and splitters

  27. Assessing between study heterogeneity • When effect sizes differ no more than would be expected from chance error, the effect size estimates are considered to be homogeneous (a single, unique ‘true’ effect) • When the variability in effect sizes is greater than expected by chance, the effects are considered to be heterogeneous • The presence of heterogeneity affects the process of the meta-analysis • What does this mean?

  28. Heterogeneity Heterogeneity chi-squared = 41.74 (df = 11), p < 0.0001; Q statistic 46.3, p < 0.001; I² = 76.24%

  29. Sub-divided by learner characteristics

  30. Sub-divided by intention to teach

  31. Statistical methods to identify heterogeneity • Visual inspection • Presence • Q statistic (Cooper & Hedges, 1994) • Significance level (p-value) • χ² • τ² • Extent • I² (Higgins & Thompson, 2002) • If it exceeds 50%, it may be advisable not to combine the studies • All have low power with a small number of studies (Huedo-Medina et al. 2006)
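As an illustrative sketch (not the workshop's own software), Q and I² can be computed directly from effect sizes and their variances; the data below are made up:

```python
from scipy import stats

def q_and_i2(effects, variances):
    """Cochran's Q and the I-squared statistic of Higgins & Thompson (2002)."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    p = stats.chi2.sf(q, df)           # Q is chi-squared distributed under homogeneity
    i2 = max(0.0, (q - df) / q) * 100  # percentage of variation beyond chance
    return q, p, i2

# Made-up effect sizes and variances, for illustration only:
print(q_and_i2([0.2, 0.5, 0.8], [0.02, 0.03, 0.05]))
```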

  32. To recognise when it is appropriate to combine individual effect sizes (also solutions to heterogeneity – the ability to create a homogeneous set of effect sizes?) • Educational heterogeneity • What educational features might explain variation? • Pupil age, sex, attainment • Teacher • Intervention • Interpretation

  33. Exploring heterogeneity • Practical task 2b: Heterogeneity

  34. ‘Pooling’ the results • In a meta-analysis, the effects found across studies are combined or ‘pooled’ to produce a weighted average effect of all the studies - the summary effect. • Each study is weighted according to some measure of its importance. • In most meta-analyses, this is achieved by giving a weight to each study in inverse proportion to the variance of its effect.
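A minimal sketch of this pooling step (illustrative only; it assumes each study's effect size and variance are already known):

```python
import math

def pool_fixed(effects, variances):
    """Inverse-variance weighted average: each study's weight is 1 / variance."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))  # standard error of the summary effect
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)
    return pooled, se, ci

# Illustrative effect sizes and variances:
print(pool_fixed([0.30, 0.55, 0.42], [0.040, 0.021, 0.033]))
```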

  35. Which model? • ‘Fixed effect’ and ‘random effects’ models are based on different statistical assumptions • The choice of model is determined by how much heterogeneity there is • Fixed effect if the studies are relatively homogeneous • Random effects if there is significant heterogeneity between study results

  36. Fixed effect model • The difference between the studies is due to chance • Observed study effect = Fixed effect + error Key assumption: each study is from a distribution of studies which all estimate the same overall effect, but differ due to random error

  37. Inverse Variance Weighting • Problem • Sample sizes in studies vary • Larger studies are assumed to provide a better estimate of effect so should be more important in the synthesis and carry more “weight” than smaller studies • Solution • Simple approach: weight each ES by its sample size. • Better approach: weight by the inverse variance.

  38. Inverse variance weight: how is it calculated? • The standard error (SE) is directly related to ES precision • SE is used to create confidence intervals • The smaller the SE, the more precise the ES • Hedges showed that the optimal weight for each study in a meta-analysis is the inverse of its variance: w = 1 / SE²

  39. Inverse Variance Weight formula For standardized mean differences, the variance is v = (n1 + n2) / (n1 × n2) + ES² / (2(n1 + n2)), so the weight is w = 1 / v
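A sketch of this calculation (the group sizes and effect size are made up for illustration):

```python
def smd_weight(es, n1, n2):
    """Inverse variance weight for a standardised mean difference."""
    variance = (n1 + n2) / (n1 * n2) + es ** 2 / (2 * (n1 + n2))
    return 1.0 / variance

print(smd_weight(es=0.5, n1=40, n2=40))  # larger studies receive larger weights
```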

  40. Random effects model Assumes there are two components of variation • Due to differences between the studies (e.g. different design, different populations, variations in the intervention, different implementation, etc.) • Due to sampling error

  41. Random effects model Each study is seen as representing the mean of a distribution of studies There is still a resultant overall effect size Key assumption: each study estimates its own ‘true’ effect, and these true effects are drawn from a distribution around the overall mean effect, so studies differ by more than random error alone

  42. Fixed and random effects models Fixed effects model - weights each study by the inverse of the sampling variance: w = 1 / v Random effects model - weights each study by the inverse of the sampling variance plus the variability across the population effects: w = 1 / (v + τ²), where τ² is the random effects variance component
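One common way to estimate τ² is the DerSimonian-Laird method (an assumption here; the workshop software may use a different estimator). A minimal sketch:

```python
def dl_tau2(effects, variances):
    """DerSimonian-Laird estimate of the between-study variance tau-squared."""
    w = [1.0 / v for v in variances]
    pooled = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - pooled) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    return max(0.0, (q - df) / c)  # truncated at zero

def re_weights(variances, tau2):
    """Random effects weight: w_i = 1 / (v_i + tau-squared)."""
    return [1.0 / (v + tau2) for v in variances]

# Illustrative data: heterogeneous effects inflate tau-squared,
# which flattens the weights relative to the fixed effect model.
tau2 = dl_tau2([0.1, 0.5, 0.9], [0.02, 0.03, 0.04])
print(tau2, re_weights([0.02, 0.03, 0.04], tau2))
```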

  43. Combining effect sizes: running a meta-analysis • Practical task 2c: ‘Models’ • Random and fixed effects models - focus on consequences – interpretation • Sensitivity analysis – subgroup analysis – as solutions to educational heterogeneity

  44. Impact of using Fixed Effect or Random Effects on a meta-analysis • Impact on significance levels and confidence intervals • Confidence intervals will be wider with the random effects model • A significant pooled ES under a fixed effect model may not be significant with the random effects model • Random effects models are therefore considered more conservative

  45. What is publication bias? • Publication bias occurs when there are systematic differences in conclusions between studies that are unpublished compared with those that are published. • Usually unpublished data are more likely to be ‘negative’ about an intervention than studies that are published.

  46. How can we detect publication bias? • One simple approach is through the use of a ‘funnel plot’ • This is a graph where all the effect sizes are plotted on the x-axis, while the size of the study (N) or the standard error (SE) is on the y-axis • If there is NO publication bias, the plotted points will form an ‘inverted funnel’
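A minimal plotting sketch (made-up data; it assumes matplotlib is available), with the SE axis inverted so the most precise studies sit at the top of the funnel:

```python
import matplotlib.pyplot as plt

# Made-up effect sizes and standard errors, for illustration only:
effect_sizes = [0.10, 0.25, 0.30, 0.45, 0.50, 0.60, 0.90]
standard_errors = [0.05, 0.20, 0.10, 0.30, 0.15, 0.35, 0.40]

plt.scatter(effect_sizes, standard_errors)
plt.gca().invert_yaxis()  # smallest SE (largest studies) at the top
plt.xlabel("Effect size")
plt.ylabel("Standard error")
plt.title("Funnel plot (asymmetry suggests publication bias)")
plt.show()
```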

  47. Review of Adult Literacy Teaching Torgerson, Porthouse & Brooks. JRR, 2003

  48. Review of Phonics Instruction

  49. Funnel plots Assumptions: • larger studies are more likely to be accurate • smaller studies will be more widely scattered • publication bias will lead to asymmetry
