
Meta-analysis: pooling study results

Presentation Transcript


  1. Meta-analysis: pooling study results Simon Thornley

  2. Objective • Understand the philosophy of meta-analysis and its contribution to epidemiology and science. • Understand the limitations of meta-analysis

  3. Introduction • Systematic, quantitative integration of the results of several independent studies • Distinct from a narrative review by an "expert" • A synthesis of published information • Usually considered appropriate only for RCTs • Still controversial even in this context • A Google search on "meta-analysis" returns about 8 million hits!

  4. Criticism • "Statistical alchemy" for the 21st century • "The intellectual allure of making mathematical models and aggregating collections of studies has been used as an escape from the more fundamental scientific challenges" (Feinstein)

  5. Purposes of meta-analysis • Addresses the inefficiency of traditional narrative reviews • Allows researchers to keep abreast of accumulating evidence • Resolution of uncertainty when research findings disagree? • Increases statistical power and enhances the precision of effect estimates, especially for small effects • Allows exploratory (subgroup) analysis

  6. Inadequate sample size? (dealing with type-2 error) • Single trials are often too small to detect moderate effects (low power, so a high chance of a type-2 error, i.e. a false negative) • Investigators are often over-optimistic about the size of treatment effects when calculating sample size • Meta-analysis does not deal with other threats to study validity (bias, measurement error) and may in fact amplify them • e.g. CVD death vs. total mortality

  7. Statistical test result
     • H0 true: accepting H0 is correct; rejecting H0 is a type-1 error
     • H0 false: accepting H0 is a type-2 error; rejecting H0 is correct
     • Probability of a type-1 error = alpha (usually fixed, say 0.05)
     • Probability of a type-2 error = beta = 1 - power
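As a hedged illustration of the power and type-2 error ideas above, the sketch below uses a normal approximation for a two-arm comparison of proportions. The event rates and sample sizes are hypothetical; the point is only that pooling the sample size of several small trials raises power for the same alpha.

```python
# A minimal sketch (not from the slides): approximate power of a two-arm trial
# comparing event proportions, to show how pooling reduces type-2 error.
from math import sqrt
from scipy.stats import norm

def approx_power(p_control, p_treated, n_per_arm, alpha=0.05):
    """Normal-approximation power for a two-sided test of two proportions."""
    z_alpha = norm.ppf(1 - alpha / 2)                      # critical value for alpha
    se = sqrt(p_control * (1 - p_control) / n_per_arm +
              p_treated * (1 - p_treated) / n_per_arm)     # SE of the risk difference
    return norm.cdf(abs(p_control - p_treated) / se - z_alpha)

# One small trial: low power, high chance of a false negative (type-2 error).
print(approx_power(0.10, 0.08, n_per_arm=500))    # roughly 0.2
# Ten such trials pooled: the same moderate effect becomes detectable.
print(approx_power(0.10, 0.08, n_per_arm=5000))   # roughly 0.9
```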

  8. Recap from the random error lecture • Average odds ratio is 21? • Consistency?

  9. Which studies? • Need a defined question; state MeSH terms • Reproducible • Exhaustive search • Unpublished and published studies • A variety of databases

  10. Typical summary outcome measures
     Continuous: • Difference in means • Standardized difference in means • Survival measures
     Binary: • Relative risk • Odds ratio • Risk difference • NNT [= 1/RD] • Incidence rate ratios (person-time data)
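As a small illustrative sketch (not from the slides), the binary measures above can all be computed from a single trial's 2x2 table; the counts below are hypothetical.

```python
# Binary summary measures from hypothetical 2x2 counts.
def binary_measures(events_t, n_t, events_c, n_c):
    """Relative risk, odds ratio, risk difference and NNT from 2x2 counts."""
    risk_t, risk_c = events_t / n_t, events_c / n_c
    rr = risk_t / risk_c                                                  # relative risk
    or_ = (events_t / (n_t - events_t)) / (events_c / (n_c - events_c))  # odds ratio
    rd = risk_t - risk_c                                                  # risk difference
    nnt = 1 / abs(rd)                                                     # NNT = 1/RD
    return {"RR": rr, "OR": or_, "RD": rd, "NNT": nnt}

# Hypothetical trial: 30/1000 events on treatment vs 50/1000 on control.
print(binary_measures(30, 1000, 50, 1000))
# RR = 0.6, OR ~ 0.59, RD = -0.02, NNT = 50
```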

  11. Methods of analysis
     Fixed effect: • Assumes one underlying true effect for each study; differences between studies are due to random error alone • Mantel-Haenszel method: treat each trial as a "stratum" and take a weighted average of effects • O-E (Peto) method: for a binary outcome (e.g. death), Oi = observed number of deaths on treatment in trial i and Ei = expected number of deaths assuming no treatment effect; the average of Oi - Ei over all trials is examined
     Random effect: • Assumes a distribution of true effects; the aim is to estimate the mean of that distribution • Greater heterogeneity --> greater between-study variation • Gives greater weight to small studies than the fixed-effect method • More conservative (wider confidence interval around the effect estimate, compared to the fixed-effect method)
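The slide names the Mantel-Haenszel and Peto (O-E) methods; as a minimal sketch of the fixed- versus random-effect distinction only, the code below instead uses the generic inverse-variance approach with a DerSimonian-Laird estimate of the between-study variance, applied to hypothetical study data.

```python
# Fixed- vs random-effect pooling on the log odds ratio scale (inverse-variance
# weighting with a DerSimonian-Laird tau^2); study counts are hypothetical.
import numpy as np

# Per-study 2x2 counts: (events_treated, n_treated, events_control, n_control)
studies = [(15, 100, 25, 100), (8, 60, 12, 60), (40, 300, 55, 300)]

log_or, var = [], []
for a, n1, c, n2 in studies:
    b, d = n1 - a, n2 - c
    log_or.append(np.log((a * d) / (b * c)))             # log odds ratio
    var.append(1/a + 1/b + 1/c + 1/d)                    # Woolf variance of the log OR
log_or, var = np.array(log_or), np.array(var)

# Fixed effect: weights are inverse variances (one true effect assumed).
w_fixed = 1 / var
fixed = np.sum(w_fixed * log_or) / np.sum(w_fixed)

# Random effects: add the between-study variance tau^2, which flattens the
# weights and so gives relatively more weight to small studies.
Q = np.sum(w_fixed * (log_or - fixed) ** 2)              # Cochran's Q
df = len(studies) - 1
tau2 = max(0.0, (Q - df) / (np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)))
w_random = 1 / (var + tau2)
random = np.sum(w_random * log_or) / np.sum(w_random)

print("pooled OR, fixed effect :", np.exp(fixed))
print("pooled OR, random effect:", np.exp(random))
print("CI width ratio (random/fixed):",
      np.sqrt(np.sum(w_fixed)) / np.sqrt(np.sum(w_random)))  # >= 1: wider interval
```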

  12. Dietary fat and cholesterol

  13. Reduced or modified dietary fat and all-cause mortality

  14. Publication bias

  15. When meta-analysis goes bad… • In CVD drug research, CVD outcomes are often favoured over total mortality • Which would you prefer?

  16. Publication bias: other methods • Ioannidis JPA, Trikalinos TA. An exploratory test for an excess of significant findings. Clin. Trials 2007;4(3):245-53. • Calculate expected number of positive studies, given: • Sample size of individual studies • Number of events in controls • Summary effect (assumed true)
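A rough sketch of the idea behind the excess-significance test: the expected number of "positive" (statistically significant) studies is the sum of each study's power to detect the assumed true summary effect, which is then compared with the number actually observed. The power formula and study data below are simplified and hypothetical, and the final comparison is a binomial shortcut rather than the test as published.

```python
# Simplified illustration of an excess-significance check (hypothetical data).
from scipy.stats import norm, binom

def study_power(true_log_or, se_log_or, alpha=0.05):
    """Approximate power of one study to detect true_log_or at two-sided alpha."""
    z_alpha = norm.ppf(1 - alpha / 2)
    return norm.cdf(abs(true_log_or) / se_log_or - z_alpha)

# Hypothetical meta-analysis: standard errors of the log OR in each study,
# an assumed true summary log OR, and the number of significant studies seen.
se_per_study = [0.45, 0.50, 0.30, 0.55, 0.40]
true_effect = 0.40
observed_positive = 4

expected_positive = sum(study_power(true_effect, se) for se in se_per_study)
# Probability of seeing at least this many positives if each study's chance of
# being positive were (roughly) its power.
mean_power = expected_positive / len(se_per_study)
p_excess = 1 - binom.cdf(observed_positive - 1, len(se_per_study), mean_power)

print(f"expected significant studies: {expected_positive:.1f}, observed: {observed_positive}")
print(f"P(at least {observed_positive} positives | power): {p_excess:.3f}")
```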

  17. Statin meta-analysis

  18. Problems • Combining heterogeneous studies (apples and oranges) • Combining good and bad studies, i.e. study quality (good and bad apples) • Publication bias (tasty apples only) • The "flat Earth" criticism: reductionism (Braeburns only) • Combining data: individual vs. summary data (stewed apples have a different character to raw) • Application to randomised studies only? • Type-2 error is only one problem with epidemiological studies
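On the "apples and oranges" point, a brief sketch of one common way heterogeneity is quantified before pooling: Cochran's Q and the I^2 statistic. The effect estimates and variances below are hypothetical.

```python
# Heterogeneity check: Cochran's Q and I^2 from per-study estimates (hypothetical).
import numpy as np

def i_squared(effects, variances):
    """I^2: proportion of total variation due to between-study heterogeneity."""
    effects = np.asarray(effects)
    w = 1 / np.asarray(variances)
    pooled = np.sum(w * effects) / np.sum(w)            # fixed-effect pooled estimate
    q = np.sum(w * (effects - pooled) ** 2)             # Cochran's Q
    df = len(effects) - 1
    return max(0.0, (q - df) / q)                       # 0 = homogeneous, near 1 = very heterogeneous

# Hypothetical log odds ratios and their variances; I^2 comes out around 0.7.
print(i_squared([-0.7, -0.1, -0.5, 0.2], [0.04, 0.05, 0.03, 0.06]))
```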

  19. Meta-analysis in observational studies • Meta-analysis is often applied to observational studies, about as often as to RCTs (Egger et al.) • … not without controversy … • Confounding and bias are unlikely to "cancel out" • Publication bias and "research initiation bias" (i.e. studies are only done when an association is suspected) • Different ways of reporting/analysing results (e.g. different outcome measures, confounders, models, exposure levels)

  20. Summary • Meta-analyses are increasingly used • Logical only for RCTs? • Summarise the medical literature • Reduce type-2 error by increasing sample size • Do not deal with other types of epidemiological error (confounding, measurement error) • Prone to a unique type of error (publication bias), which can be difficult to detect
