
Quantitative Synthesis I

This training module provides an overview of quantitative synthesis methods, including meta-analysis, for conducting systematic reviews. Learn how to combine data, recognize common metrics, and understand statistical heterogeneity.

Presentation Transcript


  1. Quantitative Synthesis I Prepared for: The Agency for Healthcare Research and Quality (AHRQ) Training Modules for Systematic Reviews Methods Guide www.ahrq.gov

  2. Systematic Review Process Overview

  3. Learning Objectives • To list the basic principles of combining data • To recognize common metrics for meta-analysis • To describe the role of weights to combine results across studies • To distinguish between clinical and methodological diversity and statistical heterogeneity • To define fixed effect model and random effects model

  4. Synonyms for Meta-Analysis • Quantitative overview/synthesis • Pooling (less precise; suggests that data from multiple sources are simply lumped together) • Combining (preferred by some; suggests applying statistical procedures to data)

  5. Reasons To Conduct Meta-Analyses • Improve the power to detect a small difference if the individual studies are small • Improve the precision of the effect measure • Compare the efficacy of alternative interventions and assess consistency of effects across study and patient characteristics • Gain insights into statistical heterogeneity • Help to understand controversy arising from conflicting studies or generate new hypotheses to explain these conflicts • Force rigorous assessment of the data

  6. Commonly Encountered Comparative Effect Measures

  7. Principles of Combining Data for Basic Meta-Analyses • For each analysis, one study should contribute only one treatment effect. • The effect estimate may be for a single outcome or a composite. • The outcome being combined should be the same — or similar, based on clinical plausibility — across studies. • Know the research question. The question drives study selection, data synthesis, and interpretation of the results.

  8. Things To Know About the Data Before Combining Them • Biological and clinical plausibility • Scale of effect measure • Studies with small numbers of events do not give reliable estimates

  9. True Associations May Disappear When Data Are Combined Inappropriately

  10. An Association May Be Seen When There Is None

  11. Changes in the Same Scale May Have Different Meanings • Both A–B and C–D involve a change of one absolute unit • A–B change (1 to 2) represents a 100% relative change • C–D change (7 to 8) represents only a 14% relative change
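
A minimal sketch of this arithmetic in Python, using the slide's own values (1 to 2 and 7 to 8):

    def relative_change(old, new):
        """Relative change as a percentage of the starting value."""
        return 100 * (new - old) / old

    print(relative_change(1, 2))  # A-B: 100.0 percent
    print(relative_change(7, 8))  # C-D: about 14.3 percent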

  12. Effect of the Choice of Metric on Meta-analysis

  13. Effect of Small Changes on the Estimate

  14. Binary Outcomes • Outcomes that have two states (e.g., dead or alive, success or failure) • The most common type of outcome reported in clinical trials • 2x2 tables commonly used to report binary outcomes

  15. A Sample 2x2 Table: Binary outcomes data to be extracted from studies. ISIS-2 Collaborative Group. Lancet 1988;2:349-60.

  16. Treatment Effect Metrics That Can Be Calculated From a 2x2 Table (a = treated with event, b = treated without event, c = controls with event, d = controls without event) • OR = (a / b) / (c / d) • RR = [a / (a + b)] / [c / (c + d)] • RD = a / (a + b) – c / (c + d)
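
A minimal Python sketch of these formulas; the cell labels follow the layout above, and the usage example reuses the 20-of-100 versus 40-of-100 deaths from slide 20:

    def two_by_two_metrics(a, b, c, d):
        """a/b = events/non-events in the treatment arm; c/d = the same in the control arm."""
        risk_t = a / (a + b)   # event rate, treatment
        risk_c = c / (c + d)   # event rate, control
        return {
            "odds_ratio": (a / b) / (c / d),
            "risk_ratio": risk_t / risk_c,
            "risk_difference": risk_t - risk_c,
        }

    m = two_by_two_metrics(20, 80, 40, 60)
    print({k: round(v, 3) for k, v in m.items()})
    # odds ratio 0.375, risk ratio 0.5, risk difference -0.2 (cf. slide 20)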

  17. Some Characteristics and Uses of the Risk Difference • Value ranges from -1 to +1 • Magnitude of effect is directly interpretable • Has the same meaning for the complementary outcome (e.g., 5% more people dying is 5% fewer living) • Across studies in many settings, tends to be more heterogeneous than relative measures • Its reciprocal is the number needed to treat (NNT), which may be clinically useful • If heterogeneity is present, a single NNT derived from the overall risk difference could be misleading

  18. Some Characteristics and Uses of the Odds Ratio • Value ranges from 0 (i.e., 1/∞) to +∞ • Has desirable statistical properties; better normality approximation in log scale than the risk ratio • Symmetrical meaning for the complementary outcome (the odds ratio of dying is the reciprocal [inverse] of the odds ratio of living) • A ratio of two odds is not intuitive to interpret • Often used to approximate the risk ratio (but gives inflated values at high event rates)

  19. Some Characteristics and Uses of the Risk Ratio • Value ranges from 0 to +∞ • Like its derivative, the relative risk reduction, it is easy to understand and is preferred by clinicians • Example: a risk ratio of 0.75 is a 25% relative reduction of the risk • Requires a baseline rate for proper interpretation • Example: an identical risk ratio from a study with a low event rate and another study with a higher event rate may have very different clinical and public health implications • Asymmetric meaning for the complementary outcome • Example: the risk ratio of dying is not the inverse of the risk ratio of living

  20. When the Complementary Outcome of the Risk Ratio Is Asymmetric (example: 20 of 100 treated patients and 40 of 100 control patients die) • Odds Ratio (Dead) = (20 x 60) / (40 x 80) = 3/8 = 0.375 • Odds Ratio (Alive) = (80 x 40) / (20 x 60) = 8/3 = 2.67 • Risk Ratio (Dead) = (20/100) / (40/100) = 1/2 = 0.5 • Risk Ratio (Alive) = (80/100) / (60/100) = 4/3 = 1.33

  21. Calculation of Treatment Effects in the Second International Study of Infarct Survival (ISIS-2) • Treatment-Group Effect Rate = 791 / 8592 = 0.0921 • Control-Group Effect Rate = 1029 / 8595 = 0.1197 • Risk Ratio = 0.0921 / 0.1197 = 0.77 • Odds Ratio = (791 x 7566) / (1029 x 7801) = 0.75 • Risk Difference = 0.0921 – 0.1197 = -0.028 ISIS-2 Collaborative Group. Lancet 1988;2:349-60.
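
A few lines of Python reproduce these figures directly from the four cells of the ISIS-2 2x2 table:

    # ISIS-2: 791/8592 events (treatment) vs. 1029/8595 events (control)
    a, n1 = 791, 8592
    c, n2 = 1029, 8595
    b, d = n1 - a, n2 - c   # non-events in each group

    risk_t, risk_c = a / n1, c / n2
    print(round(risk_t / risk_c, 2))      # risk ratio      -> 0.77
    print(round((a * d) / (b * c), 2))    # odds ratio      -> 0.75
    print(round(risk_t - risk_c, 3))      # risk difference -> -0.028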

  22. Treatment Effects Estimates in Different Metrics:Second International Study of Infarct Survival (ISIS-2) ISIS-2 Collaborative Group. Lancet 1988;2:349-60.

  23. Example: Meta-Analysis Data Set Beta-Blockers after Myocardial Infarction - Secondary Prevention

                              Experiment      Control       Odds     95% CI
   N  Study          Year     Obs    Tot     Obs    Tot     Ratio    Low    High
  ==  ============   ====    ====   ====    ====   ====     =====   =====   =====
   1  Reynolds       1972       3     38       3     39      1.03    0.19    5.45
   2  Wilhelmsson    1974       7    114      14    116      0.48    0.18    1.23
   3  Ahlmark        1974       5     69      11     93      0.58    0.19    1.76
   4  Multctr. Int   1977     102   1533     127   1520      0.78    0.60    1.03
   5  Baber          1980      28    355      27    365      1.07    0.62    1.86
   6  Rehnqvist      1980       4     59       6     52      0.56    0.15    2.10
   7  Norweg.Multr   1981      98    945     152    939      0.60    0.46    0.79
   8  Taylor         1982      60    632      48    471      0.92    0.62    1.38
   9  BHAT           1982     138   1916     188   1921      0.72    0.57    0.90
  10  Julian         1982      64    873      52    583      0.81    0.55    1.18
  11  Hansteen       1982      25    278      37    282      0.65    0.38    1.12
  12  Manger Cats    1983       9    291      16    293      0.55    0.24    1.27
  13  Rehnqvist      1983      25    154      31    147      0.73    0.40    1.30
  14  ASPS           1983      45    263      47    266      0.96    0.61    1.51
  15  EIS            1984      57    858      45    883      1.33    0.89    1.98
  16  LITRG          1987      86   1195      93   1200      0.92    0.68    1.25
  17  Herlitz        1988     169    698     179    697      0.92    0.73    1.18
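
As a rough check on the per-study columns, the sketch below recomputes the odds ratio and 95% CI for the first row (Reynolds 1972) from its counts, assuming the usual log odds ratio (Woolf) confidence interval; the other rows can be verified the same way:

    import math

    # Reynolds 1972: 3/38 events (experiment) vs. 3/39 events (control)
    a, n1, c, n2 = 3, 38, 3, 39
    b, d = n1 - a, n2 - c

    log_or = math.log((a * d) / (b * c))
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of the log odds ratio
    low = math.exp(log_or - 1.96 * se)
    high = math.exp(log_or + 1.96 * se)

    print(round(math.exp(log_or), 2), round(low, 2), round(high, 2))   # 1.03 0.19 5.45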

  24. Simpson’s Paradox (I) • A 1986 study by Charig et al. compared the treatment of renal calculi by open surgery and percutaneous nephrolithotomy. • The authors reported that success was achieved in 78% of patients after open surgery and in 83% after percutaneous nephrolithotomy. • When the size of the stones was taken into account, the apparent higher success rate of percutaneous nephrolithotomy was reversed. Charig CR, et al. BMJ 1986;292:879-82.

  25. Simpson’s Paradox (II) • Stones < 2 cm: Open (93%) > PN (87%) • Stones ≥ 2 cm: Open (73%) > PN (69%) • Pooling the two strata: Open (78%) < PN (83%) Charig CR, et al. BMJ 1986;292:879-82. PN = percutaneous nephrolithotomy

  26. Combining Effect Estimates What is the average (overall) treatment-control difference in blood pressure?

  27. Simple Average What is the average (overall) treatment-control difference in blood pressure? [(-6.2) + (-7.7) + (-0.1)] / 3 = -4.7 mm Hg

  28. Weighted Average What is the average (overall) treatment-control difference in blood pressure? [(554 x -6.2) + (304 x -7.7) + (39 x -0.1)] / (554 + 304 + 39) = -6.4 mm Hg

  29. General Formula: Weighted Average Effect Size (d+) d+ = (w1d1 + w2d2 + … + wkdk) / (w1 + w2 + … + wk) = Σ widi / Σ wi Where: di = effect size of the ith study wi = weight of the ith study k = number of studies
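
A minimal Python sketch of this formula, reusing the blood-pressure differences and sample-size weights from slide 28 (in practice the weights are usually inverse variances, as slide 30 notes):

    def weighted_average(effects, weights):
        """d+ = sum(w_i * d_i) / sum(w_i)."""
        return sum(w * d for w, d in zip(weights, effects)) / sum(weights)

    effects = [-6.2, -7.7, -0.1]   # treatment-control blood pressure differences (mm Hg)
    weights = [554, 304, 39]       # here, study sample sizes
    print(round(weighted_average(effects, weights), 1))   # -6.4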

  30. Calculation of Weights • Generally the inverse of the variance of the treatment effect (which captures both study size and precision) • Different formulas for the odds ratio, risk ratio, and risk difference • Readily available in books and software

  31. Heterogeneity (Diversity) • Is it reasonable? • Are the characteristics and effects of studies sufficiently similar to estimate an average effect? • Types of heterogeneity: • Clinical diversity • Methodological diversity • Statistical heterogeneity

  32. Clinical Diversity • Are the studies of similar treatments, populations, settings, design, et cetera, such that an average effect would be clinically meaningful?

  33. Example: A Meta-analysis With aLarge Degree of Clinical Diversity • 25 randomized controlled trials compared endoscopic hemostasis with standard therapy for bleeding peptic ulcer. • 5 different types of treatment were used: monopolar electrode, bipolar electrode, argon laser, neodymium-YAG laser, and sclerosant injection. • 4 different conditions were treated: active bleeding, a nonspurting blood vessel, no blood vessels seen, and undesignated. • 3 different outcomes were assessed: emergency surgery, overall mortality, and recurrent bleeding. Sacks HS, et al. JAMA 1990;264:494-9.

  34. Methodological Diversity • Are the studies of similar design and conduct such that an average effect would be clinically meaningful?

  35. Statistical Heterogeneity • Is the observed variability of effects greater than that expected by chance alone? • Two statistical measures are commonly used to assess statistical heterogeneity: • Cochran’s Q statistic • The I² index

  36. Cochran’s Q Statistic: Chi-square (χ²) Test for Homogeneity Q = Σ wi (di – d+)², referred to a χ² distribution with k – 1 degrees of freedom. The Q statistic measures between-study variation. di = effect measure; d+ = weighted average; wi = study weight

  37. The I² Index and Its Interpretation • Describes the percentage of total variation in study estimates that is due to heterogeneity rather than to chance • Value ranges from 0 to 100 percent • A value of 25 percent is considered to be low heterogeneity, 50 percent to be moderate, and 75 percent to be large • Is independent of the number of studies in the meta-analysis, so it can be compared directly between meta-analyses Higgins JP, et al. BMJ 2003;327:557-60.
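
A small sketch of both measures, assuming inverse-variance weights and computing I² as 100 × (Q – df) / Q (truncated at zero), as in Higgins et al.; the study effects and variances below are hypothetical placeholders:

    def q_and_i2(effects, variances):
        """Cochran's Q and the I-squared index for a set of study estimates."""
        weights = [1 / v for v in variances]
        d_plus = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
        q = sum(w * (d - d_plus) ** 2 for w, d in zip(weights, effects))
        df = len(effects) - 1
        i2 = max(0.0, 100 * (q - df) / q) if q > 0 else 0.0
        return q, i2

    # Hypothetical study effects (e.g., log odds ratios) and their variances
    q, i2 = q_and_i2([-0.3, -0.1, -0.5, 0.2], [0.04, 0.02, 0.09, 0.05])
    print(round(q, 2), round(i2, 1))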

  38. Example: A Fixed Effect Model • Suppose that we have a container with a very large number of black and white balls. • The ratio of white to black balls is predetermined and fixed. • We wish to estimate this ratio. • Now, imagine that the container represents a clinical condition and the balls represent outcomes.

  39. Random Sampling From a Container With a Fixed Number of White and Black Balls (Equal Sample Size)

  40. Random Sampling From a Container With a Fixed Number of Black and White Balls (Different Sample Size)

  41. Different Containers With Different Proportions of Black and White Balls (Random Effects Model)

  42. Random Sampling From Containers To Get an Overall Estimate of the Proportion of Black and White Balls

  43. Statistical Models of Combining 2x2 Tables • Fixed effect model: assumes a common treatment effect. • For the inverse variance weighted method, the precision of the estimate determines the importance of the study. • The Peto and Mantel-Haenszel methods are fixed effect models that are not inverse variance weighted. • Random effects model: in contrast to the fixed effect model, also accounts for between-study (among-study) variation. • The most popular random effects model in use is the DerSimonian and Laird inverse variance weighted method, which weights each study by the inverse of the sum of its within-study variance and the among-study variance. • The random effects model can also be implemented with Bayesian methods.
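
A minimal sketch of the two inverse-variance approaches, assuming each study contributes an effect estimate (such as a log odds ratio) and its within-study variance; the between-study variance uses the DerSimonian-Laird method-of-moments estimate, and the example inputs are hypothetical:

    import math

    def inverse_variance_pool(effects, variances, tau2=0.0):
        """Pooled estimate and its variance; tau2 = 0 gives the fixed effect model."""
        weights = [1 / (v + tau2) for v in variances]
        pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
        return pooled, 1 / sum(weights)

    def dersimonian_laird_tau2(effects, variances):
        """Method-of-moments estimate of the between-study variance (tau-squared)."""
        w = [1 / v for v in variances]
        d_plus = sum(wi * di for wi, di in zip(w, effects)) / sum(w)
        q = sum(wi * (di - d_plus) ** 2 for wi, di in zip(w, effects))
        c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
        return max(0.0, (q - (len(effects) - 1)) / c)

    # Hypothetical log odds ratios with their within-study variances
    effects = [-0.35, -0.10, -0.50, 0.05]
    variances = [0.04, 0.02, 0.09, 0.05]

    tau2 = dersimonian_laird_tau2(effects, variances)
    for label, t2 in [("fixed effect", 0.0), ("random effects", tau2)]:
        est, var = inverse_variance_pool(effects, variances, t2)
        se = math.sqrt(var)
        print(label, round(est, 3), "95% CI:",
              round(est - 1.96 * se, 3), "to", round(est + 1.96 * se, 3))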

  44. Example Meta-analysis Where Fixed and the Random Effects Models Yield Identical Results

  45. Example Meta-analysis Where Results from Fixed and Random Effects Models Will Differ Gross PA, et al. Ann Intern Med 1995;123:518-27. Reprinted with permission from the American College of Physicians.

  46. Weights of the Fixed Effect and Random Effects Models • Fixed effect weight: wi = 1 / vi • Random effects weight: wi* = 1 / (vi + v*) where: vi = within-study variance; v* = between-study variance

  47. Commonly Used Statistical Methodsfor Combining 2x2 Tables

  48. Dealing With Heterogeneity Lau J, et al. Ann Intern Med 1997;127:820-6. Reprinted with permission from the American College of Physicians.

  49. Summary:Statistical Models of Combining 2x2 Tables • Most meta-analyses of clinical trials combine treatment effects (risk ratio, odds ratio, risk difference) across studies to produce a common estimate, by using either a fixed effect or random effects model. • In practice, the results from using these two models are similar when there is little or no heterogeneity. • When heterogeneity is present, the random effects model generally produces a more conservative result (smaller Z-score) with a similar estimate but also a wider confidence interval; however, there are rare exceptions of extreme heterogeneity where the random effects model may yield counterintuitive results.

  50. Caveats • Many assumptions are made in meta-analyses, so care is needed in their conduct and interpretation. • Most meta-analyses are retrospective exercises and suffer from all the problems of an observational design. • Researchers cannot make up missing information or fix poorly collected, analyzed, or reported data.
