
Evidence-Based Medicine Ambulatory Care Block 2008-2009


Presentation Transcript


  1. Evidence-Based Medicine Ambulatory Care Block 2008-2009 Study Design: Systematic Reviews & Meta Analyses John H. Choe, MD, MPH Jenny Wright, MD Scott Steiger, MD Sherry Dodson, MLS

  2. Topics • Recap from previous week • Introduction to randomized controlled trials • Statistical measures of excess risk • Introduction to systematic reviews and meta-analyses • Two examples from the literature • Assessing validity, importance, and applicability • Brief overview of case-control and cohort studies

  3. Steps to Practicing EBM 1. Convert the need for information into an answerable question. 2. Track down the best evidence with which to answer that question. 3. Critically appraise the evidence for its validity, impact, and applicability. 4. Share this evidence with our colleagues and our patients. 5. Integrate the evidence with our clinical expertise and our patient’s characteristics and values.

  4. Evidence Pyramid pyramid modified from: Navigating the Maze, University of Virginia, Health Sciences Library

  5. Judging Studies VALIDITY IMPORTANCE APPLICABILITY

  6. Intervention: Randomized Trials • Random assignment determines each subject’s exposure status • Allows direct estimation of incidence in exposed vs. non-exposed groups • Outcome events of interest occur after the study is initiated; hence, randomized trials are prospective • Generally considered the strongest design for establishing causal relationships, because of superior control over confounding factors, including unknown ones [Diagram: source population → at risk (not at risk: excluded) → random allocation → exposed to factor → poor outcome (a) or good outcome (b); not exposed to factor → poor outcome (c) or good outcome (d)]

  7. Measuring Differences • Analyzing differences between groups • Count data: chi-squared (χ2) test • Continuous variables: testing differences between means • 2 groups: t-test • ≥ 2 groups: ANOVA (ANalysis Of Variance) • Statistically significant when p<0.05 • p<0.05 = less than a 5% probability that the difference between groups arose from chance rather than a “real” effect • Equivalent concept: the 95% CI does not include 1 (for ratio measures) or 0 (for differences)
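As a rough sketch, the chi-squared statistic for count data can be computed by hand. The 2×2 table below is hypothetical (not from any study in this presentation); the 3.84 cutoff is the standard 5% critical value for 1 degree of freedom.

```python
# Hedged sketch: chi-squared test statistic for a 2x2 table of counts.
# The counts below are hypothetical, purely for illustration.
# Rows: treatment vs. placebo; columns: poor outcome vs. good outcome.
table = [[15, 85],
         [30, 70]]

row_totals = [sum(row) for row in table]
col_totals = [sum(col) for col in zip(*table)]
n = sum(row_totals)

chi2 = 0.0
for i, row in enumerate(table):
    for j, observed in enumerate(row):
        # Expected count under the null hypothesis of no association.
        expected = row_totals[i] * col_totals[j] / n
        chi2 += (observed - expected) ** 2 / expected

# For a 2x2 table (1 degree of freedom), the 5% critical value is 3.84,
# so chi2 > 3.84 corresponds to p < 0.05.
print(f"chi-squared = {chi2:.2f}, significant at 0.05: {chi2 > 3.84}")
# → chi-squared = 6.45, significant at 0.05: True
```

In practice a statistics library would also return the exact p-value; this sketch only shows where the statistic comes from.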

  8. Terms to Describe Excess Risk • Relative Risk (RR): rate in exposed ÷ rate in non-exposed • Relative Risk Reduction (RRR): proportional drop in risk for disease due to the intervention = 1 − RR • Absolute Risk Reduction (ARR) (= Risk Difference = Attributable Risk = Absolute Difference): drop in percent risk for disease due to the intervention = (rate in untreated group − rate in treated group) • NNT / NNS / NNH: Number Needed to Treat/ Screen/ Harm: number of patients who must be treated/ screened to prevent 1 disease outcome (e.g. cancer or death) = 1 / (rate in untreated group − rate in treated group) = 1 / (Absolute Risk Reduction)

  9. Math: Calculating Excess Risk • Imagine that 20% of the placebo group get the disease, compared to 15% of the drug group • Risk Difference (Absolute Risk Reduction)? • 0.20 – 0.15 = 0.05 • Relative Risk? • 0.15 / 0.20 = 0.75 • Relative Risk Reduction? • (1 – RR) = (1 – 0.75) = 0.25, or 25% • Number needed to treat to prevent 1 event? • 1 / (ARR) = 1 / (0.05) = 20
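The arithmetic above can be sketched in a few lines of Python; the 20%/15% figures are the slide's illustrative numbers, not data from a real trial.

```python
# The slide's worked example: 20% of the placebo group and 15% of the
# drug group develop the disease (illustrative numbers only).
risk_placebo = 0.20
risk_drug = 0.15

arr = risk_placebo - risk_drug   # absolute risk reduction (risk difference)
rr = risk_drug / risk_placebo    # relative risk
rrr = 1 - rr                     # relative risk reduction
nnt = 1 / arr                    # number needed to treat

print(f"ARR={arr:.2f}  RR={rr:.2f}  RRR={rrr:.2f}  NNT={nnt:.0f}")
# → ARR=0.05  RR=0.75  RRR=0.25  NNT=20
```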

  10. Math Equivalent ≠ Equally Important Strauss et al

  11. Putting it all together: RCTs • Does this RCT apply to our specific question? • Are we satisfied with the randomization and blinding of participants (and study staff)? • Was follow-up reasonably complete, and can we account for all enrolled participants and their outcomes? • Intention to treat analysis? • Was there a statistically significant difference favoring a treatment/ intervention/ screening? • Were these results clinically significant and applicable to my patient and my problem?

  12. Systematic Review • Defined: A summary of the medical literature that uses explicit methods to systematically perform a comprehensive literature search, then critically appraise and synthesize the world literature on a specific topic • Investigators do not recruit individual subjects, but instead perform a comprehensive literature search to find relevant articles – an investigation whose “subjects” are other studies • Contrast: Traditional literature reviews usually do not try to synthesize study results and are not always comprehensive • A clear clinical question is still the key to a well-designed systematic review! • Although many systematic reviews will try to summarize results from the included studies, not all will try to combine results statistically

  13. Meta-Analysis • Defined: A systematic review that uses quantitative statistical methods to synthesize and summarize the results. • Just as in other systematic reviews: A meta-analysis has other studies as “subjects”, and finding other studies must be done comprehensively • Goals are to minimize bias and random error while potentially increasing study power by considering a wide range of studies on a particular question

  14. Systematic Reviews Meta-Analysis

  15. Putting it all together: Meta-analysis and Systematic Review • Does this SR/MA apply to our specific question? • Are we satisfied with the inclusion/ exclusion criteria of the studies chosen, and are we satisfied with the completeness of their search of the literature? • Looking at the included studies, were results similar across different studies? If not, why not? • For SR: Were the studies “judged” according to certain criteria? • For MA: Were there formal tests of heterogeneity, and was the correct statistical method used for combining results? • Was there a statistically significant difference favoring a treatment/ intervention/ screening? • Were these results clinically significant and applicable to my patient and my problem?

  16. Assessing Validity: Inclusion • For therapy questions: Is the systematic review/ meta-analysis of randomized trials? • If non-randomized trials are included, they should be analyzed both separately and combined with the RCTs, to ensure that their “inferior” design (overall more subject to bias) is not changing the overall results. • Does it describe a comprehensive & detailed search for relevant studies? Have they avoided obvious publication bias? • PubMed search terms (you can try their search terms!) • Conference presentations/ abstracts • Databanks of pharmaceutical companies • Federal clinical trial registries • Contacting authors of published articles • Non-English sources

  17. Assessing Validity: Study Quality • Were individual studies assessed for validity? • RCT “Table 1”= summary of participant population; allows us to assess applicability of subjects to our question and to our own patients • SR/MA “Table 1”= summary of studies selected; allows us to evaluate applicability of selected studies to our question, and whether they were similar to one another • Were individual patient data (or aggregate data) analyzed? • Important when it comes to looking at subgroups: • If the meta-analysis has the individual patient data they can use this large group of patients created by combining all the studies into subgroups • If the meta-analysis uses aggregate data the validity of combining subgroups from different studies is questionable

  18. Assessing Importance: Consistency • Are results consistent across studies? • Qualitatively, are results in the “same direction” across studies? If not, can we find reasons for the differences? • Quantitatively, are the inconsistencies in results from studies explained by chance alone, or is there statistically significant heterogeneity among studies? • If heterogeneity is statistically significant (p<0.05), the meta-analysis should use a “random effects” model for its statistics. • If heterogeneity is not statistically significant, either a “random effects” or a “fixed effects” model may be used.
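A minimal sketch of the two pooling approaches, using made-up study results (the log relative risks and variances below are hypothetical): fixed-effect pooling weights each study by the inverse of its variance, while the DerSimonian-Laird random-effects model adds an estimated between-study variance τ² to each study's variance, which widens the pooled confidence interval when heterogeneity is present.

```python
import math

# Hypothetical per-study results: log relative risks and their variances.
log_rr = [0.40, 0.05, -0.20, 0.55]
var    = [0.02, 0.03,  0.05, 0.04]

# Fixed-effect model: weight each study by the inverse of its variance.
w_fixed = [1 / v for v in var]
pooled_fixed = sum(w * y for w, y in zip(w_fixed, log_rr)) / sum(w_fixed)

# Cochran's Q statistic tests for heterogeneity among the studies;
# for 4 studies (df = 3), the 5% critical value is 7.81.
q = sum(w * (y - pooled_fixed) ** 2 for w, y in zip(w_fixed, log_rr))
df = len(log_rr) - 1

# DerSimonian-Laird estimate of the between-study variance tau^2.
c = sum(w_fixed) - sum(w ** 2 for w in w_fixed) / sum(w_fixed)
tau2 = max(0.0, (q - df) / c)

# Random-effects model: add tau^2 to each study's variance, which
# down-weights precise studies and widens the pooled interval.
w_random = [1 / (v + tau2) for v in var]
pooled_random = sum(w * y for w, y in zip(w_random, log_rr)) / sum(w_random)
se_random = math.sqrt(1 / sum(w_random))

ci_low = math.exp(pooled_random - 1.96 * se_random)
ci_high = math.exp(pooled_random + 1.96 * se_random)
print(f"Q={q:.2f} (df={df}), tau^2={tau2:.3f}")
print(f"random-effects pooled RR={math.exp(pooled_random):.2f} "
      f"(95% CI {ci_low:.2f}-{ci_high:.2f})")
```

With these invented numbers Q exceeds the critical value, so τ² > 0 and the random-effects interval comes out wider than the fixed-effect one, which is exactly the pattern the slide describes.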

  19. Assessing Importance: Effect Size • What is the magnitude of treatment effect? • Pooled studies’ results usually presented as odds ratios (OR) or relative risks (RR); less commonly as number needed to treat (NNT), to harm (NNH), or to screen (NNS) • Pooled individuals’ results usually presented as hazard ratios (HR), or OR. • How precise is the estimate of treatment effect? • Does the 95% confidence interval “cross” 1?

  20. Reporting Meta-Analysis Effect Sizes • Total mortality from trials of β blockers in secondary prevention after MI • Black square= OR for each RCT. • Size ≈ “weight” of each RCT. • Horizontal line=95% CI for each RCT • Diamond= combined OR + 95% CI • 22% reduction in odds of death • This type of display usual for meta-analyses of studies, not those that pool individual participants from studies Egger, M. et al. BMJ 1997;315:1533-1537

  21. Statistical Models and Precision [Figure: comparison of fixed-effects and random-effects pooled estimates of the relative risk of new-onset diabetes mellitus with beta blockers (RR = 1 reference line; pooled estimates shown include 1.33 and 1.44; test of heterogeneity p = 0.013; the random-effects confidence interval is the wider of the two).] Bangalore S et al. A meta-analysis of 94,492 patients with hypertension treated with beta blockers to determine the risk of new-onset diabetes mellitus. Am J Cardiol 2007;100:1254-1262

  22. Assessing Applicability • Is our patient so different from those in the study that the results cannot apply? • Is the treatment/ intervention/ screening feasible in our clinic? • What are the potential benefits and harms from this? • What are our patient’s values and expectations for the outcome we are trying to prevent, and the adverse events we might cause with this treatment/ intervention/ screening?

  23. Putting it all together: Meta-analysis and Systematic Review • Does this SR/MA apply to our specific question? • Are we satisfied with the inclusion/ exclusion criteria of the studies chosen, and are we satisfied with the completeness of their search of the literature? • Looking at the included studies, were results similar across different studies? If not, why not? • For SR: Were the studies “judged” according to certain criteria? • For MA: Were there formal tests of heterogeneity, and was the correct statistical method used for combining results? • Was there a statistically significant difference favoring a treatment/ intervention/ screening? • Were these results clinically significant and applicable to my patient and my problem?

  24. Alternative View of Study Design • Descriptive Studies (hypothesis-generating): • Case report/ case series • Descriptive epidemiology • Qualitative studies • Analytic Studies (hypothesis-testing): • Intervention Studies: quasi-experiments; randomized trials • Observational Studies: case-control; cohort (retrospective or prospective); cross-sectional • Meta-analyses • Cost-benefit/ -effectiveness analyses Adapted from Koepsell & Weiss. Epidemiologic Methods: Studying the occurrence of illness

  25. Observational: Cohort Study • Cohorts (study groups) are formed on the basis of exposure status at the beginning of the follow-up period • Group sizes do NOT need to reflect true exposure prevalence • Allows direct estimation of incidence in exposed vs. non-exposed groups • Prospective cohort: outcomes of interest occur after the study starts • Retrospective (historical) cohort: outcomes have already occurred by the time the study starts [Diagram: source population → at risk (not at risk: excluded) → exposed to factor → poor outcome (a) or good outcome (b); not exposed to factor → poor outcome (c) or good outcome (d)]

  26. Observational: Cohort Study • Strengths: • Exposure is known to precede outcomes • Can directly measure incidence of adverse outcomes • Can measure multiple outcomes of a single exposure • Weaknesses: • Inefficient for rare or delayed outcomes • Expensive [Diagram: source population → at risk → exposed/ not exposed to factor → poor or good outcome (a–d)]

  27. Observational: Case-Control Study • Subjects are selected on the basis of disease status (case vs. control) • The relative number of cases and controls in the study does not reflect the true disease frequency in the population (controls usually selected at around 1:3 or 1:4) • Therefore, cannot provide a direct estimate of incidence in exposed vs. non-exposed persons; only provides information about the relative incidence comparing the two groups • The odds ratio (= a/c ÷ b/d = ad/bc) is usually a reasonable estimate of the relative risk (risk ratio) if the outcome is rare [Diagram: diseased (cases) → previously exposed (a) or not previously exposed (c); non-diseased (controls) → previously exposed (b) or not previously exposed (d)]
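A quick numerical sketch of the rare-outcome condition, using hypothetical cohort-style counts (in a real case-control study the relative risk could not be computed directly, which is exactly why the odds ratio is used as its estimate):

```python
# Hypothetical cohort-style 2x2 counts, to illustrate when the odds
# ratio (OR) approximates the relative risk (RR).
def odds_ratio(a, b, c, d):
    # a/b = exposed with/without disease; c/d = unexposed with/without disease
    return (a * d) / (b * c)   # (a/c) / (b/d) = ad/bc

def relative_risk(a, b, c, d):
    return (a / (a + b)) / (c / (c + d))

# Rare outcome (0.3% vs. 0.1% risk): OR tracks RR closely.
print(odds_ratio(3, 997, 1, 999), relative_risk(3, 997, 1, 999))

# Common outcome (30% vs. 10% risk): OR overstates RR.
print(odds_ratio(30, 70, 10, 90), relative_risk(30, 70, 10, 90))
```

With the rare outcome the OR is about 3.01 against an RR of 3.0; with the common outcome the OR inflates to about 3.86 while the RR stays 3.0.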

  28. Observational: Case-Control Study • Strengths: • Efficient (cheap) for rare outcomes, or when the disease takes a long time to develop • Can study multiple causes of an outcome of interest using the same two comparison groups • Weaknesses: • Cannot directly estimate the incidence of a bad outcome in exposed vs. non-exposed groups, and cannot measure differences in disease rates between groups • Selecting appropriate controls can be difficult • Self-reported exposure data are subject to recall bias [Diagram: cases and controls classified by previous exposure (a–d)]
