
Systematic Reviews and Meta-Analysis


Presentation Transcript


  1. Systematic Reviews and Meta-Analysis. "Methodologies for a new era" summer school, School of Applied Social Studies, University College Cork, 20 June 2011. Dr Paul Montgomery

  2. Aims • 1) Discuss the advantages and main features of systematic reviews • 2) Introduce basic principles of meta-analysis • Course feedback

  3. The Problem • Millions of articles published in thousands of journals each year • Practitioners and researchers are busy • Subjective summaries may misrepresent research

  4. Reviews • Systematic reviews: aim to answer specific questions, reduce uncertainty, and identify outstanding questions; common methods include narrative synthesis and meta-analysis (meta-regression) • Traditional ‘journalistic’ reviews: aim to persuade, draw attention to a topic, synthesise information, etc.; narrative synthesis is most common

  5. Systematic Review: “the application of scientific strategies that limit bias to the systematic assembly, critical appraisal, and synthesis of all relevant studies on a specific topic.” Cook DJ, Sackett DL, Spitzer WO. Methodological guidelines for systematic reviews of randomized controlled trials in health care from the Potsdam Consultation on Meta-Analysis. J. Clin. Epidemiol. 1995;48:167-71

  6. Systematic Reviews • Clear Question • Define the population, problem, intervention, alternative interventions, and outcomes • Replicable Method • Search strategy • Inclusion criteria • Analytical strategy • Transparent Process

  7. Advantages • Explicit methods limit bias in identifying and rejecting studies • Information can be understood quickly • Reduced delay between discoveries and implementation • Results can be formally compared • Heterogeneity can be identified and new hypotheses generated • Quantitative reviews increase precision

  8. Producers • Cochrane • Campbell • EPPI • DARE • NICE • Interested practitioners/ academics

  9. Cochrane Review Process • Register titles and check for overlap • Protocols developed and peer reviewed • Searches performed widely on all main databases, grey literature searches, personal contacts • Abstracts reviewed by two authors • Data collected and trial quality assessed • Data synthesis and analysis • Write-up • Reviewed by Cochrane/ Campbell editors, then peer reviewed

  10. Systematic reviews • Key components: 1. Ask a good question 2. Identify studies 3. Extract data 4. Synthesise data 5. Interpret the results

  11. Who Should Review? “Experts, who have been steeped in a subject for years and know what the answer ‘ought’ to be, are less able to produce an objective review of the literature in their subject than non-experts. This would be of little consequence if experts' opinions could be relied on to be congruent with the results of independent systematic reviews, but they cannot.” (Trisha Greenhalgh)

  12. PICO Mad-libs For P ____________ does I ____________ compared to C ____________ improve/reduce O ____________ ?

  13. Highly Sensitive Search • Electronic Searches: databases/indexes, plus additional electronic searches • Hand Searches • Personal Contacts

  14. Electronic databases • Medline covers 23% of the core 505 ‘psychiatric’ journals, plus most of the major biomedical journals • Embase covers 67%, plus many European journals that Medline misses • PsycInfo covers 73% and has a psychological focus • Biological Abstracts (BA) covers 48%, plus much of the life sciences literature • Searching PsycLit and Embase together covers 92% of the core 505 ‘psychiatric’ journals

  15. Electronic Searches • Sensitivity vs. Specificity Even if 2 terms and 3 databases return almost all literature on a subject, the goal of a systematic review is to find everything.

  16. Electronic Searches • Specific Authors • Reverse Citation • Agencies / Non-Profits • Funding Bodies • Academic Groups / Research Centers • Google

  17. Additional Searches • Previous Reviews • Bibliographies of Related Articles • Hand Search Journals (that aren’t indexed) • Conference Reports (many are electronically published)

  18. Personal Communication • Call or Email Authors • Attend Conferences • Write to: Agencies / Non-Profits Providers / Manufacturers / Distributors Funding Bodies Academic Groups / Research Centers

  19. Questions to Ask • Which programs will be studied? • Compared to what? • What study designs are acceptable? • What must a study measure? • How must it be measured? • Must researchers be blind at allocation, during the trial, etc? • How will dropouts be handled? • What about missing data?

  20. Inclusion and Exclusion • Specify: • Types of studies • Types of participants • Types of comparisons • Types of outcomes • Multiplicity (time, comparisons, measures, statistics)

  21. Transparency • Be clear about all definitions, searches, inclusion and exclusion criteria, etc. • Report ongoing trials • List excluded studies, particularly if: • The trials contain valuable information • Exclusion was a close call • You discovered something about a trial

  22. Evaluating a Review • Even if a review is ‘systematic’ it may not be well-conducted. How do we tell the difference?

  23. Validity Did the review address a clearly focussed question? Were the right sort of studies selected? Was the search strategy explicit and comprehensive? Did the reviewers assess the quality of the identified studies?

  24. Importance: Were the results similar from study to study? What is the overall result of the review? How precise are the results?

  25. Potential Sources of Bias • Describe aspects of study design that might have influenced the magnitude or direction of results • Use of rating scales with fixed cut-offs is potentially misleading • Consider external validity

  26. Juni P, Witschi A, Bloch R, Egger M. The hazards of scoring the quality of clinical trials for meta-analysis. JAMA 1999; 282: 1054-1060

  27. Tower of Babel • Studies that find a treatment effect are more likely to be published in English-language journals. • Opposing studies may be published in non-English-language journals. Gregoire G, Derderian F, Le Lorier J. Selecting the language of the publications included in a meta-analysis: is there a Tower of Babel bias? J. Clin. Epidemiol. 1995;48:159-163

  28. Publication Bias “the tendency of investigators, reviewers and editors to differentially submit or accept manuscripts for publication based on the direction or strength of the study findings.” Cook DJ, Guyatt GH, Ryan G, Clifton J, Buckingham L, Willan A et al. Should unpublished data be included in meta-analyses? Current convictions and controversies. JAMA 1993; 269: 2749-2753

  29. Unpublished data • Controversial • Unpublished data may not be a full or representative sample (Cook 1993) • Publication is no guarantee of scientific quality (Oxman 1991) Cook DJ, Guyatt GH, Ryan G, Clifton J, Buckingham L, Willan A et al. Should unpublished data be included in meta-analyses? Current convictions and controversies. JAMA 1993; 269: 2749-2753 Oxman AD, Guyatt GH, Singer J, Goldsmith CH, Hutchison BG, Milner RA et al. Agreement among reviewers of review articles. J.Clin.Epidemiol. 1991;44:91-98.

  30. Meta-analyses: “A systematic review that employs statistical methods to combine and summarise the results of several studies.” Cook DJ, Sackett DL, Spitzer WO. Methodological guidelines for systematic reviews of randomized controlled trials in health care from the Potsdam Consultation on Meta-Analysis. J. Clin. Epidemiol. 1995;48:167-71

  31. Summarising trials • Reviews • Systematic reviews • Meta-analyses

  32. Meta-analyses • Mathematically combine the results of different studies • For dichotomous or continuous outcomes • From analytical (treatment) or observational (aetiology, diagnosis, prognosis) studies • ‘Weighted’ by study size (usually 1/SE²) and/or quality
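The inverse-variance weighting above can be made concrete in a few lines. A minimal sketch in Python, using made-up effect estimates and standard errors rather than data from any study cited here, pools study effects with weights of 1/SE² and reports a 95% confidence interval:

```python
import math

# Made-up effect estimates (e.g. standardised mean differences) and their
# standard errors -- illustrative numbers only, not from any real review.
effects = [0.42, 0.31, 0.55]
ses = [0.10, 0.15, 0.20]

# Inverse-variance weights: more precise studies get more weight.
weights = [1 / se ** 2 for se in ses]

# Fixed-effect pooled estimate and its standard error.
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# 95% confidence interval for the pooled effect.
low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled effect {pooled:.2f} (95% CI {low:.2f} to {high:.2f})")
```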

  33. Benefits of meta-analysis: • To increase statistical power for primary end points and for subgroups. • To improve estimates of effect size. • To resolve uncertainty when reports disagree • To answer questions not posed at the start of individual trials. Sacks HS, Berrier J, Reitman D, Ancona-Berk VA, Chalmers TC. Meta-analyses of randomized controlled trials. N.Engl.J.Med. 1987;316:450-455

  34. Outcome Measures • Continuous / Dichotomous (/ Ordinal) • Objective / Subjective

  35. Meta-analysis • Some outcomes are measured on scales (e.g. depression) or continuously (e.g. minutes of sleep) • Continuous outcomes can be combined on the scale on which they were measured (weighted mean difference, WMD) • If changes in depression are measured on different scales, it is still possible to combine them, but on a standardised scale
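A minimal sketch of standardising an effect for one study, in Python with invented group summaries (means, SDs, sample sizes); it computes Cohen's d and the small-sample-corrected Hedges' g mentioned later in the slides:

```python
import math

# Invented group summaries for one trial -- illustrative only.
m_t, sd_t, n_t = 12.4, 5.1, 40   # treatment group: mean, SD, n
m_c, sd_c, n_c = 15.0, 5.6, 42   # control group: mean, SD, n

# Pooled standard deviation across the two groups.
sd_pooled = math.sqrt(((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2) / (n_t + n_c - 2))

# Cohen's d, and Hedges' g with the small-sample correction.
d = (m_t - m_c) / sd_pooled
g = d * (1 - 3 / (4 * (n_t + n_c) - 9))
print(f"Cohen's d = {d:.2f}, Hedges' g = {g:.2f}")
```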

  36. Meta-analysis • Alternatively we might be interested in binary data: two mutually exclusive states (dead/alive; hospitalised/not hospitalised) • These data are measured in a different way to continuous (scale) data • Reported as ‘event rates’

  37. Meta-analysis • Central Tendency: • Mean (Cohen’s d, Hedges’ g) • Odds Ratio / Relative Risk / Rate Ratio • Variance (Confidence Interval) • Clinical Significance (NNT/NNH) • Heterogeneity (I², Q, χ²)

  38. Dichotomous Outcomes • Odds are calculated by dividing the number of events by the number of non-events (i.e. clients experiencing the event divided by clients not experiencing the event) • Risk/Rate is more widely reported in reviews as it tends to be easier to communicate
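A minimal Python sketch of the dichotomous measures named above, worked from an invented 2×2 table (the counts are illustrative only): risk, odds, relative risk, odds ratio, risk difference and the number needed to treat.

```python
# Invented 2x2 table: events and totals in each arm -- illustrative only.
events_t, total_t = 12, 100   # treatment arm
events_c, total_c = 24, 100   # control arm

# Risk = events / total; odds = events / non-events.
risk_t, risk_c = events_t / total_t, events_c / total_c
odds_t = events_t / (total_t - events_t)
odds_c = events_c / (total_c - events_c)

rr = risk_t / risk_c           # relative risk (risk ratio)
odds_ratio = odds_t / odds_c   # odds ratio
rd = risk_t - risk_c           # risk difference
nnt = 1 / abs(rd)              # number needed to treat (or harm, if rd is adverse)
print(f"RR = {rr:.2f}, OR = {odds_ratio:.2f}, NNT = {nnt:.1f}")
```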

  39. Weighting • Some studies contribute more weight to the ‘average’ result than do others • The more precise the effect estimate, the more weight is given • Wide variation is sometimes associated with small studies

  40. Weighting • Clinical trials are rarely conducted according to identical protocols • Severity of the problem, intensity of the intervention, duration, trial setting, and participant age may account for differences in response

  41. Apples and oranges? Sources of Heterogeneity: • Study participants • Comparisons • Intervention design • Delivery • Duration of follow-up • Outcome measures • Methods

  42. Heterogeneity • Estimates from individual trials vary more than can be explained by the play of chance alone • N.B. Meta-analysis should NOT overlook important material differences in subgroup response

  43. Heterogeneity – approaches • Qualitative v. quantitative • Qualitative – reconsider pooling • Does it make sense to average effects from the studies? • Fixed v. random effects
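To make the fixed- versus random-effects choice concrete, here is a minimal Python sketch (illustrative numbers only) that pools the same studies both ways, using the DerSimonian-Laird estimate of the between-study variance tau² for the random-effects weights:

```python
# Invented effect estimates and standard errors -- illustrative only.
effects = [0.42, 0.31, 0.55, 0.10]
ses = [0.10, 0.15, 0.20, 0.12]
k = len(effects)

# Fixed-effect pooling with inverse-variance weights.
w_fixed = [1 / se ** 2 for se in ses]
pooled_fixed = sum(w * e for w, e in zip(w_fixed, effects)) / sum(w_fixed)

# Cochran's Q, then the DerSimonian-Laird between-study variance tau^2.
q = sum(w * (e - pooled_fixed) ** 2 for w, e in zip(w_fixed, effects))
c = sum(w_fixed) - sum(w ** 2 for w in w_fixed) / sum(w_fixed)
tau2 = max(0.0, (q - (k - 1)) / c)

# Random-effects weights add tau^2 to each study's variance.
w_random = [1 / (se ** 2 + tau2) for se in ses]
pooled_random = sum(w * e for w, e in zip(w_random, effects)) / sum(w_random)
print(f"Fixed effect: {pooled_fixed:.2f}; random effects: {pooled_random:.2f}; tau^2 = {tau2:.3f}")
```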

  44. Subgroup Analysis • Even when there is excessive variation in the pooled analysis, each subgroup may show a uniform response to treatment when analysed separately • Hypothesis generating

  45. Sensitivity analysis: Sensitivity analyses investigate how the conclusions of a review change when one or more of the decisions or assumptions are altered.
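One common sensitivity analysis is leave-one-out re-analysis. A minimal Python sketch (illustrative numbers only) re-pools a fixed-effect estimate with each study omitted in turn, showing whether any single study drives the overall result:

```python
# Invented effect estimates and standard errors -- illustrative only.
effects = [0.42, 0.31, 0.55, 0.10]
ses = [0.10, 0.15, 0.20, 0.12]

def pooled_fixed(es, ss):
    """Inverse-variance (fixed-effect) pooled estimate."""
    ws = [1 / s ** 2 for s in ss]
    return sum(w * e for w, e in zip(ws, es)) / sum(ws)

print(f"All studies: {pooled_fixed(effects, ses):.2f}")
for i in range(len(effects)):
    rest_e = effects[:i] + effects[i + 1:]
    rest_s = ses[:i] + ses[i + 1:]
    print(f"Omitting study {i + 1}: {pooled_fixed(rest_e, rest_s):.2f}")
```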

  46. Testing for heterogeneity • Look at plots of results • Formal tests of homogeneity • I² • Q • χ² • Assess qualitative differences in study design or implementation
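A minimal Python sketch of the formal statistics listed above (illustrative numbers only; the chi-squared p-value assumes SciPy is installed): Cochran's Q, its p-value on k-1 degrees of freedom, and I², the percentage of variation beyond chance.

```python
from scipy.stats import chi2

# Invented effect estimates and standard errors -- illustrative only.
effects = [0.42, 0.31, 0.55, 0.10]
ses = [0.10, 0.15, 0.20, 0.12]

weights = [1 / se ** 2 for se in ses]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Cochran's Q: weighted squared deviations from the pooled estimate.
q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
df = len(effects) - 1
p_value = chi2.sf(q, df)              # chi-squared test of homogeneity

# I^2: proportion of total variation due to heterogeneity rather than chance.
i_squared = max(0.0, (q - df) / q) * 100
print(f"Q = {q:.2f} (df = {df}, p = {p_value:.3f}), I^2 = {i_squared:.0f}%")
```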

  47. Anxiety (Self-Rated Symptoms) at Post-Treatment [forest plot not reproduced]

  48. Institutionalisation (RR<1 favours home visits) [forest plot not reproduced]
