
Critical appraisal of (Systematic review) Meta-analysis

Critical appraisal of (systematic review) meta-analysis. 羅政勤, 彰化秀傳紀念醫院. Objectives: to understand the terminology of meta-analysis and systematic review, and the key criteria for critical appraisal.


Presentation Transcript


  1. Critical appraisal of (systematic review) meta-analysis 羅政勤 彰化秀傳紀念醫院

  2. Objectives • To understand the terminology of meta-analysis and systematic review • To understand the key criteria for critical appraisal • To select an appropriate checklist or other instrument to use for critical appraisal: validity, impact, practicability (CASP)

  3. Terminology • Review: synthesises the results and conclusions of two or more publications • Overview (systematic literature review): a review that strives to comprehensively identify and track down all the literature on a given topic • Meta-analysis: a specific statistical strategy for assembling the results of several studies into a single estimate

  4. Introduction • Systematic reviews offer a method for overcoming the barriers clinicians face when trying to access and interpret evidence to inform their practice

  5. Systematic reviews • Concise summaries of the best available evidence that address defined questions • A scientific tool used to appraise, summarise, and communicate the results and implications of otherwise unmanageable quantities of research

  6. Systematic reviews • Defining a question • A good question will have four components: • Type of person involved • Type of exposure • Type of control • Outcomes

  7. SR and meta-analysis • Systematic reviews may or may not include a statistical synthesis called meta-analysis, • depending on whether the studies are similar enough for combining their results to be meaningful

  8. Meta-analysis • Statistical method for combining the results of trials • Most appropriate for randomized trials • May also be appropriate for observational studies
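The statistical combining described in this slide can be sketched as fixed-effect (inverse-variance) pooling. A minimal Python sketch; the study estimates and standard errors below are made up purely for illustration:

```python
import math

def pool_fixed_effect(estimates, std_errors):
    """Inverse-variance fixed-effect pooling of per-study effect estimates
    (e.g. log odds ratios). Returns the pooled estimate, its standard
    error, and an approximate 95% confidence interval."""
    weights = [1.0 / se**2 for se in std_errors]          # weight = 1 / variance
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
    return pooled, pooled_se, ci

# Hypothetical log odds ratios and standard errors from three trials
log_ors = [-0.30, -0.10, -0.45]
ses = [0.15, 0.20, 0.25]
pooled, se, (lo, hi) = pool_fixed_effect(log_ors, ses)
print(f"pooled log OR = {pooled:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

Note that larger studies (smaller standard errors) receive more weight, which is how pooling "increases the sample size" of the overall estimate.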

  9. Results of a meta-analysis Forest plots of a meta-analysis of four randomized trials comparing no adjuvant chemotherapy with adjuvant chemotherapy in early-stage ovarian cancer for overall survival (A) and recurrence-free survival (B). JNCI Cancer Spectrum 95(2):105-112

  10. Advantages of meta-analysis • Pooling several studies increases the effective sample size • Gathers the literature in one place • Provides a quantitative summary (possibly less biased than a narrative) • Generates hypotheses • Provides information for future trials

  11. Disadvantages of meta-analysis • Even randomized studies often differ significantly in design and in outcome and exposure measures • Publication bias • Studies differ in quality • Time trends • Health studies tend to be (comparatively) few

  12. Interpreting the results of a meta-analysis • Was process valid (question, search strategy, reproducible)? • Are studies comparable? • Are results similar? • What is the estimate and precision of the estimate?

  13. Conclusion • Systematic reviews sit at the top of the hierarchy of evidence • Exercise caution before accepting the findings of any systematic review without first appraising it

  14. Caution • Pay attention to the patient selection, intervention, and search strategy; a systematic review may have pooled studies in its meta-analysis that differ in the interventions given or the participants included

  15. Three threats to the validity of a finding: 1) Chance 2) Bias 3) Confounding

  16. Chance • Random variation • Chance is addressed by statistical analysis (hypothesis testing and estimation) • Random variation is reduced by an adequate sample size
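To illustrate how an "adequate sample size" guards against random variation, a rough per-arm sample-size sketch for comparing two proportions (normal approximation, conventional z-values of 1.96 for two-sided α = 0.05 and 0.84 for 80% power); the event rates below are hypothetical:

```python
import math

def n_per_group(p1, p2):
    """Approximate sample size per arm to detect a difference between two
    proportions, using the normal approximation with fixed z-values for
    two-sided alpha = 0.05 (1.96) and 80% power (0.84)."""
    z_a, z_b = 1.96, 0.84
    p_bar = (p1 + p2) / 2
    num = (z_a * math.sqrt(2 * p_bar * (1 - p_bar)) +
           z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

# Per-arm sample size to detect event rates of 30% vs 20%
print(n_per_group(0.30, 0.20))
```

Smaller differences between the groups demand much larger samples, which is why underpowered trials leave findings vulnerable to chance.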

  17. Bias • Systematic (non-random) error in the estimation of a population characteristic, e.g. the effect of treatment compared with control in a population • Systematic means …

  18. Classification of sources of bias in analytical studies • Allocation • Performance • Placebo-effect • Attrition • Detection • Analytical • Reporting • Selection • Measurement • Analysis

  19. 1. Allocation bias • Any treatment allocation method that causes a systematic difference in participant characteristics at the start of a trial (baseline) • independent prognostic characteristics (confounders) • failure to plan e.g. confounding by indication • failure to execute

  20. 2. Performance bias • Systematic differences in the care of the two groups, other than the intervention being investigated • nursing & supportive care • monitoring for adverse effects

  21. 3. Placebo-effect bias • Placebo effect: a beneficial effect gained because the participant believes he is receiving effective therapy (this includes a satisfying patient-doctor relationship as well as the medicinal intervention) • In trials with a "no-treatment" arm, confounding due to a differential placebo effect may occur if the subjects are aware they are not receiving active therapy

  22. Reasons for bias - Confounding • A non-causal association, due to a common cause of both the treatment and the outcome, prevents us from quantifying any causal association

  23. Confounding – measured & unmeasured common causes • Random variation (chance) makes estimates imprecise • Systematic variation (bias) makes estimates inaccurate • Confounder: a factor prognostically linked to the outcome and unevenly distributed between study groups • Known confounders: stratify the results • Unknown confounders: randomisation
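Stratifying results by a known confounder, as mentioned above, can be sketched with a Mantel-Haenszel pooled odds ratio, which combines the per-stratum 2×2 tables; the tables below are hypothetical:

```python
def mantel_haenszel_or(strata):
    """Mantel-Haenszel odds ratio pooled across confounder strata.
    Each stratum is a 2x2 table (a, b, c, d): exposed cases, exposed
    non-cases, unexposed cases, unexposed non-cases."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

# Hypothetical 2x2 tables stratified by smoking status (the confounder)
strata = [(20, 80, 10, 90),   # smokers
          (5, 95, 3, 97)]     # non-smokers
print(f"MH OR = {mantel_haenszel_or(strata):.2f}")
```

The stratum-specific tables are weighted and combined so that the confounder no longer distorts the comparison between exposed and unexposed groups.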

  24. Confounding – measured & unmeasured common causes [Diagram: a non-causal association between a drug and cancer arising from a common cause such as smoking; supportive care and the placebo effect are further examples of common causes]

  25. 4. Attrition bias • All clinical trials have a period of follow-up; attrition occurs when subjects do not complete the follow-up process (loss to follow-up) • This is harmful because attrition causes loss of information and hence, if too many subjects cannot be analyzed, less precise estimates of the treatment effect • Systematic differences in the loss of participants to follow-up between groups may cause bias if the analysis is improper, e.g. analyzing only participants who had complete follow-up or who were fully compliant (per-protocol analysis)

  26. 5. Detection bias • Systematic differences in outcome assessment between groups • measurement method • follow-up frequency for outcomes

  27. 6. Analytical bias • Bias arising because of the method of analysis • choice of subjects to analyze • the analysis dataset • choice of statistical estimators • biased & unbiased estimators • choice of multivariate models

  28. 7. Reporting bias • Selective reporting of • clinical outcomes e.g. surrogate, subgroups • time-points e.g. early • Use of composite endpoints • component events not equally significant

  29. What is appraisal? • A technique to increase the effectiveness of reading by excluding research studies too poorly designed to inform practice.

  30. Why appraisal? • To free up time to concentrate on a more systematic evaluation of the studies that cross the quality threshold, and to extract their salient points

  31. How to appraise? • Appraising secondary studies (reviews) • Validity • Impact (results) • Practicability (application) • Instruments such as the CASP tools

  32. Critical Appraisal Skills Programme (CASP) • http://www.phru.nhs.uk/pages/PHD/CASP.htm

  33. Appraisal tools for systematic reviews 10 questions to help you make sense of reviews • Is the study valid? • What are the results? • Will the results help locally? • 10 questions adapted from Oxman AD, Cook DJ, Guyatt GH. Users' guides to the medical literature. VI. How to use an overview. JAMA 1994; 272(17): 1367-1371

  34. Screening questions • The first 2 questions • Screening questions can be answered quickly. • If the answer to both is "yes", it is worth proceeding

  35. Screening questions • 1. Did the review ask a clearly-focused question? ☐ Yes ☐ Can't tell ☐ No • Focused in terms of: – the population studied – the intervention given or exposure – the outcomes considered • 2. Did the review include the right type of study? ☐ Yes ☐ Can't tell ☐ No • Consider whether the included studies: – address the review's question – have an appropriate study design • Is it worth continuing?

  36. 3. Did the reviewers try to identify all relevant studies? ☐ Yes ☐ Can't tell ☐ No • Consider: – which bibliographic databases were used – if there was follow-up from reference lists – if there was personal contact with experts – if unpublished studies were searched for – if non-English-language studies were searched for • 4. Did the reviewers assess the quality of the included studies? ☐ Yes ☐ Can't tell ☐ No • Consider: – if a clear, pre-determined strategy was used to determine which studies were included. Look for: – a scoring system – more than one assessor

  37. 5. If the results of the studies have been combined, was it reasonable to do so? • Consider whether: – the results of each study are clearly displayed – the results were similar from study to study (look for tests of heterogeneity) – the reasons for any variations in results are discussed • 6. How are the results presented and what is the main result? • Consider: – how the results are expressed (e.g. odds ratio, relative risk, etc.) – how large the result is and how meaningful it is – how you would sum up the bottom-line result of the review in one sentence
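The tests of heterogeneity that question 5 asks you to look for are typically Cochran's Q and the derived I² statistic. A minimal sketch; the study estimates below are made up for illustration:

```python
import math

def heterogeneity(estimates, std_errors):
    """Cochran's Q and the I^2 statistic for between-study heterogeneity
    of inverse-variance weighted effect estimates."""
    w = [1.0 / se**2 for se in std_errors]
    pooled = sum(wi * e for wi, e in zip(w, estimates)) / sum(w)
    # Q: weighted squared deviations of each study from the pooled estimate
    q = sum(wi * (e - pooled) ** 2 for wi, e in zip(w, estimates))
    df = len(estimates) - 1
    # I^2: proportion of variability beyond chance, floored at 0
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2

q, i2 = heterogeneity([-0.30, -0.10, -0.45], [0.15, 0.20, 0.25])
print(f"Q = {q:.2f}, I^2 = {i2:.0f}%")
```

A Q close to (or below) its degrees of freedom, and hence a low I², suggests the study results are similar enough for pooling to be reasonable.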

  38. 7. How precise are these results? • Consider: – if a confidence interval was reported. Would your decision about whether or not to use this intervention be the same at the upper confidence limit as at the lower confidence limit? – if a p-value is reported where confidence intervals are unavailable

  39. 8. Can the results be applied to the local population? ☐ Yes ☐ Can't tell ☐ No • Consider whether: – the population sample covered by the review could be different from your population in ways that would produce different results – your local setting differs much from that of the review – you can provide the same intervention in your setting • 9. Were all important outcomes considered? ☐ Yes ☐ Can't tell ☐ No • Consider outcomes from the point of view of the: – individual – policy makers and professionals – family/carers – wider community

  40. 10. Should policy or practice change as a result of the evidence contained in this review? ☐ Yes ☐ Can't tell ☐ No • Consider: – whether any benefit reported outweighs any harm and/or cost. If this information is not reported, can it be filled in from elsewhere?
