Introduction - PowerPoint PPT Presentation

Presentation Transcript

  1. Summary of Findings & Assessment of Quality of Evidence: GRADE Workshop. Sunday, October 17, 2010, 0900 to 1700. Introduction

  2. Introduction to facilitators • Michelle Kho • Jan Brozek • Nancy Santesso • Holger Schunemann • Ingvil von Mehren Sæterdal

  3. Agenda: a mix of presentations, interactive sessions, hands-on work and small group discussions

  4. Systematic review process PICO

  5. Systematic review process

  6. PICO

  7. Risk of Bias

  8. Meta-analysis

  9. Sensitivity analyses • High- versus lower-protein diets (studies with <20% losses to follow-up) • Outcome: change in systolic blood pressure (mmHg)

  10. Subgroup analysis

  11. Funnel Plot • Medline • Search Strategy for RCTs and Reviews • 1 diet, protein-restricted/ • 2 diet, carbohydrate-restricted/ • 3 1 or 2 • 4 diet fads/ • 5 (carbohydrate* or protein*).ti,ab. • 6 4 and 5 • 7 exp dietary proteins/ • 8 dietary carbohydrates/ • 9 (diet* or intake*).ti,ab. • 10 (high* or increas* or rich or low* or restrict* or decreas* or reduc*).ti,ab. • 11 (7 or 8) and 9 and 10 • 12 ((carbohydrate* or protein*) adj3 (high* or increas* or rich or low* or restrict* or decreas* or reduc*)).ti,ab. • 13 12 and 9 • 14 3 or 6 or 11 or 13 • 15 randomized controlled trial.pt. • 16 controlled clinical trial.pt. • 17 randomized.ab. • 18 placebo.ab. • 19 clinical trials as topic.sh. • 20 randomly.ab. • 21 trial.ti. • 22 or/15-21 • 23 humans.sh. • 24 22 and 23 • 25 14 and 24

  12. Systematic review process

  13. Cochrane Handbook • Chapter 11: Presenting results and Summary of Findings tables • Chapter 12: Interpreting results and drawing conclusions

  14. Overview: Interpreting results of a review and GRADE • How does GRADE fit into the process of moving from results to conclusions in systematic reviews? • What are the basic principles behind GRADE?

  15. Consider the following examples of moving from results to conclusions. How would you interpret the results of the meta-analyses and the conclusions made by the authors?

  16. Authors’ conclusions • Short term beneficial effects were found for fasting for 7 to 10 days followed by a vegetarian diet when compared to ordinary diet.

  17. The pooled SMD for pain reduction comparing glucosamine to placebo was 0.61, which represents a moderate clinically significant treatment benefit in favour of glucosamine
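A pooled SMD like the 0.61 above typically comes from inverse-variance weighting of the individual study SMDs. A minimal sketch of that pooling; the study estimates and standard errors below are made up for illustration (the glucosamine review's actual data are not shown here):

```python
import math

def pooled_smd(smds, ses):
    """Fixed-effect inverse-variance pooling of standardized mean
    differences (SMDs). Inputs are hypothetical, for illustration."""
    weights = [1 / se**2 for se in ses]              # weight = 1 / variance
    total_w = sum(weights)
    pooled = sum(w * d for w, d in zip(weights, smds)) / total_w
    se_pooled = math.sqrt(1 / total_w)               # SE of the pooled SMD
    ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
    return pooled, ci

# Illustrative (made-up) study SMDs and standard errors:
est, (lo, hi) = pooled_smd([0.45, 0.70, 0.62], [0.15, 0.20, 0.18])
```

Larger studies (smaller SEs) pull the pooled estimate toward their results; the 95% CI around the pooled SMD is what the imprecision judgments later in the workshop are based on.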

  18. What information do you think would increase or decrease your confidence in these results? • What information do you think would indicate that more research is or is not necessary? Work with your neighbor and discuss for 5 mins

  19. Weighing the criteria for overall quality of evidence • In fact, in this example: • allocation concealment is unclear in one of the studies • only three of five studies measured major bleeding, a primary outcome in anticoagulation studies, suggesting selective outcome reporting • the confidence intervals include the potential for harm as well as no harm • I might say that my confidence in the results is “low” and that more research is likely to change the results

  20. The pooled SMD for pain reduction comparing glucosamine to placebo was 0.61, which represents a moderate clinically significant treatment benefit in favour of glucosamine

  21. Likelihood of and confidence in an outcome

  22. Quality of evidence across studies for an outcome

  23. GRADE: recommendation and quality of evidence. Clear separation: 1) Four categories of quality of evidence: high, moderate, low, very low • methodological quality of evidence • likelihood of bias • by outcome and across outcomes 2) Recommendation: two grades, conditional (aka weak) or strong (for or against an intervention) • balance of benefits and downsides, values and preferences, resource use and quality of evidence *www.GradeWorking-Group.org

  24. GRADE Quality of Evidence In the context of a systematic review • The quality of evidence reflects the extent to which we are confident that an estimate of effect is correct. In the context of making recommendations • The quality of evidence reflects the extent to which our confidence in an estimate of the effect is adequate to support a particular recommendation.

  25. Determinants of quality • RCTs start as high quality • observational studies start as low quality • 5 factors that can lower quality • limitations in detailed design and execution (risk of bias criteria) • inconsistency (or heterogeneity) • indirectness (PICO and applicability) • imprecision (number of events and confidence intervals) • publication bias • 3 factors that can increase quality • large magnitude of effect • all plausible residual confounding may be working to reduce the demonstrated effect, or to increase the effect if no effect was observed • dose-response gradient

  26. 1. Design and Execution/Risk of Bias Examples: • Inappropriate selection of exposed and unexposed groups • Failure to adequately measure/control for confounding • Selective outcome reporting • Failure to blind (e.g. outcome assessors) • High loss to follow-up • Lack of concealment in RCTs • Intention to treat principle violated

  27. Design and Execution/RoB From Cates, CDSR 2008

  28. Design and Execution/RoB Overall judgment required

  29. 2. Inconsistency of results (heterogeneity) • if there is inconsistency, look for an explanation • patients, intervention, comparator, outcome • if inconsistency is unexplained, lower the quality

  30. Reminders for immunization uptake

  31. Judgment • variation in size of effect • overlap in confidence intervals • statistical significance of heterogeneity • I2

  32. Inconsistency when 1 study? • Do not downgrade

  33. 3. Directness of evidence (generalizability, transferability, applicability) • differences in • populations/patients (HIC - L/MIC, women in general - pregnant women) • interventions (all techniques, new - old) • comparator appropriate (newer technique - old or no technique) • outcomes (important - surrogate: CIN I - cancer) • indirect comparisons • interested in A versus B • have A versus C and B versus C • Cryo + antibiotics versus no intervention versus Cryo - antibiotics

  34. EVIDENCE PROFILE Question: Cryotherapy with antibiotics vs no antibiotics for histologically confirmed CIN 1. Notes: (1) all rates presented at 12 months with the assumption that events would occur within this time frame; (2) indirect analysis between single-arm observational studies

  35. 4. Publication Bias • Should always be suspected • Only small “positive” studies • For profit interest • Various methods to evaluate – none perfect, but clearly a problem

  36. Publication bias • I.V. Mg in acute myocardial infarction • Meta-analysis: Yusuf S, Circulation 1993 • ISIS-4: Lancet 1995 • Egger M, Smith DS. BMJ 1995;310:752-54

  37. Funnel plot [figure: symmetrical funnel, no publication bias; axes: standard error (y) vs odds ratio (x, log scale)] Egger M, Cochrane Colloquium Lyon 2001

  38. Funnel plot [figure: asymmetrical funnel, publication bias?; axes: standard error (y) vs odds ratio (x, log scale)] Egger M, Cochrane Colloquium Lyon 2001
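One of the "various methods to evaluate" funnel-plot asymmetry is Egger's regression of the standardized effect on precision: an intercept far from zero suggests asymmetry. A rough pure-Python sketch with invented log odds ratios (a real analysis would also test whether the intercept differs significantly from zero):

```python
def egger_intercept(log_ors, ses):
    """Egger's regression sketch: regress standardized effect
    (log OR / SE) on precision (1 / SE) by ordinary least squares
    and return the intercept. Illustrative inputs only."""
    y = [e / s for e, s in zip(log_ors, ses)]   # standardized effects
    x = [1 / s for s in ses]                    # precisions
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return my - slope * mx                      # the Egger intercept

# Symmetric made-up data: same underlying log OR at every precision
sym = egger_intercept([0.40, 0.40, 0.40, 0.40], [0.1, 0.2, 0.3, 0.4])
# Asymmetric made-up data: smaller (less precise) studies show bigger effects
asym = egger_intercept([0.40, 0.50, 0.70, 0.90], [0.1, 0.2, 0.3, 0.4])
```

With the symmetric data the intercept is near zero; with the small-study-inflated data it is clearly positive, mirroring the asymmetric funnel on slide 38.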

  39. 5. Imprecision • small sample size • small number of events • wide confidence intervals • uncertainty about the magnitude of effect • extent to which confidence in the estimate of effect is adequate to support a decision

  40. Example: Immunization in children

  41. For systematic reviews • If the 95% CI excludes a relative risk (RR) of 1.0 and the total number of events or patients exceeds the OIS criterion, precision is adequate. • If the 95% CI includes appreciable benefit or harm (we suggest an RR under 0.75 or over 1.25 as a rough guide), rating down for imprecision may be appropriate even if the OIS criteria are met.
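The two rules on this slide can be sketched as a simple check. The function name and the returned labels are our own; the 0.75/1.25 rough guide and the OIS criterion come from the slide's text:

```python
def rate_precision(rr_lo, rr_hi, n_total, ois):
    """Sketch of the slide's imprecision rules for a pooled relative
    risk with 95% CI (rr_lo, rr_hi). Hypothetical helper, not an
    official GRADE algorithm."""
    excludes_null = rr_hi < 1.0 or rr_lo > 1.0        # CI excludes RR = 1.0
    # CI crosses the "appreciable benefit or harm" rough guide (0.75 / 1.25)
    appreciable = rr_lo < 0.75 or rr_hi > 1.25
    if excludes_null and n_total >= ois and not appreciable:
        return "adequate"
    return "consider rating down for imprecision"

# Made-up review: RR 95% CI 0.80-0.95, 2400 patients, OIS of 2000
verdict = rate_precision(rr_lo=0.80, rr_hi=0.95, n_total=2400, ois=2000)
```

Here the CI excludes 1.0, stays inside 0.75-1.25, and the sample exceeds the OIS, so precision would be judged adequate; widening the CI past either threshold flips the verdict.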

  42. Optimal information size • We suggest the following: if the total number of patients included in a systematic review is less than the number of patients generated by a conventional sample size calculation for a single adequately powered trial, consider rating down for imprecision. Authors have referred to this threshold as the “optimal information size” (OIS).
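The "conventional sample size calculation" can be sketched for a dichotomous outcome with the standard normal-approximation formula for two proportions (alpha = 0.05 two-sided, 80% power; the event rates below are illustrative, not from any review in this workshop):

```python
import math

def ois_two_proportions(p1, p2):
    """Total N for a single two-arm trial comparing two event
    proportions, used here as a stand-in for the optimal information
    size (OIS). Normal-approximation formula; alpha = 0.05 (two-sided),
    power = 80%, so z_alpha ~ 1.96 and z_beta ~ 0.84."""
    z_a, z_b = 1.96, 0.84
    p_bar = (p1 + p2) / 2
    n_per_arm = ((z_a * math.sqrt(2 * p_bar * (1 - p_bar))
                  + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
                 / (p1 - p2) ** 2)
    return 2 * math.ceil(n_per_arm)               # total across both arms

# Detecting a drop in event rate from 15% to 10% needs roughly 1400 patients
total = ois_two_proportions(0.10, 0.15)
```

If a meta-analysis of that comparison pools fewer patients than this total, the OIS rule on slide 42 suggests considering rating down for imprecision.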

  43. [slide: chart; only axis labels (25.0%, 0) recoverable]

  44. [slide: chart; only axis labels (25.0%, 0) recoverable]