

  1. Appraisal, Extraction and Pooling of Quantitative Data for Reviews of Effects - from experimental, observational and descriptive studies JBI/CSRTP/2013-14/0002

  2. Introduction
  • Recap of the Introductory module: developing a question (PICO); inclusion criteria; search strategy; selecting studies for retrieval.
  • This module considers how to appraise, extract and synthesize evidence from experimental, observational and descriptive studies.

  3. Program Overview

  4. Program Overview

  5. Session 1: The Critical Appraisal of Studies

  6. Why Critically Appraise?
  • Combining the results of poor quality research may lead to biased or misleading estimates of effectiveness.
  • Example screening flow: 1004 references identified; 172 duplicates removed, leaving 832 references whose titles/abstracts were screened; 715 did not meet the inclusion criteria, leaving 117 studies retrieved in full text; 82 did not meet the inclusion criteria, leaving 35 studies for critical appraisal.

  7. The Aims of Critical Appraisal • To establish validity • To establish the risk of bias

  8. Internal & External Validity
  • Internal validity: is the observed relationship between the independent variable and the outcome a true one?
  • External validity: can the results be used locally, i.e. applied beyond the study setting?

  9. Strength, Magnitude & Precision
  • Strength: how internally valid is the study?
  • Magnitude & precision: how large is the effect, and how precisely has it been estimated?

  10. Clinical Significance and Magnitude of Effect • Pooling of homogeneous studies of effect or harm • Weigh the effect against the cost/resources of change • Determine the precision of the estimate
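To make "determine the precision of the estimate" concrete, here is a minimal Python sketch (all counts invented for illustration) computing a risk ratio and its 95% confidence interval from a 2x2 table:

```python
import math

# Hypothetical 2x2 table (invented counts): events / total per group.
events_tx, n_tx = 20, 100    # intervention group
events_ctl, n_ctl = 40, 100  # control group

risk_tx = events_tx / n_tx
risk_ctl = events_ctl / n_ctl
rr = risk_tx / risk_ctl  # risk ratio (relative risk)

# Standard error of log(RR), then a 95% CI back-transformed from the log scale.
se_log_rr = math.sqrt(1/events_tx - 1/n_tx + 1/events_ctl - 1/n_ctl)
lower = math.exp(math.log(rr) - 1.96 * se_log_rr)
upper = math.exp(math.log(rr) + 1.96 * se_log_rr)

print(f"RR = {rr:.2f}, 95% CI {lower:.2f} to {upper:.2f}")  # RR = 0.50, 95% CI 0.32 to 0.79
```

The narrower the interval, the more precise the estimate; pooling homogeneous studies narrows it further.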

  11. Assessing Methodological Quality • Numerous tools are available for assessing methodological quality of clinical trials and observational studies. • JBI requires the use of a specific tool for assessing risk of bias in each included study. • ‘High quality’ research methods can still leave a study at important risk of bias (e.g. when blinding is impossible). • Some markers of quality are unlikely to have direct implications for risk of bias (e.g. ethical approval, sample size calculation).

  12. Sources of Bias • Selection; • Performance; • Detection; and • Attrition.

  13. Selection Bias
  • Systematic differences in participant characteristics between groups at the start of a trial.
  • These differences arise during allocation to groups.
  • Can be avoided by concealing the allocation of participants to groups.

  14. Performance Bias
  • Systematic differences between groups in the delivery of the intervention of interest, or in the influence of concurrent interventions.
  • These differences occur during the intervention phase of a trial.
  • Can be avoided by blinding investigators and/or participants to group allocation.

  15. Detection Bias • Systematic differences in how the outcome is assessed between groups. • Systematic differences occur at measurement points during the trial. • Can be avoided by blinding of outcome assessor.

  16. Attrition Bias • Systematic differences in withdrawals and exclusions between groups. • Can be avoided by: • Accurate reporting of losses and reasons for withdrawal • Use of ITT analysis

  17. Ranking the "Quality" of Evidence of Effectiveness
  • To what extent does the study design minimize bias and demonstrate validity?
  • Rankings of evidence of effectiveness are generally linked to the actual study design.
  • Thus a "hierarchy" of evidence is most often used, with levels of quality equated with specific study designs.

  18. JBI Levels of Evidence - Effectiveness

  19. The Critical Appraisal Process
  • Every review must set out to use an explicit appraisal process. Essentially:
  • appraisers need a good understanding of research design; and
  • an agreed checklist is normally used.

  20. Session 2: Appraising RCTs and experimental studies

  21. RCTs
  • RCTs and quasi (pseudo) RCTs provide the most robust form of evidence of effects; the RCT is the ideal design for experimental studies.
  • They focus on establishing certainty through measurable attributes.
  • They provide evidence on whether a causal relationship exists between a stated intervention and a specific, measurable outcome, and on the direction and strength of that relationship.
  • These characteristics are associated with the reliability and generalizability of experimental studies.

  22. Randomized Controlled Trials • Evaluate the effectiveness of a treatment/therapy/intervention. • Randomization is critical. • Properly performed RCTs reduce bias, confounding, and chance findings.

  23. Experimental studies • Three essential elements • Randomization (where possible). • Researcher-controlled manipulation of the independent variable. • Researcher control of the experimental situation.

  24. Other experimental studies
  • Quasi-experiments lack a true method of randomization to treatment groups. Quasi-experimental designs include:
  • designs without control groups;
  • designs that use control groups but not pre-tests; and
  • designs that use both control groups and pre-tests.

  25. Sampling • Selecting participants from population. • Inclusion/exclusion criteria. • Sample should represent the population.

  26. Sampling Methods • Probabilistic (Random) sampling • Consecutive • Systematic • Convenience
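To illustrate the mechanics of two of these methods, the sketch below draws a simple random sample and a systematic sample from a hypothetical frame of 100 patient IDs (frame and sample size are invented); convenience and consecutive sampling, by contrast, simply take whoever is available or presents next.

```python
import random

frame = list(range(1, 101))  # hypothetical sampling frame of 100 patient IDs
n = 10

# Probabilistic (simple random) sampling: every unit has an equal
# chance of selection.
random_sample = random.sample(frame, n)

# Systematic sampling: random start, then every k-th unit thereafter.
k = len(frame) // n
start = random.randrange(k)
systematic_sample = frame[start::k]

print(sorted(random_sample))
print(systematic_sample)
```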

  27. Randomization

  28. Randomization Issues
  • Simple methods, such as tossing a coin or rolling a die, may result in unequal group sizes; block randomization keeps group sizes balanced.
  • Chance imbalances can introduce confounding; stratification prior to randomization ensures that important baseline characteristics are even in both groups.

  29. Block Randomization
  • All possible balanced combinations for a block of four, ignoring unequal allocation: 1 AABB; 2 ABAB; 3 ABBA; 4 BABA; 5 BAAB; 6 BBAA.
  • Use a table of random numbers and generate the allocation from the sequence, e.g. 533 2871.
  • Minimize bias by changing the block size.
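A minimal sketch of block randomization as described above, assuming a block size of four and two groups (both illustrative choices):

```python
import random

def block_randomize(n_participants, block_size=4, groups=("A", "B")):
    """Generate a balanced allocation sequence from shuffled blocks."""
    per_group = block_size // len(groups)
    sequence = []
    while len(sequence) < n_participants:
        block = list(groups) * per_group  # e.g. ['A', 'B', 'A', 'B']
        random.shuffle(block)             # random order within the block
        sequence.extend(block)
    return sequence[:n_participants]

print(block_randomize(12))
```

In practice the block size would also be varied at random between blocks, as the slide notes, so that recruiters cannot predict the next allocation.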

  30. Stratified Randomization
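The slide presents this as a diagram; in code, stratified randomization amounts to running a separate block randomization within each stratum. A minimal sketch with hypothetical participants stratified by age band (the helper repeats the block logic from the previous example so this block runs on its own):

```python
import random

def block_randomize(n, block_size=2, groups=("A", "B")):
    """Balanced allocation from shuffled fixed-size blocks."""
    seq = []
    while len(seq) < n:
        block = list(groups) * (block_size // len(groups))
        random.shuffle(block)
        seq.extend(block)
    return seq[:n]

def stratified_randomize(participants, stratum_of):
    """Run a separate block randomization within each stratum."""
    strata = {}
    for p in participants:
        strata.setdefault(stratum_of(p), []).append(p)
    allocation = {}
    for members in strata.values():
        for person, group in zip(members, block_randomize(len(members))):
            allocation[person] = group
    return allocation

# Hypothetical participants stratified into two age bands (65+ or not).
ages = {"P1": 34, "P2": 71, "P3": 29, "P4": 65}
print(stratified_randomize(ages, stratum_of=lambda p: ages[p] >= 65))
```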

  31. Blinding • A method to eliminate bias arising from human behaviour • Applies to participants, investigators, assessors, etc. • Blinding of allocation • Single-, double- and triple-blinded designs

  32. Blinding (Schulz, 2002)

  33. Intention to Treat • ITT analysis is an analysis based on the initial treatment intent, not on the treatment eventually administered. • Avoids various misleading artifacts that can arise in intervention research. • E.g. if people who have a more serious problem tend to drop out at a higher rate, even a completely ineffective treatment may appear to be providing benefits if one merely compares those who finish the treatment with those who were enrolled in it. • Everyone who begins the treatment is considered to be part of the trial, whether they finish it or not.
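A toy illustration of the artifact described above, with invented data: the treatment here is ineffective (50% improve in both arms under ITT), but a per-protocol analysis that drops the sicker non-completers makes it look strikingly beneficial.

```python
# Each participant: (assigned_group, completed_treatment, improved). Invented data.
participants = [
    ("treatment", True,  True),  ("treatment", True,  True),
    ("treatment", False, False), ("treatment", False, False),
    ("control",   True,  True),  ("control",   True,  True),
    ("control",   True,  False), ("control",   True,  False),
]

def improvement_rate(rows):
    return sum(improved for _, _, improved in rows) / len(rows)

# Intention-to-treat: everyone analysed in the group they were randomized to.
itt_tx = improvement_rate([p for p in participants if p[0] == "treatment"])
itt_ctl = improvement_rate([p for p in participants if p[0] == "control"])

# Per-protocol: completers only -- the dropouts (who did worse) vanish.
pp_tx = improvement_rate([p for p in participants
                          if p[0] == "treatment" and p[1]])

print(f"ITT: treatment {itt_tx:.0%} vs control {itt_ctl:.0%}")  # 50% vs 50%
print(f"Per-protocol: treatment {pp_tx:.0%}")                   # 100%
```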

  34. Minimizing Risk of Bias • Randomization; • Allocation; • Blinding; and • Intention to treat (ITT) analysis.

  35. Appraising RCTs/quasi-experimental studies: the JBI-MAStARI instrument

  36. Assessing Study Quality as a Basis for Inclusion in a Review
  • Studies are ranked from high quality to poor quality; a cut-off point separates included studies (above it) from excluded studies (below it).

  37. Group Work 1 • Working in pairs, critically appraise the two papers in your workbook • Reporting Back

  38. Session 3: Appraising Observational Studies

  39. Rationale and potential of observational studies as evidence • Account for the majority of published research studies • Need to clarify what designs to include • Need appropriate critical appraisal/quality assessment tools • Concerns about methodological issues inherent to observational studies • Confounding, biases, differences in design • Large samples can yield precise but spurious results

  40. Appraisal of Observational Studies • Critical appraisal and assessment of quality is often more difficult than for RCTs. • Using scales/checklists developed for RCTs may not be appropriate. • Specific tools are available for appraising observational research.

  41. Confounding Factors • The apparent effect is not the true effect. • There may be other factors relevant to the outcome in question. • Confounding can be an important threat to the validity of results. • Adjustments for confounding factors can be made, e.g. has a multivariate analysis been conducted? • Authors often look for a plausible explanation for their results.
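As a sketch of the "has a multivariate analysis been conducted?" point, assuming the numpy, pandas and statsmodels packages: with simulated data in which age drives both exposure and outcome (the exposure itself has no real effect), a crude logistic model shows a spurious exposure effect, while adjusting for the confounder shrinks the odds ratio toward 1.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000

# Simulated confounding: older people are more likely both to be
# exposed and to experience the outcome; exposure itself does nothing.
age = rng.normal(60, 10, n)
exposure = rng.binomial(1, 1 / (1 + np.exp(-(age - 60) / 10)))
outcome = rng.binomial(1, 1 / (1 + np.exp(-(age - 60) / 5)))
df = pd.DataFrame({"age": age, "exposure": exposure, "outcome": outcome})

crude = smf.logit("outcome ~ exposure", data=df).fit(disp=False)
adjusted = smf.logit("outcome ~ exposure + age", data=df).fit(disp=False)

print("crude OR:   ", np.exp(crude.params["exposure"]))     # well above 1
print("adjusted OR:", np.exp(adjusted.params["exposure"]))  # near 1
```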

  42. Bias • Selection bias: participants differ from the population with the same condition. • Follow-up bias: attrition may be due to differences in outcome. • Measurement/detection bias: knowledge of the outcome may influence assessment of the exposure, and vice versa.

  43. Observational Studies - Types • Cohort studies • Case-control studies • Case series/case report • Cross-sectional studies

  44. Cohort Studies • A group of people who share a common characteristic. • Useful for determining the natural history and incidence of a disorder or exposure. • Two types: prospective (longitudinal) and retrospective (historic). • Aid in studying causal associations.

  45. Prospective Cohort Studies (figure taken from Tay & Tinmouth, 2007)

  46. Prospective Cohort Studies • Longitudinal observation through time. • Allow investigation of rare exposures and of diseases with long latency. • Expensive. • Increased likelihood of attrition. • Long time before useful data emerge.
