
Assessing the Quality of Individual Studies






Presentation Transcript


  1. Assessing the Quality of Individual Studies Prepared for: The Agency for Healthcare Research and Quality (AHRQ) Training Modules for Systematic Reviews Methods Guide www.ahrq.gov

  2. Systematic Review Process Overview

  3. Learning Objectives • To describe the concept of quality assessment • To identify reasons for quality assessment • To list the steps in quality assessment • To describe and report the methods for quality assessment

  4. What Is Quality Assessment? • Definition of quality: • “[T]he extent to which all aspects of a study’s design and conduct can be shown to protect against systematic bias, nonsystematic bias, and inferential error.” (Lohr & Carey, 1999) • Considered to be synonymous with internal validity • Relevant for individual studies • Distinct from assessment of risk of bias for a body of evidence Lohr KN, Carey TS. Jt Comm J Qual Improv 1999;25:470-9.

  5. What Are the Components of Quality Assessment? • Systematic Errors: • Include selection bias and confounding, in which values tend to be inaccurate in a particular direction • Nonsystematic Errors: • Are attributable to chance • Inferential Errors: • Result from problems in data analysis and interpretation, such as choice of the wrong statistical measure or wrongly rejecting the null hypothesis Lohr KN, Carey TS. Jt Comm J Qual Improv 1999;25:470-9.
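A minimal sketch of the distinction above, in Python; the effect size, noise level, and function names are invented for illustration. Nonsystematic (random) error shrinks as a study grows, while systematic error shifts the estimate in one direction no matter how large the sample is.

```python
# Illustrative only: simulating how systematic error (bias) and
# nonsystematic error (chance) distort an estimated effect.
import random

random.seed(1)
TRUE_EFFECT = 0.30          # hypothetical true treatment effect

def run_study(n, bias=0.0):
    """Average of n noisy observations, shifted by a systematic bias."""
    draws = [TRUE_EFFECT + bias + random.gauss(0, 0.5) for _ in range(n)]
    return sum(draws) / n

# Random error averages out as the study grows ...
print(run_study(n=50))        # noisy estimate
print(run_study(n=50_000))    # close to 0.30

# ... but systematic error does not: a biased design stays wrong
# at any sample size.
print(run_study(n=50_000, bias=0.15))  # converges to ~0.45, not 0.30
```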

  6. Consider the Contribution of an Individual Study to the Body of Evidence • Risk of bias: quality (systematic error and inferential error) and type of study • Precision: size of study (nonsystematic or random error) • Consistency: direction and magnitude of results • Directness: direct vs. indirect comparisons, health outcomes vs. surrogate outcomes • Applicability: relevance of results to key questions Owens DK, et al. In: Methods guide for comparative effectiveness reviews. Available at: http://effectivehealthcare.ahrq.gov/ehc/products/60/318/2009_0805_grading.pdf.
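As one hypothetical way to record what a single study contributes to each domain, the structure below uses field names paraphrased from the slide; it is not an AHRQ data format.

```python
# Hypothetical record of a study's contribution to each
# strength-of-evidence domain (field names are illustrative).
from dataclasses import dataclass

@dataclass
class StudyContribution:
    risk_of_bias: str   # quality rating plus study design
    precision: float    # e.g., width of the 95% confidence interval
    consistency: str    # direction/magnitude relative to other studies
    directness: str     # direct comparison? health vs. surrogate outcome?
    applicability: str  # relevance to the review's key questions

study = StudyContribution(
    risk_of_bias="fair / randomized controlled trial",
    precision=0.42,
    consistency="same direction as pooled estimate",
    directness="direct head-to-head comparison, health outcome",
    applicability="population matches key question 1",
)
```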

  7. Reasons for Quality Assessment • Quality assessment is required for: • Interpreting results • Grading the strength of the body of evidence • Quality assessment may also be used for: • Selecting studies for the review (based on a priori assessment of evidence gaps) • Selecting studies for qualitative synthesis • Selecting studies for quantitative synthesis • Interpreting heterogeneous findings

  8. What Are the Steps in Quality Assessment of Each Individual Study (I)? 1. Classify the study design 2. Apply predefined criteria for quality assessment of each outcome, based on: • Study design — sources of bias may vary by design • Study conduct — poor study conduct and discrepancy between design and conduct may increase risk of bias • Reporting — quality assessment may be influenced by adequacy of reporting when information on study design and conduct is missing Helfand M, Balshem H. In: Methods guide for comparative effectiveness reviews. Available at: http://www.effectivehealthcare.ahrq.gov/ehc/products/60/294/2009_0805_principles1.pdf.

  9. What Are the Steps in Quality Assessment of Each Individual Study (II)? 3. Arrive at a summary judgment of the study’s quality, rating it good, fair, or poor • Requires resolution of conflicts when two reviewers independently evaluate quality Helfand M, Balshem H. In: Methods guide for comparative effectiveness reviews. Available at: http://www.effectivehealthcare.ahrq.gov/ehc/products/60/294/2009_0805_principles1.pdf.
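Slides 8 and 9 call for two reviewers to rate quality independently and then resolve conflicts. The slides do not prescribe an agreement statistic, but Cohen's kappa is a common companion measure for quantifying agreement before adjudication; a minimal sketch with invented ratings:

```python
# Cohen's kappa on two raters' good/fair/poor ratings (not mandated by
# the slides; a commonly reported agreement statistic).
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

a = ["good", "fair", "fair", "poor", "good", "fair"]
b = ["good", "fair", "poor", "poor", "good", "good"]
print(round(cohens_kappa(a, b), 2))  # 0.52 on these illustrative ratings
```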

  10. Questions To Consider When Classifying Study Design • Did the study have more than one group or arm? If so, was a control group present? • Did investigators have control over allocation and timing of the intervention? • Did investigators randomly allocate subjects to interventions? • Did investigators measure intervention and exposure status concurrently for the intervention and comparison groups? • Did investigators measure outcomes concurrently for the intervention and comparison groups?
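These yes/no questions naturally form a decision tree. The sketch below is a simplified illustration of that logic; it is not the validated classification tool cited in the references (Hartling et al.).

```python
# Simplified decision sketch of the slide's classification questions.
def classify_design(comparison_group: bool,
                    investigator_allocated: bool,
                    randomized: bool,
                    concurrent_groups: bool) -> str:
    if not comparison_group:
        return "single-arm study (e.g., case series)"
    if investigator_allocated and randomized:
        return "randomized controlled trial"
    if investigator_allocated:
        return "nonrandomized controlled trial"
    if concurrent_groups:
        return "concurrent observational study (e.g., cohort)"
    return "nonconcurrent observational study (e.g., historical control)"

print(classify_design(True, True, True, True))    # randomized controlled trial
print(classify_design(True, False, False, True))  # concurrent observational study
```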

  11. Apply Predefined Criteria for All Study Types (I) • Select a tool based on its coverage of important criteria • Based on the topic, select and apply one of several available tools that consider and explain how to evaluate: • Similarity of groups at baseline in characteristics and prognostic factors • Validity of primary outcomes • Blinded measurement of outcomes Helfand M, Balshem H. In: Methods guide for comparative effectiveness reviews. Available at: http://www.effectivehealthcare.ahrq.gov/ehc/products/60/294/2009_0805_principles1.pdf.

  12. Apply Predefined Criteria for All Study Types (II) • Apply one of several available tools that consider: • Intention-to-treat analysis • Differential loss to followup between the compared groups or overall high loss to followup • Conflict of interest
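A sketch of applying such predefined criteria outcome by outcome; the criterion labels paraphrase slides 11 and 12, and the design-specific items on the following slides would extend the list. The function and answer vocabulary are assumptions, not part of any AHRQ tool.

```python
# Sketch: record each predefined criterion per outcome, with "unclear"
# capturing inadequate reporting (see slide 8's reporting caveat).
COMMON_CRITERIA = [
    "groups similar at baseline (characteristics, prognostic factors)",
    "valid primary outcomes",
    "blinded outcome measurement",
    "intention-to-treat analysis",
    "no differential or overall high loss to followup",
    "no conflict of interest",
]

def assess_outcome(answers: dict[str, str]) -> dict[str, str]:
    """answers maps a criterion to 'yes', 'no', or 'unclear'."""
    return {c: answers.get(c, "unclear") for c in COMMON_CRITERIA}

ratings = assess_outcome({"valid primary outcomes": "yes",
                          "blinded outcome measurement": "no"})
print(ratings)  # unanswered criteria default to 'unclear'
```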

  13. Additional Criteria for Trials • Methods used for randomization • Allocation concealment • Blinding of subjects and providers Helfand M, Balshem H. In: Methods guide for comparative effectiveness reviews. Available at: http://www.effectivehealthcare.ahrq.gov/ehc/products/60/294/2009_0805_principles1.pdf.

  14. Additional Criteria for Observational Studies (I) • Sample size, width of confidence intervals, or power • Methods for selecting participants • Inception cohort, methods to adjust for or avoid selection bias • Methods for measuring exposure variables Helfand M, Balshem H. In: Methods guide for comparative effectiveness reviews. Available at: http://www.effectivehealthcare.ahrq.gov/ehc/products/60/294/2009_0805_principles1.pdf.

  15. Additional Criteria for Observational Studies (II) • Methods for dealing with any design-specific issues such as recall bias and interviewer bias • Analytical methods to control confounding • Matching, stratification, multivariate analysis, or other statistical adjustment Helfand M, Balshem H. In: Methods guide for comparative effectiveness reviews. Available at: http://www.effectivehealthcare.ahrq.gov/ehc/products/60/294/2009_0805_principles1.pdf.
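Among the analytic methods the slide names for controlling confounding, stratification is easy to show concretely. The sketch below computes a Mantel-Haenszel odds ratio pooled across confounder strata; all counts are invented for illustration.

```python
# Worked sketch of stratified adjustment via the Mantel-Haenszel
# pooled odds ratio: OR_MH = sum(a*d/n) / sum(b*c/n) over strata.
def mantel_haenszel_or(strata):
    """strata: list of (a, b, c, d) 2x2 tables per confounder stratum,
    where a/b = exposed with/without outcome, c/d = unexposed."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

strata = [(10, 90, 5, 95),   # stratum 1 of the confounder (e.g., age < 65)
          (40, 60, 25, 75)]  # stratum 2 (e.g., age >= 65)
crude = ((10 + 40) * (95 + 75)) / ((90 + 60) * (5 + 25))
print(round(crude, 2))                       # ~1.89, confounded crude OR
print(round(mantel_haenszel_or(strata), 2))  # ~2.03, stratum-adjusted OR
```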

  16. Arrive at a Comprehensive Judgment of Quality • After assessment of individual criteria, assign ratings of “good,” “fair,” or “poor” (attributes described in later slides) • Assess quality for each outcome of interest • Base ratings on the evaluation of likely effect of design or execution flaws on internal validity, rather than a nominal failure to meet every quality criterion • Adjudicate differences between raters in a transparent manner when two raters independently assess overall quality Helfand M, Balshem H. In: Methods guide for comparative effectiveness reviews. Available at: http://www.effectivehealthcare.ahrq.gov/ehc/products/60/294/2009_0805_principles1.pdf.
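A sketch of this rating logic, encoding the slide's principle that the judgment rests on the likely effect of flaws on internal validity rather than on a nominal count of unmet criteria. The thresholds are illustrative assumptions; the slides deliberately leave the judgment to reviewers.

```python
# Sketch of slide 16's summary judgment, per outcome of interest.
def summary_rating(flaws):
    """flaws: list of (description, likely_major_bias: bool)."""
    if any(major for _, major in flaws):
        return "poor"    # a flaw likely to cause major bias
    if flaws:
        return "fair"    # flaws present, none likely to cause major bias
    return "good"        # design and conduct address risk of bias

print(summary_rating([]))                                  # good
print(summary_rating([("no ITT analysis", False)]))        # fair
print(summary_rating([("unconcealed allocation", True)]))  # poor
```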

  17. Attributes of Good-Quality Studies • Design and conduct of study address risk of bias • Appropriate measurement of outcomes • Appropriate statistical and analytical methods • Low drop-out rates • Adequate reporting of statistical and analytical methods, drop-out rates and reasons, and outcomes (no reporting errors) Helfand M, Balshem H. In: Methods guide for comparative effectiveness reviews. Available at: http://www.effectivehealthcare.ahrq.gov/ehc/products/60/294/2009_0805_principles1.pdf.

  18. Attributes of Fair-Quality Studies • Do not meet all the criteria required for a rating of good quality • No flaw is likely to cause major bias • Missing information often drives rating Helfand M, Balshem H. In: Methods guide for comparative effectiveness reviews. Available at: http://www.effectivehealthcare.ahrq.gov/ehc/products/60/294/2009_0805_principles1.pdf.

  19. Attributes of Poor-Quality Studies • Significant biases • Inappropriate design, conduct, analysis, or reporting • Large amounts of missing information • Discrepancies in reporting Helfand M, Balshem H. In: Methods guide for comparative effectiveness reviews. Available at: http://www.effectivehealthcare.ahrq.gov/ehc/products/60/294/2009_0805_principles1.pdf.

  20. Treatment of Poor-Quality Studies in the Review • Poor-quality studies may be excluded or included • Base decisions on gaps in current evidence and availability of good-quality or fair-quality studies • Justify selective inclusion of poor-quality studies for subgroups or subquestions

  21. Reporting Quality Ratings • Accompany the overall quality rating for each individual study with a statement of: • Flaws in the design or execution of the study • The potential consequences of those flaws • Report the criteria and the process used to arrive at a quality rating
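One hypothetical way to structure such a report entry, pairing the rating with each flaw and its likely consequence; the schema and study name are illustrative, not an AHRQ format.

```python
# Hypothetical structured report entry for a quality rating.
report_entry = {
    "study": "Example et al., 2008 (hypothetical)",
    "outcome": "all-cause mortality",
    "rating": "fair",
    "flaws": [
        {"flaw": "outcome assessors not blinded",
         "consequence": "possible detection bias inflating the effect"},
    ],
    "criteria_source": "predefined checklist specified in the protocol",
}
print(report_entry["rating"])
```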

  22. Key Messages (I):Definition of Quality Assessment • Quality assessment: • Is synonymous with internal validity • Refers to individual studies • Contributes to, but is separate from, the evaluation of the risk of systematic bias for the body of evidence

  23. Key Messages (II): Rationale of and Steps in Quality Assessment • Results of quality assessment are used in multiple steps in the systematic review process, from final inclusion of studies to interpretation of evidence • Steps in quality assessment • Study design classification • Assessment of individual quality criteria • Summary judgment of the study quality

  24. Key Messages (III): Reporting • Transparency of process • Full reporting of all elements of quality for each individual study • Explicit description (and examples) of how each criterion was operationalized • Clear reporting of how team members scored quality • Description of how conflicts between raters were resolved • Transparency of judgment • Explanation of final rating

  25. References (I) • Deeks JJ, Dinnes J, D’Amico R, et al, for the International Stroke Trial Collaborative Group and the European Carotid Surgery Trial Collaborative Group. Evaluating non-randomised intervention studies. Health Technol Assess 2003;7(27):iii-x,1-173. • Hartling L, Bond K, Harvey K, et al. Developing and testing a tool for the classification of study designs in systematic reviews of interventions and exposures. (Prepared by the University of Alberta Evidence-based Practice Center under Contract No. 290-02-0023.) In press. • Helfand M, Balshem H. Principles in developing and applying guidance. In: Methods guide for comparative effectiveness reviews. Rockville, MD: Agency for Healthcare Research and Quality; posted August 2009. Available at: http://www.effectivehealthcare.ahrq.gov/ehc/products/60/294/2009_0805_principles1.pdf.

  26. References (II) • Lohr KN, Carey TS. Assessing “best evidence”: issues in grading the quality of studies for systematic reviews. Jt Comm J Qual Improv 1999;25:470-9. • Owens DK, Lohr KN, Atkins D, et al. Grading the strength of a body of evidence when comparing medical interventions. In: Methods guide for comparative effectiveness reviews. Rockville, MD: Agency for Healthcare Research and Quality; posted July 2009. Available at: http://effectivehealthcare.ahrq.gov/ehc/products/60/318/2009_0805_grading.pdf. • Sanderson S, Tatt ID, Higgins JP. Tools for assessing quality and susceptibility to bias in observational studies in epidemiology: a systematic review and annotated bibliography. Int J Epidemiol 2007;36:666-76.

  27. References (III) • West S, King V, Carey TS, et al. Systems to Rate the Strength of Scientific Evidence. Evidence Report/Technology Assessment No. 47. (Prepared by the Research Triangle Institute–University of North Carolina Evidence-based Practice Center under Contract No. 290-97-0011.) Rockville, MD: Agency for Healthcare Research and Quality; March 2002. AHRQ Publication No. 02-E015. Available at: http://www.ahrq.gov/clinic/epcsums/strengthsum.pdf. • Whiting P, Rutjes AWS, Dinnes J, et al. Development and validation of methods for assessing the quality of diagnostic accuracy studies. Health Technol Assess 2004;8(25):iii,1-234.

  28. Author • This presentation was prepared by Meera Viswanathan, Ph.D., a member of the Research Triangle Institute–University of North Carolina Evidence-based Practice Center. • The presentation is based on an update of chapter 6 in version 1.0 of the Methods Guide for Comparative Effectiveness Reviews (update available at: http://www.effectivehealthcare.ahrq.gov/ehc/products/60/294/2009_0805_principles1.pdf).
