
  1. A tool for the classification of study designs in systematic reviews of interventions and exposures
  Meera Viswanathan, PhD, for the University of Alberta EPC
  AHRQ Conference, September 2009

  2. Steering Committee
  • Ken Bond, UAEPC
  • Donna Dryden, UAEPC
  • Lisa Hartling, UAEPC
  • Krystal Harvey, UAEPC
  • P. Lina Santaguida, McMaster EPC
  • Karen Siegel, AHRQ
  • Meera Viswanathan, RTI-UNC EPC

  3. Background
  • EPC reports, particularly comparative effectiveness reviews, increasingly include evidence from nonrandomized and observational designs
  • In systematic reviews, study design classification is essential for study selection, risk of bias assessment, the approach to data analysis (e.g., pooling), interpretation of results, and grading the body of evidence
  • Assignment of study designs is often given inadequate attention

  4. Objectives
  • Identify tools for classification of studies by design
  • Select a classification tool for evaluation
  • Develop guidelines for application of the tool
  • Test the tool for accuracy and inter-rater reliability

  5. Objective 1: Identification of tools
  • 31 organizations/individuals contacted
  • 11 organizations/individuals responded
  • 23 classification tools received
  • 10 tools selected for closer evaluation
  • 1 tool selected for modification and testing

  6. Objective 2: Tool selection
  The Steering Committee ranked tools based on:
  • Ease of use
  • Unique classification for each study design
  • Unambiguous nomenclature and decision rules/definitions
  • Comprehensiveness
  • Potential to identify threats to validity and to guide strength of inference
  • Development by a well-established organization

  7. Objective 3: Tool development
  Three top-ranked tools:
  • Cochrane Non-Randomised Studies Methods Group
  • American Dietetic Association
  • RTI-UNC
  Positive elements of the other tools were incorporated and a glossary was developed (an illustrative decision-rule sketch follows below).
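The transcript does not reproduce the tool's actual decision rules, but a design-classification tool of this kind amounts to a small decision tree walked from study characteristics to a design label. The Python sketch below is purely illustrative: the decision points, their ordering, and the labels are hypothetical stand-ins, not the UAEPC tool's algorithm.

```python
def classify_design(study: dict) -> str:
    """Assign a study design label from a few screening questions.

    Hypothetical decision points for illustration only; the real tool's
    nomenclature and rule ordering are not reproduced here.
    """
    if study["randomized"]:
        return "randomized controlled trial"
    if study["investigator_assigned"]:
        # Allocation controlled by the investigator but not random.
        return "quasi-experimental (nonrandomized trial)"
    if study["comparison_group"]:
        return "observational, comparative (e.g., cohort or case-control)"
    return "observational, noncomparative (e.g., case series)"

# Example: non-random, investigator-assigned exposure with a comparison group.
print(classify_design({"randomized": False,
                       "investigator_assigned": True,
                       "comparison_group": True}))
# -> quasi-experimental (nonrandomized trial)
```

A single pass through ordered, mutually exclusive rules is what yields a unique classification for each study, one of the selection criteria listed on slide 6.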

  8. Objective 4: Testing round 1

  9. Objective 4: Testing round 1
  • No clear patterns in disagreements
  • Disagreements occurred at all decision points
  • Disagreements traced both to the tool and to the studies themselves
  • Variations in application of the tool

  10. Objective 4: Reference standard

  11. Objective 4: Testing round 2

  12. Discussion
  • Moderate reliability, but low agreement with the reference standard (see the kappa sketch below)
  • Both the studies and the tool were sources of disagreement:
    • the tool is not comprehensive (e.g., quasi-experimental designs)
    • the studies were challenging (e.g., a sample of difficult studies, poor study reporting)
  • To optimize agreement and reliability:
    • training in research methods
    • training in use of the tool
    • pilot testing
    • decision rules
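The slides report reliability and agreement without naming the statistic. Chance-corrected agreement such as Cohen's kappa is the usual measure when two raters classify the same set of studies, and by the common Landis and Koch labels a kappa between 0.41 and 0.60 is "moderate." A minimal sketch, using made-up ratings rather than the study's data:

```python
from collections import Counter

def cohens_kappa(rater_a: list, rater_b: list) -> float:
    """Chance-corrected agreement between two raters on the same items."""
    n = len(rater_a)
    # Observed agreement: share of items given identical labels.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[lbl] * freq_b[lbl] for lbl in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical ratings: two reviewers classifying ten studies.
a = ["RCT", "cohort", "RCT", "case-control", "cohort",
     "RCT", "cross-sectional", "cohort", "RCT", "case-control"]
b = ["RCT", "cohort", "cohort", "case-control", "cohort",
     "RCT", "cohort", "cohort", "RCT", "RCT"]
print(f"kappa = {cohens_kappa(a, b):.2f}")  # kappa = 0.55, "moderate"
```

Agreement with a reference standard can be computed the same way, with the reference classifications taking the place of the second rater.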

  13. Next Steps
  • Test within a real systematic review
  • Further testing for specific study designs
  • Further evaluation of differences in reliability by education, training, and experience

  14. Acknowledgments
  • Ahmed Abou-Setta
  • Liza Bialy
  • Michele Hamm
  • Nicola Hooton
  • David Jones
  • Andrea Milne
  • Kelly Russell
  • Jennifer Seida
  • Kai Wong
  • Ben Vandermeer (statistical analysis)

  15. Questions?
  University of Alberta EPC, Edmonton, Alberta, Canada
