
Selective cutoff reporting in studies of diagnostic test accuracy of depression screening tools: Comparing traditional meta-analysis to individual patient data meta-analysis

This study compares traditional meta-analysis with individual patient data (IPD) meta-analysis to evaluate the impact of selective reporting in studies of depression screening tools. It assesses whether selective reporting of data-driven cutoffs exaggerates accuracy estimates, identifies predictable patterns of selective reporting, and explores why the impact falls on sensitivity rather than specificity. The findings suggest that selective cutoff reporting can lead to exaggerated estimates of accuracy. The study also examines whether selective reporting transfers heterogeneity in sensitivity into heterogeneity in reported cutoff scores.


Presentation Transcript


  1. Selective cutoff reporting in studies of diagnostic test accuracy of depression screening tools: Comparing traditional meta-analysis to individual patient data meta-analysis Brooke Levis, MSc, PhD Candidate Jewish General Hospital and McGill University Montreal, Quebec, Canada

  2. Does Selective Reporting of Data-driven Cutoffs Exaggerate Accuracy? The Hockey Analogy

  3. What is Screening? • Purpose: to identify otherwise unrecognised disease • By sorting out apparently well persons who probably have a condition from those who probably do not • Not diagnostic • Positive tests require referral for diagnosis and, as appropriate, treatment • A program – of which a test is one component Illustration: This information was originally developed by the UK National Screening Committee/NHS Screening Programmes (www.screening.nhs.uk) and is used under the Open Government Licence v1.0

  4. The Patient Health Questionnaire (PHQ-9) depression screening tool • Scores range from 0 to 27 • Higher scores = more severe symptoms
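The scoring convention described above can be sketched as a small function. This is an illustrative sketch, not the authors' code; the nine items scored 0–3 and the 0–27 total are the published PHQ-9 conventions.

```python
# Minimal sketch of PHQ-9 total scoring: nine items, each scored 0-3,
# summed to a 0-27 total; higher totals = more severe symptoms.

def phq9_total(item_scores):
    """Sum nine PHQ-9 item scores (each 0-3) into a 0-27 total."""
    if len(item_scores) != 9:
        raise ValueError("PHQ-9 has exactly 9 items")
    if any(s not in (0, 1, 2, 3) for s in item_scores):
        raise ValueError("each item is scored 0-3")
    return sum(item_scores)

print(phq9_total([2, 1, 3, 0, 2, 1, 0, 1, 2]))  # 12
```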

  5. Selective Reporting of Results Using Data-Driven Cutoffs • Extreme scenarios: • Cutoff of ≥ 0 • All subjects above cutoff • sensitivity = 100% • Cutoff of ≥ 27 • All subjects below cutoff • specificity = 100%
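The extreme-cutoff logic on this slide can be checked on made-up data. The `cases` and `noncases` scores below are hypothetical; note that a cutoff of ≥27 yields 100% specificity only when no subject actually scores the maximum.

```python
# At cutoff >= 0 everyone screens positive (sensitivity = 100%); at a cutoff
# above every observed score no one screens positive (specificity = 100%).

def sens_spec(cases, noncases, cutoff):
    """Screen positive when score >= cutoff; return (sensitivity, specificity)."""
    sens = sum(s >= cutoff for s in cases) / len(cases)
    spec = sum(s < cutoff for s in noncases) / len(noncases)
    return sens, spec

cases = [9, 12, 15, 20, 8]      # hypothetical PHQ-9 scores of MDD cases
noncases = [1, 3, 5, 7, 10, 2]  # hypothetical scores of non-cases

print(sens_spec(cases, noncases, 0))   # (1.0, 0.0): everyone screens positive
print(sens_spec(cases, noncases, 27))  # (0.0, 1.0): no one reaches the cutoff
```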

  6. Does Selective Reporting of Data-driven Cutoffs Exaggerate Accuracy? • Sensitivity increases from cutoff of 8 to cutoff of 11 • For the standard cutoff of 10, missing 897 cases (13%) • For cutoffs of 7-9 and 11, missing 52-58% of data Manea et al., CMAJ, 2012

  7. Questions • Does selective cutoff reporting lead to exaggerated estimates of accuracy? • Can we identify predictable patterns of selective cutoff reporting? • Why does selective cutoff reporting appear to impact sensitivity, but not specificity? • Does selective cutoff reporting transfer high heterogeneity in sensitivity due to small numbers of cases to heterogeneity in cutoff scores, but homogeneous accuracy estimates?

  8. Methods • Data source: • Studies included in published traditional meta-analysis on the diagnostic accuracy of the PHQ-9. (Manea et al, CMAJ 2012) • Inclusion criteria: • Unique patient sample • Published diagnostic accuracy for MDD for at least one PHQ-9 cutoff • Data transfer: • Invited authors of the eligible studies to contribute their original patient data (de-identified) • Received data from 13 of 16 eligible datasets (80% of patients, 94% of MDD cases)

  9. Methods • Data preparation • For each dataset, extracted PHQ-9 scores and MDD diagnostic status for each patient, and information pertaining to weighting • Statistical analyses (2 sets performed) • Traditional meta-analysis • For each cutoff between 7 and 15, included data from the studies that reported accuracy results for the respective cutoff in the original publication • IPD meta-analysis • For each cutoff between 7 and 15, included data from all studies

  10. Comparison of data availability

  11. Methods • Model: Bivariate random-effects* meta-analysis models • Models sensitivity and specificity at the same time • Accounts for clustering by study • Provides an overall pooled sensitivity and specificity for each cutoff, for the 2 sets of analyses • Within each set of analyses, each cutoff requires its own model • Estimates between-study heterogeneity • Note: model accounts for correlation between sensitivity and specificity at each threshold, but not for correlation of parameters across thresholds *Random-effects model: sensitivity & specificity assumed to vary across primary studies
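To make the "random-effects pooling" idea concrete, here is a deliberately simplified sketch: univariate DerSimonian–Laird pooling of logit(sensitivity) at a single cutoff. The actual analysis used a bivariate model that pools sensitivity and specificity jointly and accounts for their within-cutoff correlation; the per-study counts below are hypothetical.

```python
import math

def dl_pool(events, totals):
    """DerSimonian-Laird random-effects pooling of per-study proportions
    (e.g. true positives / cases) on the logit scale; returns the pooled
    proportion back-transformed to [0, 1]."""
    # continuity-corrected logits and approximate within-study variances
    y, v = [], []
    for e, n in zip(events, totals):
        p = (e + 0.5) / (n + 1.0)
        y.append(math.log(p / (1 - p)))
        v.append(1 / (e + 0.5) + 1 / (n - e + 0.5))
    # fixed-effect weights and Cochran's Q statistic
    w = [1 / vi for vi in v]
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))
    # DerSimonian-Laird between-study variance tau^2 (truncated at 0)
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)
    # random-effects weights add tau^2 to each within-study variance
    wr = [1 / (vi + tau2) for vi in v]
    pooled = sum(wi * yi for wi, yi in zip(wr, y)) / sum(wr)
    return 1 / (1 + math.exp(-pooled))

# hypothetical per-study true positives and numbers of cases at one cutoff
print(round(dl_pool([18, 40, 9], [20, 50, 12]), 3))
```

In the bivariate model the two logits get a joint normal random effect with a correlation parameter; this univariate version drops that correlation, which is exactly what the slide's footnote warns the full model does handle within each threshold.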

  12. Questions • Does selective cutoff reporting lead to exaggerated estimates of accuracy? • Can we identify predictable patterns of selective cutoff reporting? • Why does selective cutoff reporting appear to impact sensitivity, but not specificity? • Does selective cutoff reporting transfer high heterogeneity in sensitivity due to small numbers of cases to heterogeneity in cutoff scores, but homogeneous accuracy estimates?

  13. Comparison of Diagnostic Accuracy

  14. Comparison of ROC Curves

  15. Questions • Does selective cutoff reporting lead to exaggerated estimates of accuracy? • Can we identify predictable patterns of selective cutoff reporting? • Why does selective cutoff reporting appear to impact sensitivity, but not specificity? • Does selective cutoff reporting transfer high heterogeneity in sensitivity due to small numbers of cases to heterogeneity in cutoff scores, but homogeneous accuracy estimates?

  16. Publishing trends by study

  17. Comparison of Sensitivity by Cutoff

  18. Questions • Does selective cutoff reporting lead to exaggerated estimates of accuracy? • Can we identify predictable patterns of selective cutoff reporting? • Why does selective cutoff reporting appear to impact sensitivity, but not specificity? • Does selective cutoff reporting transfer high heterogeneity in sensitivity due to small numbers of cases to heterogeneity in cutoff scores, but homogeneous accuracy estimates?

  19. Comparison of Diagnostic Accuracy

  20. Why Sensitivity Changes with Moving Cutoffs, but Not Specificity
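One way to see the mechanism behind this slide: in a typical screening sample, cases are few, so each patient reclassified by moving the cutoff shifts sensitivity by a large step, while non-cases are many, so the same move barely dents specificity. A hypothetical simulation (all numbers invented):

```python
import random

# Few cases with higher scores, many non-cases with lower scores -- a rough
# caricature of a screening sample, not real PHQ-9 data.
random.seed(1)
cases = [random.randint(8, 20) for _ in range(25)]      # small denominator
noncases = [random.randint(0, 12) for _ in range(475)]  # large denominator

for cutoff in (9, 10, 11):
    # each reclassified case moves sensitivity by 1/25 = 0.04;
    # each reclassified non-case moves specificity by only 1/475
    sens = sum(s >= cutoff for s in cases) / len(cases)
    spec = sum(s < cutoff for s in noncases) / len(noncases)
    print(f"cutoff >= {cutoff}: sensitivity {sens:.2f}, specificity {spec:.2f}")
```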

  21. Questions • Does selective cutoff reporting lead to exaggerated estimates of accuracy? • Can we identify predictable patterns of selective cutoff reporting? • Why does selective cutoff reporting appear to impact sensitivity, but not specificity? • Does selective cutoff reporting transfer high heterogeneity in sensitivity due to small numbers of cases to heterogeneity in cutoff scores, but homogeneous accuracy estimates?

  22. Heterogeneity

  23. Summary • Selective cutoff reporting in depression screening tool DTA studies may distort accuracy across cutoffs, leading to exaggerated estimates of accuracy. • These distortions were relatively minor for the PHQ-9, but would likely be much larger for other measures where standard cutoffs are less consistently reported and more data-driven reporting seems to occur (e.g., HADS). • IPD meta-analysis can address this and will allow subgroup-based accuracy evaluation.

  24. Summary • STARD is undergoing revision: • Needs to require precision-based sample size calculations to avoid very small samples – particularly small numbers of cases – and unstable estimates • Needs to require reporting of the full spectrum of cutoffs, which is easily done with online appendices

  25. Acknowledgements DEPRESSD Investigators • Brett Thombs • Andrea Benedetti • Roy Ziegelstein • Pim Cuijpers • Simon Gilbody • John Ioannidis • Alex Levis • Danielle Rice • Scott Patten • Dean McMillan • Ian Shrier • Russell Steele • Lorie Kloda Other Contributors
