
Challenges in Classifying Adverse Events in Cancer Clinical Trials


Presentation Transcript


  1. Challenges in Classifying Adverse Events in Cancer Clinical Trials Steven Joffe, MD, MPH; Dave Harrington, PhD; David Studdert, JD, PhD; Saul Weingart, MD, PhD; Damiana Maloof, RN

  2. Disclosure • Member of clinical trial adverse event review board for Genzyme Corp (not oncology-related)

  3. Adverse Events in Clinical Trials • Adverse events (AEs) are critically important outcomes of clinical trials • Human subjects protection • Endpoints for judgments about benefits & risks of study interventions • Captured on Case Report Forms • Reported to oversight agencies

  4. Components of AE Assessment • Type • Severity • Relatedness to study agent(s) • Expectedness

  5. Components of AE Assessment • Type • Severity • Relatedness to study agent(s) • Expectedness → Global judgment about reportability to IRB

  6. Reporting Criteria (to Dana-Farber IRB) • Grade 5 (fatal) • Grade 4, unless specifically exempted • Grade 2/3, if unexpected AND possibly, probably or definitely related • Virtually identical to NCI’s Adverse Event Expedited Reporting System (AdEERS) criteria
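
The rule on this slide is a simple decision tree. As a rough illustration, the sketch below implements it in Python; the AdverseEvent record, its field names, and the grade4_exempted flag are hypothetical simplifications for this sketch, not part of the study's materials.

    # Illustrative sketch of the expedited-reporting rule above (Dana-Farber
    # criteria, closely matching NCI AdEERS). Field names are hypothetical;
    # real AE reports carry far more structure than this.
    from dataclasses import dataclass

    RELATED = {"possible", "probable", "definite"}  # levels that trigger reporting

    @dataclass
    class AdverseEvent:
        grade: int          # CTCAE grade, 1 (mild) to 5 (fatal)
        expected: bool      # listed in the protocol/consent as a known risk?
        relatedness: str    # "unrelated", "unlikely", "possible", "probable", "definite"
        grade4_exempted: bool = False  # protocol-specific exemption for grade 4

    def is_reportable(ae: AdverseEvent) -> bool:
        """True if the AE meets the expedited IRB reporting criteria."""
        if ae.grade == 5:                # fatal: always report
            return True
        if ae.grade == 4:                # life-threatening: report unless exempted
            return not ae.grade4_exempted
        if ae.grade in (2, 3):           # report only if unexpected AND related
            return (not ae.expected) and (ae.relatedness in RELATED)
        return False                     # grade 1 events are not expedited

    # An unexpected, possibly related grade 3 event must be reported:
    assert is_reportable(AdverseEvent(grade=3, expected=False, relatedness="possible"))

The apparent simplicity is deceptive: each input to this function (grade, expectedness, relatedness) is itself a judgment call, which is exactly what the study probes.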

  7. AE Grading in Oncology • NCI’s Common Terminology Criteria for Adverse Events (CTCAE) typically used • Effort to standardize nomenclature • Developed by consensus methods; no formal process to establish reliability of grading • http://ctep.cancer.gov/protocolDevelopment/electronic_applications/ctc.htm#ctc_v30

  8. Aims • To assess the validity of physician reviewers’ determinations about whether AEs in cancer trials meet IRB reporting criteria • To assess the interrater reliability of reviewers’ determinations about whether AEs that occur in cancer trials meet IRB reporting criteria • To assess the validity and reliability of reviewers’ judgments about the components of AEs

  9. Study Methods

  10. Panelists’ Roles • Review primary data from criterion sets of AEs • Rate each AE: • Classification (from CTCAE) • Grade (from CTCAE) • Relatedness • Expectedness • Reportable to IRB

  11. Panelist Demographics

  12. Panelists’ Experience

  13. Panelists’ Experience

  14. Statistical Analysis • Validity of judgments regarding reportability to IRB • % agreement with gold standard • Interrater reliability of raters’ judgments • Kappa coefficients
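
Both statistics on this slide are straightforward to compute. Below is a self-contained Python sketch assuming binary reportability judgments; the cohens_kappa function handles a single rater pair, whereas the study reports reliability across a whole panel, so this is a simplified stand-in rather than the study's actual analysis.

    from collections import Counter

    def percent_agreement(ratings, gold):
        """Fraction of items on which a rater matches the gold standard."""
        return sum(r == g for r, g in zip(ratings, gold)) / len(gold)

    def cohens_kappa(rater_a, rater_b):
        """Observed agreement corrected for the agreement expected by chance."""
        n = len(rater_a)
        observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        freq_a, freq_b = Counter(rater_a), Counter(rater_b)
        # Chance agreement: both raters independently pick the same category.
        expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
        return (observed - expected) / (1 - expected)

    # Hypothetical reportability judgments (True = report to IRB) on six AEs:
    gold    = [True, True, False, True, False, False]
    rater_1 = [True, True, False, True, True,  False]
    rater_2 = [True, False, False, True, True, False]

    print(percent_agreement(rater_1, gold))   # 0.833...
    print(cohens_kappa(rater_1, rater_2))     # 0.667

Kappa is the usual choice for reliability because raw percent agreement can look high purely by chance when one category dominates, as it does when most AEs are not reportable.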

  15. Results

  16. Criterion Set of AEs

  17. Validity of Judgments Regarding Reportability to IRB

  18. Interrater Reliability of Panelists’ Judgments

  19. Role of Experience: Rank (kappa)

  20. Role of Experience: Service as PI (kappa)

  21. Role of Experience: Number of AE Reports Filed (kappa)

  22. Conclusions • Oncologists’ judgments about whether AEs require reporting to the IRB show high agreement with the gold standard • Interrater reliability of oncologists’ judgments about components of AEs varies • High: expectedness of AE; need for reporting • Moderate: grade of AE • Low: relationship of AE to study agents

  23. Limitations • Small sample sizes • Criterion set of AEs • Panel of physician reviewers • Generalizability of set of AEs • Reviewers may not reflect population of investigators who file AE reports • Judgments based on document review rather than on firsthand knowledge

  24. Thoughts About Direction of Bias in Agreement Statistics • Factors biasing towards less agreement • Reviewer experience • Factors biasing towards greater agreement • Standardized set of documents for review • Criterion set selected based on maximum agreement among expert panel reviewers

  25. Implications • Judgments about AEs are complex • Human subjects: efforts to enhance reliability, or to minimize reliance on judgments about causation, are needed • Science: toxicity data from uncontrolled trials may be misleading • RCR (responsible conduct of research): education about the need for reporting is important but insufficient

  26. Acknowledgments • Debra Morley • Anna Mattson-DiCecca • Physician panelists • ORI • NCI • Milton Fund
