
Course overview, the diagnostic process, and measures of interobserver agreement


Presentation Transcript


  1. Course overview, the diagnostic process, and measures of interobserver agreement Thomas B. Newman, MD, MPH

  2. Overview • Administrative stuff • Overview of the course • The diagnostic process • Interobserver agreement • Continuous variables • Categorical variables – Kappa • Regular • Weighted

  3. Administrative stuff • Introductions • Basic structure of course • New material each week in lecture • Read material before lecture if possible • HW on that material due the FOLLOWING week in section • Exceptions: • Penultimate class: review, no new material • HW assigned that day is the take-home exam • Last lecture: review the take-home exam

  4. Homework • Required: key way of learning the material • Answers posted on web • Not graded if late, but can still be turned in • Use fresh sheets of paper with your name on each, not syllabus pages • Will be read by section leaders and returned the following week

  5. Getting help • Classmates, then section leaders, then faculty • Ambiguous/confusing problems – send an e-mail to your section leader or me • Unless you indicate otherwise, we will assume we can cc the whole class when we respond, if we think the question is of general interest

  6. Books • This course was initially based on Sackett, Haynes, Guyatt and Tugwell’s Clinical Epidemiology text • I love that book, but reviews from students have been mixed

  7. Books • For the last 3 years we have let students pick a book to buy. • MK and TN are turning the syllabus into a book, hence the term “Course Book” • (Suggestions welcome!)

  8. Course overview • Diagnosis • Theory • Inter-rater reliability, accuracy, usefulness • Dichotomous tests • Multilevel tests • Combining tests • Screening and prognostic tests • Treatments: randomized trials • Alternatives to randomized trials • P-values and confidence intervals; Bayes theorem

  9. Diagnostic process • Why do we want to assign a name to this person’s illness? • Different reasons lead to different classification schemes • Examples • Acute nephrotic syndrome • Acute ligamentous knee injury

  10. Other examples • Attention deficit disorder • Skin rash worth a trial of steroids • Dysuria worth a course of antibiotics • SLUBI = Self-limited undiagnosed benign illness

  11. Evaluating diagnostic tests • Reliability • Accuracy • Usefulness • Today we do reliability

  12. Simplifying assumptions (often wrong) • Test results are dichotomous • Most tests have more than two possible answers • Disease states are dichotomous • Many diseases occur on a spectrum • There are many kinds of nondisease!

  13. Types of variables • Categorical • Dichotomous – 2 values • Nominal – no intrinsic ordering • Ordinal – intrinsic ordering • Continuous (infinite number of values) vs Discrete (limited number)

  14. Measuring interobserver agreement for categorical variables What is agreement?

  15. Concordance rate • What percent of the time do the 2 observers agree (exactly)? • Advantage: easy to understand • Disadvantage: may be misleading, because observers who agree on the prevalence of abnormality will agree on many cases by chance alone

  16. Concordance rate problem
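
A hypothetical illustration of the problem (these numbers are mine, not from the lecture): suppose two observers each classify the same 100 rashes as abnormal or normal, and abnormality is uncommon.

                        Observer 2
                  Abnormal   Normal   Total
  Observer 1
    Abnormal          5          5       10
    Normal            5         85       90
    Total            10         90      100

Concordance rate = (5 + 85)/100 = 90%, which sounds impressive. But because both observers call about 90% of the cases normal, they would agree on most cases by chance alone; kappa (next slide) is designed to correct for that.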

  17. <Switch to chalk board> • Definition of Kappa • Calculation of expected agreement • Understanding the assumption of fixed marginals • Weighted kappa
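
A sketch of the chalkboard definitions, applied to the hypothetical table above (again, illustrative numbers):

  Kappa = (Po - Pe) / (1 - Pe)

where Po is the observed proportion of agreement and Pe is the agreement expected by chance. Pe is computed from the marginals of the observed table, treated as fixed, assuming the two observers classify independently at their own prevalences:

  Pe = (10/100)(10/100) + (90/100)(90/100) = 0.01 + 0.81 = 0.82
  Po = 0.90
  Kappa = (0.90 - 0.82) / (1 - 0.82) = 0.08 / 0.18 ≈ 0.44

So "90% agreement" corresponds to only moderate agreement beyond chance. Weighted kappa extends the same idea to ordinal scales by giving partial credit to disagreements that are close (e.g., adjacent categories) rather than far apart.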

  18. Real-life illustration: Rating of neurological examination • Types of weights, Stata illustration. • . kap overall5 dmfo, w(w) • . kap overall5 dmfo, w(w2)
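
A minimal Stata sketch of the same commands run on made-up data (overall5 and dmfo are the lecture's actual variables; everything below is hypothetical):

  * Two raters grade 10 cases on a 3-level ordinal scale
  clear
  input rater1 rater2
  1 1
  1 2
  2 2
  2 3
  3 3
  3 2
  1 1
  2 2
  2 1
  3 3
  end

  * Unweighted kappa: only exact agreement counts
  kap rater1 rater2

  * Weighted kappa: near-misses get partial credit
  * w(w)  requests linear weights,    1 - |i-j|/(k-1)
  * w(w2) requests quadratic weights, 1 - ((i-j)/(k-1))^2
  kap rater1 rater2, w(w)
  kap rater1 rater2, w(w2)

With 3 categories, an adjacent-category disagreement gets weight 0.5 under w and 0.75 under w2, so the quadratic scheme is more forgiving of near-misses.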

  19. What does Kappa depend upon? • How well people agree • SPECTRUM within classifications • E.g., are the abnormal ones VERY abnormal? • Difficult cases can be excluded or oversampled • PREVALENCE of classifications by the various observers (and whether they agree) • Chance (random error; people can get lucky/unlucky) • Weighting scheme used
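
To see the prevalence point concretely, compare two hypothetical 2x2 tables that both have 90% observed agreement (illustrative numbers):

  • Abnormality prevalence about 50% (cells: 45 both abnormal, 5 + 5 discordant, 45 both normal): Pe = (50/100)(50/100) + (50/100)(50/100) = 0.50, so Kappa = (0.90 - 0.50)/(1 - 0.50) = 0.80
  • Abnormality prevalence about 10% (the table shown earlier): Pe = 0.82, so Kappa = (0.90 - 0.82)/(1 - 0.82) ≈ 0.44

Same concordance rate, very different kappa, purely because of the prevalence of the classifications.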
