
Appraising A Diagnostic Test


Presentation Transcript


  1. Appraising A Diagnostic Test Clinical Epidemiology and Evidence-based Medicine Unit FKUI-RSCM

  2. What is diagnosis ? • Increase certainty about presence/absence of disease • Disease severity • Monitor clinical course • Assess prognosis – risk/stage within diagnosis • Plan treatment e.g., location • Stalling for time! Knottnerus, BMJ 2002

  3. Key Concept • Pre-test Probability • The probability of the target condition being present before the results of a diagnostic test are available. • Post-test Probability • The probability of the target condition being present after the results of a diagnostic test are available.


  5. Basic Principles (1) • Ideal diagnostic tests give the right answers: (+) results in everyone with the disease and (-) results in everyone else • Usual clinical practice: • The test should be studied in the same way it would be used in the clinical setting • An observational study, consisting of: • A predictor variable (the test result) • An outcome variable (presence/absence of the disease)

  6. Basic Principles (2) • Sensitivity, specificity • Prevalence, prior probability, predictive values • Likelihood ratios • Dichotomous scale, cutoff points (continuous scale) • Positive (true and false), negative (true & false) • ROC (receiver operator characteristic) curve

  7. General structure: the 2 x 2 table

  8. Prevalence = pretest probability = (a+c)/(a+b+c+d)

  9. Sensitivity • The proportion of people who truly have a designated disorder who are so identified by the test. • Sensitive tests have few false negatives. • When a test with a high Sensitivity is Negative, it effectively rules out the diagnosis of disease. SnNout

  10. Specificity • The proportion of people who are truly free of a designated disorder who are so identified by the test. • Specific tests have few false positives • When a test is highly specific, a positive result can rule in the diagnosis. SpPin

  11. Sensitivity = a/(a+c): the probability of a positive test result in patients with the disease (SnNOut) • Specificity = d/(b+d): the probability of a negative test result in patients without the disease (SpPIn)
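Using the 2 x 2 cell labels from the table above (a = true positives, b = false positives, c = false negatives, d = true negatives), the two definitions can be sketched in a few lines of Python; the counts in the example are illustrative, not from the slides:

```python
# Sensitivity and specificity from a 2x2 table. Cell labels follow the
# slides: a = true positives, b = false positives, c = false negatives,
# d = true negatives.

def sensitivity(a, c):
    """Probability of a positive result in patients WITH the disease."""
    return a / (a + c)

def specificity(b, d):
    """Probability of a negative result in patients WITHOUT the disease."""
    return d / (b + d)

# Illustrative counts (not from the slides):
print(sensitivity(90, 10))   # 0.9
print(specificity(15, 85))   # 0.85
```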

  12. SnNOut • The sensitivity of dyspnea on exertion (DOE) for the diagnosis of CHF is 100% (41/(41+0)), and the specificity 17% (35/(183+35)). • If a patient does not have DOE, it is very unlikely that they have CHF (0 of the 41 patients with CHF lacked this symptom). • Hence "SnNOut", from the phrase "Sensitive test, when Negative, rules Out disease".

  13. SpPIn • Conversely, a very specific test, when positive, rules in disease: "SpPIn"! • The sensitivity of a gallop for CHF is only 24% (10/41), but the specificity is 99% (215/218). Thus, if a patient has a gallop, they probably have CHF (10 of the 13 patients with a gallop did).
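The CHF figures on the two slides above can be checked directly. The counts below are taken from the slides; the gallop column split (10 of the 41 CHF patients had a gallop, 3 of the 218 without CHF did) is implied by the quoted sensitivity and specificity rather than stated outright:

```python
# Recomputing the CHF figures from the slides.

# Dyspnea on exertion (DOE):
doe_sn = 41 / (41 + 0)      # 1.00 -- no DOE makes CHF very unlikely (SnNOut)
doe_sp = 35 / (183 + 35)    # ~0.17

# Gallop (column split implied by the quoted percentages):
gallop_sn = 10 / 41         # ~0.24
gallop_sp = 215 / 218       # ~0.99 -- a gallop strongly suggests CHF (SpPIn)
gallop_ppv = 10 / (10 + 3)  # ~0.77, the "10 out of 13"
```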

  14. Sensitivity = a/(a+c) = 90% • Specificity = d/(b+d) = 85% • LR+ = Sn/(1-Sp) = 90/15 = 6 • Positive predictive value = a/(a+b) = 73% • Negative predictive value = d/(c+d) = 95% • Prevalence = (a+c)/(a+b+c+d) = 32% • (Rows: test result, the predictor; columns: disease status, the outcome) • Posttest odds = pretest odds x likelihood ratio
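One set of hypothetical 2 x 2 counts consistent with this slide's percentages (N = 1000, prevalence 32%) reproduces every figure; the exact cell values are assumptions chosen to match, not data from the slide:

```python
# Hypothetical 2x2 counts chosen to match the slide's percentages.
a, b = 288, 102   # test positive: true positives, false positives
c, d = 32, 578    # test negative: false negatives, true negatives

sn  = a / (a + c)                   # 0.90  sensitivity
sp  = d / (b + d)                   # 0.85  specificity
lr_pos = sn / (1 - sp)              # ~6    positive likelihood ratio
ppv = a / (a + b)                   # ~0.74 positive predictive value
npv = d / (c + d)                   # ~0.95 negative predictive value
prev = (a + c) / (a + b + c + d)    # 0.32  prevalence (pretest probability)
```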

  15. Odds = ratio of two probabilities • Odds = p/(1-p) • Probability = odds/(1+odds) • Likelihood ratio (+) = probability of a (+) result in people with the disease / probability of a (+) result in people without the disease • Pretest odds x LR = posttest odds
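The two conversions combine into a small posttest-probability helper; applied to the worked example above (pretest probability 32%, LR+ = 6), it recovers the positive predictive value. A minimal sketch:

```python
# Converting between probability and odds, then applying
# posttest odds = pretest odds x LR.

def prob_to_odds(p):
    return p / (1 - p)

def odds_to_prob(odds):
    return odds / (1 + odds)

def posttest_prob(pretest_prob, lr):
    """Posttest probability after a result with likelihood ratio `lr`."""
    return odds_to_prob(prob_to_odds(pretest_prob) * lr)

# Pretest probability 32%, positive result with LR+ = 6:
print(posttest_prob(0.32, 6))   # ~0.74, matching the positive predictive value
```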

  16. Key Concept • Likelihood Ratio • The relative likelihood that a given test result would be expected in a patient with (as opposed to one without) the disorder of interest. • LR = probability of the test result in patients with the disease / probability of the same result in patients without the disease

  17. Likelihood ratios (LR) General Rules of Thumb • LR > 10 or < 0.1 produce large changes in pre-test probability • LR of 5 to 10 or 0.1 to 0.2 produce moderate changes • LR of 2 to 5 or 0.2 to 0.5 produce small changes • LR of 1 to 2 or 0.5 to 1 produce small, rarely important changes
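The rules of thumb can be illustrated by applying LRs of different magnitude to the same starting point; the 30% pretest probability below is an arbitrary illustrative choice:

```python
# How LRs of different magnitude move a 30% pretest probability.

def posttest(p, lr):
    odds = p / (1 - p) * lr        # posttest odds = pretest odds x LR
    return odds / (1 + odds)

# LR 10 lifts 30% to ~81% (large change); LR 2 only to ~46% (small change);
# LR 1 leaves the probability unchanged.
for lr in (10, 5, 2, 1, 0.5, 0.1):
    print(f"LR {lr:>4}: posttest probability {posttest(0.30, lr):.2f}")
```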

  18. [Figure: test and treatment thresholds on the probability scale (0 to 1). Below the testing threshold (A): do not test, do not treat. Between the thresholds (B): test, since the result may cross a threshold. Above the treatment threshold (C): do not test, get on with treatment. Posttest probability is obtained from pretest odds x LR, where LR+ = Sn/(1-Sp) and LR- = (1-Sn)/Sp.]

  19. The usefulness of five levels of a diagnostic test result

  20. Pretest probability Likelihood ratio Posttest probability

  21. T4 level in suspected hypothyroidism in children • For tests/predictors with continuous results, a cutoff point must be determined to choose the best value for distinguishing those with and without the target disorder
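One common way to pick such a cutoff is to maximize Youden's index, J = sensitivity + specificity - 1 (one reasonable criterion among several; the slides do not name a method). The T4 values below are entirely hypothetical; since low T4 suggests hypothyroidism, values at or below the cutoff count as test-positive:

```python
# Choosing a cutoff for a continuous test (e.g., a T4 level) by
# maximizing Youden's index J = sensitivity + specificity - 1.
# All values are hypothetical.

diseased     = [4.1, 4.8, 5.2, 6.9, 7.6]   # children with hypothyroidism
non_diseased = [5.9, 6.8, 7.4, 8.1, 9.5]   # unaffected children

def sn_sp(cutoff):
    # Low T4 is abnormal: values <= cutoff are "test positive".
    sn = sum(x <= cutoff for x in diseased) / len(diseased)
    sp = sum(x > cutoff for x in non_diseased) / len(non_diseased)
    return sn, sp

best = max(sorted(diseased + non_diseased), key=lambda c: sum(sn_sp(c)) - 1)
print(best, sn_sp(best))   # cutoff 5.2: sensitivity 0.6, specificity 1.0
```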

  22. Accuracy of the test • The accuracy of the test depends on how well it separates the group being tested into those with and without the disease in question • Accuracy is measured by the area under the ROC curve (AUC): an area of 1 represents a perfect test; an area of 0.5 represents a worthless test • 0.90-1.00 = excellent (A) • 0.80-0.90 = good (B) • 0.70-0.80 = fair (C) • 0.60-0.70 = poor (D) • 0.50-0.60 = fail (F)
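The AUC can be computed with the trapezoidal rule from (1 - specificity, sensitivity) points obtained at successive cutoffs; the ROC points below are made up for illustration:

```python
# Area under the ROC curve (AUC) by the trapezoidal rule.
# Each point is (false positive rate, true positive rate),
# i.e. (1 - specificity, sensitivity), at one cutoff.

def auc(points):
    pts = sorted(points)                       # order by false positive rate
    return sum((x2 - x1) * (y1 + y2) / 2       # trapezoid area per segment
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))

roc = [(0.0, 0.0), (0.1, 0.6), (0.3, 0.85), (0.6, 0.95), (1.0, 1.0)]
print(auc(roc))   # ~0.835, "good (B)" on the grading scale above

# A test no better than chance sits on the diagonal:
print(auc([(0.0, 0.0), (1.0, 1.0)]))   # 0.5
```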

  23. An ROC curve demonstrates several things: • It shows the tradeoff between sensitivity and specificity • any increase in sensitivity will be accompanied by a decrease in specificity • The closer the curve follows the left-hand border and then the top border of the ROC space, the more accurate the test. • The closer the curve comes to the 45-degree diagonal of the ROC space, the less accurate the test. • The slope of the tangent line at a cutoff point gives the likelihood ratio (LR) for that value of the test.

  24. Appraising DxTest • Is the evidence valid? (V) • Was there an independent, blinded comparison with a gold standard? • Was the test evaluated in an appropriate spectrum of patients? • Was the reference standard applied regardless of the test result? • Was the test validated in a second, independent group of patients?

  25. Can I trust the accuracy data? RAMMbo • Recruitment: Was an appropriate spectrum of patients included? (Spectrum Bias) • Maintenance: Were all patients subjected to a gold standard? (Verification Bias) • Measurement: Was there an independent, blind or objective comparison with a gold standard? (Observer Bias; Differential Reference Bias) Guyatt, JAMA 1993

  26. Critical Appraisal • Is this valid test important? (I) • Distinguish between patients with and those without the disease • Two by two tables • Sensitivity and Specificity • SnNOut • SpPIn • ROC curves • Likelihood Ratios

  27. Critical Appraisal • Can I apply this test to my patient (A) • Similarity to our patient • Is it available • Is it affordable • Is it accurate • Is it precise

  28. Critical Appraisal • Can I apply this test to my patient? • Can I generate a sensible pre-test probability • Personal experience • Practice database • Assume prevalence in the study

  29. Critical Appraisal • Diagnosis • Can I apply this test to a specific patient • Will the post-test probability affect management • Movement above treatment threshold • Patient willing to undergo testing

  30. Thank You
