
Trimming screening tests and modern psychometrics



Presentation Transcript


  1. Trimming screening tests and modern psychometrics Paul K. Crane, MD MPH General Internal Medicine University of Washington

  2. Outline • Background on screening tests • 2x2 tables • ROC curves • Consideration of strategies for shortening tests • A word or two on testlets

  3. I. Background on Screening

  4. Purposes of measurement • Discriminative (e.g. screening) • Evaluative (e.g. longitudinal analyses) • Predictive (e.g. prognostication) • After Kirshner and Guyatt (1985)

  5. Information curves applied to K&G

  6. Diagnostic medicine as a Bayesian task (figure: a disease-probability axis divided by a test threshold and a treatment threshold into don't test / test / treat regions)

  7. Diagnostic medicine as a Bayesian task (figure: the same axis with the three regions numbered)

  8. Screening tests are the same – only different • Screening implies applying the test to asymptomatic individuals in whom there is no specific reason to suspect the disease • In the previous slides' terms, the test/don't-test threshold is effectively 0 • Often the result of a screening test is a need for further testing rather than a specific diagnosis • Need for biopsy rather than need for chemotherapy
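The Bayesian updating behind these threshold diagrams can be sketched in a few lines. The pre-test probability and likelihood ratio below are invented numbers for illustration, not values from the lecture:

```python
def post_test_probability(pre_test_p, likelihood_ratio):
    """Update disease probability via Bayes' theorem in odds form."""
    pre_odds = pre_test_p / (1 - pre_test_p)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Illustrative: a 10% pre-test probability and a positive test with LR+ = 8
p = post_test_probability(0.10, 8)
print(round(p, 3))  # 0.471
```

The threshold logic follows: if 0.471 sits above the treatment threshold, the positive test changes management; if the pre-test probability were already above it, testing would add nothing.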

  9. Screening tests (figure: disease-probability axis divided into further screening (don't test) / test / treat regions)

  10. Rationale for screening • Screening tests should be applied when they may make a difference • Effect on management • Some difference in outcome between disease detected in asymptomatic people as opposed to disease detected in symptomatic patients • (note people vs. patients) • If no difference in outcome, no benefit from expenditures on screening • Implies a model of progressively worsening disease in which early intervention prevents subsequent badness

  11. Screening isn’t always a good idea • Lung cancer in smokers with CT scans (disease grows too fast) • Liver cancer with AFP in Hep C patients (yield too low, false negatives too common – test isn’t accurate enough) • Breast cancer in young women (disease is less prevalent but more aggressive, breast biology leads to higher false-positive rates in young women, which in combination lead to unacceptable morbidity for a negligible mortality benefit)

  12. What about dementia? • No DMARD equivalents (so far); marginal benefit to early detection • Planning, QOL decisions, etc. • Potential harm in early detection? • There are those who advocate population-based screening now (Borson 2004) • USPSTF says evidence insufficient to recommend for or against screening • Spiegel letter to editor (2007); Brodaty paper (2006) • Primary purpose is research

  13. What about CIND / MCI? • Even less rationale for population-based screening • In several studies, although patients with MCI have an increased risk of progressing to AD, any given individual with MCI is more likely to revert to normal than to progress to AD • No intervention known to reduce rates of conversion • Again, research rationale

  14. Research rationale • Parameter of interest is the rate of disease in the general population (or other denominator) • Most valid way: gold standard test applied to entire population • Chicago study, ROS, some others • Problems: expensive • General idea: apply a screening test / strategy to determine who should receive gold standard eval • Most of the epidemiological studies of cognitive functioning use this strategy

  15. 2-stage sampling • 1st stage: everyone in enumerated sample receives a screening test/strategy • 2nd stage: some decision rule is applied to the 1st stage results to identify people who receive further evaluations to definitively rule-in or rule-out disease • Analysis: disease status from 2nd stage extrapolated back to the underlying sample from the 1st stage

  16. Variations on a theme • Simplest: single cut-point, no sampling over the cutpoint • EURODEM, ACT • Slight elaboration: single cut-point, 100% below, small % above • Can address possibility of false negatives • CSHA • Fancier still: age/education adjusted cutpoints, sampling (Cache County) (also case-cohort design, which is even more fancy)

  17. Validity of the screening protocol • Imagine an epidemiological risk factor study • Risk factor is correlated with educational quality (e.g. smoking, obesity, untreated hypertension…) • Educational quality is associated with DIF on the screening test • People with lower education have lower scores for a given degree of actual cognitive deficit • Borderline people with higher education more likely to escape detection by the screening test; ignored by the study • Rates extrapolated back: biased study

  18. DIF in screening tests • DIF thus becomes a key feature for validity of epidemiological investigations of studies that employ 2-stage sampling designs • Crane et al. Int Psychogeriatr 2006; 18: 505-15. • Overwhelmingly ignored in the literature • Entire session on epidemiological studies of HTN at VasCOG 2007 in which education and SES were not mentioned at all • Not really the focus this year, but could be an important feature of validation

  19. Test accuracy in 2-stage sampling • Begg and Greenes. Assessment of diagnostic tests when disease verification is subject to selection bias. Biometrics. 1983;39:207-215 (web site) • Straight-forward way to extend back to the original population
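A minimal sketch of the Begg and Greenes extrapolation, under their assumption that verification (receiving the stage-2 gold standard) depends only on the screening result. All counts below are invented:

```python
def begg_greenes(n_pos, n_neg, v_pos_d, v_pos_nd, v_neg_d, v_neg_nd):
    """Verification-bias-corrected sensitivity and specificity.
    n_pos, n_neg: screen-positives / screen-negatives in stage 1.
    v_*: diseased (d) / non-diseased (nd) counts among those verified in stage 2.
    """
    # P(D | T+) and P(D | T-) estimated from the verified subsamples
    p_d_pos = v_pos_d / (v_pos_d + v_pos_nd)
    p_d_neg = v_neg_d / (v_neg_d + v_neg_nd)
    # Extrapolate disease counts back to the full stage-1 sample
    d_pos, d_neg = p_d_pos * n_pos, p_d_neg * n_neg
    sens = d_pos / (d_pos + d_neg)
    spec = (n_neg - d_neg) / ((n_pos - d_pos) + (n_neg - d_neg))
    return sens, spec

# 100 screen-positives (all verified: 60 diseased, 40 not);
# 900 screen-negatives (90 verified: 9 diseased, 81 not)
sens, spec = begg_greenes(100, 900, 60, 40, 9, 81)
print(round(sens, 2), round(spec, 2))  # 0.4 0.95
```

Note how the naive estimate from verified subjects alone (60/69, about 0.87) badly overstates sensitivity, because screen-negatives are under-verified.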

  20. Quality of papers on diagnostic tests • STARD initiative. Ann Intern Med 2003 (web site) • Provides a guideline for high-quality articles about diagnostic or screening tests • We should play by these rules • There is a checklist (p. 42) and a flow chart (p. 43; next slide) • Nothing too surprising • Reviews on quality of papers about diagnostic and screening tests: quality is terrible.

  21. STARD flowchart

  22. II. 2x2 tables

  23. Set up of 2x2 tables

  24. Summaries of 2x2 tables: SN, SP • Sensitivity • TP/diseased • Proportion of those with disease caught by the test • Specificity • TN/non-diseased • Proportion of those who truly don’t have the disease correctly identified by the test

  25. Summaries of 2x2 tables: PPV, NPV • Positive predictive value (PPV) • TP/test positives • Proportion of those with a positive test who actually have the disease • Negative predictive value (NPV) • TN/test negatives • Proportion of those with a negative test who actually don’t have the disease • (These are predictive values, not likelihood ratios: LR+ = sensitivity/(1 − specificity))
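The quantities defined on this slide (TP/test positives and TN/test negatives are the positive and negative predictive values), together with sensitivity and specificity from the previous slide, are mechanical to compute. The 2x2 counts below are hypothetical:

```python
def two_by_two_summaries(tp, fp, fn, tn):
    """Standard accuracy summaries from a 2x2 screening table."""
    return {
        "sensitivity": tp / (tp + fn),  # caught among the diseased
        "specificity": tn / (tn + fp),  # correctly negative among non-diseased
        "ppv": tp / (tp + fp),          # diseased among test-positives
        "npv": tn / (tn + fn),          # disease-free among test-negatives
    }

# Hypothetical screening results: 80 TP, 90 FP, 20 FN, 810 TN
s = two_by_two_summaries(tp=80, fp=90, fn=20, tn=810)
print(s["sensitivity"], round(s["ppv"], 3))  # 0.8 0.471
```

Note that PPV is much lower than sensitivity here despite good specificity, which is the usual situation when screening a low-prevalence population.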

  26. SPIN, SNOUT • Need a (positive result on a) SPecific test to rule something IN • Need a (negative result on a) SENsitive test to rule something OUT • Decent rule of thumb, but it doesn’t account for pre-test probabilities

  27. II. ROC curves

  28. ROC curves • "Signal Detection Theory" • World War II -- analysis of radar images • Radar operators had to decide whether a blip on the screen represented an enemy target, a friendly ship, or just noise • Signal detection theory measured the ability of radar receiver operators to do this, hence the name Receiver Operating Characteristic • In the 1970s signal detection theory was recognized as useful for interpreting medical test results http://gim.unmc.edu/dxtests/roc3.htm

  29. ROC basics • ROC curves plot • sensitivity vs. (1 − specificity) • (the true positive rate vs. the false positive rate) • at each possible cutpoint • Useful for visualizing the impact of various potential cutoff points on a continuous measure (continuous → binary) • Economic decision on cutpoint; no single right answer
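The construction is easy to sketch. Here scores are oriented so that higher values flag disease (for a cognitive screen, where lower scores flag disease, reverse the inequalities), and the toy score lists are made up:

```python
def roc_points(diseased, healthy):
    """(1 - specificity, sensitivity) at every cutpoint; score >= cut is 'positive'."""
    pts = []
    for cut in sorted(set(diseased + healthy)):
        sens = sum(s >= cut for s in diseased) / len(diseased)
        fpr = sum(s >= cut for s in healthy) / len(healthy)
        pts.append((fpr, sens))
    return pts

def auc(diseased, healthy):
    """Mann-Whitney estimate of the area under the ROC curve:
    the chance a random diseased person outscores a random healthy one."""
    wins = sum((d > h) + 0.5 * (d == h) for d in diseased for h in healthy)
    return wins / (len(diseased) * len(healthy))

print(round(auc([3, 4, 5], [1, 2, 3]), 3))  # 0.944
```

Plotting the `roc_points` output, with each point labeled by its cutpoint, reproduces the kind of curve shown on the ACT slide below.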

  30. Limitations of ROC curves • Not intended to help with choosing particular items or for improving tests • Doesn’t tell us which parts of the test (items) are helpful in the region of interest • Doesn’t help us in combining the best items from several tests

  31. ROC curve for dementia from ACT (figure: ROC curve with cutpoints 30–62 marked along the curve; area under ROC curve = 0.9105)

  32. Optimality from an ROC curve • Always tradeoffs between sensitivity and specificity • Also numbers of people who need to be evaluated with the gold standard test (number of individuals who will screen positive) • Optimal point depends on consequences of missing cases (false negatives), costs of working up false positives • Breast cancer: 10:1 for sufficient sensitivity
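The cost tradeoff can be made explicit by scoring each candidate cutpoint against the relative costs of a missed case versus a false-positive workup. The score lists and the 10:1 default cost ratio below are illustrative assumptions, with higher scores again flagging disease:

```python
def best_cutpoint(diseased, healthy, cost_fn=10, cost_fp=1):
    """Cutpoint minimizing total cost; score >= cut counts as screen-positive."""
    def cost(cut):
        fn = sum(s < cut for s in diseased)   # diseased who screen negative
        fp = sum(s >= cut for s in healthy)   # healthy who screen positive
        return cost_fn * fn + cost_fp * fp
    return min(sorted(set(diseased + healthy)), key=cost)

# Toy scores: with misses 10x as costly as false positives,
# the optimal cut catches every case and tolerates one workup
print(best_cutpoint([5, 6, 7], [1, 2, 3, 6]))  # 5
```

Flipping the cost ratio (cheap misses, expensive workups) pushes the chosen cutpoint up and sacrifices sensitivity, which is the tradeoff the slide describes.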

  33. What about normal/CIND/dementia? • Chengjie Xiong: ROC surface (Stat Med 2006; 25:1251-1273) • May have the same issues in terms of tradeoffs • Does missing a case of CIND have the same impact as missing a case of dementia? • Should we try to use the same tool to do both tasks? Dementia/normal is an easier target than CIND/normal. Dementia/CIND is hard and primarily depends on whether deficits have a functional impact, which in turn is very hard to tease out

  34. III. Shortening of psychometric tests: strategies used in the literature

  35. Search strategy • “short*” • “psychometric test*” • #1 AND #2 • Convenience sample of resulting articles • One or two examples of each technique

  36. CTT strategies • Bengtssen et al (2007): item:total correlations >0.80; missing >5% • Standard CTT approaches to limiting an item pool (items with low item:total correlations are also commonly excluded) • Doesn’t use disease status

  37. Brute force strategies • Christensen et al. (2007) looked at all pairs of 2 tests for each subdomain and compared based on alpha and correlation with the subdomain (Psychological Assessment; WAIS-3 SF)

  38. Regression strategies • Regress on total score (for evaluative tests) or use logistic regression approaches • Sapin et al (2004) used linear regression to predict a longer measure; nice series of validation steps including an independent validation sample • Eberhard-Gran et al (2007) used stepwise linear regression; no external validation • Problems: overfitting, ignores collinearity of items • Need a 2nd confirmatory sample and/or some bootstrapping approach for model optimism • Also need a modeling strategy: forward/backward stepwise? Best subsets?
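None of this is the cited papers' actual code, but the "2nd confirmatory sample" idea can be sketched with a toy: select items on a training half (here by corrected item-total correlation), then check how well the short-form sum tracks the full total on a held-out half. All data are simulated:

```python
import random
from statistics import mean

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def pick_items(train, k):
    """Rank items by corrected item-total correlation, training half only."""
    totals = [sum(r) for r in train]
    ranked = []
    for j in range(len(train[0])):
        item = [r[j] for r in train]
        rest = [t - i for t, i in zip(totals, item)]  # total minus the item itself
        ranked.append((pearson(item, rest), j))
    return [j for _, j in sorted(ranked, reverse=True)[:k]]

# Simulated binary responses: 200 people x 10 items driven by one latent trait
random.seed(0)
resp = [[int(random.gauss(t, 1) > 0) for _ in range(10)]
        for t in [random.gauss(0, 1) for _ in range(200)]]
train, holdout = resp[:100], resp[100:]

keep = pick_items(train, k=4)
short = [sum(r[j] for j in keep) for r in holdout]
full = [sum(r) for r in holdout]
print(round(pearson(short, full), 2))  # how well the short form tracks the total
```

Selecting and validating on the same sample would overstate this correlation, which is exactly the overfitting/optimism problem the slide raises.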

  39. EFA strategies • Rosen et al. (2007) looked at loadings from EFA and chose items with the highest loadings • No use of external (disease status) information • Highest loadings (analogous to highest item discriminations) have nothing to do with item difficulty; may well end up selecting highly discriminating items with no relevance to disease/no disease

  40. CFA strategies • Bunn et al. (2007) used MPLUS: CFA on a new sample, modified paths and eliminated items to improve fit statistics • No independent sample confirmation • No disease status reference

  41. IRT strategies • Gill et al. (2007) – Bayesian IRT but I can’t figure out how they reduced their scale • Beevers et al. (2007) used nonparametric IRT – single sample. If the items looked bad they threw them out. Psych Assessment • Both of these papers relied only on IRT parameters to reduce the scale, not anything external (e.g. disease status)

  42. Combining IRT with external information • Combine item characteristic / information curves with some indicator of disease status • Paul’s old idea: ROC curves, identify region of interest, determine items with maximal information in that region • Rich’s new and simpler (and thus likely better) idea: box plots for diseased and non-diseased individuals superimposed on the ICCs/IICs • Takes advantage of the fact that item difficulty and person ability are on the same scale
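The region-of-interest idea can be sketched with 2PL item information curves: compute each item's average information over the theta region where diseased and non-diseased distributions overlap, and keep the top items. The item names and parameters below are invented for illustration:

```python
import math
from statistics import mean

def info_2pl(a, b, theta):
    """Fisher information of a 2PL item (discrimination a, difficulty b)."""
    p = 1 / (1 + math.exp(-a * (theta - b)))
    return a ** 2 * p * (1 - p)

def rank_by_region(items, lo, hi, n=50):
    """Order items by average information over a target theta region."""
    grid = [lo + i * (hi - lo) / (n - 1) for i in range(n)]
    scored = [(mean(info_2pl(a, b, t) for t in grid), name)
              for name, a, b in items]
    return [name for _, name in sorted(scored, reverse=True)]

# Invented items: (name, discrimination a, difficulty b)
items = [("orientation", 1.5, -1.0), ("recall", 2.0, -1.2), ("fluency", 0.8, 0.5)]
# Target region: below-average ability, where screen-positives tend to sit
print(rank_by_region(items, lo=-2.0, hi=-0.5))  # ['recall', 'orientation', 'fluency']
```

Because item difficulty and person ability share a scale, the box plots in Rich's version can be drawn directly over these same information curves to locate the region worth covering.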

  43. Rich’s idea

  44. Paul’s idea

  45. Extensions of IRT / external information approaches • Targeted creation / addition of new items in particular regions of the theta scale seems like a reasonable strategy • We have only talked about fixed forms – CAT is a reasonable extension • CAT likely more relevant for evaluative tests • Could terminate early if results became clear – reduced burden for those not close to the threshold

  46. Other strategies • Decision trees • A bit like PCA • Based entirely on relationships with disease • Random forests • Machine learning technique; extension of decision trees • Microarray and GWA applications • Jonathan Gruhl – expertise obtained since he first heard of this topic on Monday
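A decision tree's root split makes the "based entirely on relationships with disease" point concrete: the tree picks whichever item best separates diseased from non-diseased, ignoring covariation among items. This toy, with made-up data, picks the single binary item minimizing weighted Gini impurity:

```python
def gini(labels):
    """Gini impurity of a list of 0/1 disease labels."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 2 * p * (1 - p)

def best_split_item(responses, disease):
    """Item index a decision tree would choose for its root split."""
    def weighted_impurity(j):
        left = [d for r, d in zip(responses, disease) if r[j] == 0]
        right = [d for r, d in zip(responses, disease) if r[j] == 1]
        return (len(left) * gini(left) + len(right) * gini(right)) / len(disease)
    return min(range(len(responses[0])), key=weighted_impurity)

# Item 1 tracks disease status perfectly; item 0 is noise
resp = [[0, 0], [1, 0], [0, 1], [1, 1]]
disease = [0, 0, 1, 1]
print(best_split_item(resp, disease))  # 1
```

A random forest repeats this greedy selection over many bootstrapped trees and random item subsets, then averages item importance across them.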

  47. General comments • Literature is pretty wide open • Seems like IRT provides some useful tools • IRT wedded to the distributions of scores of diseased / non-diseased individuals seems like a good strategy • Machine learning tools are interesting • They ignore covariation between items and the theory-item link • Hope to compare/contrast strategies with CSHA data

  48. IV. Testlet response theory

  49. Rationale • IRT posits unidimensionality: a single underlying latent trait (domain) explains observed covariation between items • Various tools to address this assumption • Literature essentially always concludes that scales are sufficiently unidimensional to do what the investigator wanted to do in the first place • See JS Lai, D Cella, P Crane, “Factor analysis techniques for assessing sufficient unidimensionality of cancer related fatigue.” Qual Life Res 2006; 15: 1179-90.

  50. Dimensionality of cognitive screening tests • Initial MMSE and 3MS papers do not mention different cognitive domains • Initial CASI paper (also by Evelyn Teng) describes 9 domains: • long-term memory, • orientation, • attention, • concentration, • short-term memory, • language, • visual construction, • fluency, • abstraction and judgment
