
Choose the best items: A basic psychometric toolkit for testmakers
Warren Lambert, Vanderbilt Kennedy Center, February 2007




Presentation Transcript


  1. Choose the best items: A basic psychometric toolkit for testmakers. Warren Lambert, Vanderbilt Kennedy Center, February 2007.

  2. Examples of Recent Test Development by KC Investigators
  Peabody:
  • Two different tests of school-based reading ability
  • A test of school-based math skills
  • Very early signs of autism spectrum in infants
  • A battery of 10 new tests for tracking mental health treatment of children
  VUMC:
  • Somatizing in children with recurrent abdominal pain
  • Survey of attending MD satisfaction with a department in the hospital
  • Psychological rigidity in children
  Goal of today's session: provide tools for people making their first index or test.

  3. What Is a "Test"?
  A test is a set of items that produces a total score. It could be:
  • A questionnaire
  • A set of items in a structured interview
  • Signs & symptoms of something
  • Often a "fuzzy" construct with numerous imperfect indicators, e.g. the Beck Depression Inventory, SF-36, CBCL
  Tests gain reliability by combining imperfect items into a total score. The sum of items will be more reliable than any single item.

  4. How to Identify the Best Items: A Toolkit, Not an Analytic Plan
  Flag weaker items to drop or revise; identify the weaker items with relative, not absolute, criteria.
  Informal:
  • Classical test theory (enough for most medical research): floors or ceilings restrict variance; to increase Cronbach's alpha, avoid low item-total correlations; guesstimate test length with the Spearman-Brown formula.
  Formal:
  • Factor analysis (exploratory and confirmatory): are there items that don't fit the construct? Avoid items that do not load on the main factor, and see how well a confirmatory model fits.
  • Rasch modeling: pick items that fit a carefully considered measurement model; consider item difficulties more deeply; pick items that suit the intended task.

  5. Psychometrics vs. Statistics
  Statistics: find a statistical model that fits your data.
  Psychometric test construction: find data that fits your statistical model.
  Choose sound measurement models and pick items that fit by dropping weaker items.

  6. Classical Test Theory (CTT)
  • Basic description of items; can be done with SAS, SPSS, Stata, S+, R, etc.
  • Do this routinely with scales old and new.
  • Suited to informal test development, e.g. a one-shot ad hoc index for an article.
  • Not enough for tests that will be widely used in many settings.

  7. Note Floors or Ceilings: The "Too Short" IQ Test (TS-IQ)
  Low SDs and variances (and extreme item means) indicate floors or ceilings, but kurtosis is very easy to spot. The "Too Short" IQ Test data set, with SAS and SPSS code, is available for download at http://kc.vanderbilt.edu/quant/Seminar/schedule.htm
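The seminar site provides the SAS and SPSS code for this step; as a rough alternative, here is a minimal R sketch of the same item description. The tsiq data frame below is simulated stand-in data (generated from a one-factor, Rasch-like model), not the real TS-IQ item file.

    # Simulated stand-in for the TS-IQ item file: 200 examinees, 10 binary items
    # generated from a one-factor model so the items range from very easy to very hard.
    set.seed(1)
    theta <- rnorm(200)                              # person abilities
    b <- seq(-2.5, 2.5, length.out = 10)             # item difficulties
    tsiq <- as.data.frame(sapply(b, function(d) rbinom(200, 1, plogis(theta - d))))
    names(tsiq) <- paste0("i", 1:10)

    # Basic CTT description: extreme means and tiny SDs suggest floors or ceilings;
    # large positive kurtosis is the easiest flag to spot.
    kurt <- function(x) {                            # population excess kurtosis
      m <- mean(x)
      mean((x - m)^4) / mean((x - m)^2)^2 - 3
    }
    round(cbind(mean = sapply(tsiq, mean),
                sd   = sapply(tsiq, sd),
                kurtosis = sapply(tsiq, kurt)), 2)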

  8. Hard, Medium, & Easy Items (#1, #6, #10)
  Measuring the entire population requires a range of item difficulties. If everyone has the same score, the item gives no information.
  (Figure: distributions of items #1, #6, and #10, with kurtosis values of 11, -2, and 3, illustrating ceiling and floor effects.)

  9. Use Excel Conditional Formatting to Flag Problems

  10. Retain Flagged Estimates of Quality: "Too Short IQ Test" (TS-IQ)

  11. Item-Total Correlations
  • How can you add unrelated things into a single total?
  • If an item is uncorrelated with the other items, it doesn't contribute to the internal-consistency reliability of the total score.
  • Software packages like SAS and SPSS will do item-total correlations very easily; it is a good check to use routinely.
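The corrected item-total (item-rest) correlations take only a couple of lines in R as well; a sketch using the simulated tsiq data frame from the earlier example:

    total <- rowSums(tsiq)
    # Correlate each item with the total of the *other* items, so the item is not
    # counted in its own criterion; low or negative values flag items to drop,
    # revise, or reverse-score.
    item.total.r <- sapply(names(tsiq),
                           function(i) cor(tsiq[[i]], total - tsiq[[i]]))
    round(sort(item.total.r), 2)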

  12. Biological Age Index (Frailty): Negative Item-Total Correlations Are Bad
  The negative correlations arose because the items on the left were not "flipped." Make sure all items are scored in the same direction, either high-is-good or high-is-bad.
  Goffaux, J., Friesinger, G. C., Lambert, E. W., et al. (2005). "Biological age--a concept whose time has come: a preliminary study." South Med J 98(10): 985-93.

  13. "TS-IQ": Low Item-Total r's Are Bad
  Use SPSS RELIABILITY or SAS PROC CORR ALPHA to get the item-total correlations.

  14. "Too Short" Item-Total Correlations
  • Items with nothing in common would not have a reliable total score.
  • Cronbach's alpha measures internal-consistency reliability.
  • Reliability increases with high item-total correlations.
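Cronbach's alpha itself is one line of arithmetic once the item variances and total-score variance are in hand; a base-R sketch on the same simulated data (the built-in routines mentioned on the slides, SPSS RELIABILITY and SAS PROC CORR ALPHA, report the same statistic):

    # alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)
    k <- ncol(tsiq)
    alpha <- (k / (k - 1)) * (1 - sum(sapply(tsiq, var)) / var(rowSums(tsiq)))
    round(alpha, 2)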

  15. How Many Items? Spearman-Brown Predicted Reliability = f(N items)
  Classical test theory: reliability increases with the number of items. Put the Spearman-Brown formula into Excel to see approximately how many items you need for the desired reliability under CTT.
  Brown, W. (1910). Some experimental results in the correlation of mental abilities. British Journal of Psychology, 3, 296-322.
  Spearman, C. (1910). Correlation calculated from faulty data. British Journal of Psychology, 3, 171-195.
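The Spearman-Brown prophecy formula from those two 1910 papers is rho_new = n * rho / (1 + (n - 1) * rho), where rho is the current reliability and n is the factor by which the number of items changes. The same guesstimate works in R as well as Excel; the starting alpha of .70 below is only an illustrative number:

    # Predicted reliability when test length is multiplied by n.factor
    spearman.brown <- function(rho, n.factor) (n.factor * rho) / (1 + (n.factor - 1) * rho)
    spearman.brown(rho = 0.70, n.factor = 2)   # double the items: about 0.82

    # Solve the formula the other way: how many times longer must an alpha = .70
    # test become to reach a target reliability of .90?
    n.needed <- function(rho, target) (target * (1 - rho)) / (rho * (1 - target))
    n.needed(0.70, 0.90)                       # about 3.9 times as many items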

  16. How Much Is Enough?
  • For local use of an ad hoc research index, CTT may suffice.
  • Formal tests (available for general use) require more thorough psychometric analysis: factor analysis and Item Response Theory modeling.

  17. Factor Analysis (FA): Beginning Formal Test Development
  The goal is to make sure the test's theoretical foundations agree with the test data. We want to produce one or more single-factor tests, using EFA (exploratory factor analysis) and CFA (confirmatory factor analysis).

  18. Scree Plot of the "TS-IQ"
  • Run a principal components analysis with SAS, SPSS, etc., and make a "scree" plot of the eigenvalues.
  • Cattell's metaphor: a mountain rising above useless rubble.
  • Is there more than one big component? It is hard to get multiple factors (subtests) from the "Too Short" IQ test.
  • The Kaiser criterion (minimum eigenvalue > 1) is extremely liberal and makes unstable factors.
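The eigenvalues behind the scree plot come straight from the item correlation matrix; a minimal R sketch on the simulated tsiq data:

    ev <- eigen(cor(tsiq))$values
    plot(ev, type = "b", xlab = "Component", ylab = "Eigenvalue", main = "Scree plot")
    abline(h = 1, lty = 2)   # Kaiser line, shown only for reference; it is a liberal rule
    ev                       # one eigenvalue towering over the "rubble" suggests one dominant factor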

  19. Formal Test Construction: VUMC Pediatric Researcher's Three Samples
  • N = 181 children rating the understandability of items
  • Psychometric sample 1, N = 513
  • Psychometric sample 2, N = 675
    2a. Random 50%, N = 346: exploratory sample (CTT, EFA)
    2b. Random 50%: confirmatory sample (CTT, CFA)

  20. Confirmatory Factor Analysis
  See how well (ha! how badly!) the data fit a theory-driven model: "factorial validity."
  • Theory: the TS-IQ measures a single dimension of intelligence.
  • Run a measurement model and look at the fit indices.
  • Very popular in psychology, rarely done in nonpsychiatric medicine (exception: the SF-36 has extensive psychometric analysis).

  21. "Too Short IQ": SAS CFA of a Single-Factor Measurement Model
  Warning: so far, most VU tests early in their development haven't met the high standards for measurement-model fit: RMSEA < .05 and CFI > 0.95 or 0.96 (very high standards of unidimensionality).
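The slide shows a SAS run; a roughly equivalent single-factor CFA can be sketched with the R lavaan package. The item names i1-i10 come from the simulated data above, and declaring the 0/1 items as ordered tells lavaan to treat them as categorical rather than continuous.

    library(lavaan)
    # One factor ("iq") loading on all ten items: the unidimensional measurement model.
    model <- 'iq =~ i1 + i2 + i3 + i4 + i5 + i6 + i7 + i8 + i9 + i10'
    fit <- cfa(model, data = tsiq, ordered = names(tsiq))
    fitMeasures(fit, c("chisq", "df", "rmsea", "cfi"))
    # Compare against the slide's benchmarks: RMSEA < .05 and CFI > .95-.96.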

  22. Run Rasch or IRT (Item Response Theory)
  • Rasch: one-parameter logistic model. Good for practical test development (it converges). E.g. Winsteps ($100 or $200).
  • Item Response Theory (IRT): 1-, 2-, or 3-parameter models. Good for research, but needs large samples. E.g. Parscale, Bilog-MG, Multilog ($100 VU site license).
  The one-parameter model: P(person gets item i "right") = exp(theta - b_i) / (1 + exp(theta - b_i)), where theta is the person's ability and b_i is the item's difficulty on the same scale.

  23. "Measure Score" for Person and Item in the Same Units
  In the one-parameter logistic (Rasch) model, as (person ability - item difficulty) increases, the probability of getting the item right increases along the logistic curve. If you're better than the item, P(right) > 50%.
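A brief R sketch of that logistic relationship, plus the corresponding one-parameter fit; the eRm package is one free alternative to the WINSTEPS program named on the next slides, and it expects 0/1 item responses like the simulated tsiq data above:

    # Probability of a correct response as a function of (person ability - item difficulty)
    p.right <- function(theta, b) plogis(theta - b)
    p.right(theta = 1, b = 1)     # ability equals difficulty: exactly 0.50
    curve(p.right(x, b = 0), from = -4, to = 4,
          xlab = "Person minus item (logits)", ylab = "P(right)")

    # Conditional maximum likelihood Rasch fit; item and person "measures" share one scale.
    # library(eRm)
    # fit <- RM(tsiq)
    # summary(fit)
    # plotPImap(fit)   # person-item map, as in the "persons & items on one scale" slides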

  24. Rasch Model
  Items spread over a range of difficulties, from easy items to hard items. See http://en.wikipedia.org/wiki/Rasch_model

  25. WINSTEPS
  A one-parameter Rasch program (see http://www.winsteps.com); $200, or $99 on summer sale.

  26. "TS-IQ" Items: Information Spread Across the Whole Range
  Easy items, like #10, are most informative about low-scoring individuals; hard items, like #1, are most informative about high-scoring individuals. This test's items spread out to describe the whole range of IQs.

  27. Persons & Items on One Scale
  • The Rasch model measures each item and each person on the same scale.
  • Concentrate your items where they are needed: measure everyone, or measure high clinical cases most efficiently.
  • The TS-IQ measures across a wide range.

  28. VUMC Clinical Test Focuses on the Cutpoint, Unlike the TS-IQ
  • School sample; high scores are bad (sicker).
  • Clinical screens focus on sick people: classify, treat yes or no.
  • The job is to be maximally informative at the cutpoint.
  • This test invests its items in the severe range.

  29. Putting It All Together: TS-IQ's Items and Total

  30. Putting It All Together: VUMC Pediatrics
  • Items are scored 0-4; many items are near the floor (<= 1), and the lowest few have excessive kurtosis.
  • However, many item-total r's and Rasch fit statistics are OK.
  • The test maker can shorten this test with considerable latitude, e.g. guided by content analysis.

  31. Putting It All Together: The test has one odd item that measures something else. Drop or revise that item.

  32. How to Identify the Best Items: A Toolkit, Not an Analytic Plan (recap)
  Flag weaker items to drop or revise; identify the weaker items with relative, not absolute, criteria.
  Informal:
  • Classical test theory (enough for most medical research): floors or ceilings restrict variance; to increase Cronbach's alpha, avoid low item-total correlations; guesstimate test length with the Spearman-Brown formula.
  Formal:
  • Factor analysis (exploratory and confirmatory): are there items that don't fit the construct? Avoid items that do not load on the main factor, and see how well a confirmatory model fits.
  • Rasch modeling: pick items that fit a carefully considered measurement model; consider item difficulties more deeply; pick items that suit the intended task.
