
Lecture 9: Test Scoring & Cut Scores




Presentation Transcript


  1. Lecture 9: Test Scoring & Cut Scores PSY 605

  2. Hopeful results of a selection measure [figure labeled “Test Scores”]

  3. Hopeful results of a selection measure [figure with placeholder labels: ??? ???]

  4. Cut Scores • The score(s) at which the decision changes; often defined & determined as the ‘minimum passing score’ • “It is the specific score called the cutoff score that creates the two classes of people: those who pass and those who fail. The group who passes rarely sues. Litigation comes from the group failing the test.” (Biddle, 1993, p. 63) • Most discussion & research on cut scores occurs in selection or educational testing contexts; the ideas & best practices apply to all testing situations

  5. How to determine cut scores?

  6. The Angoff Method (original) • Have SMEs (7-10 if the test is for selection/promotion) go through each item with the ‘minimally acceptable person’ (MAP) in mind and judge whether this MAP would get each item correct. • Sum the judged-correct items → cut score • Alternatively, have SMEs judge the probability that the MAP would get each item correct. • Sum the probabilities → cut score Angoff (1971)
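The arithmetic behind the probability-based variant is just an average across SMEs followed by a sum across items. Below is a minimal sketch in Python; the ratings matrix, function name, and example numbers are illustrative, not from the lecture.

```python
# Minimal sketch of the probability-based Angoff computation.
# ratings[j][i] = SME j's judged probability that the minimally
# acceptable person (MAP) answers item i correctly (illustrative data).

def angoff_cut_score(ratings):
    """Expected number of items the MAP answers correctly, averaged across SMEs."""
    n_smes = len(ratings)
    n_items = len(ratings[0])
    # Average the SMEs' probability judgments for each item...
    item_means = [sum(r[i] for r in ratings) / n_smes for i in range(n_items)]
    # ...then sum across items to get the cut score in raw-score units.
    return sum(item_means)

# Example: 3 SMEs rating a 5-item test.
ratings = [
    [0.9, 0.7, 0.6, 0.8, 0.5],
    [0.8, 0.6, 0.7, 0.9, 0.4],
    [0.9, 0.8, 0.5, 0.8, 0.6],
]
print(angoff_cut_score(ratings))  # ≈ 3.5, i.e., a cut score of about 3.5 out of 5
```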

  7. The Angoff Method (improved) • Prior to the Angoff process, establish a common frame of reference for SMEs regarding the MAP and the general difficulty level of the items • After the 1st round of the Angoff process, incorporate the Delphi Technique – have a moderator summarize the findings, provide SMEs with these findings, and allow any ratings to change before calculating the final cut score • Once the cut score is estimated from the enriched Angoff process, subtract 1-3 standard errors of measurement to account for measurement error • Best for tests with ‘right’ and ‘wrong’ answers Angoff (1971)
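The error adjustment above is usually applied with the classical standard error of measurement, SEM = SD × sqrt(1 − reliability). A small sketch, assuming you already have the Angoff cut score, the score SD, and a reliability estimate; all values below are illustrative.

```python
import math

def adjusted_cut_score(angoff_cut, sd, reliability, n_sems=2):
    """Subtract n_sems standard errors of measurement (1-3 is typical)."""
    sem = sd * math.sqrt(1 - reliability)   # classical SEM
    return angoff_cut - n_sems * sem

# Example: Angoff cut of 35 on a test with SD = 6 and reliability = .85.
print(adjusted_cut_score(35, sd=6, reliability=0.85, n_sems=2))  # ≈ 30.4
```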

  8. Contrasting Groups Method • Identify two groups representing the distinction to be made with test scores • e.g., ‘high performers’ vs. ‘low performers’, ‘masters’ vs. ‘novices’, ‘people who quit within 3 months’ vs. ‘people who stay for 1+ years’ • Collect & plot test scores for the two groups on separate histograms (when the variable is normally distributed and the sample is large enough, each should resemble a bell curve) • Set the cut score at the point where the two groups’ curves intersect

  9. Contrasting Groups Method [figure: overlapping distributions of Commitment Test scores (0-20) for the ‘quit within 3 months’ and ‘stay ≥ 1 yr’ groups, with the cut score for investing in employee development marked where the curves cross]

  10. Contrasting Groups Method • Can shift the cut score to minimize ‘false positives’ or ‘false negatives’ depending on the test’s purpose • Common issue: the two groups are not distinct enough on test scores • Highly overlapping curves → the cut score will lead to many wrong decisions • Advisable to supplement this method with other cut-score methods • Works for all tests that should separate groups
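Instead of eyeballing where two plotted curves cross, the same idea can be implemented by scanning candidate cut scores and counting misclassifications, with optional weights to penalize false positives or false negatives differently. A sketch under those assumptions; the group data, names, and weights are illustrative.

```python
def contrasting_groups_cut(low_group, high_group, fp_weight=1.0, fn_weight=1.0):
    """Cut score minimizing weighted misclassifications (score >= cut passes)."""
    best_cut, best_cost = None, float("inf")
    for cut in sorted(set(low_group) | set(high_group)):
        false_pos = sum(s >= cut for s in low_group)   # low-group members who pass
        false_neg = sum(s < cut for s in high_group)   # high-group members who fail
        cost = fp_weight * false_pos + fn_weight * false_neg
        if cost < best_cost:
            best_cut, best_cost = cut, cost
    return best_cut

# Example: commitment scores for leavers vs. stayers (illustrative data).
quit_3mo = [4, 6, 7, 8, 9, 10, 11]
stay_1yr = [9, 11, 12, 13, 14, 15, 17]
print(contrasting_groups_cut(quit_3mo, stay_1yr))  # 11
```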

  11. Criterion-related Method • Establish the desired cut score in terms of the criterion of interest • Using criterion-related validity data, estimate the regression equation: criterion = b0 + b1*(test score) • Plug the desired criterion cut score into the equation and solve for the corresponding test score • Works for all tests that should predict a criterion • Strongly recommended when criterion-related validity is of utmost importance (e.g., in selection situations)
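With a single predictor, solving for the test score amounts to inverting the fitted line: cut = (desired criterion − b0) / b1. A sketch using ordinary least squares; the validation data and desired criterion level below are made up for illustration.

```python
def fit_simple_regression(x, y):
    """Ordinary least squares for criterion = b0 + b1*(test score)."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    b1 = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
          / sum((xi - mean_x) ** 2 for xi in x))
    b0 = mean_y - b1 * mean_x
    return b0, b1

def cut_score_for_criterion(x, y, desired_criterion):
    """Solve desired_criterion = b0 + b1*cut for the test-score cut."""
    b0, b1 = fit_simple_regression(x, y)
    return (desired_criterion - b0) / b1

# Example: validation sample of test scores and performance ratings.
test_scores = [22, 25, 30, 33, 38, 41, 45, 50]
performance = [2.1, 2.4, 2.9, 3.0, 3.4, 3.6, 4.0, 4.3]
print(round(cut_score_for_criterion(test_scores, performance, 3.0), 1))  # ≈ 32.8
```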

  12. Strategies for encouraging diversity with cut scores • Why not simply set different cut scores for different groups? • Illegal! (And arguably immoral and unethical) – see the 1991 Civil Rights Act • Score banding • Controversial (see Campion et al., 2001) • Treats all test-takers who score within the same ‘band’ as equal and selects from within a band based on other information
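One common way to form bands uses the standard error of the difference (SED): scores closer to the top score than z × SED ≈ z × sqrt(2) × SEM are treated as statistically indistinguishable. A minimal sketch of a fixed top-down band under that assumption; values are illustrative, and this takes no position in the Campion et al. debate.

```python
import math

def band_width(sd, reliability, z=1.96):
    """SED-based band width: z * sqrt(2) * SEM."""
    sem = sd * math.sqrt(1 - reliability)
    return z * math.sqrt(2) * sem

def top_band(scores, sd, reliability, z=1.96):
    """Scores statistically indistinguishable from the top score."""
    threshold = max(scores) - band_width(sd, reliability, z)
    return [s for s in scores if s >= threshold]

# Example: applicant scores on a test with SD = 6 and reliability = .85.
scores = [92, 90, 88, 85, 81, 78, 74]
print(top_band(scores, sd=6, reliability=0.85))  # [92, 90, 88]
```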

  13. General guidelines for setting cutoff scores • There is no single ‘best’ method for all tests, all situations • (When used for selection/promotion) should begin with solid job analysis as foundation • (When used for selection/promotion) always keep validity and job-relatedness of testing process high priority • Take base rate info. into account, especially with norm-referenced tests Cascio, Alexander, & Barrett (1988)

  14. General guidelines for setting cutoff scores • When prediction of criteria is main purpose, criterion-related validity information should be carefully considered • Cutoff scores should be set high enough to ensure minimum standards are met • (When used for selection/promotion), cutoff scores should be consistent with ‘normal expectations of acceptable proficiency within the work force’ Cascio, Alexander, & Barrett (1988)
