
Item Analysis: A Crash Course



  1. Item Analysis: A Crash Course Lou Ann Cooper, PhD Master Educator Fellowship Program January 10, 2008

  2. Validity • Validity refers to “the appropriateness, meaningfulness, and usefulness of the specific inferences made from test scores.” • “Validity is an integrative summary.” (Messick, 1995) • Validation is the process of building an argument supporting interpretation of test scores. (Kane, 1992)

  3. Reliability • Consistency, reproducibility, generalizability. • A norm-referenced concept – reflects relative standing in a group. • Only scores can be described as reliable, not tests. • Reliability depends on: test length (number of items), the sample of test takers (group homogeneity), score range, and dimensionality (content and skills tested).

  4. Planning the Test • Test blueprint / table of specifications: content, skills, and domains; level of cognition; relative importance of each element. • Linked to learning objectives. • Provides evidence for content validity.

  5. Test Blueprint: Third Year Surgery Clerkship Content

  6. Test Statistics • A basic assumption: items measure a single subject area or underlying ability. • A general indicator of test quality is a reliability estimate. • The measure most commonly used to estimate reliability from a single administration of a test is Cronbach's alpha, a measure of internal consistency.

  7. Cronbach’s alpha Coefficient alpha reflects three characteristics of the test: • The inter-item correlations -- the greater the relative number of positive relationships, and the stronger those relationships are, the greater the reliability. Item discrimination indices and the test's reliability coefficient are related in this regard. • The length of the test -- a test with more items will have a higher reliability, all other things being equal. • The content of the test -- generally, the more diverse the subject matter tested and the testing techniques used, the lower the reliability. Formula: α = [k / (k − 1)] × [1 − (sum of the item variances / total test variance)], where k is the number of items and total test variance = the sum of the item variances + twice the unique covariances.
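As a concrete illustration of the formula above, here is a minimal NumPy sketch (the function name and the toy response matrix are invented for illustration, not taken from the slides) that computes alpha from a matrix of item scores:

```python
import numpy as np

def cronbach_alpha(scores) -> float:
    """Cronbach's alpha for a matrix of item scores
    (rows = examinees, columns = items)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Toy example: 5 examinees x 4 dichotomously scored items (1 = correct)
responses = np.array([
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 1, 1],
    [0, 0, 0, 0],
    [1, 1, 1, 1],
])
print(round(cronbach_alpha(responses), 3))
```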

  8. Descriptive Statistics • Total test score distribution  Central tendency  Score range  Variability • Frequency distributions for individual items – allow us to analyze the distractors.
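A minimal sketch of such a per-item frequency distribution (the option labels, answer key, and response string below are invented for illustration), which is the raw material for distractor analysis:

```python
from collections import Counter

# Hypothetical responses of 20 students to one 5-option MCQ (keyed answer: C)
responses = list("CCBACCDCCECCBCCACCDC")
key = "C"

counts = Counter(responses)
n = len(responses)
for option in "ABCDE":
    count = counts.get(option, 0)        # how many students chose this option
    marker = " <- key" if option == key else ""
    print(f"{option}: {count:2d} ({count / n:.0%}){marker}")
```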

  9. Human Behavior Exam Mean = 75.98 (SD = 6.78) Median = 77 Mode = 72

  10. Item Statistics • Response frequencies/distribution • Mean • Item variance/standard deviation • Item difficulty • Item discrimination

  11. Item Analysis Examines responses to individual test items from a single administration to assess the quality of the items and the test as a whole. • Did the item function as intended? • Were the test items of appropriate difficulty? • Were the test items free from defects – technical flaws, testwiseness cues, irrelevant difficulty? • Was each of the distractors effective?

  12. Item Difficulty • For items with one correct answer worth a single point, difficulty is the proportion of students who answer the item correctly, i.e., the item mean. • When an alternative is worth other than a single point, or when there is more than one correct alternative per question, the item difficulty is the average score on that item divided by the highest number of points for any one alternative. • Ranges from 0 to 1.00 – the higher the value, the easier the question.
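A minimal sketch of both cases just described (the function names and example data are hypothetical): for dichotomous items the difficulty is the proportion correct, and for weighted or multi-key items it is the item mean divided by the maximum attainable points:

```python
import numpy as np

def difficulty_dichotomous(item_correct) -> float:
    """Proportion of examinees answering the item correctly (0/1 scoring)."""
    return float(np.mean(item_correct))

def difficulty_polytomous(item_scores, max_points: float) -> float:
    """Mean item score divided by the highest number of points attainable."""
    return float(np.mean(item_scores) / max_points)

# Example: 10 examinees on a 1-point item and on a 3-point item
print(difficulty_dichotomous([1, 1, 0, 1, 1, 0, 1, 1, 1, 0]))    # 0.7
print(difficulty_polytomous([3, 2, 3, 1, 0, 3, 2, 3, 3, 2], 3))  # ~0.73
```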

  13. Item Difficulty • Item difficulty is relevant for determining whether students have learned the concept being tested. • Plays an important role in the ability of an item to discriminate between students who know the tested material and those who do not. • To maximize item discrimination, desirable difficulty levels are slightly higher than midway between chance and perfect scores for the item.

  14. Ideal difficulty levels for MCQs Lord, F.M. "The Relationship of the Reliability of Multiple-Choice Test to the Distribution of Item Difficulties," Psychometrika, 1952, 18, 181-194.

  15. Item Difficulty Assuming a 5-option MCQ, rough guidelines for judging difficulty: ≥ .85 Easy; > .50 and < .85 Moderate; < .50 Hard.
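A small helper (hypothetical name) applying the rough classification above; how to label a value of exactly .50 is a judgment call the guideline leaves open, and the boundary chosen here is only one reasonable option:

```python
def classify_difficulty(p: float) -> str:
    """Rough difficulty label for a 5-option MCQ, per the guidelines above."""
    if p >= 0.85:
        return "Easy"
    if p > 0.50:
        return "Moderate"
    return "Hard"  # values at or below .50 treated as Hard in this sketch

print(classify_difficulty(0.7))   # Moderate
```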

  16. Item Discrimination Ability of an item to differentiate among students on the basis of how well they know the material being tested. Describes how effectively the test item differentiates between high ability and low ability students. All things being equal, highly discriminating items increase reliability.

  17. Discrimination Index D = pU − pL, where pU = proportion of students in the upper group who answered the item correctly and pL = proportion of students in the lower group who answered the item correctly. Guidelines: D ≥ .40 satisfactory item functioning; .30 ≤ D ≤ .39 little or no revision required; .20 ≤ D ≤ .29 marginal – needs revision; D < .20 eliminate or completely revise.
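A minimal sketch of the index above, assuming dichotomous (0/1) item scores and using the common convention of taking the top and bottom 27% of examinees by total test score as the upper and lower groups (the 27% split and the function name are assumptions, not stated in the slides):

```python
import numpy as np

def discrimination_index(item_correct, total_scores, fraction=0.27) -> float:
    """D = p_upper - p_lower, with groups defined by total test score."""
    item = np.asarray(item_correct, dtype=float)   # 1 = correct, 0 = incorrect
    total = np.asarray(total_scores, dtype=float)
    n_group = max(1, int(round(fraction * len(total))))
    order = np.argsort(total)                # examinees ranked low to high
    lower = item[order[:n_group]]            # lowest-scoring group
    upper = item[order[-n_group:]]           # highest-scoring group
    return float(upper.mean() - lower.mean())
```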

  18. Point biserial correlation Correlation between performance on a single item and performance on the total test. • High and positive: the best students get the answer correct; the poorest students get it wrong. • Low or zero: no relationship between performance on the item and the total test. • High and negative: the poorest students get the item correct; the best get it wrong.
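A minimal sketch of this statistic (names are illustrative): for a 0/1-scored item it is simply the Pearson correlation between the item score and the total test score, and some item-analysis programs first subtract the item from the total (a "corrected" item-total correlation) so the item does not inflate its own correlation:

```python
import numpy as np

def point_biserial(item_correct, total_scores, corrected=True) -> float:
    """Correlation between a 0/1 item score and the total test score."""
    item = np.asarray(item_correct, dtype=float)
    total = np.asarray(total_scores, dtype=float)
    if corrected:
        total = total - item    # remove the item's own contribution
    return float(np.corrcoef(item, total)[0, 1])
```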

  19. Point biserial correlation • rpbis tends to be lower for tests measuring a wide range of content areas than for more homogeneous tests. • Items with low discrimination indices are often ambiguously worded. • A negative value may indicate that the item was miskeyed. • Tests with high internal consistency consist of items with mostly positive relationships with the total test score.

  20. Item Discrimination Rough guidelines for rpbis: > .30 Good; > .10 and < .30 Fair; < .10 Poor.

  21. Item Analysis Matrix

  22. ITEM 1 ITEM 2

  23. ITEM 3 ITEM 4

  24. A Sample of MS1 Exams

  25. Cautions • Item analyses reflect the internal consistency of items rather than validity. • The discrimination index is not always a measure of item quality: extremely difficult or easy items will have little ability to discriminate, but such items are often needed to adequately sample course content and objectives; an item may also show low discrimination if the test measures many different content areas and cognitive skills.

  26. Cautions • Item analysis data are tentative – influenced by the type and number of students being tested, the instructional procedures employed, and both systematic and random measurement error. • If repeated use of items is possible, statistics should be recorded for each administration of each item.

  27. Recommendations • A valuable tool for improving items to be used in future tests – item banking. • Modify or eliminate ambiguous, misleading, or flawed items. • Helps improve instructors' skills in test construction. • Identifies specific areas of course content which need greater emphasis or clarity.

  28. Research Downing SJ. The effects of violating standard item writing principles on tests and students: The consequences of using flawed items on achievement examinations in medical education. Adv Health Sci Educ 10:133-143, 2005. Jozefowicz RF et al. The quality of in-house medical school examinations. Acad Med 77(2):156-161, 2002. Muntinga JH, Schull HA. Effects of automatic item eliminations based on item test analysis. Adv Physiol Educ 31: 247-252, 2007.
