
Item Response Theory and Longitudinal Modeling: The Real World is Less Complicated than We Fear


Presentation Transcript


  1. Item Response Theory and Longitudinal Modeling: The Real World is Less Complicated than We Fear Marty McCall Northwest Evaluation Association Presented to the MSDE/MARCES Conference ASSESSING & MODELING COGNITIVE DEVELOPMENT IN SCHOOL: INTELLECTUAL GROWTH AND STANDARD SETTING October 19, 2006

  2. Examining constructs through vertical scales • What are vertical scales? • Who uses them and why? • Who doesn’t use them and why not?

  3. What are vertical scales? • In the IRT context, they are: • scales measuring a construct from the easiest through the most difficult tasks • equal interval scales spanning ages or grades • also called developmental scales • a common framework for measurement of a construct over time

  4. Why use vertical scales? • To model growth: Tests that are vertically scaled are intended to support valid inferences regarding growth over time. --Patz, Yao, Chia, Lewis, & Hoskins (CTB/McGraw)

  5. Why use vertical scales? • To study cognitive changes: When people acquire new skills, they are changing in fundamental, interesting ways. By being able to measure change over time, it is possible to map phenomena at the heart of the educational enterprise. --John Willett

  6. Who uses vertical scales? • CTB/McGraw-Hill • TerraNova Series • Comprehensive Test of Basic Skills (CTBS) • California Achievement Test • Harcourt • Stanford Achievement Test • Metropolitan Achievement Test • Statewide NCLB tests • All states using CTB or Harcourt’s tests • Mississippi, North Carolina, Oregon, Idaho • Woodcock cognitive batteries

  7. Development and use Note that many of these scales were developed prior to NCLB and before cognitive psychology had gained currency. Achievement tests began in an era of normative interpretation. Policymakers are now catching up to content- and standards-based interpretations.

  8. Assumptions: implicit and explicit • The construct is a unified continuum of learning culminating in mature expertise • Domain coverage areas are not necessarily statistical dimensions • Scale building models the sequence of skills and the relationship between them • The construct embodies a complex unidimensional ability

  9. What? The construct embodies a complex unidimensional ability • A mature ability such as reading or doing algebra problems involves many component skills • The ability itself is unlike any of its component skills • Complex skills are emergent properties of simpler skills and in turn become components of still more complex skills

  10. Who doesn’t use vertical scales? Why not? In recent years, there have been challenges to the validity of vertical scales. Much of this comes from the viewpoint of standards-based tests, including those developed for NCLB purposes. Many critical studies use no data or data simulated to exhibit dimensionality.

  11. Assumptions: implicit and explicit • Subject matter at each grade forms a unique construct described in content standards documents • Topics not explicitly covered in standards documents are not tested • Content categories represent independent or quasi-independent abilities

  12. How assumptions affect vertical scaling issues • Cross-grade linking blocks detract from grade-specific content validity • Changes in content descriptions indicate differences in dimensionality for different grades • Vertical linking connects unlike constructs to a scale that may be mathematically tractable but lacks validity

  13. Vertical scale critics ask: “How can you put unlike structures together and expect to get meaningful scores and coherent achievement standards?” Vertical scale proponents ask: “If you believe the constructs are different, how can you talk about change over time? Without growth modeling, how can you get coherent achievement standards?”

  14. Criticism centers on two major issues • Linking error • Violations of dimensionality assumptions

  15. Issue #1: Linking creates error There is some error associated with all measurement, but current methods of vertical scaling greatly reduce it. These methods include: • triangulation with multiple forms or common-person links • comprehensive and well-distributed linking blocks • continuous adjacent linking • fixed-parameter linking in an adaptive context
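
To make the linking idea concrete, here is a minimal sketch of the simplest common-item case for a Rasch scale: a block of anchor items has difficulty estimates in both the established (base) vertical-scale metric and a new free calibration, and the new calibration is shifted onto the base scale by the mean difficulty difference on the anchors. The function names and numbers are illustrative assumptions, not NWEA's operational procedure.

```python
# Sketch: common-item (anchor) linking for a Rasch scale.
# Under the Rasch model, two calibrations of the same items differ only by a
# shift, so the linking constant is the mean difference on the anchor block.

import numpy as np

def rasch_linking_constant(anchor_b_base, anchor_b_new):
    """Mean-shift constant that places the new calibration on the base scale."""
    anchor_b_base = np.asarray(anchor_b_base, dtype=float)
    anchor_b_new = np.asarray(anchor_b_new, dtype=float)
    return float(np.mean(anchor_b_base - anchor_b_new))

def link_to_base_scale(b_new, constant):
    """Apply the linking constant to all newly calibrated difficulties."""
    return np.asarray(b_new, dtype=float) + constant

# Illustrative (made-up) numbers: four anchor items shared between calibrations.
b_anchor_base = [-1.2, -0.3, 0.5, 1.4]   # difficulties on the vertical scale
b_anchor_new  = [-1.5, -0.6, 0.2, 1.1]   # same items, new free calibration

c = rasch_linking_constant(b_anchor_base, b_anchor_new)
print("linking constant:", c)                     # 0.30 logits for these numbers
print(link_to_base_scale([-2.0, 0.0, 2.0], c))    # new items placed on the base scale
```

The more elaborate strategies on the slide (triangulation across multiple forms or common persons, comprehensive linking blocks, fixed-parameter linking in an adaptive context) build on the same basic idea of shared items or persons tying every calibration to a common metric.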

  16. How do people actually create and maintain vertical scales? • Harcourt – common-person linking for the SAT and comprehensive linking blocks • CTB – methods include concurrent calibration, non-equivalent anchor tests (NEAT), innovative linking methods • ETS – (the king of NEAT) – also uses an integrated IRT method (von Davier & von Davier)

  17. How do we do it? The scale establishment method is described extensively in Probability in the Measurement of Achievement by George Ingebo.

  18. How do we do it? Extensive initial linking [Diagram: test forms A, B, C, and D connected through shared item blocks 1–4.]

  19. Adaptive Continuous Vertical Linking [Diagram: continuous linking across adjacent benchmarks, Benchmark X and Benchmark X + 1.]

  20. Issue #2: Dimensionality Reading and mathematics at grade 3 look very different from the same subjects at grade 8. In addition, the curricular topics differ at each grade. How can they be on the same scale?

  21. Study of Dimensionality: Research Questions 1. Does essential unidimensionality hold throughout the scale? 2. Do content areas within scales form statistical dimensions?

  22. Does essential unidimensionality hold throughout the scale? • Examine a set of items that comprised forms for state tests in reading and mathematics in grades 3 through 8 • Use Yen’s Q3 statistic in an exploratory study of dimensionality

  23. Basic concept: When the assumption of unidimensionality is satisfied, responses exhibit local independence. That is, when the effects of theta are taken into account, correlation between responses is zero. Q3 is the correlation between residuals of response pairs.

  24. $d_{ik}$ is the residual: $d_{ik} = u_{ik} - P_i(\theta_k)$, where $u_{ik}$ is the score of the $k$th examinee on the $i$th item and $P_i(\theta_k)$ is given by the Rasch model: $P_i(\theta_k) = \dfrac{e^{\theta_k - b_i}}{1 + e^{\theta_k - b_i}}$

  25. The Q3 statistic for items $i$ and $j$ is the correlation between their residuals, taken over examinees who have taken both items: $Q_{3ij} = r_{d_i d_j}$. Fisher’s r-to-z transformation, $z = \tfrac{1}{2}\ln\dfrac{1+r}{1-r}$, gives the correlations an approximately normal distribution. Q3 values tend to be negative (Kingston & Dorans).
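
A minimal sketch of the Q3 computation described on the two slides above, assuming the Rasch model: form the residual for each item, correlate the residuals for a pair of items over examinees who took both, and apply Fisher's r-to-z transformation. The array names and toy data are hypothetical; in practice the theta and difficulty estimates come from the operational calibration.

```python
# Sketch: Yen's Q3 for one item pair under the Rasch model.
# d_ik = u_ik - P_i(theta_k); Q3_ij = corr(d_i, d_j); then Fisher's r-to-z.

import numpy as np

def rasch_prob(theta, b):
    """P_i(theta_k) under the Rasch model."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def q3(u_i, u_j, theta, b_i, b_j):
    """Q3 for items i and j: correlation between their response residuals."""
    d_i = u_i - rasch_prob(theta, b_i)
    d_j = u_j - rasch_prob(theta, b_j)
    return float(np.corrcoef(d_i, d_j)[0, 1])

def fisher_z(r):
    """Fisher's r-to-z transformation."""
    return 0.5 * np.log((1.0 + r) / (1.0 - r))

# Toy example: six examinees with known theta answering two items.
theta = np.array([-1.5, -0.5, 0.0, 0.5, 1.0, 2.0])
u_i = np.array([0, 0, 1, 1, 1, 1])   # scores on item i (difficulty 0.0)
u_j = np.array([0, 1, 0, 1, 1, 1])   # scores on item j (difficulty 0.5)

r = q3(u_i, u_j, theta, b_i=0.0, b_j=0.5)
print("Q3 =", round(r, 3), " z =", round(fisher_z(r), 3))
```

Over a large item pool, the same computation would be repeated for every item pair with enough joint administrations, which is where response-pair counts like those on the next slide come from.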

  26. Pairs of responses from adaptive tests – NWEA’s Measures of Academic Progress. Over 49 million response pairs per subject.

  27. 2. Do content areas within scales form statistical dimensions? Used the method from Bejar (1980), “A procedure for investigating the unidimensionality of achievement tests based on item parameter estimates,” Journal of Educational Measurement, 17(4), 283-296. Calibrate each item twice: once using responses to all items on the test (the usual method), and again using only responses to items in the same goal area.
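
A minimal sketch of the comparison step in the Bejar-style procedure described above. The two calibrations themselves are assumed to come from whatever IRT software is in use; `b_full` and `b_goal` stand in for the hypothetical difficulty estimates from the whole-test run and the within-goal-area run, and the sketch simply summarizes how closely they agree.

```python
# Sketch: comparing the two difficulty estimates for the same items,
# one from calibration against the whole test and one from calibration
# against only the items in the same goal/content area.

import numpy as np

def compare_calibrations(b_full, b_goal):
    """Correlation and mean/SD of differences between the two estimates."""
    b_full = np.asarray(b_full, dtype=float)
    b_goal = np.asarray(b_goal, dtype=float)
    diff = b_goal - b_full
    return {
        "correlation": float(np.corrcoef(b_full, b_goal)[0, 1]),
        "mean_diff": float(diff.mean()),
        "sd_diff": float(diff.std(ddof=1)),
    }

# Illustrative (made-up) difficulties for five items in one goal area.
b_full = [-1.1, -0.4, 0.2, 0.8, 1.5]   # calibrated against the whole test
b_goal = [-1.0, -0.5, 0.3, 0.7, 1.6]   # calibrated within the goal area only

print(compare_calibrations(b_full, b_goal))
```

If a content area behaved as a separate statistical dimension, its within-area difficulties would diverge systematically from the whole-test difficulties rather than agreeing up to random noise and a scale shift.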

  28. 2. Do content areas within scales form statistical dimensions? Data are from fixed-form statewide accountability tests of reading and mathematics.

  29. What we have found regarding skill development: • New topics build on earlier ones and show up statistically as part of the construct • Although they may not be specified in later standards, early topics and skills are embedded in later ones (e.g., phonemics, number sense) • Essential unidimensionality (Stout’s terminology) holds throughout the scale with minor dimensions of interest

  30. Thank you for your attention. Marty McCall, Northwest Evaluation Association, 5885 SW Meadows Road, Suite 200, Lake Oswego, Oregon 97035-3256. Phone: 503-624-1951. Fax: 503-639-7873. Marty.McCall@nwea.org
