
Using Simulation to evaluate Rasch Models

This paper by John Little from Durham University explores the use of simulation to evaluate Rasch models. It outlines the background and approach, and provides examples of how simulation can be used to gain insight into respondent approaches. The paper concludes that simulation is a simple, powerful, and insightful tool for understanding and analysing Rasch models.


Presentation Transcript


  1. Using Simulation to evaluate Rasch Models John Little CEM, Durham University www.cemcentre.org

  2. Outline • Background • Approach • Uses and Examples • Conclusions

  3. Background • Important questions without easy answers • Simulation is clean and easy • Real data is dirty and unknown • Understanding respondent approaches

  4. Approach • Calculate probabilities • Simulate response as right or wrong • Create dataset • Analyse and compare
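Each of these four steps is only a few lines of code. A minimal sketch in Python/NumPy, assuming illustrative sample sizes, a fixed seed, and standard-normal ability and difficulty distributions (none of these values come from the paper):

    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical sizes: 500 simulated respondents, 30 items.
    n_persons, n_items = 500, 30

    # Draw "true" abilities and item difficulties on the logit scale.
    theta = rng.normal(0.0, 1.0, size=n_persons)   # person abilities
    b = rng.normal(0.0, 1.0, size=n_items)         # item difficulties

    # Rasch model: P(correct) = 1 / (1 + exp(-(theta - b))).
    logits = theta[:, None] - b[None, :]
    p_correct = 1.0 / (1.0 + np.exp(-logits))

    # Simulate each response as right (1) or wrong (0).
    responses = (rng.random((n_persons, n_items)) < p_correct).astype(int)

    # The dataset can now be analysed and compared against the known
    # generating parameters, which real data never provides.
    print(responses.shape, responses.mean())

Because the generating parameters are known exactly, any analysis of the simulated dataset can be compared directly against the truth.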

  5. Adaptive testing Existing calibrated item bank → Initial estimate of ability → Suitable question asked → Ability estimate updated → Stopping criteria checked (loop until met) → Ability reported
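This loop can be sketched end-to-end in simulation. Everything below is a simplified assumption rather than the paper's procedure: the item bank, the closest-difficulty selection rule, the shrinking-step ability update (a crude stand-in for a full maximum-likelihood update), and the stopping criterion:

    import numpy as np

    rng = np.random.default_rng(7)

    def rasch_p(theta, b):
        """Rasch probability of a correct response."""
        return 1.0 / (1.0 + np.exp(-(theta - b)))

    # Hypothetical calibrated item bank of 200 items (difficulties known).
    bank = rng.normal(0.0, 1.2, size=200)
    true_theta = 0.8            # simulated respondent's true ability
    theta_hat, step = 0.0, 1.0  # initial ability estimate and step size
    used = set()

    for _ in range(40):         # hard cap on test length
        # Ask the most suitable question: unused item closest to theta_hat.
        item = min(set(range(len(bank))) - used,
                   key=lambda i: abs(bank[i] - theta_hat))
        used.add(item)
        correct = rng.random() < rasch_p(true_theta, bank[item])
        # Move the estimate up or down, shrinking the step each time.
        theta_hat += step if correct else -step
        step = max(step * 0.7, 0.1)
        # Illustrative stopping criterion: small step and enough items.
        if step <= 0.1 and len(used) >= 15:
            break

    print(f"reported ability: {theta_hat:.2f} after {len(used)} items")

Running many such simulated respondents through the loop is what produces the progression-of-estimates plots and the item bank, initialisation, and stopping-criterion comparisons on the following slides.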

  6. Adaptive testing Progression of Estimates

  7. Adaptive testing Progression of Estimates

  8. Adaptive testing Item bank restrictions

  9. Adaptive testing Initial ability estimates

  10. Adaptive testing Stopping criteria

  11. Adaptive testing Initial item effects

  12. Adaptive testing Initial item effects

  13. Adaptive testing Initial item effects

  14. Adaptive testing Fit statistics
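With simulated data the generating parameters are known, so fit statistics can be checked against the truth. A sketch of the standard infit and outfit mean-square statistics (the function name and array shapes are my own; values near 1.0 indicate responses consistent with the Rasch model):

    import numpy as np

    def rasch_fit(responses, theta, b):
        """Infit and outfit mean squares for each item.

        responses: (persons, items) 0/1 matrix; theta, b: parameter estimates.
        """
        p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
        var = p * (1.0 - p)                  # binomial variance per response
        resid_sq = (responses - p) ** 2
        outfit = (resid_sq / var).mean(axis=0)          # unweighted mean square
        infit = resid_sq.sum(axis=0) / var.sum(axis=0)  # information-weighted
        return infit, outfit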

  15. Further questions • Effect of adding/removing items • Effect of biased or badly calibrated items • Comparison of real and simulated data • Effect of lengthening a test • Different initiation procedures • Effect of guessing (see the sketch below) • Effect of specific pupil behaviours
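Most of these questions reduce to perturbing one step of the simulation and re-analysing. For example, guessing can be introduced by adding a floor to the response probabilities, deliberately violating the Rasch model; the 25% floor below is an illustrative assumption (e.g. four-option multiple choice), not a value from the paper:

    import numpy as np

    rng = np.random.default_rng(0)

    def simulate(theta, b, guess=0.0):
        """Simulate 0/1 responses; `guess` adds a 3PL-style floor
        probability, breaking the Rasch model on purpose."""
        p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
        p = guess + (1.0 - guess) * p
        return (rng.random(p.shape) < p).astype(int)

    theta = rng.normal(0, 1, 500)
    b = rng.normal(0, 1, 30)
    clean = simulate(theta, b)         # Rasch-conforming data
    guessy = simulate(theta, b, 0.25)  # same respondents, with guessing

Fitting the Rasch model to both datasets and comparing the recovered parameters and fit statistics shows how much damage each violation does.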

  16. Conclusions • Simple • Powerful • Insightful
