
“Computerized Mastery Testing: A Testing Architecture”


Presentation Transcript


1. “Computerized Mastery Testing: A Testing Architecture”
F. Jay Breyer, Prometric
Bob Riley, NCTRC

2. Decision Error: The Problem CMT Was Designed to Solve
CLEAR 2008 Annual Conference, Anchorage, Alaska

3. What is CMT and How Does it Work?
• Examinees take the test in stages (separately timed sections):
  • Stage 1 is longer than subsequent stages
  • Stages 2 through the last (kth) stage each consist of a single packet of test questions called a testlet
• Following each stage (except the last), one of three decisions is made: Pass, Continue, or Fail
• After the last stage, one of two decisions is made: Pass or Fail
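To make the stage-by-stage flow concrete, here is a minimal Python sketch of the decision loop just described. The cut values, the `administer_testlet` stand-in, and the `STAGE_CUTS` structure are hypothetical placeholders, not part of any operational CMT system.

```python
# Sketch of the CMT stage loop: after each stage the cumulative raw score
# is compared with a pair of cuts to decide Pass, Fail, or Continue.
# (fail_below, pass_at_or_above) per stage -- placeholder values only.
STAGE_CUTS = [(8, 14), (15, 20), (24, 24)]  # last stage: single cut, no Continue

def administer_testlet(stage):
    """Stand-in for delivering one stage and returning its raw score."""
    raise NotImplementedError  # the delivery engine would supply this

def run_cmt():
    total = 0
    for stage, (fail_below, pass_at) in enumerate(STAGE_CUTS, start=1):
        total += administer_testlet(stage)
        if total >= pass_at:
            return "Pass", stage, total
        if total < fail_below:
            return "Fail", stage, total
        # otherwise Continue to the next stage; at the last stage the two
        # cuts coincide, so a decision is always reached there
    raise RuntimeError("unreachable: the final stage forces Pass or Fail")
```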

4. From Psychometrics: Testlets – Packets of Test Questions
• Divide the content of a test into the smallest number of questions or tasks possible so that each testlet:
  • covers the entire test specifications consistently
  • has the same difficulty
  • spreads people out similarly
• Testlets run from 10 to 25 questions – 15 questions is most common
• Also useful for tasks
• All testlets are equal to each other in content, form, and difficulty and have no repeated items or tasks

5. From Psychometrics: Testlets – Packets of Test Questions
• Build testlets from the client’s full-length test forms
  • Doesn’t use IRT
  • Does use empirical Bayes small-sample procedures and the psychometrics of testlets
  • Requires a minimum of 125 candidates per original test form
• We can start with 3 to 5 linear CBT forms administered in a single window
• Or we can divide a performance test into comparable sections
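The slides note that testlets are built from existing full-length forms using empirical Bayes small-sample procedures rather than IRT; those procedures are not reproduced here. The sketch below only illustrates the simpler blueprint-balancing idea behind parallel testlets: items tagged by content area are dealt round-robin into k testlets so that each testlet covers the test specifications in roughly the same proportions. The item fields and function name are hypothetical.

```python
from collections import defaultdict

def split_form_into_testlets(items, k):
    """Deal one linear form's items into k testlets, spreading each content
    area as evenly as possible across them.

    `items` is a list of dicts with hypothetical keys 'id' and
    'content_area'; a real assembly would also balance empirical item
    difficulty and discrimination, which this sketch does not model.
    """
    by_area = defaultdict(list)
    for item in items:
        by_area[item["content_area"]].append(item)

    testlets = [[] for _ in range(k)]
    slot = 0
    for area_items in by_area.values():
        for item in area_items:
            testlets[slot % k].append(item)  # keep rotating across testlets
            slot += 1
    return testlets
```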

6. From Manufacturing Engineering Destructive Testing: Sequential Analysis

7. From Mathematical Statistics: Bayesian Loss Functions at Each Stage
• Calculate a loss value at each raw score for:
  • Passing a nonmaster
  • Failing a master
• Calculate a cost value of exposing an additional testlet at each raw score

8. From Mathematical Statistics: Bayesian Loss Functions – The Technical Details
• There is a science behind this that uses Bayesian loss functions and two weights (A:B) symbolizing the client’s perception of the seriousness of making each type of decision error (e.g., 100:50):
  • Type I error – weight A
  • Type II error – weight B
• These two weights build the client’s view of decision error into the cuts used at each stage of testing except the last – the last stage uses the cut for a full-length test
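As a rough illustration of how loss weights like 100:50 and a per-testlet cost can drive a Pass/Continue/Fail decision, here is a simplified Python sketch. It assumes a two-state (master/nonmaster) model with binomial likelihoods, a one-step lookahead for the Continue option, and a mapping of weight A to passing a nonmaster and B to failing a master; all numeric values, that mapping, and the function names are illustrative assumptions, and the operational CMT loss-function machinery is more elaborate than this.

```python
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

def posterior_master(score, n_items, p_master, p_nonmaster, prior):
    """P(master | raw score), assuming binomial likelihoods for two states."""
    lm = binom_pmf(score, n_items, p_master) * prior
    ln = binom_pmf(score, n_items, p_nonmaster) * (1 - prior)
    return lm / (lm + ln)

def stage_decision(score, n_items, testlet_len=15, *,
                   p_master=0.80, p_nonmaster=0.60, prior=0.50,
                   A=100.0,           # assumed: loss for passing a nonmaster
                   B=50.0,            # assumed: loss for failing a master
                   testlet_cost=2.0,  # assumed: cost of exposing one more testlet
                   last_stage=False):
    post = posterior_master(score, n_items, p_master, p_nonmaster, prior)
    loss_pass = A * (1 - post)   # expected loss of deciding Pass now
    loss_fail = B * post         # expected loss of deciding Fail now
    if last_stage:
        return "Pass" if loss_pass <= loss_fail else "Fail"

    # Continue option: one-step lookahead over the next testlet's possible
    # scores, then take the cheaper of Pass/Fail at the updated posterior.
    future = 0.0
    for x in range(testlet_len + 1):
        p_x = (post * binom_pmf(x, testlet_len, p_master)
               + (1 - post) * binom_pmf(x, testlet_len, p_nonmaster))
        new_post = posterior_master(score + x, n_items + testlet_len,
                                    p_master, p_nonmaster, prior)
        future += p_x * min(A * (1 - new_post), B * new_post)
    loss_continue = testlet_cost + future

    best = min(loss_pass, loss_fail, loss_continue)
    if best == loss_pass:
        return "Pass"
    return "Fail" if best == loss_fail else "Continue"
```

Sweeping `score` from 0 to `n_items` with a rule of this kind yields a lowest score that still passes and a highest score that still fails at each stage, which is how per-stage cut pairs like those on the following slides arise.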

9. How Do We Get the Cuts at One Stage of Testing?

10. What Do the Raw-Score Cuts Look Like in a Program?
• With 165 items in a full-length test
• With 75 items in the base test
• Sequential testing gives us the concept of testing to a limit and making a decision

Stage of Testing             Fail Below   Pass At or Above
1 (75 items – 86 min.)*          42              52
2 (90 items – 100 min.)          52              62
3 (105 items – 114 min.)         61              71
4 (120 items – 128 min.)         71              80
5 (135 items – 142 min.)         82              90
6 (150 items – 156 min.)         93              98
7 (165 items – 170 min.)        106             106

* Pretest questions are included here
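In a delivery system, the table above can be carried as a simple lookup structure. The sketch below encodes those cuts and maps a cumulative raw score at a given stage to Pass, Continue, or Fail; the structure and function name are illustrative, not taken from an actual program.

```python
# (fail_below, pass_at_or_above) keyed by stage, copied from the table above.
# At stage 7 the two cuts coincide, so only Pass or Fail is possible.
CUTS = {
    1: (42, 52), 2: (52, 62), 3: (61, 71), 4: (71, 80),
    5: (82, 90), 6: (93, 98), 7: (106, 106),
}

def classify(stage, cumulative_score):
    fail_below, pass_at = CUTS[stage]
    if cumulative_score >= pass_at:
        return "Pass"
    if cumulative_score < fail_below:
        return "Fail"
    return "Continue"   # scores in [fail_below, pass_at) keep testing

# Example: 55 correct after stage 2 (90 items) continues to stage 3;
# 62 or more would pass, and fewer than 52 would fail.
assert classify(2, 55) == "Continue"
```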

11. A Simulation Result

12. Who Gets Longer Tests?
• Borderline examinees

13. Who Would Use It?
• CMT is useful for clients who:
  • value the time of their candidates and wish to reduce testing time for most examinees
  • wish to reduce the exposure of their test questions (increased test security)
    • results of the year 2001 assembly were the same as the 2006 window
  • don’t have the money or resources for CAT
    • CAT is resource intensive – staff, items, examinees
  • desire a CBT method more powerful than a linear test (CLT)
    • helps control classification errors
  • want accurate assignment of candidates to pass/fail status

14. Who Should Not Use It?
• CMT is not recommended for clients where:
  • small social groups share items with each other
  • item harvesting is a known issue
  • the group culture studies old test items

15. Where Else Can We Apply This?
• How about a performance test?
• Imagine a performance test that consists of multiple separate observations of a candidate across different occasions
  • For example: 5 separate observations, each worth 10 points
  • The observations are accumulated (summed)
  • But they are expensive, since you need two observers

16. Where Else Can We Apply This?
• Let’s imagine that the total test is worth 50 points
• You want to make at least two observations before the first decision
• Let’s also assume that the raw cut has been established as 34 out of 50
  • About 68% of the points in the test

17. A Performance Test Example 1
• With 5 observations, a minimum of 2 before the first decision, and a cut of 34:

Stage of Testing             Fail Below   Pass At or Above
1 (1st two observations)         10              17
2 (third observation)            17              24
3 (fourth observation)           25              29
4 (fifth observation)            34              34

18. A Performance Test Example 2
• With 5 observations, a minimum of 2 before the first decision, and a cut of 34:

Stage of Testing             Fail Below   Pass At or Above
1 (1st two observations)         10              21
2 (third observation)            17              31
3 (fourth observation)           25              41
4 (fifth observation)            34              34

19. A Performance Test Example 3
• With 5 observations, a minimum of 2 before the first decision, and a cut of 34:

Stage of Testing             Fail Below   Pass At or Above
1 (1st two observations)          0              17
2 (third observation)             0              24
3 (fourth observation)            0              29
4 (fifth observation)            34              34
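A minimal sketch of how cuts like those in Example 1 would be applied as observation scores accumulate (Examples 2 and 3 differ only in the cut values); the list and function names are hypothetical.

```python
# Example 1 cuts: (fail_below, pass_at_or_above) at each decision point.
# Stage 1 covers the first two observations; stage 4 uses the full-test
# cut of 34, so only Pass or Fail is possible there.
EXAMPLE_1_CUTS = [(10, 17), (17, 24), (25, 29), (34, 34)]

def score_performance_test(obs):
    """obs: five per-observation scores, each 0-10."""
    total = obs[0] + obs[1]                 # minimum of two observations first
    for stage, (fail_below, pass_at) in enumerate(EXAMPLE_1_CUTS, start=1):
        if total >= pass_at:
            return "Pass", stage
        if total < fail_below:
            return "Fail", stage
        total += obs[stage + 1]             # Continue: add the next observation
    raise RuntimeError("unreachable: the last stage's cuts force a decision")

# Example: scores of 8 and 7 on the first two observations total 15, which
# falls between 10 and 17, so testing continues to a third observation.
```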

20. Discussion & Reactions

21. NCTRC Mission Statement
“To protect the consumer of Therapeutic Recreation Services by promoting the provision of quality services offered by NCTRC certificants”

22. NCTRC Profile
• NCTRC was incorporated in 1981 as an independent nonprofit organization
• Internationally recognized credentialing body for therapeutic recreation
• Accredited in 1993 by the National Commission for Certifying Agencies (NCCA)
• 15,000-member CTRS registry
• Approximately 1,200 exam candidates per year

23. NCTRC Board Exam Involvement
The NCTRC Board:
• Attended a demo of CMT at Prometric HQ
• Supported CMT because of:
  • Limited exposure of the item pool
  • Overall cost effectiveness
  • Better value to the candidate
• Participated in the cut-score process
• Appoints the Exam Management Committee

24. NCTRC Testing Program
• Began in 1990 with a 200-item written exam
• Based on a job analysis of the Certified Therapeutic Recreation Specialist (CTRS)
• Transformed in 2001 to a linear computer-based exam (200 items)
• Transformed in 2002 to a computerized mastery exam

25. NCTRC Exam
• Administered three times each year
• 5-day testing window
• Conducted at Prometric Testing Centers across the US, Canada, and Puerto Rico
• Offered to qualified candidates who have been granted professional eligibility by NCTRC

26. NCTRC Job Analysis
• Assures that the test specifications and the exam are related to the practice of Therapeutic Recreation
• Delineates the important tasks and knowledge deemed necessary for competent practice
  • Job tasks – practical experience
  • Knowledge areas – theoretical knowledge
• Conducted in 1987, 1997, and 2007

27. NCTRC EMC
Exam Management Committee functions:
• To monitor and make revisions to NCTRC’s testing procedures
• To work with and monitor the administration of NCTRC’s tests; such administration may be contracted to private testing services
• To collect the data necessary to periodically check for adverse impact or inadvertent bias
• To collect the data necessary to demonstrate the reliability and validity of the testing procedures
• To ensure reasonable accommodation of testing procedures for individuals with disabilities

28. NCTRC EMC
The responsibilities of the Exam Management Committee include:
• Item-writing committee
• Item-review committee
• Updating and maintaining the exam reference list
• Reviewing current items in the operational pool for overlap and currency
• Job analysis
• Updating and maintaining practice tests

29. Customer Preparation
• Certification standards
• Conference workshops
• Exam content outline
• Practice exam
• Sample items
• Reference list

30. NCTRC CMT
• Base test consists of 90 multiple-choice items (87 minutes)
  • 15 items are pretest items that are not part of the score
• Depending on performance, a candidate can receive up to 6 additional testlets (14 minutes each)
• One testlet equals 15 items
• Each testlet mirrors the Exam Content Outline (same proportions as the full exam)
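To make those numbers concrete, here is a small sketch of the stage lengths they imply, assuming the 14 minutes applies to each additional testlet.

```python
# Figures from the slide: 90-item base test (including 15 unscored pretest
# items) in 87 minutes, plus up to 6 testlets of 15 items in 14 minutes each.
BASE_ITEMS, BASE_MINUTES = 90, 87
TESTLET_ITEMS, TESTLET_MINUTES = 15, 14
MAX_EXTRA_TESTLETS = 6

for extra in range(MAX_EXTRA_TESTLETS + 1):
    items = BASE_ITEMS + extra * TESTLET_ITEMS
    minutes = BASE_MINUTES + extra * TESTLET_MINUTES
    print(f"{extra} extra testlet(s): {items} items, {minutes} minutes")

# Longest possible exam: 90 + 6 * 15 = 180 items in 87 + 6 * 14 = 171 minutes.
```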

31. NCTRC Exam Content Outline

32. NCTRC Exam Experience
• Immediate feedback
• Faster test time
• Candidate satisfaction
  • Some dissatisfaction and confusion
• Less exposure of the item pool via random assignment of testlets
• Computer-based exam a “plus” with candidates
• Positive feedback re: NCTRC prep material

33. NCTRC Special Accommodations
• NCTRC approval process
• Relatively large percentage of special accommodations
• Advance registration with designated reservation
• Ability to offer a wide range of accommodations
• CBT and CMT conducive to special needs

34. Speaker Contact Information
F. Jay Breyer, Ph.D.
Executive Director of Psychometric Consulting Services, Prometric
2000 Lenox Drive, Lawrenceville, NJ 08648
e-mail: jay.breyer@prometric.com

35. Speaker Contact Information
Bob Riley, Ph.D., CTRS
NCTRC Executive Director
7 Elmwood Dr, New City, NY 10956
briley@nctrc.org
