Benchmark Advisory Test (BAT) Update




Presentation Transcript


  1. Benchmark Advisory Test (BAT) Update BILC Conference Athens, Greece Dr. Ray Clifford and Dr. Martha Herzog 22-26 June 2008

  2. This Update Has Four Parts • Why we began the BAT project. • The role of proficiency standards. • Why the BAT follows a construct-based, evidence-centered test design. • When the BAT will be available.

  3. Why we began the BAT project • In 2006, a survey was conducted on the desirability of a BILC-sponsored “benchmark” test with advisory ratings.

  4. Participation in the Survey • 16 countries responded to the survey: Austria, Bulgaria, Canada, Denmark, Estonia, Finland, Germany, Hungary, Italy, Latvia, Lithuania, Poland, Romania, Spain, Sweden, and Turkey

  5. Survey Results • Would your country use a Benchmark Test if one were available? Definitely yes: 8 Probably yes: 5 Perhaps: 2 Most likely not: 0 Definitely not: 1

  6. Survey Results • Does your country use “plus levels” when assigning STANAG ratings? Definitely yes: 3 Probably yes: 0 Perhaps: 1 Most likely not: 1 Definitely not: 11

  7. Survey Results • Would you like to have plus levels incorporated into a Benchmark Test? Definitely yes: 5 Probably yes: 5 Perhaps: 2 Most likely not: 2 Definitely not: 2

  8. Conclusions • A “benchmark” test would be welcomed by most countries. • (The scores should be advisory in nature.) • Providing “plus” level ratings would allow greater fidelity in making comparisons. • BILC should proceed with plans to: • Develop a benchmark STANAG test of reading comprehension. • Explore internet delivery options.

  9. The Role of Proficiency Standards Dr. Martha Herzog BILC Athens, Greece 22-26 June 2008

  10. TRAINING IN TEST DESIGN • Language Testing Seminar • 20 iterations • 265 participants • 38 nations • 4 NATO officers • Facilitators from 10 nations

  11. BENCHMARK TESTS • Tests of all four skills • Measuring Levels 1 through 3

  12. STANDARDS • All standards have three components: • Content • Tasks • Accuracy

  13. TEAMWORK • The Working Group functions as a team • 13 members from 8 nations • Contributions from many other nations

  14. Summary • STANDARDS • TRAINING IN TEST DESIGN • TEAMWORK • TECHNOLOGY

  15. Why does the BAT follow a construct-based, evidence-centered test design?

  16. Why does the BAT follow a construct-based, evidence-centered test design? Because a CBT, ECD design solves a major problem encountered when testing proficiency in the receptive skills, i.e., in testing Reading and Listening.

  17. In contrast to traditional test development procedures, CBT allows direct (rather than indirect) application of the STANAG 6001 Proficiency Scales to the development and scoring of Reading and Listening Proficiency Tests.

  18. Test Development Procedures: Norm-Referenced Tests • Create a table of test specifications. • Train item writers in item-writing techniques. • Develop items. • Test the items for difficulty, discrimination, and reliability by administering them to several hundred learners. • Use statistics to eliminate “bad” items. • Administer the resulting test. • Report results compared to other students.
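The difficulty, discrimination, and reliability checks in step 4 above are classical item statistics. As a rough illustration (with invented tryout data, not BAT data), item difficulty is the proportion of examinees answering correctly, and discrimination can be estimated as the correlation between an item and the score on the remaining items:

```python
# Invented tryout data, not BAT data: classical difficulty (proportion
# correct) and discrimination (corrected item-total correlation).
import numpy as np

def item_statistics(responses):
    """responses: examinees x items array of 1 (right) / 0 (wrong)."""
    difficulty = responses.mean(axis=0)              # p-value per item
    totals = responses.sum(axis=1)
    discrimination = []
    for i in range(responses.shape[1]):
        rest = totals - responses[:, i]              # score on the other items
        discrimination.append(np.corrcoef(responses[:, i], rest)[0, 1])
    return difficulty, np.array(discrimination)

data = np.array([[1, 1, 0, 1],
                 [1, 0, 0, 1],
                 [1, 1, 1, 1],
                 [0, 0, 0, 1],
                 [1, 1, 0, 0],
                 [0, 0, 0, 1]])
difficulty, discrimination = item_statistics(data)
print("difficulty:", difficulty)            # extreme values flag "bad" items
print("discrimination:", discrimination)    # low or negative values flag "bad" items
```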

  19. Test Development Procedures: Norm-Referenced Tests (cont.) • Each test administration yields a total score. • However, setting “cut” scores or “passing” scores on norm-referenced tests is a major challenge. • And relating scores on norm-referenced tests to a polytomous set of criteria (such as levels in the STANAG 6001 or other proficiency scales) is even more problematic.

  20. A Traditional Method of Setting Cut Scores [Diagram: groups of “known” ability at Levels 1, 2, and 3 take the test to be calibrated.]

  21. The Results One Hopes For: Distinct “cut” scores between the scores of the calibration groups [Diagram: the Level 1, 2, and 3 groups’ score ranges are cleanly separated.]

  22. The Results One Always Gets: [Diagram: bands of overlapping test scores across the Level 1, 2, and 3 groups of “known” ability.]

  23. No matter where the cut scores are set, they are wrong for someone. [Diagram: within the overlapping range, where should the cut score be set?]

  24. Why is this “overlap” in scores always present? • A single test score on a multi-level test… • Gives equal credit for every right answer regardless of its proficiency level. • Camouflages by-level abilities. • Is a “compensatory” score. • Proficiency abilities… • Are by definition “non-compensatory”. • Require demonstration of sustained ability at each level.
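A small numerical sketch of the point above, with invented subscores and an assumed 70% “sustained” threshold: two examinees can earn the same compensatory total while only one shows sustained, level-by-level ability.

```python
# Invented subscores and an assumed 70% "sustained" threshold: the same
# compensatory total hides very different by-level profiles.

SUSTAINED = 70

def compensatory_total(scores):
    """One pooled number: every right answer counts equally."""
    return sum(scores.values()) / len(scores)

def noncompensatory_floor(scores):
    """Highest level sustained, requiring sustained ability at every level below it."""
    floor = 0
    for level in sorted(scores):
        if scores[level] < SUSTAINED:
            break
        floor = level
    return floor

a = {1: 95, 2: 80, 3: 30}   # sustained through Level 2
b = {1: 70, 2: 60, 3: 75}   # scattered credit, sustained only at Level 1

print(compensatory_total(a), compensatory_total(b))        # both 68.33...
print(noncompensatory_floor(a), noncompensatory_floor(b))  # 2 vs 1
```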

  25. A Better Test Design: Construct-Based Proficiency Testing • Uses a “floor” and “ceiling” approach similar to that used in Speaking and Writing tests. • The proficiency rating is assigned based on two separate scores: • A “floor” proficiency level of sustained ability across a range of tasks and contexts specific to that level. • A “ceiling” proficiency level of non-sustained ability at the next higher proficiency level.

  26. Therefore Construct-Based Testing • Tests each proficiency level separately. • Three tests for levels 1 through 3. • Or three subtests within a longer test. • Rates each level-specific test separately. • Applies the “floor” and “ceiling” criteria used in rating productive skills using a scale such as: • Sustained (consistent evidence) = 70% to 100% • Developing (present, inconsistent) = 55% to 65% • Emerging (some limited evidence) = 40% to 50% • Random (no visible evidence) = 0% to 35%
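A minimal sketch of how this floor/ceiling logic could be applied to per-level subscores. The band boundaries come from the slide (gaps such as 66-69% are interpolated here); treating a “Developing” result at the next higher level as a plus rating is an illustrative assumption, not a documented BAT rule.

```python
# Sketch of the floor/ceiling logic using the bands on slide 26.
# The "+" rule below is an assumption for illustration only.

def band(pct):
    """Map a per-level percent-correct to the slide's evidence bands."""
    if pct >= 70:
        return "Sustained"    # consistent evidence
    if pct >= 55:
        return "Developing"   # present, inconsistent
    if pct >= 40:
        return "Emerging"     # some limited evidence
    return "Random"           # no visible evidence

def advisory_rating(subscores):
    """subscores: {level: percent correct on that level-specific subtest}."""
    floor = 0
    for level in sorted(subscores):        # floor = highest sustained level
        if band(subscores[level]) == "Sustained":
            floor = level
        else:
            break
    nxt = subscores.get(floor + 1)         # ceiling: next level's evidence
    plus = nxt is not None and band(nxt) == "Developing"   # assumed rule
    return "Level {}{}".format(floor, "+" if plus else "")

print(advisory_rating({1: 90, 2: 75, 3: 60}))   # -> Level 2+
print(advisory_rating({1: 85, 2: 50, 3: 20}))   # -> Level 1
```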

  27. Does it make a difference? Consider the following example.

  28. A Total Score (where 195 = Level 1) Versus Construct-Based Scoring [Table comparing a single total score with separate by-level scores, built up step by step in the original animation.]

  39. Scores on Construct-Based Tests are: valid, easily explained, and informative! But how is a CBT developed?

  40. Test Development Procedures: Construct-Based Proficiency Tests • Define each proficiency level as a construct to be tested. • Follow a construct-based, evidence-centered test design. • Train item writers • In the proficiency scales. • In matching text types to the tasks in the scales. • In item writing.

  41. Test Development Procedures: Construct-Based Proficiency Tests • Develop items that exactly match all of the specifications for each level in the proficiency scale, with... • Examinee task aligned with the author’s [or the speaker’s] purpose. • Level-appropriate topics and contexts.
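As an illustration of the specification matching described here, the record below pairs an author purpose with the examinee task it licenses. The field names and purpose-task pairings are invented for the sketch; they are not the actual BAT level specifications.

```python
# Field names and the purpose-task pairings below are invented for the
# sketch; they are not the actual BAT level specifications.
from dataclasses import dataclass

ALIGNED_TASKS = {
    "inform":   "locate and understand stated facts",
    "instruct": "follow the steps of a procedure",
    "persuade": "identify the author's position and support",
}

@dataclass
class ItemSpec:
    level: int
    author_purpose: str
    examinee_task: str
    topic: str

def is_aligned(spec):
    """Does the examinee task match what the author's purpose licenses?"""
    return ALIGNED_TASKS.get(spec.author_purpose) == spec.examinee_task

item = ItemSpec(level=2, author_purpose="instruct",
                examinee_task="follow the steps of a procedure",
                topic="equipment maintenance")
print(is_aligned(item))   # True -> would pass a specifications review
```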

  42. Test Development Procedures: Construct-Based Proficiency Tests • Use “alignment”, “bracketing”, and “modified Angoff” item review and quality control procedures. • A specifications review to ensure alignment of author purpose, text type, and reader task. • A bracketing review to check the adequacy of the item’s response options for test takers at higher and at lower proficiency levels. • Modified Angoff ratings of item difficulty for “at-level” test takers to set passing levels.
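The modified Angoff procedure named above is a standard standard-setting method: judges estimate, item by item, the probability that a borderline “at-level” examinee answers correctly, and the averaged estimates are summed into a cut score. A sketch with invented ratings; the actual BAT panel data are not shown in this deck:

```python
# Standard modified-Angoff computation with invented judge ratings.
import statistics

# Each row: one judge's estimates, per item, of the probability that a
# borderline "at-level" reader answers correctly.
judge_ratings = [
    [0.80, 0.65, 0.90, 0.70],   # judge 1
    [0.75, 0.60, 0.85, 0.75],   # judge 2
    [0.85, 0.70, 0.95, 0.65],   # judge 3
]

# Average across judges per item, then sum over items -> expected raw
# score of a borderline examinee, used as the cut score for that level.
per_item = [statistics.mean(col) for col in zip(*judge_ratings)]
cut_score = sum(per_item)
print("cut score: {:.2f} of {} items".format(cut_score, len(per_item)))
```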

  43. Test Development Procedures: Construct-Based Proficiency Tests • Use data from the Angoff reviews to define “sustained ability” for each level of the test. • Assemble the “good” items into level-specific tests or subtests. • Do validation testing. • Use statistical analyses to confirm reviewer ratings.

  44. Test Development Procedures: Construct-Based Proficiency Tests • Replace items that do not “cluster” or act like the other items at each level. • Score and report results for each level using “sustained” proficiency criteria. • Continue to build the item databases to enable: • Random selection of test items for multiple forms. • Computer adaptive testing.
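As a sketch of the “multiple forms” goal in the last bullet, a fresh form can be drawn at random from each level’s item bank, keeping one level-specific subtest per level. The bank contents and form size below are invented:

```python
# Invented item bank and form size: drawing a fresh multi-level form,
# one separately scored subtest per proficiency level.
import random

item_bank = {
    1: ["L1-{:03d}".format(i) for i in range(40)],   # hypothetical item IDs
    2: ["L2-{:03d}".format(i) for i in range(40)],
    3: ["L3-{:03d}".format(i) for i in range(40)],
}

def assemble_form(items_per_level=20, seed=None):
    """Randomly sample a level-specific subtest from each level's bank."""
    rng = random.Random(seed)
    return {lvl: rng.sample(bank, items_per_level)
            for lvl, bank in item_bank.items()}

form = assemble_form(items_per_level=5, seed=42)
for lvl, items in form.items():
    print("Level {} subtest: {}".format(lvl, items))
```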

  45. What do the results of a CBT Reading proficiency test look like? Here are some initial results from the BAT English Reading Proficiency Test.

  46. [Charts: results on the Level 1 Test, the Level 2 Test, and the Level 3 Test.]

  47. When will BAT be available? • Funds have been set aside for administering and scoring 200 free advisory tests. • All four skills will be tested. • The BAT Reading, Listening, and Writing tests will be online tests. • The Speaking test will be conducted over the telephone. • These tests are to be used in test norming or calibration studies.

  48. When will BAT be available? • We anticipate the following timeline: • About October 2008: directions on how to apply will be sent out. • About November 2008: applications will be submitted. • About December 2008: applications will be reviewed and decisions made about how the 200 tests will be allocated. • Between February and June 2009: the first round of advisory testing will be conducted.

  49. When will BAT be available? • More specific information will be sent out after consultation with ACT.

  50. Are there any questions?
