A Closer look at Computer Adaptive Tests (CAT) and Curriculum-Based Measurement (CBM) — Making RTI progress monitoring more manageable and effective.


Presentation Transcript


  1. A Closer look at Computer Adaptive Tests (CAT) and Curriculum-Based Measurement (CBM) — Making RTI progress monitoring more manageable and effective. Dr. Edward S. Shapiro, Director, Center for Promoting Research to Practice, Lehigh University, Bethlehem, PA. CESA #4 and Renaissance Learning, West Salem, WI, December 5, 2012

  2. Why is Lehigh known?

  3. Why Lehigh Is Known

  4. Big Picture and Key Points • RTI Self-Assessment at School Level • RTI and Assessment Components • Universal Screening • Progress Monitoring • RTI and Curriculum-Based Measurement (CBM) • RTI and Computer Adaptive Testing (CAT) • Some case examples from CAT

  5. RTI Self-Assessment • Complete self-assessment at school level • Report out group readiness • Next steps to implementation?

  6. RTI Represents Systems Change • RTI aligns with the school improvement process • RTI is: • A dramatic redesign of general and special education • A comprehensive service delivery system that requires significant changes in how a school serves all students (NASDSE, 2006)

  7. Wisconsin Vision of RTI

  8. National Perspective • 1,390 respondents (K-12 administrators) to survey (margin of error 3–4% at a 95% confidence interval) • 94% of districts are in some stage of implementing RTI – up from 60% in 2008 and 44% in 2007 • Only 24% of districts reached full implementation • Primary implementation is elementary level with reading leading the way • www.spectrumk12.com

  9. National Perspective on RTI • www.spectrumk12.com

  10. Two Key Assessment Processes in RTI • Universal Screening • Progress Monitoring

  11. Standards Aligned System-Balanced Assessment • Wisconsin Balanced Assessment Recommendations within RTI

  12. Formative Assessment • A planned process • Used to adjust ongoing teaching and learning to improve students’ achievement of intended instructional outcomes • Classroom-based • Formal and informal measures • Diagnostic: ascertains, prior to and during instruction, each student’s strengths, weaknesses, knowledge, and skills to inform instruction

  13. Benchmark Assessment Provides feedback to both the teacher and the student about how the student is progressing towards demonstrating proficiency on grade level standards.

  14. Summative Assessment Seeks to make an overall judgment of progress made at the end of a defined period of instruction. Often used for grading, accountability, and/or research/evaluation

  15. Universal Screening Process - A Benchmark Assessment Process What is Universal Screening? • Administered to all students at all levels, K-12 • Universal screening is a process that includes assessments, but also includes record review and historical information • Brief measure • Its use is primarily to determine who might be at risk • Some screeners can do more

  16. Reviewing the data • Universal screening data are typically collected in the fall, winter, and spring. • Key questions • Identify how the group is doing as a whole • Determine who is individually in need of intervention beyond core instruction • Some screeners can give us info about how to focus instruction
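
To make the review step concrete, here is a minimal sketch of flagging students for supplemental support from fall screening scores. The cut scores, student names, and numbers are all hypothetical placeholders, not published AIMSweb or STAR norms.

```python
# Minimal sketch of a screening review, assuming hypothetical grade-level
# cut scores (illustrative only, NOT published norms).
FALL_CUT_WCPM = {2: 55, 3: 70}  # words correct per minute, invented values

students = [
    {"name": "A", "grade": 2, "fall_wcpm": 48},
    {"name": "B", "grade": 2, "fall_wcpm": 82},
    {"name": "C", "grade": 3, "fall_wcpm": 65},
]

# Flag anyone below the grade-level cut; the data team would weigh this
# alongside record review and historical information before deciding.
for s in students:
    if s["fall_wcpm"] < FALL_CUT_WCPM[s["grade"]]:
        print(f"{s['name']} (grade {s['grade']}): {s['fall_wcpm']} WCPM -> consider intervention")
```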

  17. Potential Choices of Measures • National RTI Center Tools Chart • Two types of measures • Curriculum-Based Measurement • Benchmark, Summative • Computer Adaptive Tests • Benchmark, Formative, Summative

  18. CBM and Assessment • CBM designed as an INDEX of overall outcomes of academic skills in a domain • CBM is a General Outcomes Measure • Tells you HOW a student is doing OVERALL, not specifically which skills they have and don’t have (not formative or diagnostic)

  19. McDonald’s: How Do We Know They Are Doing Well as a Company? • General Outcomes Measure of the company’s success • What is the one item that tells the CEO and stockholders how they are doing?

  20. General Outcomes Measures - Examples • The medical profession measures height, weight, temperature, and/or blood pressure. • Companies report earnings per share. • Wall Street measures the Dow Jones Industrial Average. • The General Outcomes approach for reading measures Oral Reading Fluency.

  21. Characteristics of CBM • Standardized format for presentation • Material chosen is controlled for grade level difficulty • Material presented as brief, timed probes • Rate of performance used as metric • Results provide index of student progress in instructional materials over time • Indexes growth toward long-term objectives • Measures are not designed to be formative or diagnostic
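
Because rate of performance is the CBM metric, a one-minute oral reading probe is scored as words correct per minute (WCPM): words read minus errors, prorated to a full minute. A minimal sketch with invented probe numbers:

```python
def wcpm(words_read: int, errors: int, seconds: float = 60.0) -> float:
    """Words correct per minute from a timed oral reading probe.

    Prorates to a full minute if the probe ended early (e.g., the
    student finished the passage before time expired).
    """
    words_correct = max(words_read - errors, 0)
    return words_correct * 60.0 / seconds

# Illustrative probe: 112 words attempted, 5 errors, full 60-second timing.
print(wcpm(112, 5))             # 107.0 WCPM
# Student finished a 98-word passage in 50 seconds with 2 errors.
print(wcpm(98, 2, seconds=50))  # 115.2 WCPM
```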

  22. Characteristics of CBM Measures • Can be used in a formative way through error analysis, but that was not their design • Overall Reading Performance = Oral Reading Fluency (primary measure) • Early Literacy Measures = Phonics/Alphabetic Principles • Math = Computational objectives • Math = Concepts/applications of mathematics

  23. CBM and Reading Assessment Measures • Early Literacy • Phoneme Segmentation Fluency • Initial Sound Fluency • Nonsense Word Fluency • Letter Identification Fluency • Reading • Oral Reading Fluency • Maze • Retell Fluency • AIMSweb as example

  24. Types of CBM Math Assessment • M-COMP = Computation Skills • Assesses many skills across the grade • Samples the skills expected to be acquired • Grade-based assessment • Reflects performance across time • M-CAP = Concepts/Applications Skills

  25. Example of MCOMP & MCAP Measures • Grade 3 MCOMP Example • Grade 5 MCOMP Example • Example of MCAP – Grade 3 • Example of MCAP – Grade 5

  26. AIMSweb – MCOMP Domains Assessed

  27. AIMSweb- MCAP Domains Assessed

  28. Time Limits

  29. R-CBM Screening- Grade 3

  30. R-CBM Screening • Instructional Recommendations • Link to Lexile Level and Instructional Level Book Recommendations (Gr 3, Lawnton-Scores & Percentiles) • Prediction to state test also available • Links to Common Core also reported

  31. Data Outcomes and Interpretation • At each grade, one identifies the distribution of students at each level of risk, as defined by the user • Data used by data team to identify students in need of supplemental instruction • Data reflect change in GROUPS over time (see the sketch below)
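
One way to express change in groups over time is the percentage of students at each risk tier in each screening window. A sketch with hypothetical tier assignments for one grade:

```python
from collections import Counter

# Hypothetical tier assignments across two screening windows (100 students).
fall   = ["tier1"] * 60 + ["tier2"] * 25 + ["tier3"] * 15
winter = ["tier1"] * 68 + ["tier2"] * 22 + ["tier3"] * 10

def tier_percentages(tiers):
    """Percentage of students at each risk tier."""
    counts = Counter(tiers)
    total = len(tiers)
    return {t: round(100 * counts[t] / total, 1) for t in sorted(counts)}

print("Fall:  ", tier_percentages(fall))    # {'tier1': 60.0, 'tier2': 25.0, 'tier3': 15.0}
print("Winter:", tier_percentages(winter))  # {'tier1': 68.0, 'tier2': 22.0, 'tier3': 10.0}
```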

  32. Exercise 1 for Groups • Show data for school- Use RCBM • Have groups interpret the outcomes • Use data from CD as example • Extract grade 2 and 3 data, Winter only. Have the groups identify goals for winter. • Then show the Winter to spring data and have groups draw conclusions about the data.

  33. Keys to Interpretation of CBM Data • Change over time interpreted differently for reading and math • Change from end of one year to start of next (summer decline?) • Implications for instruction?
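
A common way to quantify change over time in CBM is the rate of improvement (ROI): the least-squares slope of scores against weeks of instruction. A minimal sketch with invented data points:

```python
def roi(weeks, scores):
    """Ordinary least-squares slope: score gained per week of instruction."""
    n = len(weeks)
    mw, ms = sum(weeks) / n, sum(scores) / n
    num = sum((w - mw) * (s - ms) for w, s in zip(weeks, scores))
    den = sum((w - mw) ** 2 for w in weeks)
    return num / den

# Invented fall-to-winter RCBM scores at weeks 0, 9, and 18.
print(round(roi([0, 9, 18], [62, 74, 88]), 2))  # ~1.44 WCPM gained per week
```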

  34. AIMSweb RCBM Across Time

  35. MCOMP Across Grades, Time by Percentiles

  36. MCAP Across Grades and Time by Percentiles

  37. Some Key Elements of Interpreting AIMSweb CBM • Within- and across-grade growth is evident for reading (RCBM) but not math • Across-grade growth in reading shows stepwise improvements after the “summer decline” • In math, within-year change over the year can be very small • Across-grade growth in math is not possible to determine from math CBM, i.e., each grade is not necessarily higher scoring than the previous grade • Interpretation within grade rather than across grade is stronger • Why? Due to the nature of within-grade measures: math measures are more specific skill probes than general outcome measures

  38. Computer Adaptive Tests

  39. What are Computer Adaptive Tests? • Based on IRT (Item Response Theory) method of test construction • Adjusts items administered based on student responses and the difficulty of items • Tests have huge item banks • Items are not timed; scoring is based on accuracy of response • Careful calibration pinpoints skills acquired and in need of teaching in a skill sequence
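
To make the adaptive mechanism concrete, below is a minimal sketch of one-parameter (Rasch) item selection: after each response the ability estimate theta is updated, and the next item is the unused one whose difficulty is closest to theta, the most informative choice under the Rasch model. The tiny item bank and the simple update rule are illustrations, not Renaissance Learning's actual algorithm.

```python
import math

def p_correct(theta: float, b: float) -> float:
    """Rasch model: probability of a correct response at ability theta
    for an item of difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def next_item(theta, bank, used):
    """Pick the unused item whose difficulty is closest to the current
    ability estimate (maximum information under the Rasch model)."""
    return min((i for i in range(len(bank)) if i not in used),
               key=lambda i: abs(bank[i] - theta))

# Tiny illustrative item bank (difficulties on the theta scale).
bank = [-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0]
theta, used = 0.0, set()
responses = [1, 1, 0, 1]  # pretend student answers: right, right, wrong, right

for r in responses:
    i = next_item(theta, bank, used)
    used.add(i)
    # Simple stochastic-approximation update: move theta in the direction
    # of the surprise (observed minus expected correctness).
    theta += 0.7 * (r - p_correct(theta, bank[i]))
    print(f"item b={bank[i]:+.1f}  answer={'right' if r else 'wrong'}  theta={theta:+.2f}")
```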

  40. CAT Methods and Measures • Administered entirely by computer • 15–25 minutes per administration • Skills focused within domains • Not all students take the same items; it depends on which items are answered correctly and incorrectly • Scaled Score is the KEY metric

  41. CAT Methods and Measures • Provides a student’s relative standing to peers on a national distribution • Provides student’s goals for growth • Provides indication of a group’s performance (grade, school, district) relative to what is expected nationally • Example for today: STAR Assessment (Enterprise) from Renaissance Learning • Other similar measures exist, see NCRTI charts • Study Island, SRI, MAP

  42. STAR Assessments • STAR Early Literacy (pre-K - 3) • STAR Reading (Gr 1 – 12) • STAR Math (Gr 1 – 12)

  43. STAR Scaled Score - Critical • Metric that places a student on a distribution from K through grade 12 • Weight analogy • STAR Scaled Score ranges: • Early Literacy (PreK – 3): 300 – 900 • Reading (K-12): 0 to 1400 • Math (1 – 12): 0 to 1400 • Note the important difference in interpretation compared to CBM (AIMSweb) measures across grades and time
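
Scaled scores of this kind are typically a fixed linear transform of the IRT ability estimate, which is what lets a single number place students on one K-12 distribution. The constants below are invented for illustration; STAR's actual transform is proprietary.

```python
def scaled_score(theta: float, slope: float = 100.0, intercept: float = 700.0,
                 lo: int = 0, hi: int = 1400) -> int:
    """Map an IRT ability estimate onto a reporting scale via a linear
    transform, clamped to the published score range. Constants are
    illustrative, not STAR's actual (proprietary) parameters."""
    return int(min(max(slope * theta + intercept, lo), hi))

# A below-average, average, and above-average theta on the same scale.
for theta in (-1.5, 0.0, 2.0):
    print(theta, "->", scaled_score(theta))
```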

  44. STAR Reading Scaled Scores

  45. STAR Reading Scaled Score Across Time and Grades

  46. AIMSweb RCBM Across Time

  47. STAR Math Scaled Score
