
Monitoring Student Progress to Develop Standards-Based IEPs


Presentation Transcript


  1. Monitoring Student Progress to Develop Standards-Based IEPs OSEP GSEG Project Manager’s Meeting Gerald Tindal, Ph.D., University of Oregon; Martin Ikeda, Ph.D., Iowa Department of Education

  2. Overview • Intent is to provide an overview of some ideas we have been thinking about • Jerry has researched CBM for 25+ years • Marty has supported implementation for 12+ years • Conversational presentation: we will take comments and questions as we go • Representative of our best thinking to date

  3. “Big Ideas” • CBM is a viable tool for decision making about participation in the alternate assessment against modified academic achievement standards • CBM is a viable tool for decision making about progress against grade level standards

  4. Issues around 2%: • IEP Participation Decision: • Objective evidence demonstrating that the student’s disability has precluded the student from achieving grade-level proficiency in the content area assessed. • The IEP team is reasonably certain that, even if significant growth occurs, the student is not likely to achieve grade level proficiency within the year covered by the IEP • IEP Development Issues • IEP goals based on State grade level academic content standards • Means for an annual determination of progress

  5. Our interpretation • Need a way to operationalize proficient performance on grade-level content standards • State test: a once-a-year (1×/year) snapshot • Need a way to predict if the child can realistically achieve grade-level proficiency within one year • “1-2 years behind?” • Need a way to monitor performance toward the operationalization

  6. What we want: • Align decisions about participation, present levels of academic achievement and functional performance and IEP goals referencing grade-level proficiency • Assess progress more frequently than annually, so that instructional effects can be assessed and changes made to programs if needed

  7. Consider Golf • Many Components of a Good Golf Game • Grip • Choosing the Correct Club • Backswing & Follow Through • Putting Skill • General Outcome Measure for Golf • Number of Strokes

  8. Curriculum-Based Measures: A potential solution • CBM is a validated technique for a variety of decisions, but particularly for monitoring performance over time • General Outcomes • Brief • Repeatable • Sensitive to Changes in Performance over Time • Operationalize content standards at grade level (ambitious) • Support instructional decision-making

  9. Jacob: Grade 5 • Grade level proficiency standard: • 75 wpm local norm • 100 wpm published performance level • Jacob: 25 wpm in Grade level material • Problem?

  10. Illustration: Jacob • Examination of performance against other 5th graders in the district (local norm) • Data generated during Spring for fifth graders on Grade 5 material • In the Fall, Jacob would be given probes built from Grade 5 year-end material • [Figure: Jacob's oral reading fluency score plotted against district percentile bands (10th, 25th, 50th, 75th, 90th); y-axis: Oral Reading Fluency, roughly 5-150 words correct per minute]

  11. What are realistic growth rates in reading? • Grade 1: 2 words correct/week • Grade 2: 1.5 words correct/week • Grade 3: 1 word correct/week • Grade 4: 0.85 words correct/week • Grade 5: 0.50 words correct/week • Grade 6: 0.30 words correct/week • At these rates it may be difficult for Jacob to “catch up” by year’s end (see the projection sketch below). The IEP team might decide he is a candidate for the Alternate Assessment against Modified Academic Achievement Standards.
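
To make the “catch up” judgment concrete, the arithmetic behind this slide can be sketched as follows. The 36-week instructional year, the function names, and the reuse of Jacob's numbers from the earlier slide are illustrative assumptions, not part of the original presentation.

```python
# Sketch of the catch-up projection implied by the slide's growth rates.
# ASSUMPTION: a 36-week instructional year; Jacob's numbers come from
# the earlier slide (25 wcpm now, 75 wcpm local-norm proficiency).

TYPICAL_GROWTH = {1: 2.0, 2: 1.5, 3: 1.0, 4: 0.85, 5: 0.50, 6: 0.30}  # wcpm/week

def projected_fluency(current_wcpm: float, grade: int, weeks_left: int = 36) -> float:
    """Project end-of-year oral reading fluency from a realistic growth rate."""
    return current_wcpm + TYPICAL_GROWTH[grade] * weeks_left

def likely_to_reach_proficiency(current_wcpm, grade, target_wcpm, weeks_left=36):
    return projected_fluency(current_wcpm, grade, weeks_left) >= target_wcpm

print(projected_fluency(25, 5))                # 43.0 wcpm projected
print(likely_to_reach_proficiency(25, 5, 75))  # False: 43 < 75
```

Even growing at the realistic Grade 5 rate, Jacob projects to roughly 43 wcpm by year's end, well short of the 75 wcpm local norm, which is the arithmetic behind the slide's conclusion.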

  12. Illustration: Juarez • In the Fall, on year-end material, Juarez is reading 80 wpm • Students reading at this rate are getting meaning from text • They may not be reading fluently enough to earn a proficient score on the AYP test • However, Juarez is performing within grade level and is likely to “catch up” by year’s end • It would be defensible for the IEP team to conclude that Juarez is not a candidate for the alternate assessment against modified academic achievement standards and should instead participate in the general assessment with accommodations • [Figure: Juarez's oral reading fluency score plotted against district percentile bands (10th, 25th, 50th, 75th, 90th) and the performance standard; y-axis: Oral Reading Fluency, roughly 5-150 words correct per minute]

  13. CBM and Participation Decisions • Potentially useful framework • Grade Level Proficiency • Projected Growth • Establishing alignment between the CBM metric, Grade Level Content Standards, and Grade Level Proficiency

  14. Alternate Assessments based on Modified Academic Achievement Standards Connecting CBM in Reading with Grade Level Standards Gerald Tindal University of Oregon

  15. Alternate Forms • Progress monitoring requires alternate forms to allow meaningful interpretation of student data across time. Without such cross-form equivalence, changes in scores from one testing session to the next are difficult to attribute to changes in student skill or knowledge rather than to differences between forms. • As students' reading skills progress through the different skill areas within the broad construct of reading, different reading measures are needed to keep tracking the progress students are making as developing readers

  16. Technical Reports • Alonzo, J., & Tindal, G. (2007). The Development of Early Literacy Measures for Use in a Progress Monitoring Assessment System: Letter Names, Letter Sounds, and Phoneme Segmenting (Technical Report #39). University of Oregon, Eugene: Behavioral Research and Teaching. • Alonzo, J., & Tindal, G. (2007). The Development of Word and Passage Reading Fluency Measures for Use in a Progress Monitoring Assessment System (Technical Report #40). University of Oregon, Eugene: Behavioral Research and Teaching. • Alonzo, J., Liu, K., & Tindal, G. (2007). Examining the Technical Adequacy of Reading Comprehension Measures in a Progress Monitoring Assessment System (Technical Report #41). University of Oregon, Eugene: Behavioral Research and Teaching.

  17. Design of Alternate Measures • Defined the universe of items in a pilot • Used a common-item, nonequivalent-groups design • Scored tests at the item level • Reassembled items into equivalent forms

  18. Distribution of the Measures Across the Grades

  19. Data Analyses • One-parameter Rasch model • Estimates the difficulty of individual test items and the ability level of each individual test taker • Standard error of measure • Mean square outfit to evaluate goodness of fit (values in the range of 0.50 to 1.50)
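
The mean square outfit statistic named here is standard Rasch fit machinery: for each item, the average over persons of squared standardized residuals. A minimal sketch follows; the 0.50-1.50 flagging range comes from the slide, while the simulated data and function names are illustrative assumptions.

```python
import numpy as np

def outfit_mean_square(responses, abilities, difficulties):
    """Outfit MnSq per item under the one-parameter Rasch model.

    responses:    persons x items matrix of 0/1 scores
    abilities:    Rasch ability estimate per person (theta)
    difficulties: Rasch difficulty estimate per item (b)
    """
    logits = abilities[:, None] - difficulties[None, :]
    p = 1.0 / (1.0 + np.exp(-logits))        # P(correct) under the Rasch model
    variance = p * (1.0 - p)
    z_sq = (responses - p) ** 2 / variance   # squared standardized residuals
    return z_sq.mean(axis=0)                 # averaged over persons, per item

# Illustrative use: flag items outside the 0.50-1.50 range from the slide.
rng = np.random.default_rng(0)
theta, b = rng.normal(size=(200,)), rng.normal(size=(30,))
p_true = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
x = (rng.random((200, 30)) < p_true).astype(float)
outfit = outfit_mean_square(x, theta, b)
print(np.where((outfit < 0.50) | (outfit > 1.50))[0])  # misfitting item indices
```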

  20. Letter Names, Sounds, Segmenting • 16 letter names exceeded a mean square outfit of 1.5 but were included given low SEM; 3 letters (g, H, and Y) were found not to fit • 16 letter sounds exceeded a mean square outfit of 1.5 but were included given low SEM; 6 letter sounds (B, C, d, j, p, and Qu) were found not to fit • A total of 181 words used in segmenting remained in the item bank

  21. Word Reading Fluency • Tests students’ ability to read both sight words and words following regular patterns of letter/sound correspondence in the English language • Students are shown a series of words organized in a chart on one side of a single sheet of paper and given a set amount of time (30-60 seconds) • The words used during the pilot study came from a variety of sources: Dolch word lists, online grade-level word lists, and a list of ‘the first 1000 words’ found in Fry’s Book of Lists (1998)

  22. Word List Design • Between 144 and 2654 students provided pilot test data on each word • We kept each of the pilot forms short (68 words in Kindergarten, 80 in grades 1-3) • We administered 5 different forms of the Word Reading Fluency test to students in Kindergarten, 4 forms to students in first grade, and 3 forms to students in third and fourth grade. • Each form contained 5 words that served as anchor items, common across all 15 forms of the test (and appearing in the same location)
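
One way to picture this anchor design: every form shares the same five words at the same positions, padded out with form-unique words. The sketch below is a hypothetical reconstruction of that assembly step; the word choices, slot positions, and function name are invented, not the project's actual materials.

```python
# Hypothetical sketch of building a pilot form around 5 shared anchors.
# ASSUMPTION: the anchor words and their slots are invented; the slide
# only specifies 5 anchors in common locations on 80-word forms.

ANCHORS = ["the", "and", "was", "said", "from"]   # appear on all 15 forms
ANCHOR_SLOTS = [0, 16, 32, 48, 64]                # same position on every form

def build_form(unique_words, form_length=80):
    """Interleave the shared anchors into a list of form-unique words."""
    assert len(unique_words) == form_length - len(ANCHORS)
    pool = iter(unique_words)
    return [ANCHORS[ANCHOR_SLOTS.index(i)] if i in ANCHOR_SLOTS else next(pool)
            for i in range(form_length)]

sample = build_form([f"w{i}" for i in range(75)])
assert len(sample) == 80 and sample[16] == "and"
```

Because the anchors recur across forms, responses to all 15 forms can later be placed on a single common scale.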

  23. Passage Reading Fluency • Tests students’ ability to read connected narrative text accurately. In this individually administered measure, students are shown a short narrative passage (approximately 250 words) • Omissions, hesitations, and misidentifications were counted as errors

  24. Passage Fluency Design • Measures were all written specifically for use in this progress monitoring assessment system • All 80 passages were written by graduate students enrolled in College of Education courses in the winter of 2006 • Passage writers followed written test specifications, and passages were systematically reviewed first by the Lead Coordinator and then by teachers in the field • Each passage was divided into three paragraphs of approximately even length, and the readability of each paragraph was checked with the Flesch-Kincaid readability index (grade-level targets of 1.5, 2.5, 3.5, and 4.5); see the sketch below
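
The Flesch-Kincaid grade level is a published formula: 0.39 × (words per sentence) + 11.8 × (syllables per word) − 15.59. A rough sketch of the per-paragraph check follows; the vowel-group syllable counter is a crude heuristic assumed for illustration (production tools use pronunciation dictionaries), and the tolerance parameter is invented.

```python
import re

def count_syllables(word: str) -> int:
    """Crude heuristic: count vowel groups; real tools use dictionaries."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    """Published FK formula: 0.39*(W/S) + 11.8*(Syl/W) - 15.59."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (n_words / sentences) + 11.8 * (syllables / n_words) - 15.59

def paragraphs_on_target(paragraphs, target, tolerance=0.5):
    """Check each of a passage's paragraphs against its grade-level target."""
    return all(abs(flesch_kincaid_grade(p) - target) <= tolerance
               for p in paragraphs)
```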

  25. Analysis • For the word list, we used Rasch analysis to scale words (difficulty) and students (ability) • For passages, we analyzed correlations and mean differences between the different forms of the measures using a repeated measures analysis • Variations in passage outcomes were reduced by rewriting passages

  26. Results of Word List • Initial analyses revealed 283 words outside the acceptable Mean Square Outfit range of 0.50 – 1.50. These items were dropped from the item bank, resulting in 465 remaining words • List created with the easiest words appearing first in the list and subsequent words increasing in difficulty
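
Combining the fit screen with the difficulty scaling, the list construction described here reduces to a filter and a sort. A minimal sketch, with an invented item-tuple representation:

```python
# Minimal sketch of the final word-list assembly: drop words whose outfit
# falls outside 0.50-1.50, then order survivors easiest-first by Rasch
# difficulty. The (word, difficulty, outfit) tuples are placeholders.

def build_word_list(items):
    """items: iterable of (word, rasch_difficulty, outfit_mnsq) tuples."""
    keep = [(word, b) for word, b, fit in items if 0.50 <= fit <= 1.50]
    keep.sort(key=lambda pair: pair[1])   # lowest difficulty (easiest) first
    return [word for word, _ in keep]
```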

  27. Word List – Easiest 10

  28. Word List – Most Difficult 10

  29. Grade 3 Passages

  30. Grade 4 Passages

  31. MC Reading Comprehension • We developed the MC Comprehension Tests in a two-step process • First, we wrote the stories that were used as the basis for each test • Then, we wrote the test items associated with each story • We embedded quality control and content review processes in both steps throughout instrument development • Stories were narrative fiction of approximately 1,500 words, with three types of items written from them: literal, inferential, and evaluative • 20 items were developed per story, with 6-7 items of each type and three answer options per item
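
The item specification just described (20 items per story; 6-7 each of literal, inferential, and evaluative; three options per item) maps naturally onto a small data structure. The sketch below is an illustrative representation, not the project's actual tooling.

```python
from dataclasses import dataclass
from typing import Literal

@dataclass
class MCItem:
    """One multiple-choice item tied to a ~1,500-word narrative story."""
    story_id: str
    stem: str
    options: tuple[str, str, str]   # exactly three answer options
    answer_index: int               # 0, 1, or 2
    item_type: Literal["literal", "inferential", "evaluative"]

def validate_test_form(items):
    """Enforce the slide's spec: 20 items, 6-7 of each of the three types."""
    assert len(items) == 20
    assert all(len(item.options) == 3 for item in items)
    for t in ("literal", "inferential", "evaluative"):
        n = sum(1 for item in items if item.item_type == t)
        assert 6 <= n <= 7, f"{t}: expected 6-7 items, found {n}"
```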

  32. Authors of MC Test • The lead author, who oversaw the creation and revision of the stories and test items, earned her Bachelor of Arts degree in Literature from Carleton College in 1990, worked for twelve years as an English teacher in California public schools, was awarded National Board for Professional Teaching Standards certification in Adolescent and Young Adulthood English Language Arts in 2002, and was a Ph.D. candidate in Learning Assessments / System Performance at the University of Oregon at the time the measures were created. • The item writer earned his Ph.D. in educational psychology (measurement and methodology) from the University of Arizona. He has worked in education at the elementary and middle school levels, as well as in higher education and at the state level. He held a position as associate professor in the distance learning program of Northern Arizona University and served as director of assessment for a large metropolitan school district in Phoenix, Arizona. In addition, he served as state Director of Assessment and Deputy Associate Superintendent for Standards and Assessment at the Arizona Department of Education. He was also a test development manager for Harcourt Assessment and has broad experience in assessment and test development.

  33. Design of MC Test • We used a common-person / common-item piloting design • The 20 different forms of each grade-level measure were clustered into 5 groups, with 5 forms in each group • Each test grouping contained two overlapping forms, enabling concurrent analysis of all measures across the different student samples (one possible linking arrangement is sketched below)
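
The counts on this slide (20 forms, 5 groups of 5, two overlapping forms per group) are consistent with a circular chain of link forms. The layout below is one hypothetical reconstruction, not a design documented in the talk.

```python
# Hypothetical linking layout: 5 "link" forms each shared by two adjacent
# groups, plus 15 forms used once, giving 5 groups of 5 that cover 20
# distinct forms. A reconstruction consistent with the slide's counts.

LINK_FORMS = [f"L{i}" for i in range(5)]      # reused to connect groups
UNIQUE_FORMS = [f"U{i}" for i in range(15)]   # each appears in one group

groups = [
    [LINK_FORMS[g], LINK_FORMS[(g + 1) % 5]] + UNIQUE_FORMS[3 * g: 3 * g + 3]
    for g in range(5)
]

assert all(len(group) == 5 for group in groups)            # 5 forms per group
assert len({f for group in groups for f in group}) == 20   # 20 distinct forms
```

The shared link forms are what allow responses from all five student samples to be calibrated concurrently onto one scale.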

  34. Sample Analysis

  35. Distractor Analysis

  36. Getting Started: Menu (PM or BYOA)

  37. More on Getting Started: Grade Group → CBM → Difficulty → Measure

  38. Fluency and Comprehension

  39. Administering a Measure

  40. Reporting Outcomes

  41. Reporting Outcomes - Diagnostic

  42. Instructional Records

  43. Reporting Outcomes - Formative

  44. Alternate Assessments based on Modified Academic Achievement Standards Standards-Based IEPs Gerald Tindal University of Oregon

  45. Menu and Options

  46. Overview

  47. Flow Chart

  48. Participation Options

  49. Perceptions

  50. Curriculum-Based Measures
