
Universal Screening Benchmarking


Presentation Transcript


    1. Universal Screening & Benchmarking Madi Phillips, Ph.D. NCSP ASPIRE Regional Coordinator madi.phillips@gmail.com

    2. ACKNOWLEDGMENTS I wish to thank these individuals for sharing their information and wisdom. Problem-solving content based on the Alliance for School-based Problem-Solving.

    3. Agenda Context for Problem Solving and Response to Intervention (RtI) in a 3-Tier Model Morning: Universal Screening & Benchmarking Concepts & Vocabulary Afternoon: Application Activities in Teams Take out new innovation and possibly data system and change assignment at the end

    4. Without Problem Solving

    5. The “Old” Problem Solving Heuristic Why wasn’t the Teacher Assistance Team (TAT) model effective? A small subset of staff was attempting to impact a large group of students. Did not change the philosophical beliefs of staff. Most staff saw this model as a “hoop” to jump through to get to sped. The “neediest” students were not always referred. Sped students were usually not discussed.

    6. Our First Attempt at General Education with Support Effective is defined as making data-based decisions to ensure all students are making progress.

    7. Building a 3-Tier Service Delivery Model

    9. Problem-Solving Steps: How Data-Based Decisions Are Made

    10. The VISION: To Provide Effective Interventions to Meet the Needs of ALL Students, Through Early and Scientifically Based Interventions and Through Careful Systems Planning

    11. Federal Regulations Define Scientifically Based §300.35 Scientifically based research. Has the meaning given the term in section 9101(37) of the ESEA. (20 U.S.C. 1411(e)(2)(C)(xi)) Discussion: The definition of scientifically based research is important to the implementation of Part B of the Act and, therefore, we will include a reference to the definition of that term in section 9101(37) of the ESEA. Scientifically based research--
    (a) Means research that involves the application of rigorous, systematic, and objective procedures to obtain reliable and valid knowledge relevant to education activities and programs; and
    (b) Includes research that--
    (1) Employs systematic, empirical methods that draw on observation or experiment;
    (2) Involves rigorous data analyses that are adequate to test the stated hypotheses and justify the general conclusions drawn;
    (3) Relies on measurements or observational methods that provide reliable and valid data across evaluators and observers, across multiple measurements and observations, and across studies by the same or different investigators;
    (4) Is evaluated using experimental or quasi-experimental designs in which individuals, entities, programs, or activities are assigned to different conditions and with appropriate controls to evaluate the effects of the condition of interest, with a preference for random-assignment experiments, or other designs to the extent that those designs contain within-condition or across-condition controls;
    (5) Ensures that experimental studies are presented in sufficient detail and clarity to allow for replication or, at a minimum, offer the opportunity to build systematically on their findings; and
    (6) Has been accepted by a peer-reviewed journal or approved by a panel of independent experts through a comparably rigorous, objective, and scientific review.

    12. Illinois State Regulations Each district shall, no later than the beginning of the 2010-11 school year, implement the use of a process that determines how the child responds to scientific, research-based interventions as part of the evaluation procedure described in 34 CFR 300.304.

    13. Illinois State Regulations No later than January 1, 2009, each district shall develop a plan for the transition to the use of a process that determines how the child responds to scientific, research-based interventions as part of the evaluation procedure described in 34 CFR 300.304. Each district’s plan shall identify the resources the district will devote to this purpose and include an outline of the types of State-level assistance the district expects to need, with particular reference to the professional development necessary for its affected staff members to implement this process. The transition plan developed pursuant to this subsection (c) may be incorporated into a district’s district improvement plan (see 23 Ill. Adm. Code 1.85(b)) if one exists.

    14. Websites for Intervention Review Florida Center for Reading Research: www.fcrr.org Institute for the Development of Educational Achievement (IDEA; University of Oregon) http://reading.uoregon.edu/ Oregon Reading First Center: http://oregonreadingfirst.uoregon.edu/SIprograms.php Texas Center for Reading and Language Arts: www.texasreading.org Texas Reading Initiative: www.tea.state.tx.us University of Kansas Center for Research on Learning http://www.ku-crl.org/ Intervention Central: www.interventioncentral.org Kevin Feldman: www.scoe.org

    15. Positive Behavior Intervention Support: www.pbis.org, www.pbisillinois.org Safe & Civil Schools: www.safeandcivilschools.com Task Related Skills: Curriculum & Associates: Anita Archer Skills for School Success Advanced Skills for School Success http://www.curriculumassociates.com/ What Works Clearinghouse http://www.w-w-c.org/ The Collaborative for Academic, Social, and Emotional Learning http://www.casel.org/home/index.php

    16. Purposes of Assessment Who has problems? (Problem Identification) Why is the problem occurring? (Problem Analysis) Is our instruction working to fix the problem? (Plan Development & Implementation) How well are we doing overall? (Plan Evaluation) Purposes animate in one at a time.
    1. Screening Assessment - Who has problems? Gain quick impressions of skills. Usually administered to all students. Brief and economical to administer. Sensitive to target skills (red flags). Over-identifies, not under-identifies. Allows for teacher judgment: a student not identified through screening can still receive support.
    2. Diagnostic Assessment - What’s the problem? Closely examine specific skills. Usually administered to some students (identified through screening). Detailed and specific. Flexible administration (more here, less there). Used to plan instructional interventions.
    3. Progress Monitoring - Is our instruction working to fix the problem? Track student progress and show change. Individual or group administration. Brief and economical to administer. Rapid feedback. Sensitive to small amounts of growth. Repeatable. Used to track intervention progress.
    4. Outcome Assessment - How well are we doing overall? Evaluate general outcomes and meet accountability requirements. Administered to all students. Strong technical adequacy (reliable & valid). Reflective of general outcome. Economical for large-scale administration.

    17. Tools: Scientifically Based Progress Monitoring www.studentprogress.org

    20. What Does R-CBM Measure? Despite some attempts to put a standard measure of oral reading “into a box,” when a student reads aloud we are assessing a student’s overall general reading achievement skills. The best way to know how well a student reads is to LISTEN to them...under standard conditions. Now we know that some of you will be anxious that you know students who can do this, but don’t comprehend. That’s a complex issue we need to understand more fully. Comprehension results FROM reading but is NOT “reading”... This is critical to review. I think they’re sold that comprehension is the only true measure of reading. I don’t want to get into a debate, but the discussion will be helpful.

    21. Technically Adequate from Fuchs, L. S., Fuchs, D., & Maxwell, L. (1988). The validity of informal reading comprehension measures. Remedial and Special Education, 9, 20-28.

    24. Assessment purposes across the top, essential components of reading down the side. Need to decide what assessments fit in each cell - look for gaps and overlaps. Etc… scrolls in at the end - may want to consider other assessment concepts, e.g., specifically talk about decoding skills, measure motivation, or some other component of interest.

    25. Tier 1: Problem Identification Question: What is the discrepancy between what is expected and what is occurring? A. List problem behavior(s) and prioritize. B. Collect baseline data on primary area of concern (target student and peer). Record Review Interview Observation Testing C. State discrepancy between target student(s) performance and peer performance.

    26. Methods of Measuring Performance Discrepancies Norm-Based Approaches: Percentile Rank Cut Scores; Discrepancy Ratios. Standards-Based Approaches: Cut Scores for ISAT, Illinois AIMSweb Standards, Oregon DIBELS Standards.

    27. Tier 1: Problem Identification Determine whether there is a discrepancy between your school/grade level’s universal student percentage and the goal of 80%. What standards do you use to determine 80%? DIBELS criteria (dibels.uoregon.edu), CBM target scores for ISAT, Illinois AIMSweb Norms.

    31. Standards-Based Approach Oregon DIBELS Standards

    34. Standards-Based Approach Illinois AIMSweb Standards

    35. PBIS Office Discipline Referrals Elementary

    36. PBIS Office Discipline Referrals Middle School

    37. SWIS™ Summary 2005-06 (Majors Only): 1,668 schools, 838,184 students

    39. Example of One School’s CBM Target Scores for Meeting/Exceeding on the ISAT

    40. Tier 2: Problem Identification Determine whether there is a discrepancy between your school/grade level’s targeted student percentage and the goal of 15%. What standards do you use to determine 15%? DIBELS criteria (dibels.uoregon.edu), CBM target scores for ISAT, Illinois AIMSweb Norms.

    41. Tier 2 & 3 Individual Student Problem Identification What is the discrepancy between what is expected (5% or fewer students in Tier 3; 15% or fewer in Tier 2) and what is occurring? If more than 15% of students are in Tier 2 (or more than 5% in Tier 3), examine Tier 1 and make changes (see the sketch below).
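
    A minimal sketch of this decision rule, written in Python (our choice; the slides prescribe no tool). The tier assignments and student counts below are invented for illustration; real percentages would come from benchmark screening data.

```python
# Check a school's tier distribution against the 80/15/5 targets
# described in the slides. Tier assignments here are hypothetical.
from collections import Counter

TARGETS = {1: 80.0, 2: 15.0, 3: 5.0}  # Tier 1 is a floor; Tiers 2-3 are ceilings

def check_tiers(tier_assignments):
    """tier_assignments: one entry (1, 2, or 3) per student."""
    counts = Counter(tier_assignments)
    total = len(tier_assignments)
    for tier in (1, 2, 3):
        pct = 100.0 * counts.get(tier, 0) / total
        ok = pct >= TARGETS[tier] if tier == 1 else pct <= TARGETS[tier]
        flag = "OK" if ok else "examine Tier 1 (core) instruction"
        print(f"Tier {tier}: {pct:.1f}% ({flag})")

# Made-up school of 100 students: 74 / 18 / 8 across tiers
check_tiers([1] * 74 + [2] * 18 + [3] * 8)
```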

    42. Individual Problem Identification continued If Tier 3 includes 5% or fewer of students, it makes sense to move on to individual problem solving. Looking at an individual student, define the problem, collect data, & examine the discrepancy between what is expected and what is occurring.

    43. Step 1: Problem Identification Question: What is the discrepancy between what is expected and what is occurring? A. List problem behavior(s) and prioritize. B. Collect baseline data on primary area of concern (target student and peer). Record Review Interview Observation Testing C. State discrepancy between target student performance and peer performance.

    44. List Problem Behaviors and Prioritize Teams should tackle one problem at a time. Consider the following problems first: Dangerous/Severe behaviors High frequency behaviors Foundational behaviors (e.g., reading) Chronic problem behaviors State the primary area of concern. Define behavior on which team is collecting data in observable and measurable terms. When possible, define the behavior you want to see. Gain consensus.

    45. B. Collect Baseline Data on Primary Area of Concern Data can be collected from a number of sources: R = Record Review I = Interview O = Observation T = Testing And in a number of domains: Instruction Curriculum Environment Learner For more difficult behavior (or secondary level - where student encounters several people throughout the day) you may want to write 3 examples and non-examples to help observers.

    46. Collect only what you need to determine the discrepancy between what is expected (peer performance) and what is occurring (target student performance). Use existing data when possible: Records (e.g., attendance) CBM/DIBELS benchmarking data Collect additional information when needed: Interview Observation (e.g., Frequency Count, On-task). Mention that the tools used for things like observation, testing, etc. are used throughout the problem-solving process… in prob id, analysis, progress monitoring, etc.

    47. C. State Discrepancy Be objective. Does it refer to an observable characteristic of behavior? Be clear. Can others read the discrepancy statement and observe it easily? Calculate the discrepancy ratio Include statement of student’s current level of performance. Include statement of the expected level of performance (e.g., peer data, teacher expectation).

    50. Discrepancy Ratios Quantify how many times the student’s current level of performance varies from that of his/her peers. To calculate a discrepancy ratio, use the following formula: Peer Behavior / Target Behavior. Example: When given a 4th grade AIMSweb probe, Jessica is reading 55 correct words per minute, while average 4th grade peers are reading 145 correct words per minute. Peer Behavior / Target Behavior = 145/55 = 2.64

    51. Enables the team to make decisions about levels of support and resources from the start. Generally speaking… A student who is 2x discrepant from his/her peers is appropriate for the problem-solving team. If a student is significantly discrepant from peers, additional problem-solving and intervention resources may be appropriate. Example: Jessica is 2.64x discrepant from peers and may benefit from problem solving.
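
    To make the arithmetic concrete, here is a minimal Python sketch of the discrepancy ratio and the 2x rule of thumb, using the numbers from the Jessica example. The function name is ours for illustration, not part of AIMSweb.

```python
# Discrepancy ratio: Peer Behavior / Target Behavior
def discrepancy_ratio(peer_performance, target_performance):
    """Both arguments in the same unit, e.g., words read correct per minute."""
    return peer_performance / target_performance

ratio = discrepancy_ratio(peer_performance=145, target_performance=55)
print(f"Jessica is {ratio:.2f}x discrepant from peers")  # 2.64x

# Rule of thumb from the slide: 2x discrepant -> problem-solving team
if ratio >= 2:
    print("Appropriate for the problem-solving team")
```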

    52. Provides a way to evaluate student outcomes and the effectiveness of an intervention to reduce initial performance discrepancies.

    53. When do you use the discrepancy formula? Tier 2 & 3 individual problem solving only In cases when you don’t have norms or clear benchmark criteria

    58. Step 1: Who is in Charge? Identifying a Benchmark Coordinator… Could be, but need not be, the School Principal. Even if benchmarking is coordinated district-wide, there should also be a coordinator within the building.

    60. Preparing Staff Have Professional Articles Available Use Sample Products (Access to Internet and Video) Lay Out Timelines, Sources of Support, and Training

    61. When Will We Test?

    62. Rationale for Benchmark Testing Schedule

    63. Testing Time Planning Sheet

    64. What to Test? Kindergarten: Letter Naming Fluency Letter Sound Fluency Phoneme Segmentation Fluency Nonsense Word Fluency 1st Grade: Phoneme Segmentation Fluency Nonsense Word Fluency Reading-CBM 2nd-8th Grade: Reading-CBM
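
    The grade-to-measure assignments above amount to a simple lookup table. The sketch below is one hypothetical way a benchmark coordinator might encode them; the dictionary structure is our assumption, not part of any AIMSweb or DIBELS tool.

```python
# Which measures to administer at each grade, per the slide above.
MEASURES_BY_GRADE = {
    "K": ["Letter Naming Fluency", "Letter Sound Fluency",
          "Phoneme Segmentation Fluency", "Nonsense Word Fluency"],
    "1": ["Phoneme Segmentation Fluency", "Nonsense Word Fluency",
          "Reading-CBM"],
    # Grades 2 through 8 use Reading-CBM only
    **{str(grade): ["Reading-CBM"] for grade in range(2, 9)},
}

print(MEASURES_BY_GRADE["1"])  # measures for 1st grade benchmarking
```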

    65. Things You Need Before Testing Letter Naming Fluency Administration and Scoring of Early Literacy Measures for Use with AIMSweb Training Workbook p. 15

    66. Things You Need Before Testing Letter Sound Fluency Administration and Scoring of Early Literacy Measures for Use with AIMSweb Training Workbook p. 20

    67. Things You Need Before Testing Phoneme Segmentation Fluency Administration and Scoring of Early Literacy Measures for Use with AIMSweb Training Workbook p. 25

    68. Things You Need Before Testing Nonsense Word Fluency Administration and Scoring of Early Literacy Measures for Use with AIMSweb Training Workbook p. 34

    71. How and What Will We Test?

    72. Who Will We Test?

    73. Who Will Do The Testing?

    74. Who Will Do The Testing?

    75. How to test? Identify Where Identify the People Plan the Schedule

    76. Where will the testing take place?

    77. Develop the Schedule

    78. Scheduling the Day Info on screen is adequate

    79. After Testing

    80. Grade Level Meetings

    81. Grade Level Meetings

    82. Part 1: Problem Identification

    84. Example of One School’s CBM Target Scores for Meeting/Exceeding on the ISAT

    85. Part 2: Problem Analysis

    86. List Students at Tier 2 & 3 (Tier 3, Fall Benchmark)
    Natalie: 6 wrc
    Matthew: 14 wrc
    Jack: 17 wrc
    Allie: 19 wrc
    John: 19 wrc
    Kelly: 26 wrc
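
    Sorting fall benchmark scores into tiers is a cut-score comparison. A minimal Python sketch follows, assuming hypothetical cut scores; a real team would use published DIBELS/AIMSweb criteria or local CBM target scores for the ISAT, as the earlier slides describe.

```python
# Assign tiers from words-read-correct (wrc) benchmark scores.
# TIER_3_CUT and TIER_2_CUT are hypothetical placeholders.
TIER_3_CUT = 30  # below this wrc -> Tier 3
TIER_2_CUT = 70  # below this wrc -> Tier 2

def assign_tier(wrc):
    if wrc < TIER_3_CUT:
        return 3
    if wrc < TIER_2_CUT:
        return 2
    return 1

fall_benchmark = {"Natalie": 6, "Matthew": 14, "Jack": 17,
                  "Allie": 19, "John": 19, "Kelly": 26}
for student, wrc in sorted(fall_benchmark.items(), key=lambda kv: kv[1]):
    print(f"{student}: {wrc} wrc -> Tier {assign_tier(wrc)}")
```

    With these placeholder cuts, all six students fall below the Tier 3 cut, matching the slide’s Tier 3 list.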

    87. Part 3: Plan Development

    88. List Students at Tier 2 & 3 (Tier 3, Fall Benchmark)
    Natalie: 6 wrc
    Matthew: 14 wrc
    Jack: 17 wrc
    Allie: 19 wrc
    John: 19 wrc
    Kelly: 26 wrc

    89. Part 3: Plan Development

    90. Resources Matched with Interventions

    91. Part 3: Plan Development

    92. Reading Instruction in 3-Tiers

    93. Sample IPF: First Grade Can use the IPF to document all three aspects of a comprehensive intervention. Come up with a couple examples of that (when we come back from break). Emphasize how the plan is derived from the validated hypothesis.

    94. Sample IPF: First Grade Can use the IPF to document all three aspects of a comprehensive intervention. Come up with a couple examples of that (when we come back from break). Emphasize how the plan is derived from the validated hypothesis.

    95. Step 4: Plan Implementation & Evaluation

    96. Ex: 1st Grade PM (Tier 2)

    97. Ex: 1st Grade PM (Tier 2)

    98. Ex: 1st Grade PM (Tier 2)

    99. Ex: 2nd Grade PM (Tier 2)

    100. Ex: 2nd Grade PM (Tier 3*)

    101. Ex: 2nd Grade PM (Tier 3*)

    102. Ex: 3rd Grade PM (Tier 3)
