
Making Data Driven Decisions: Cut points, Curve Analysis and Odd Balls


Presentation Transcript


  1. Making Data Driven Decisions: Cut points, Curve Analysis and Odd Balls Laura Lent IU 13 Pennsylvania

  2. District-Wide RtI Development • Essential Question: How do you use data to facilitate systemic paradigm shifts?

  3. District Demographics • 3200 students total • 5 Elementary Buildings, 2 Secondary • 4 K-4 buildings ranging from 110 to 500 students • 1 5-6 building of 630 students • 25-35% FRL • <15% minority • Equivalent of 2.5 full-time school psychologists

  4. PSSA Scores: Reading • Key: Warning = W, School Improvement 1 = SI 1

  5. Intensive Reform Focus • The 5th/6th grade building is the “identified patient” due to its School Improvement status. • The building has 5 teacher teams, each pairing two 5th grade teachers with two 6th grade teachers. • Two 5th grade classrooms are allocated as “ELM,” or Essentials of Literacy and Math. • Each team has a learning support teacher assigned to it who provides inclusion support during social studies and science. • No universal screening. No consistent reading instructional practices.

  6. Data Collection • Use of CBMs for universal screening and progress monitoring, including: Early Literacy, Early Numeracy, MAZE, R-CBM, Math Applications, and Single Digit Computation. • Kindergarten screening: use of Individual Growth and Development Indicators (IGDIs), Picture Naming and Rhyming, plus Early Numeracy, in August to form heterogeneous classes.

  7. Benchmark Data Analysis • The move from DIBELS to AIMSweb allowed local benchmark criteria to be determined and used in the reporting system. • Question: How do we set benchmarks that are sensitive and specific to PSSA performance?

  8. ROC Curve Analysis • Receiver Operating Characteristic (ROC) • A statistical evaluation process that identifies benchmarks by finding the cut scores that are most sensitive without sacrificing specificity (a selection sketch follows below). • Sensitivity: fewest false negatives • Specificity: fewest false positives
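
To make the benchmark-setting step concrete, here is a minimal sketch of ROC-based cut-score selection. The DataFrame `df` and its columns `rcbm_spring` (spring R-CBM score) and `pssa_proficient` (1 = Proficient or above) are hypothetical stand-ins for the district's data, and Youden's J is one illustrative selection criterion; the slides do not say which criterion was actually used.

```python
# Minimal sketch of ROC-based cut-score selection (assumed data layout).
import numpy as np
import pandas as pd
from sklearn.metrics import roc_curve, roc_auc_score

def reset_cut_score(df: pd.DataFrame) -> float:
    # The "positive" class is the outcome we screen for: students at
    # risk of NOT reaching PSSA proficiency.
    y_true = 1 - df["pssa_proficient"]   # 1 = not proficient
    y_score = -df["rcbm_spring"]         # lower fluency -> higher risk

    fpr, tpr, thresholds = roc_curve(y_true, y_score)

    # Youden's J (sensitivity + specificity - 1) balances the fewest
    # false negatives against the fewest false positives.
    j = tpr - fpr
    best = int(np.argmax(j))
    cut = -thresholds[best]              # undo the sign flip

    print(f"AUC = {roc_auc_score(y_true, y_score):.3f}")
    print(f"Sensitivity = {tpr[best]:.2f}, Specificity = {1 - fpr[best]:.2f}")
    return cut
```

Students scoring at or below the returned cut would be flagged for targeted intervention; raising the cut trades more false positives for fewer false negatives.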

  9. Scatterplot Interpretation of Screening Results (adapted from Silberglitt, 2009) • Scatterplot of spring R-CBM against PSSA performance, split by the R-CBM low-risk line and the PSSA Proficient line into four quadrants: True Positives, True Negatives, False Positives (“Happy Surprises”), and False Negatives (“Unhappy Surprises”).
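
The quadrant plot can be reproduced with a short matplotlib sketch, assuming the same hypothetical `df` plus a `pssa_scaled` column; the R-CBM cut uses the reset Grade 4 benchmark from the later slides, while the PSSA proficiency cut score here is invented for illustration.

```python
# Minimal sketch of the Silberglitt-style quadrant scatterplot.
import matplotlib.pyplot as plt

RCBM_CUT = 117    # reset Grade 4 spring benchmark (from the slides)
PSSA_CUT = 1235   # hypothetical PSSA proficiency scaled score

fig, ax = plt.subplots()
ax.scatter(df["rcbm_spring"], df["pssa_scaled"], s=12, alpha=0.6)
ax.axvline(RCBM_CUT, color="gray", linestyle="--")  # R-CBM low-risk line
ax.axhline(PSSA_CUT, color="gray", linestyle="--")  # PSSA Proficient line

# Quadrant labels follow the slide's convention ("positive" = at risk).
ax.text(0.02, 0.98, "False Positives\n(Happy Surprises)",
        transform=ax.transAxes, va="top")
ax.text(0.98, 0.98, "True Negatives",
        transform=ax.transAxes, va="top", ha="right")
ax.text(0.02, 0.02, "True Positives",
        transform=ax.transAxes, va="bottom")
ax.text(0.98, 0.02, "False Negatives\n(Unhappy Surprises)",
        transform=ax.transAxes, va="bottom", ha="right")
ax.set_xlabel("Spring R-CBM (words correct per minute)")
ax.set_ylabel("PSSA Reading scaled score")
plt.show()
```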

  10. Benefits of ROC Curve Analysis • Re-set Benchmarks for Greater Classification Accuracy. • Minimize the “word caller” hysteria. • Fringe Benefits: “Odd Ball” or Outlier Score Investigation.

  11. Acknowledgement Thanks to Dr. Edward Shapiro and Dr. Gini Hampton, Center for Promoting Research to Practice, Lehigh University, for sharing the following slides of ROC curve analysis for this district.

  12. ROC Curve Analysis • R-CBM District Benchmarks: 25th percentile

  13. Probability Outcomes based on AIMSweb (25th percentile) Spring Benchmarks • Grade 4 = 104

  14. Probability Outcomes based on Reset R-CBM Spring Benchmarks • Grade 4 = 117

  15. Probability Outcomes based on AIMSweb (25th percentile) Spring Benchmarks • Grade 5 = 114

  16. Probability Outcomes based on Reset R-CBM Spring Benchmarks • Grade 5 = 140

  17. Probability Outcomes based on AIMSweb (25th percentile) Spring Benchmarks • Grade 6 = 130

  18. Probability Outcomes based on Reset R-CBM Spring Benchmarks • Grade 6 = 154

  19. ROC Curve Analysis to Reset Cut Points • Reset Spring Targets: R-CBM Proficient
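
A small sketch shows how the original and reset cuts can be compared for classification accuracy, using the cut scores reported on the preceding slides; the student-level frame `df` (with `grade`, `rcbm_spring`, and `pssa_proficient`) remains a hypothetical stand-in for the district's data.

```python
# Minimal sketch: sensitivity/specificity under old vs. reset cuts.
import pandas as pd

CUTS = {  # grade: (AIMSweb 25th percentile cut, reset ROC cut)
    4: (104, 117),
    5: (114, 140),
    6: (130, 154),
}

def classification_summary(df: pd.DataFrame) -> pd.DataFrame:
    rows = []
    for grade, (old_cut, new_cut) in CUTS.items():
        g = df[df["grade"] == grade]
        for label, cut in (("AIMSweb 25th", old_cut), ("Reset ROC", new_cut)):
            at_risk = g["rcbm_spring"] < cut
            not_prof = g["pssa_proficient"] == 0
            sens = (at_risk & not_prof).sum() / max(not_prof.sum(), 1)
            spec = (~at_risk & ~not_prof).sum() / max((~not_prof).sum(), 1)
            rows.append({"grade": grade, "benchmark": label, "cut": cut,
                         "sensitivity": round(sens, 2),
                         "specificity": round(spec, 2)})
    return pd.DataFrame(rows)
```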

  20. What to do about those odd balls? To investigate the false positive and false negative scores, the “happy surprises” and “unhappy surprises,” the following variables were examined (a query sketch follows below): • Current and historical PSSA performance • Demographic characteristics, including ELL status, economic disadvantage (ED), IEP status, gender, and teacher assignment.
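
A sketch of that follow-up query, assuming hypothetical demographic columns (`ell`, `ed`, `iep` coded 0/1, plus `gender` and `teacher`) on the same frame:

```python
# Minimal sketch: flag the misclassified "odd balls" and cross-tab them.
import pandas as pd

def flag_odd_balls(df: pd.DataFrame, cut: float) -> pd.DataFrame:
    at_risk = df["rcbm_spring"] < cut
    proficient = df["pssa_proficient"] == 1
    out = df.copy()
    out["odd_ball"] = None
    out.loc[at_risk & proficient, "odd_ball"] = "false_positive"    # happy surprise
    out.loc[~at_risk & ~proficient, "odd_ball"] = "false_negative"  # unhappy surprise
    return out[out["odd_ball"].notna()]

odd = flag_odd_balls(df, cut=140)   # reset Grade 5 spring benchmark
print(odd.groupby("odd_ball")[["ell", "ed", "iep"]].sum())  # demographics
print(pd.crosstab(odd["odd_ball"], odd["teacher"]))         # teacher assignment
```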

  21. False Positive Results • Students who were “at risk” (below the 10th percentile) on the spring R-CBM but scored Proficient or above on the PSSA.

  22. “Extreme” False Negative Results, or Very Unhappy Surprises • Students who were “low risk” on the spring R-CBM but scored Below Basic on the PSSA.

  23. Who are these students? • 5th Grade (n=17): • 11 Male, 6 Female • 7 with IEPs, 4 ED, 1 ELL • 3 scored Proficient in 4th grade • Teacher distribution: every teacher had at least one; three teachers had three or more. • 6th Grade (n=6): • 5 Female, 1 Male • 1 with IEP, 1 ELL • None scored Proficient in 5th grade

  24. “Near” False Negative Results • Students who scored “low risk” on the spring R-CBM and Basic on the PSSA.
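
Continuing the earlier sketch, the “extreme” and “near” false negatives can be split by performance level, assuming hypothetical `pssa_level` and `pssa_prior` columns holding the current and prior-year PSSA levels:

```python
# Minimal sketch: separate "extreme" from "near" false negatives.
fns = odd[odd["odd_ball"] == "false_negative"]
extreme = fns[fns["pssa_level"] == "Below Basic"]  # very unhappy surprises
near = fns[fns["pssa_level"] == "Basic"]           # near misses

# How many "near" misses had previously scored Proficient or above?
slipped = near["pssa_prior"].isin(["Proficient", "Advanced"]).sum()
print(f"{slipped}/{len(near)} previously scored Proficient or above")
```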

  25. Who are these kids? • Grade 3: • Evenly distributed across schools (n=3, n=5, n=5) • 3 students ED, 3 students IEP, 1 ELL • Grade 4: • Evenly distributed across schools (n=7, n=10, n=8) • 5 students “retainees”, 5 ED • *12/23 were students who previously scored Proficient!

  26. 5th Grade: • 12/55 retentions, 12 ED (1 overlap), 2 IEP • 31 fell from Proficient to Basic • 2 fell from Advanced to Basic • *33/47 or 70% were students who previously scored Proficient!

  27. 6th Grade: • 6 were retainees, 5 ED, 4 IEP, 0 ELL • 11/33 or 33% fell from Proficient to Basic!

  28. Teacher Distribution: 5th Grade • Total # of False Negatives = 72 • Of the 12 teachers, “F” had the most false negatives outside of the combined “ELM” classes.

  29. Teacher Distribution: 6th Grade • Total # of False Negatives = 33
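
The per-teacher tallies behind these two slides amount to a short group-by on the same hypothetical `odd` frame:

```python
# Minimal sketch: count false negatives per teacher, by grade, so that
# outliers such as teacher "F" stand out.
per_teacher = (odd[odd["odd_ball"] == "false_negative"]
               .groupby(["grade", "teacher"]).size()
               .sort_values(ascending=False))
print(per_teacher)
print("Most false negatives:", per_teacher.idxmax())
```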

  30. Local Impact of “Odd Ball” Analysis • Consensus-Building: • Administration conceded that major infrastructure and implementation changes were needed. Staff agreed. • Infrastructure: • The combined 5th/6th grade teams were restructured into 5th-only and 6th-only teams. Weaker teams were paired with stronger ones, and 5th grade classrooms were moved closer to the office. • Implementation: • 6 full days of professional development on core reading instruction. • Administrators attend all inservice trainings and all team meetings. • R-CBM targets reset for 2009-10 to better identify students who need targeted intervention. • Targeted intervention provided daily for 45 minutes by all classroom teachers and interventionists, with groups matched to identified need.

  31. Questions for Further Analysis • Do the False Negatives represent students with comprehension problems, or students who are failing to receive adequate instruction? • Would MAZE identify the same sample of False Negatives as “at risk”? • Would a diagnostic or benchmark measure based on grade-level standards have identified these students as “at risk”?
