
Universal Screening of Academics and Behavior in an RTI Framework


Presentation Transcript


  1. Universal Screening of Academics and Behavior in an RTI Framework Daryl Mellard April 1-2, 2008 Virginia’s RTI Institute Fredericksburg, VA A collaboration of Vanderbilt University and the University of Kansas Funded by the U.S. Department of Education, Office of Special Education Programs; Judy Shanley, Project Officer; Award No. H324U010004

  2. Acknowledgements from previous presentations • Marcia Invernizzi, U. of Virginia, November 2007 presentation at Roanoke • Hugh Catts, U. of Kansas, April 2006 presentation at NRCLD National SEA conference on RTI, KCMO (available at http://nrcld.org/sea/index.html)

  3. A Little Overview • General principles about screening • Early reading screening • Behavioral screening

  4. Screening Component in an RTI Framework • Academic and behavioral prediction • Measures that are quick, low cost, and repeatable, that tap critical (predictive) skills, and that require minimal administration training • Question: Is the student at risk? • Affirmative answer: more attention (assessment/intervention) to the class or student • Criteria: criterion benchmark or normative comparison

  5. Screening Accuracy Three influences: • Base rate. • Diagnosticity. • Values of those setting the cutoff (criterion) score (i.e., resource priorities).
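To see why base rate matters, here is a small illustration (ours, not from the presentation, with an assumed 90% sensitivity and 80% specificity): holding a screen's accuracy fixed, the meaning of a positive result still swings widely with how common the problem is.

    # Illustrative only: how base rate changes what a positive screen means,
    # holding sensitivity and specificity fixed (Bayes' rule).

    def positive_predictive_power(sensitivity, specificity, base_rate):
        """P(child truly at risk | screen flags the child)."""
        true_positives = sensitivity * base_rate
        false_positives = (1 - specificity) * (1 - base_rate)
        return true_positives / (true_positives + false_positives)

    for base_rate in (0.05, 0.20, 0.50):
        ppp = positive_predictive_power(0.90, 0.80, base_rate)
        print(f"base rate {base_rate:.0%}: PPP = {ppp:.1%}")
    # A screen that looks strong in a high-risk sample yields mostly
    # false positives when the condition is rare (PPP ~19% at a 5% base rate).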

  6. Constructing Screening Measures • Match the curricular demands facing students • Proximity of the screening test to the criterion performance (Closer should yield higher accuracy.) • Should include multiple, related indicators

  7. Screening Accuracy • Particular attention is given to the accuracy of screening instruments • Errors in identification can be costly: over-identification and under-identification • Accuracy is typically quantified within a clinical decision-making model

  8. Clinical Decision Making Model

      Screen result    RD outcome            Normal outcome
      At risk          True Positive (a)     False Positive (c)
      Not at risk      False Negative (b)    True Negative (d)

      Sensitivity = a / (a + b)                  Specificity = d / (c + d)
      Positive Predictive Power = a / (a + c)    Negative Predictive Power = d / (b + d)
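The table maps directly onto code. Below is a minimal sketch (ours, not part of the presentation) that computes the four accuracy indices, plus the base rate and risk rate reported on later slides, from the four cell counts:

    def screen_stats(a, b, c, d):
        """Accuracy indices from the 2x2 table: a = TP, b = FN, c = FP, d = TN."""
        n = a + b + c + d
        return {
            "sensitivity": a / (a + b),   # share of eventual RD cases flagged
            "specificity": d / (c + d),   # share of normal readers cleared
            "ppp": a / (a + c),           # positive predictive power
            "npp": d / (b + d),           # negative predictive power
            "base_rate": (a + b) / n,     # share who actually develop RD
            "risk_rate": (a + c) / n,     # share the screen flags as at risk
        }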

  9. Accuracy of screening is determined by … • How well your instrument separates those who eventually will have a problem from those who will not • What you choose as a cut-off score

  10. The Ultimate Screen

      Screen result    RD outcome    Normal outcome
      At risk          TP = 100      FP = 0
      Not at risk      FN = 0        TN = 100

  11. More Typical Screen

      Screen result    RD outcome    Normal outcome
      At risk          TP = 80       FP = 20
      Not at risk      FN = 20       TN = 80

  12. More Typical Screen

      Screen result    RD outcome    Normal outcome
      At risk          TP = 90       FP = 35
      Not at risk      FN = 5        TN = 70
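Feeding the three tables above into the screen_stats sketch from slide 8 makes the cutoff trade-off concrete; reading slide 12 as a more liberal cutoff (our assumption, consistent with it flagging more children), sensitivity rises to 94.7% while specificity drops to 66.7%:

    examples = {
        "ultimate screen (slide 10)": (100,  0,  0, 100),
        "typical screen (slide 11)":  ( 80, 20, 20,  80),
        "typical screen (slide 12)":  ( 90,  5, 35,  70),
    }
    for name, (a, b, c, d) in examples.items():
        s = screen_stats(a, b, c, d)  # sketch defined after slide 8
        print(f"{name}: sensitivity {s['sensitivity']:.1%}, "
              f"specificity {s['specificity']:.1%}")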

  13. ROC Curve (http://www.anaesthetist.com/mnm/stats/roc/)
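The linked page explains ROC analysis in full; as a minimal illustration, an ROC curve can be traced by sweeping the cutoff across the score range and recording (false positive rate, true positive rate) at each step. The score distributions below are simulated, not data from the talk:

    import numpy as np

    rng = np.random.default_rng(0)
    # Simulated screen scores: eventual RD cases tend to score lower.
    rd_scores = rng.normal(40, 10, size=200)
    ok_scores = rng.normal(55, 10, size=800)

    points = []
    for cutoff in np.linspace(0, 100, 101):      # "at risk" = score below cutoff
        tpr = float(np.mean(rd_scores < cutoff)) # sensitivity
        fpr = float(np.mean(ok_scores < cutoff)) # 1 - specificity
        points.append((fpr, tpr))

    # Trapezoid-rule area under the curve; 0.5 = chance, 1.0 = perfect.
    auc = sum((x2 - x1) * (y1 + y2) / 2
              for (x1, y1), (x2, y2) in zip(points, points[1:]))
    print(f"approximate AUC: {auc:.2f}")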

  14. What to Measure? • What is the criterion? • Reading comprehension involves a mixture of complex abilities • Role of each changes over time

  15. Part 2: Early Reading Screening • Predictive indicators • Criterion measures • Remember: Time interval is a big influence

  16. What to Measure? • Variables related to early reading - letter knowledge - phonological awareness - rapid naming - vocabulary and grammar - reading itself (non-word or word reading)

  17. What to Measure? • Variables related to later reading - word reading - oral reading fluency - vocabulary and grammar - text comprehension

  18. Reading Screening Measures (Hugh Catts, April 2006, http://nrcld.org/sea/index.html) • Texas Primary Reading Inventory (Foorman et al., 1998). • Dynamic assessment model (O’Connor & Jenkins, 1999). • Catts, Fey, Zhang, & Tomblin (2001). • Dynamic Indicators of Basic Early Literacy Skills (DIBELS). • Phonological Awareness Literacy Screening (Invernizzi, Juel, Swank & Meier, 1997). • CBM tools.

  19. Early Screening Tools • Comprehensive Test of Phonological Processing • Test of Phonological Awareness • Test of Early Reading Ability All correlated with reading outcomes (moderate range), but little data on sensitivity and specificity

  20. Texas Primary Reading Inventory (Foorman et al., 1998; www.tpri.org) • Designed to be used by teachers to identify children at risk for RD and to further evaluate their strengths and weaknesses in reading-related skills • 5 screens for K-2nd grade • Designed to hold false negatives to a minimum • Includes an inventory of secondary measures to help rule out false positives

  21. TPRI (1998), K (Dec) predicting end of 1st. Screen: shortened version.

      Screen result    RD outcome    Normal outcome
      At risk          92 (TP)       143 (FP)
      Not at risk      5 (FN)        181 (TN)

      Base rate 23%    Risk rate 55.8%
      Sensitivity 94.8%    Specificity 55.9%
      Positive Predictive Power 39.1%    Negative Predictive Power 97.3%
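As a check, running the four cell counts through the screen_stats sketch from slide 8 reproduces every percentage on the slide:

    stats = screen_stats(a=92, b=5, c=143, d=181)  # cell counts from the table
    for name, value in stats.items():
        print(f"{name}: {value:.1%}")
    # sensitivity 94.8%, specificity 55.9%, ppp 39.1%, npp 97.3%,
    # base_rate 23.0%, risk_rate 55.8% -- matching the slide.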

  22. Dynamic Indicators of Basic Early Literacy Skills (DIBELS) • Standardized and readily available: www.dibels.uoregon.edu, www.aimsweb.com • Developed to monitor progress and inform instruction

  23. DIBELS, K (Fall) predicting end of 1st. Screen: Initial Sound Fluency, Letter Name Fluency.

      Screen result    RD outcome    Normal outcome
      At risk          8577 (TP)     9345 (FP)
      Not at risk      1824 (FN)     12258 (TN)

      Base rate 32.5%    Risk rate 56.0%
      Sensitivity 82.5%    Specificity 56.7%
      Positive Predictive Power 47.9%    Negative Predictive Power 87.0%

  24. CBM Tools • Letter-Name Fluency • Letter-Sound Fluency • Initial-Sound Fluency • Phoneme Segmentation Fluency • Nonword Reading Fluency • Oral Reading Fluency • Oral Retell Fluency • Maze Fluency

  25. CBM Tools • Assessments given 3 or more times a year to evaluate growth in reading (meeting benchmarks) • Each can be considered a screening opportunity

  26. O’Connor & Jenkins (1999) • Large battery of preliteracy skills - rapid letter naming (# of letters named from random list in 1 min) - syllable deletion (say “baseball” without “ball”) - segmenting phonemes (tell me how many sounds in “saw”) - phoneme repetition (say “p” “I” “f” ) • Chose cut-off scores to maximize sensitivity

  27. O’Connor & Jenkins (1999), K (Nov) predicting April of 1st. Screen: phoneme segmentation, RLN, deletion.

      Screen result    RD outcome    Normal outcome
      At risk          15 (TP)       23 (FP)
      Not at risk      0 (FN)        192 (TN)

      Base rate 6.5%    Risk rate 16.5%
      Sensitivity 100%    Specificity 89.3%
      Positive Predictive Power 39.5%    Negative Predictive Power 100%

  28. TPRI (1998), 1st (Oct) predicting end of 1st. Screen: letter-sound, blending, word reading.

      Screen result    RD outcome    Normal outcome
      At risk          111 (TP)      175 (FP)
      Not at risk      8 (FN)        305 (TN)

      Base rate 19.9%    Risk rate 47.7%
      Sensitivity 93.3%    Specificity 63.5%
      Positive Predictive Power 38.8%    Negative Predictive Power 97.4%

  29. DIBELS, beginning of 1st NWF predicting end of 1st ORF.

      Screen result    RD outcome    Normal outcome
      At risk          7477 (TP)     5067 (FP)
      Not at risk      2956 (FN)     16544 (TN)

      Base rate 32.6%    Risk rate 39.1%
      Sensitivity 71.7%    Specificity 76.6%
      Positive Predictive Power 59.6%    Negative Predictive Power 84.8%

  30. Dynamic Assessment • May have an advantage over static assessment • Measurement of ability over time in order to monitor progress • Measurement of learners’ potential over the short term • Assessor actively intervenes during the course of the assessment with the goal of intentionally inducing changes in the learner's current level of performance • A “mini-assessment” of response to intervention

  31. O’Connor & Jenkins (1999), Oct of 1st predicting April of 1st. Screen: phoneme segmentation, RLN, phoneme repetition.

      Screen result    RD outcome    Normal outcome
      At risk          11 (TP)       26 (FP)
      Not at risk      0 (FN)        178 (TN)

      Base rate 5.1%    Risk rate 17.2%
      Sensitivity 100%    Specificity 87.3%
      Positive Predictive Power 29.7%    Negative Predictive Power 100%

  32. O’Connor & Jenkins (1999) • Dynamic Assessment - phoneme segmentation - used Elkonin boxes to progressively teach segmentation of a set of test items words - score based on the number of trials needed to master the task

  33. O’Connor & Jenkins (1999), Oct of 1st predicting April of 1st (dynamic).

      Screen result    RD outcome    Normal outcome
      At risk          10 (TP)       9 (FP)
      Not at risk      1 (FN)        195 (TN)

      Base rate 5.1%    Risk rate 8.8%
      Sensitivity 90.9%    Specificity 95.6%
      Positive Predictive Power 52.6%    Negative Predictive Power 99.5%

  34. Compton, Fuchs, Fuchs, & Bryant (2006) • Screened in 1st grade (Oct), predicting end of 2nd • Measures - CTOPP Sound Matching - CTOPP Rapid Digit Naming - WJPB-R Oral Vocabulary - Word Identification Fluency (WIF): initial level and 5-week slope (sketched below)
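The WIF level-and-slope measure can be sketched as a least-squares fit over five weekly probes; the scores here are invented for illustration:

    import numpy as np

    weeks = np.arange(5)                  # five weekly WIF probes
    wif = np.array([12, 14, 13, 17, 18])  # words read correctly per minute

    slope, intercept = np.polyfit(weeks, wif, deg=1)  # least-squares line
    print(f"initial level ~ {intercept:.1f} wcpm, slope ~ {slope:.1f} wcpm/week")
    # -> initial level ~ 11.8 wcpm, slope ~ 1.5 wcpm/week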

  35. Grade 1 Word-Identification Fluency. Teacher: Read these words. Time: 1 minute. Sample items: two, for, come, because, last, from ...

  36. Compton et al. (2006), 1st (Oct) predicting end of 2nd. Screen: includes WIF level & slope (CTA).

      Screen result    RD outcome    Normal outcome
      At risk          35 (TP)       14 (FP)
      Not at risk      2 (FN)        155 (TN)

      Sensitivity 94.6%    Specificity 91.7%
      Positive Predictive Power 71.4%    Negative Predictive Power 98.7%

  37. Beyond First Grade • Most common screening for Tier 2 has been oral reading fluency (ORF) • ORF is strongly correlated with 3rd-grade state assessments • Strong correlations do not necessarily translate into high sensitivity and specificity • Measurement of level and slope may help (e.g., dual discrepancy; see the sketch below) • Must deal with potential scaling problems
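A minimal sketch of the dual-discrepancy idea: flag a student only when both ORF level and ORF growth trail classmates'. The 1-standard-deviation criterion is one common operationalization, assumed here for illustration; the peer data are invented:

    from statistics import mean, stdev

    def dual_discrepancy(level, slope, peer_levels, peer_slopes, sd_cut=1.0):
        """At risk only if BOTH level and growth trail peers by >= sd_cut SDs."""
        low_level = level < mean(peer_levels) - sd_cut * stdev(peer_levels)
        low_slope = slope < mean(peer_slopes) - sd_cut * stdev(peer_slopes)
        return low_level and low_slope

    peer_levels = [52, 60, 48, 65, 58, 55, 62]          # classmates' ORF (wcpm)
    peer_slopes = [1.4, 1.8, 1.2, 2.0, 1.6, 1.5, 1.9]   # wcpm gained per week
    print(dual_discrepancy(level=30, slope=0.5,
                           peer_levels=peer_levels,
                           peer_slopes=peer_slopes))    # -> True: flagged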

  38. What does research tell us about screening? • Can identify children at risk for reading problems • Can be done as early as the fall of kindergarten • Need to choose measures carefully • Must match measures to curriculum - letter naming - phonological awareness - word reading - text reading • Must not forget about other factors related to comprehension - oral language

  39. What does research tell us about screening? • False positive rates are high, and efforts need to be in place to limit the cost of overprediction • Progress monitoring within an RTI framework may serve this purpose • Forms need to be equated so scores fall on a common scale • Dynamic assessment has potential

  40. Screening for Possible Reading Risk Note: These figures may change pending additional RTI research.

  41. Tier 1–Primary Prevention: Confirming Risk Status With PM (Progress Monitoring) • At the end of 5-8 weeks, the student’s risk status is confirmed or disconfirmed. Note: These figures may change pending additional RTI research.

  42. Universal Literacy Screening: First Steps Toward Prevention & Intervention Dr. Marcia Invernizzi University of Virginia

  43. Universal Literacy Screening • Screen routinely • Fall – Mid-year – Spring • Why? • Ensures that students who need additional support do not go too long before receiving additional instruction/intervention. • Helps identify the “point of entry” into the tiers of RtI intervention & the kinds of supports needed. • Monitors student growth over time

  44. Phonological Awareness Literacy Screening (PALS) • The state-provided screening tool for Virginia’s Early Intervention Reading Initiative (EIRI) • Consists of three instruments: PALS-PreK (for preschoolers), PALS-K (for students in kindergarten), PALS 1-3 (for students in grades 1-3) • Measures young children’s knowledge of important literacy fundamentals

  45. PALS Instrument Content Areas

  46. Universal Literacy Screening • Purpose #1: • Identification of children in need of further assessment and/or intervention • Solution: • PALS class reports available on the PALS website: http://pals.virginia.edu

  47. Universal Literacy Screening • Purpose #2: • Provision of feedback about how a class is performing so that classroom-based curriculum or instructional issues can be identified as soon as possible. • Solution: • PALS class reports available on the PALS website: http://pals.virginia.edu

