External Evaluation of the 2011 – 2014 Demonstration Project


Presentation Transcript


  1. External Evaluation of the 2011 – 2014 Demonstration Project Presented at October 2013 Replication Forum

  2. External Evaluators • The Meadows Center for Preventing Educational Risk • PI: Dr. Saro Mohammed • Researchers: Myriam Lopez, Deborah Van Kummer • Concordia University • Site visits: Students in the Educational Administration Master’s program

  3. Logic Model

  4. Theory of Change – Regional Level [Funders, partners, Region 13 ESC, and participating schools] [train and support PDers and coaches] to [change the number of regional PDers, conferences, and schools] leading to [regional collaboration between PDers, districts, and schools] and eventually [embedding, awareness, and use of SIM regionally]

  5. Theory of Change – School Level [Teachers of struggling students – highly mobile, economically disadvantaged, with limited English proficiency, experiencing achievement gaps in reading] [receive training, feedback, and support, and implement SIM] to [change the number of classes and students using SIM] leading to [teacher collaboration, student engagement, academic achievement, and accurate SLD referrals] and eventually [multidisciplinary student use of SIM, positive student behaviors, and high school outcomes]

  6. Evaluation Methods • Based on a detailed evaluation logic model • Schools evaluated on: • Outputs: process metrics (teachers trained, reviews, etc.) and implementation fidelity (practices observed in walk-throughs, student feedback, etc.) • Outcomes: change over the year for struggling learners; TAKS/STAAR scale score comparison for all students, and raw scores for struggling students, versus comparison schools matched on size, demographic make-up, and previous results

  7. Data Sources • All data collected by program staff EXCEPT • Site visits: classroom walkthroughs, device checklists, LLT meeting observations • State Assessments: TAKS 2011, STAAR 2012, STAAR 2013

  8. Outputs – Fidelity of Implementation (School Level) • Implementation has improved over 2 years

  9. Outputs – Fidelity of Implementation • Implementation is widespread

  10. Outcomes • Populations (defined in Fall 2011) • Struggling learners (project schools only): Gates standard score of 85 or less; pre- and post-test scores (typically beginning and end of year) • All students: took the regular TAKS & STAAR (not modified versions of the tests)
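As a rough illustration of that cutoff: under the usual standard-score norms (mean 100, SD 15 — an assumption about the Gates scaling here), a score of 85 sits near the 16th percentile. A minimal Python sketch with invented records:

```python
from scipy.stats import norm

STRUGGLING_CUTOFF = 85  # standard score at or below this flags a struggling learner

def standard_score_to_percentile(score, mean=100.0, sd=15.0):
    """Convert a standard score to a national percentile via the normal CDF."""
    return 100.0 * norm.cdf((score - mean) / sd)

# Hypothetical fall screening records: (student_id, Gates standard score)
fall_scores = [("s01", 82), ("s02", 97), ("s03", 85)]

struggling = [sid for sid, score in fall_scores if score <= STRUGGLING_CUTOFF]
print(struggling)                               # ['s01', 's03']
print(round(standard_score_to_percentile(85)))  # 16 -> roughly the 16th percentile
```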

  11. Findings – Reading (Struggling students, Gates)

  12. Findings – Reading (Struggling students, Gates, Year 1) • For students identified as struggling in year 1, percentile changes in Gates from pre-test to post-test were notable: • 6th grade: growth from the 3rd to the 6th percentile; n=138 • 7th grade: growth from the 4th to the 10th percentile; n=124 • 8th grade: growth from the 5th to the 9th percentile; n=104

  13. Findings – Reading (Struggling students, Gates, Year 2) • Also, for students identified as struggling in year 2, percentile changes in Gates from pre-test to post-test were notable: • 6th grade: growth from the 6th to the 12th percentile; n=118 • 7th grade: growth from the 2nd to the 7th percentile; n=209 • 8th grade: growth from the 5th to the 6th percentile; n=154
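A minimal sketch of how grade-level figures like these could be produced from student records — grouping pre- and post-test percentiles by grade and reporting medians. The aggregation choice and the data are assumptions, not the evaluators' stated method:

```python
from collections import defaultdict
from statistics import median

# Hypothetical student records: (grade, pre-test percentile, post-test percentile)
records = [
    (6, 3, 7), (6, 2, 5), (6, 4, 6),
    (7, 4, 10), (7, 5, 9),
    (8, 5, 8), (8, 6, 10),
]

by_grade = defaultdict(list)
for grade, pre, post in records:
    by_grade[grade].append((pre, post))

for grade in sorted(by_grade):
    pres, posts = zip(*by_grade[grade])
    print(f"Grade {grade}: median pre={median(pres)}, post={median(posts)}; n={len(pres)}")
```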

  14. Comparison Schools • Created a “focal, local, comparison group” • Schools were matched on (in priority order): number of students, economically disadvantaged (Eco Dis) percentage, bilingual/LEP percentage, mobility percentage, ethnic make-up of the student population, and historical TAKS performance • Matched schools were kept within the same district where possible • All matched schools were within Region 13
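One plausible way to implement a prioritized match like this is a lexicographic nearest-neighbor search: compare candidates criterion by criterion in the listed order, preferring same-district schools. The slide gives the criteria order but not the algorithm, so the scheme, field names, and data below are assumptions:

```python
# Matching criteria in the slide's priority order; each is a numeric school attribute.
CRITERIA = ["enrollment", "eco_dis_pct", "lep_pct", "mobility_pct", "minority_pct", "hist_taks"]

def best_match(project_school, candidates):
    """Pick the candidate with the smallest attribute differences, compared
    criterion by criterion in priority order (lexicographic tuple ordering)."""
    def distance(cand):
        return tuple(abs(cand[c] - project_school[c]) for c in CRITERIA)
    # Prefer same-district candidates when any exist, per the slide.
    same_district = [c for c in candidates if c["district"] == project_school["district"]]
    pool = same_district or candidates
    return min(pool, key=distance)

# Hypothetical schools
project = {"district": "A", "enrollment": 820, "eco_dis_pct": 74, "lep_pct": 18,
           "mobility_pct": 22, "minority_pct": 81, "hist_taks": 2100}
candidates = [
    {"name": "X", "district": "A", "enrollment": 790, "eco_dis_pct": 70, "lep_pct": 15,
     "mobility_pct": 20, "minority_pct": 78, "hist_taks": 2080},
    {"name": "Y", "district": "B", "enrollment": 825, "eco_dis_pct": 76, "lep_pct": 19,
     "mobility_pct": 23, "minority_pct": 83, "hist_taks": 2110},
]
print(best_match(project, candidates)["name"])  # 'X' (same district takes precedence)
```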

  15. Findings – Reading (All students) • No significant effects on reading yet for schools overall • Posttest (STAAR 2012 & STAAR 2013) means were adjusted for pretest (TAKS 2011 & STAAR 2012, respectively) for 7th and 8th graders • Pooled standard deviations and pretest-adjusted posttest means were used where available • Within-grade effect sizes ranged from -0.09 to 0.05
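A minimal sketch of the kind of effect-size computation described: regress posttest scores on pretest scores plus a project/comparison indicator to get the pretest-adjusted group difference, then divide by the pooled posttest standard deviation. The slide names these ingredients but not the exact model, and the data below are simulated:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated scale scores: pretest (e.g. TAKS 2011) and posttest (e.g. STAAR 2012)
n = 200
pre = rng.normal(2100, 100, 2 * n)
group = np.repeat([1, 0], n)              # 1 = project school, 0 = matched comparison
post = 0.8 * pre + group * 2.0 + rng.normal(0, 60, 2 * n)

# ANCOVA via least squares: post ~ intercept + pre + group
X = np.column_stack([np.ones_like(pre), pre, group])
coef, *_ = np.linalg.lstsq(X, post, rcond=None)
adjusted_diff = coef[2]                   # group difference, adjusted for pretest

# Pooled posttest standard deviation across the two groups
sd1, sd0 = post[group == 1].std(ddof=1), post[group == 0].std(ddof=1)
pooled_sd = np.sqrt(((n - 1) * sd1**2 + (n - 1) * sd0**2) / (2 * n - 2))

print(f"effect size d = {adjusted_diff / pooled_sd:.2f}")  # small, near zero
```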

  16. Evaluation Summary • Implementation process: project schools are being trained and supported in their implementation as intended • Output metrics (PD goals, practice usage) are generally being achieved and are acceptably consistent across schools • For struggling students, trends are positive and noteworthy • On proximal measures of reading, students who continue to struggle from year to year outpace expected annual growth (as determined by national norms) • In the first and second years of implementation, as expected, there is no statistical difference in distal outcomes (state assessments) between project and match schools • Student academic growth was much greater in most RAISEup schools than in comparison schools
