
A School-Based Reading Program Evaluation


Presentation Transcript


  1. A School-Based Reading Program Evaluation Michael F. Lewis, Ph.D. Niagara Falls City School District

  2. Changing Education Funding • Current changes in educational funding • New federal grant opportunities • Private grant opportunities • This funding often requires increased accountability

  3. Accountability • The landscape in the U.S. has shifted to mandated levels of accountability in many areas due to recent events • Deceptive accounting in business • Enron • Abuse of school district budgets • Roslyn School District (New York)

  4. Accountability • Recently, when funding is awarded, accountability is achieved through program evaluation • Program evaluation must be conducted by independent evaluators • Mandated compensation (7%) • Strict accounting of expenditures • The budget is fixed

  5. What is Program Evaluation? • Definition: • a formalized approach to studying the goals, processes, and impacts of projects, policies and programs • can involve quantitative or qualitative methods of social research (or both) • People who do program evaluation come from many different backgrounds: • sociology, psychology, economics, social work

  6. Types of Program Evaluation • A needs assessment examines the nature of the problem that the program is meant to address • The program theory is the formal description of the program's concept and design • Process analysis evaluates how the program is being implemented • The impact evaluation determines the causal effects of the program • Cost-effectiveness analysis assesses the efficiency of a program.

  7. Impact Evaluation • Impact evaluation is the most common form of program evaluation • Determines (as well as possible) the effects of a particular program against some criteria • DARE (on decreasing drug use) • PBIS (on reducing negative behaviors in school) • RTI (on reducing # of students identified as LD)

  8. Evaluation as Research (as conceptualized by Stephen Truscott, Psy.D., Georgia State University) • Big ‘R’ research: university run, for publication, large scale • Little ‘r’ research: individually run, for information, small scale • Program Evaluation can be both or either

  9. The role of the School Psychologist • We do EVERYTHING!!! • SP has evolved: • We are now Jack (and mostly Jill) of all trades • We were special education evaluators • We are now increasingly responsible for many activities related to General Education • Including evaluation of programs

  10. Program Evaluation In Action • Niagara Falls School District evaluation of: • Fast ForWord (FFWD) • Computer-based • Auditory Processing and Literacy Skills • Timed ‘protocols’ for going through the program • 50-, 90-, and 120-minute protocols • The Problem: FFWD has never appeared in peer-reviewed literature

  11. The Problem (continued) • Fast ForWord • Proprietary program • No real scientific evaluation of the program • How do we know it is really effective? • We do a program evaluation on students who use FFWD…

  12. FFWD Evaluation • The Subject • Administration of FFWD to the entire 2nd grade • 50-minute protocol (run every day) • The Evaluation • Pre-test/Post-test design • Evaluate every 2nd grader in reading • Analyze findings for an increase in reading scores

  13. FFWD Evaluation • To evaluate you must have a measure • The Measure: GRADE (Group Reading Assessment and Diagnostic Evaluation) • Standardized, norm-referenced

  14. FFWD Evaluation • To evaluate you must have a measure • The other Measure: DRA (Diagnostic Reading Assessment) • This is a tool we use locally for reading level

  15. FFWD Evaluation • The Procedure: • Every 2nd grader participated • Daily 50-minute FFWD protocol • Ran for 20 weeks • Took GRADE before and after FFWD • Classroom instruction did not change

  16. FFWD Evaluation • The Procedure (a summary) 1) GRADE pre-test 2) 20 weeks of FFWD (50-minute protocol) 3) GRADE post-test 4) Score and analyze GRADE results 5) Determine effects of FFWD on reading

  17. FFWD Evaluation • Statistics: • Imported GRADE data into SPSS • SPSS: Statistical Package for the Social Sciences • Computed Paired Samples t-tests on all students with pre/post GRADE data (see the sketch below) • Same computation for DRA levels • This determined statistical significance • Not the whole story…
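
As a rough illustration only: the slides describe this analysis in SPSS, but the same paired-samples t-test could be run in Python. The file name and column names below are hypothetical placeholders, not the district's actual data layout.

```python
# Sketch of the paired-samples analysis described on the slide, using
# Python instead of SPSS. "grade_scores.csv", "grade_pre", and
# "grade_post" are hypothetical placeholders.
import pandas as pd
from scipy import stats

df = pd.read_csv("grade_scores.csv")
# Keep only students with both a pre-test and a post-test GRADE score
df = df.dropna(subset=["grade_pre", "grade_post"])

# Paired-samples t-test: post-test vs. pre-test for the same students
t_stat, p_value = stats.ttest_rel(df["grade_post"], df["grade_pre"])

print(f"n = {len(df)}, t = {t_stat:.2f}, p = {p_value:.4f}")
```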

  18. FFWD Evaluation • Statistics: • Statistical Significance vs. Effect Size • Effect size measures the magnitude of a change, independent of its statistical significance • Effect size is reported in standard deviation units • An evaluation can have statistical significance but a small effect size

  19. FFWD Findings • Paired Sample t-Test

  20. FFWD Findings • Paired Sample t-Test

  21. FFWD Findings • Statistical Significance vs. Effect Size • With a large sample it is highly likely that even a small change will reach statistical significance (see the simulation sketch below) • ≈380 students is a large sample size
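
To illustrate the large-sample point with simulated numbers (not the district's data): with roughly 380 paired scores, even a small true gain of about 0.15 standard deviations reaches statistical significance in most runs.

```python
# Simulation (illustrative only): with ~380 paired scores, a small true
# gain of ~0.15 SD reaches p < .05 in the large majority of runs.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, reps = 380, 1000
significant = 0
for _ in range(reps):
    pre = rng.normal(100, 15, n)         # pre-test standard scores
    gain = rng.normal(0.15 * 15, 15, n)  # small true gain plus noise
    t, p = stats.ttest_rel(pre + gain, pre)
    significant += p < 0.05

print(f"Runs reaching p < .05: {significant / reps:.0%}")
```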

  22. FFWD Findings • Effect size takes a more realistic look at the actual magnitude of significant findings • Hedges’ g: one way to calculate effect size • g = t√(n1 + n2) / √(n1n2), or • g = 2t / √N (see the sketch below) • Effect Size findings: • Vocabulary: g = .27 • Comprehension: g = .35 • Total Test: g = .33 • These are considered small effect sizes
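
A sketch of the slide's t-to-effect-size conversion, g = 2t / √N. The t statistics below are illustrative placeholders (chosen so the output is close to the reported g values); they are not the study's actual results.

```python
# The slide's conversion from a t statistic to an effect size, g = 2t / sqrt(N).
# The t values here are illustrative placeholders, not the study's actual statistics.
import math

def effect_size_g(t: float, n_total: int) -> float:
    """Effect size from a t statistic and total sample size N (slide formula)."""
    return 2 * t / math.sqrt(n_total)

N = 380  # approximate number of 2nd graders with pre/post data
for subtest, t in [("Vocabulary", 2.6), ("Comprehension", 3.4), ("Total Test", 3.2)]:
    print(f"{subtest}: g = {effect_size_g(t, N):.2f}")
```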

  23. Errors • Inefficiency • No need to test all students • A smaller random sample would maintain statistical meaning (see the power sketch below) • this was ignored by district administration • Administration • Testing all students reduces control of standard administration • this was ignored by district administration • Evaluation Design • No Control Group • Without a control group, there is no way to determine whether FFWD caused the increased GRADE scores
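
The inefficiency point can be made concrete with a quick power calculation, sketched below with statsmodels. The targets (a paired design, an effect of about 0.3 SD, α = .05, 80% power) are assumptions for illustration, not values taken from the slides.

```python
# Rough power calculation supporting the "smaller random sample" point.
# Assumed targets (not from the slides): paired design, effect size ~0.3 SD,
# alpha = .05, power = .80.
from statsmodels.stats.power import TTestPower

n_needed = TTestPower().solve_power(effect_size=0.3, alpha=0.05, power=0.80)
print(f"Students needed for 80% power at d = 0.3: about {n_needed:.0f}")
# Roughly 90 students, far fewer than the ~380 tested in the entire 2nd grade.
```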

  24. ???Questions??? • Program Evaluation Reference: • Posavac, E. & Carey, R. (2006). Program Evaluation: Methods and Case Studies (7th Ed.). Prentice Hall, New York, NY. • Contact: • mlewis@nfschools.net
