
Final Summary Evaluation: State Projects Serving Individuals with Deaf-Blindness






Presentation Transcript


  1. Final Summary Evaluation: State Projects Serving Individuals with Deaf-Blindness October 23, 2006 NCDB Sponsored Webinar Presented by Richard Zeller

  2. Presentation Overview • Self-assessment and verification review purpose and design • Summary of Year One (37 projects) and Year Two (11 projects) self-assessments • Summary of on-site reviews • Feedback from projects and review teams • Evaluator Recommendations • Discussion

  3. Evaluation Requirement During year 2, each project must... "conduct a comprehensive self-evaluation. The evaluation must include a review of the degree to which the project is meeting proposed objectives and goals and an evaluation of outcome data... In addition, the Department of Education intends to conduct a limited number of on-site evaluations based on a stratified randomized sample of sites.” (RFP, page C-4)

  4. Evaluation Purposes • Summative project evaluation questions: • Are projects’ goals and objectives being achieved? • Do projects address each RFP priority? • Do projects have appropriate outcome data? • Formative project evaluation: Provide a continuous improvement process for individual projects to use. • National report - both summative and formative: provide information OSEP can use to guide needed system improvements.

  5. Evaluation Constraints • 48 projects - single- and multi-state • Staffing ranges from a partial FTE to several FTEs • Common design to allow summaries • Resources come primarily from the projects • Assess whether projects are addressing RFP priorities

  6. Evaluation Design • “Work scope” standards: Priorities (a) - (i) and General Requirements (a) - (c) • Priority questions: • What types of strategies are used? • Is work being completed in a timely fashion? • Are intended results being achieved? • Are outcome data available (efforts and effects)? • Are improvement plans in effect? • “General Requirements:” Are these requirements being appropriately addressed?

  7. Evaluation Design (continued) • Self-assessments parallel MSIP’s Continuous Improvement and Focused Monitoring System (now the SPP & APR). • Verification Reviews (site visits): during this evaluation, reviews became a check on the self-evaluation process and a way to provide TA to the project. • Adjustments were made in both the self-assessment and review designs during implementation.

  8. Self-Assessment Summary Priorities (a) - (e): • Strategies • Timeliness • Results • Data - Effort and Effect • Adjustments/Future Plans Priorities (g) - (h) and General (a) - (c): • Are the priorities addressed (Yes/No) • Are there standards that apply?

  9. Strategies Described While the relative use of “ongoing” strategies was higher in Year Two, those strategies were also spread across more projects than in Year One.

  10. For all timeliness item ratings:

  11. For all result item ratings:

  12. For all effort item ratings: Ratings cluster: 3 projects in year one and 2 in year two rated “extensive” data for more than 6 items

  13. For all effect item ratings: • Clarification: • Outcomes are the immediate results of your assistance (e.g., teacher skills gained in training) • Impacts are results your clients have when they apply what you have taught them (e.g., they teach & children learn communication skills)

  14. Overall Item Ranking (high to low)

  15. Areas Needing More Attention? • (c)(2) Measure child outcomes • (a)(2) Systemic change • (b)(3) Assess state needs • (b)(2) Assess child needs • (c)(1) Evaluate effectiveness • (c)(3) Advisory evaluation design?

  16. Adjustments/Future Plans • All projects/all strategies: about 81% (Year One) v. 63% (Year Two) of strategies are to “continue as proposed” • In Year One, 8 projects accounted for 55% of planned changes • In Year Two, all projects plan some changes in strategy, with 7 adopting new strategies • The most common areas of adjustment across both years were priorities (c)(1), evaluation and (c)(2), measurement of child outcomes

  17. Priorities (g) & (h)

  18. General Priorities (a) - (c)

  19. How Projects Relate Priorities to Work Design (Part 1, Year 2)

  20. Total Priority Cites by 11 Projects (Year 2, Part 1)

  21. Verification Visit Summary • Sites Visited (in order): • Year One: IN, FL, NJ, NY, WA, MO, CO, MT, CA • Year Two: KY, MI, NC, TN • Process: A team of 3 reviewers visited each site; each reviewer rated his or her agreement with the project’s self-assessment rating for each priority and offered comments on each priority. • Revisions to the process and report form were made during the first year (simplifications) and again before Year Two (in response to suggestions).

  22. Site Review Participation
      Site   # Staff   # Stakeholders
      IN     4         20
      FL     5         12
      NJ     5         8
      NY     5         19
      WA     4         14
      CO     4         9
      MT     4         10
      MO     5         7
      CA     8         12
      KY     6         5
      MI     5         13
      NC     2         12
      TN     3         5

  23. Who were the reviewers and how many sites did they visit?
      Reviewer            Year One   Year Two
      Zambone             1          —
      Sharpton            1          1
      Bove                2          —
      McLetchie           2          2
      Rafalowski Welch    2          —
      Syler               2          2
      Fankhauser          3          1
      Dalke               4          3
      Rachal              4          2
      Steveley            6          1

  24. Did site reviews tend to validate project self-assessments? Agreement was tallied as each reviewer’s rating of agreement with each project item’s self-assessment rating.
      Agreed with Project    Year One   Year Two
      Strongly Agree         85%        60%
      Mostly Agree           10%        25%
      Somewhat Agree         3%         2%
      Somewhat Disagree      2%         1%
      Strongly Disagree      <1%        0%

  25. Were reviewer ratings (after discussion) reliable? • Agreement here was defined as “all reviewers rated the same way on a given item” • Year One: 32 actual disagreements, or 96.9% complete agreement on all items • Year Two: 1 disagreement, or 99%+ agreement on all items • Disagreements among reviewers occurred at only 5 sites, most of them at one site
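
A rough sketch of the arithmetic behind these percentages, assuming they express the share of rated items on which all three reviewers gave the same rating (the item totals below are inferred, not stated in the presentation):

\[
\text{agreement} = 1 - \frac{\text{items with any disagreement}}{\text{total items rated}},
\qquad
1 - \frac{32}{N} \approx 0.969 \;\Rightarrow\; N \approx 1{,}030 \text{ Year One item ratings}
\]

On the same reading, Year Two’s single disagreement yields better than 99% agreement for any total above 100 rated items.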

  26. How did projects and reviewers view the value of these two processes? • Self-assessment and improvement planning is a necessary function for the system of state projects. • The past and current processes and forms are complex and redundant, given the way the work is organized. • Both processes (self-assessments and review visits) have value, but both need substantial redesign.

  27. Projects’ View of the Value of the Self-Assessments
      Project Ratings (# reporting)       Year 1   Year 2
      High value                          9        3
      Moderate value                      18       7
      Some value                          9        1
      More trouble than it was worth      1        —

  28. Year Two

  29. What Some Projects Liked • Prompted communication with state program sites • Forced staff to consider the value of their work • The forms forced the project to focus and limit narrative • Improvement over earlier self-evaluation processes • Proposal was aligned to the RFP, so the form was not hard to use • Format was easy and more logical [than Year 1] • Separate narrative allowed the project to show how priorities were woven into goals & objectives

  30. What Projects Didn’t Like • Priorities, criteria & proposed work not aligned • Evaluation rules were not in the RFP • Accessibility problems with form • Redundancy (e.g., attachments & narratives) • Word functions don’t work in the template form • Too many reports for one year • Too much time - takes away from TA • Form accessibility (couldn’t enlarge print?) • Narrative, priorities & ratings in three documents • Format - impossible to match priorities to activities

  31. Review Team Recommendations to Sites: • Expand partnerships (Part B, Part C, Section 619, others) - others must do the work of system change • Family networking/support (parent-to-parent) • Define/structure TA and its intent - child change, local capacity building, systems change • Systematize data collection (census, needs, efforts and effects on individuals/systems) • Use evaluation for program improvement

  32. Review Team Suggestions In future evaluation & review processes: • Better self-assessment instructions • Consolidate the Progress Report & Project Evaluation • Clarify evaluation standards in the RFP • Cluster priorities (eliminate redundancies) • The value of the review process is the TA provided • Effort & effect need better definition • Change forms: neither the Year 1 nor the Year 2 form worked for all • Align priorities and the evaluation model

  33. Evaluator Recommendations • The next RFP should have 5 program priorities (e.g., skill development, system capacity/change, child census/performance, family services, dissemination of R-B practices) • Combine self-assessment and reporting in a single system with prescribed indicator measures for each priority for all projects • For larger projects (>$500K), adopt standard 3+2 procedures

  34. Discussion: Were Evaluation Purposes Achieved? • Summative project evaluation questions: • Are projects’ goals and objectives being achieved? • Do projects address each RFP priority? • Do projects have appropriate outcome data? • Formative project evaluation: Provide a continuous improvement process for individual projects to use. • National report - both summative and formative: provide information OSEP can use to guide needed system improvements.
