
Do My Data Count? Questions and Methods for Monitoring and Improving our Accountability Systems

This presentation explores questions to establish the validity of accountability systems and provides state examples for analyzing outcome data. It discusses methods to gather, interpret, and use evidence for improvement.



Presentation Transcript


  1. Do My Data Count? Questions and Methods for Monitoring and Improving our Accountability Systems
  Dale Walker, Sara Gould, Charles Greenwood and Tina Yang, University of Kansas, Early Childhood Outcomes Center (ECO)
  Marguerite Hornback, Kansas Leadership Project, 619 Liaison
  Marybeth Wells, Idaho 619 Coordinator

  2. Acknowledgement
  Thanks are due to our Kansas colleagues who assisted with the development, administration, and analysis of the COSF Survey and team process videos, and to the Kansas Part C and Kansas and Idaho Part B professionals who participated in the COSF process. Appreciation is also extended to our ECO and Kansas colleagues for always posing the next question.

  3. Purpose of this Presentation
  • Explore a range of questions to assist states in establishing the validity of their accountability systems
  • Illustrate with state examples how outcome data may be analyzed
  • Discuss ways to gather, interpret, and use evidence to improve accountability systems
  • Information synthesized from the Guidance Document on Child Outcomes Validation, to be distributed soon!

  4. Validity of an Accountability System
  • An accountability system is valid when evidence is strong enough to conclude that:
    • The system is accomplishing what it was intended to accomplish and not leading to unintended results
    • System components are working together toward accomplishing the purpose

  5. What is Required to Validate our Accountability Systems?
  • Validity requires answering a number of logical questions demonstrating that the parts of the system are working as planned
  • Validity is improved by ensuring the quality and integrity of the parts of the system
  • Validity requires continued monitoring, maintenance, and improvement

  6. Some Important Questions for Establishing the Validity of an Accountability System
  • Is fidelity of implementation of measures high?
  • Are measures sensitive to individual child differences and characteristics?
  • Are the outcomes related to the measures?
  • What are the differences between entry and exit data?
  • Are outcomes sensitive to change over time?
  • Are those participating in the process adequately trained?

  7. What Methods can be used to Assess System Fidelity?
  • COSF ratings and the rating process (including types of evidence used, e.g., parent input)
  • Team characteristics of those determining ratings
  • Meeting characteristics or format
  • Child characteristics
  • Demographics of programs or regions
  • Decision-making processes
  • Training information
  • Comparing ratings over time

  8. Fidelity: Analysis of the Process to Collect Outcomes Data: Video Analysis
  • Video observation
    • 55 volunteer teams in KS submitted team meeting videos and matching COSF forms for review
    • The sample was intended to be representative of the state
  • Videos coded for:
    • Team characteristics
    • Meeting characteristics
    • Evidence used
    • Tools used (e.g., ECO decision tree)
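
  A minimal sketch of how such codes could be tallied once coding is complete, assuming a hypothetical coded_videos.csv with one row per video and a 0/1 column per code; all file and column names here are illustrative, not the actual Kansas coding scheme:

    import pandas as pd

    # Hypothetical coded-video file: one row per team meeting video,
    # one 0/1 column per code applied during review.
    videos = pd.read_csv("coded_videos.csv")

    code_cols = ["used_parent_input", "used_assessment_tool",
                 "used_decision_tree", "discussed_observations"]

    # Percentage of videos in which each code was observed
    print((videos[code_cols].mean() * 100).round(1))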

  9. Fidelity: Analysis of the Process to Collect Data Using Surveys
  • Staff surveys
    • Presented and completed online using Survey Monkey
    • 279 surveys were completed
    • Analyzed by research partners
    • May be summarized using Survey Monkey or another online data system

  10. Fidelity: Analysis of the Process to Collect Data Using State Databases
  • Kansas provided Part C and Part B data
  • Idaho provided Part B data
  • Data included: COSF ratings, OSEP categories, and child characteristics
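
  A sketch of how extracts like these might be combined for analysis; the idea that each extract arrives as a separate CSV, and the file names, are assumptions for illustration:

    import pandas as pd

    # Hypothetical extracts; each state's real database layout will differ.
    ks_c = pd.read_csv("ks_part_c.csv").assign(state="KS", program="Part C")
    ks_b = pd.read_csv("ks_part_b.csv").assign(state="KS", program="Part B")
    id_b = pd.read_csv("id_part_b.csv").assign(state="ID", program="Part B")

    # Stack into one analysis file keyed by state and program
    combined = pd.concat([ks_c, ks_b, id_b], ignore_index=True)
    print(combined.groupby(["state", "program"]).size())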

  11. Fidelity: Types of Evidence Used in COSF Rating Meetings (videos only)
  • Child strengths (67-73% across outcome ratings)
  • Child areas to improve (64-80%)
  • Observations by professionals (51-73%)

  12. Fidelity: Types of Evidence Used in COSF Rating Meetings (videos and surveys)
  • Assessment tools
    • Video: 55% used for all 3 ratings
    • Survey: 53% used one of Kansas’ most common assessments
  • Parent input incorporated
    • Video: 47%
    • Survey: 76%
      • 39% contribute prior to the meeting
      • 9% rate separately
      • 22% attend the COSF rating meeting

  13. Fidelity: How can we interpret this information?
  • Assessment use
    • About half are consistently using a formal set of questions to assess child functioning
  • Parent involvement
    • Knowing current levels of involvement tells us how much to emphasize it in training
    • Help teams problem-solve to improve parent involvement

  14. Fidelity: Connection between COSF and Discussion (Video)
  • 67% documented assessment information but did not discuss results during meetings
  • 44% discussed observations during meetings but did not document them in paperwork

  15. How Information about the Process has Informed QA Activities
  • Used to improve the quality of the process
  • Refine the web-based application fields
  • Improve training and technical assistance
  • Refine research questions
  • Provide valid data for accountability and program improvement

  16. Are Measures Sensitive to Individual and Group Differences and Characteristics?
  • An essential feature of measurement is sensitivity to individual differences in child performance
  • Child characteristics
    • Principal exceptionality
    • Gender
  • Program or regional differences

  17. Frequency Distribution for one state’s three OSEP Outcomes for Part B Entry

  18. Frequency Distribution for one state’s three OSEP Outcomes for Part C Entry

  19. Interpreting Entry Rating Distributions
  • If ratings are sensitive to differences in child functioning, there should be children in every category
  • There should be more children in the middle than at the extremes (1s and 7s)
  • 1s should reflect very severe exceptionalities
  • 7s are children functioning at age level with no concerns; not many of them should be receiving services
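
  A minimal pandas sketch of the tabulation behind such a check, assuming a hypothetical entry_ratings.csv with one row per child and one 1-7 rating column per outcome (column names are illustrative):

    import pandas as pd

    df = pd.read_csv("entry_ratings.csv")  # hypothetical extract

    # COSF ratings run 1-7; tabulate the percentage at each rating so we can
    # check for children in every category, a middle-heavy shape, and few 7s.
    for outcome in ["social", "knowledge", "meets_needs"]:  # illustrative names
        counts = df[outcome].value_counts().reindex(range(1, 8), fill_value=0)
        pct = (counts / counts.sum() * 100).round(1)
        print(outcome, pct.to_dict())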

  20. Social Entry Rating by State

  21. Interpreting Exit Ratings
  • If the distribution stays the same as at entry: children are gaining at the same rate as typical peers, but not catching up
  • If the distribution moves “up” (ratings get higher): children are closing the gap with typical peers
  • If ratings are still sensitive to differences in functioning, there should still be variability across ratings

  22. Interpreting Social Exit Ratings

  23. How can we interpret changes in ratings over time?
  • Difference = 0: not gaining on typical peers, but still gaining skills
  • Difference > 0: gaining on typical peers
  • Difference < 0: falling farther behind typical peers
  • If the system is effectively serving children, we would expect to see more of the first two categories than the last
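
  A sketch of computing these difference categories, assuming a hypothetical file pairing each child’s entry and exit ratings (file and column names are illustrative):

    import pandas as pd

    df = pd.read_csv("entry_exit.csv")  # hypothetical paired file

    # Difference score per child for one outcome (exit minus entry)
    df["social_diff"] = df["social_exit"] - df["social_entry"]

    # Classify each child: falling behind (< 0), same rate (= 0), gaining (> 0)
    labels = pd.cut(df["social_diff"], bins=[-7, -1, 0, 7],
                    labels=["falling behind", "same rate", "gaining"])
    print(labels.value_counts(normalize=True).round(2))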

  24. Social Rating Differences by State

  25. Are a State’s OSEP Outcome Scores Sensitive to Progress Over Time? Examples from 2 States

  26. Distributions Across Knowledge and Skills Outcome at Entry and Exit

  27. Distributions Across Social Outcome at Entry and Exit

  28. Comparison of State Entry Outcome Data from 2007 and 2008

  29. Importance of Looking at Exceptionality Related to Outcome
  • Ratings should reflect child exceptionality, because an exceptionality affects functioning
  • DD (developmental delay) ratings should generally be lower than SL (speech/language) ratings, because DD is a more pervasive exceptionality

  30. Meets Needs by Principal Exceptionality and COSF Rating

  31. Meets Needs by Principal Exceptionality and OSEP Category

  32. Interpreting Exceptionality Results
  • Different exceptionalities should lead to different OSEP categories
  • More SL children in category E (rated higher to start with; SL is less pervasive, so gains are easier to achieve)
  • More DD children in category D (gaining, but still some concerns; DD is more pervasive, so gains are harder to achieve)
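
  One way to check this expectation is a crosstab of principal exceptionality against OSEP progress category; a sketch, with hypothetical file and column names:

    import pandas as pd

    df = pd.read_csv("exit_records.csv")  # hypothetical extract

    # Row percentages: within each principal exceptionality, the share of
    # children landing in each OSEP progress category (a-e)
    table = pd.crosstab(df["exceptionality"], df["osep_category"],
                        normalize="index") * 100
    print(table.round(1))
    # Expectation from the slide: relatively more SL children in category E,
    # relatively more DD children in category D.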

  33. Gender Differences
  • Ratings should generally be consistent across gender. If not, ratings or criteria might be biased.
  • Need to ensure that gender differences aren’t really exceptionality differences
    • Some diagnoses are more common in one gender than in the other

  34. Entry Outcome Ratings by Gender

  35. Mean Differences and Ranges in the 3 Outcomes by Gender

  36. Gender and Exceptionality

  37. Importance of Exploring Gender Differences by Exceptionality
  • Because boys and girls are classified as DD, and as SL, at the same rates in these data, the observed rating differences are not the result of exceptionality differences
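
  A sketch of the two-step check described above, with hypothetical file and column names: first verify that the exceptionality mix is comparable across genders, then compare ratings by gender within each exceptionality:

    import pandas as pd

    df = pd.read_csv("entry_ratings.csv")  # hypothetical extract

    # First confirm the exceptionality mix is similar for boys and girls...
    print((pd.crosstab(df["gender"], df["exceptionality"],
                       normalize="index") * 100).round(1))

    # ...then compare mean entry ratings by gender within each exceptionality,
    # so an apparent gender gap cannot just reflect a different diagnosis mix.
    print(df.groupby(["exceptionality", "gender"])["social"].mean().round(2))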

  38. Program or Regional Differences in Distribution of Outcome Scores
  • If programs in different parts of the state are serving similar children, then ratings should be similar across programs
  • If ratings are different across programs with similar children, check assessment tools, training, and meeting/team characteristics
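
  A sketch of comparing rating distributions across regions, with a chi-square test to flag regions that differ; file and column names are hypothetical:

    import pandas as pd
    from scipy.stats import chi2_contingency

    df = pd.read_csv("entry_ratings.csv")  # hypothetical extract

    # Rating distribution (1-7) by program or region, as row percentages
    print(pd.crosstab(df["region"], df["social"], normalize="index").round(2))

    # A significant chi-square flags regions whose distributions differ;
    # follow up on tools, training, and team characteristics before
    # concluding that the children themselves differ.
    chi2, p, dof, _ = chi2_contingency(pd.crosstab(df["region"], df["social"]))
    print(f"chi2={chi2:.1f}, df={dof}, p={p:.4f}")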

  39. Program or Regional Differences in Distribution of Outcome Scores

  40. Are the 3 Outcomes Related?
  • Because the three outcomes are functional rather than domain-based, we expect patterns of relationships across them, rather than each outcome mapping cleanly onto a single domain

  41. Correlations Across Outcomes at Entry

  42. Mean Correlations Between COSF Outcome Ratings and BDI Domain Scores
  • Social vs. BDI Personal-Social: r = .65
  • Knowledge vs. BDI Cognitive: r = .62
  • Meets Needs vs. BDI Adaptive: r = .61
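
  A sketch of both analyses, the inter-outcome correlations (slide 41) and the COSF-BDI correlations above, assuming a hypothetical merged file; all column names are illustrative:

    import pandas as pd

    df = pd.read_csv("cosf_bdi.csv")  # hypothetical merged file

    # Correlations among the three COSF outcome ratings at entry
    ratings = ["social", "knowledge", "meets_needs"]  # illustrative names
    print(df[ratings].corr().round(2))

    # Pearson correlation between each COSF rating and its closest BDI domain
    pairs = [("social", "bdi_personal_social"),
             ("knowledge", "bdi_cognitive"),
             ("meets_needs", "bdi_adaptive")]
    for cosf, bdi in pairs:
        print(f"{cosf} vs {bdi}: r = {df[cosf].corr(df[bdi]):.2f}")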

  43. Outcome Rating Differences by Measure
  • Use of different measures may be associated with different ratings, because measures provide different information
  • Different measures may also be associated with different exceptionalities
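
  A sketch of the comparison this points to (and that the next slide charts), with hypothetical file and column names: mean ratings by assessment tool, plus a check for confounding with exceptionality:

    import pandas as pd

    df = pd.read_csv("entry_ratings.csv")  # hypothetical extract

    # Mean Knowledge and Skills rating by assessment tool, with counts,
    # since small cells make means unstable
    print(df.groupby("assessment_tool")["knowledge"]
            .agg(["mean", "count"]).round(2))

    # Also check whether tools are confounded with exceptionality
    # (e.g., a speech-focused tool used mostly with SL children)
    print(pd.crosstab(df["assessment_tool"], df["exceptionality"],
                      normalize="index").round(2))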

  44. Mean Knowledge and Skills Outcome Differences as a Function of Measure

  45. Interpreting Team and Meeting Characteristics
  • Team characteristics
    • Team size and composition
  • Meeting characteristics
    • How teams meet
    • How parents are included

  46. Team Composition
  • Video: 93% of teams included 2-4 professionals (35% included an SLP, 30% an ECE)
  • Survey: 85% reported 2-4 professionals (95% included an SLP, 70% an ECE)

  47. How Do Teams Complete Outcome Information?
  • Do teams meet to determine ratings? (survey)
    • 41% always meet as a team
    • 42% sometimes meet as a team
    • 22% members contribute, but one person rates
    • 5% one person gathers all information and makes the ratings
  • How teams meet, at least sometimes (survey)
    • In person: 92%
    • Phone: 35%
    • Email: 33%
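
  The “how teams meet” item is multi-select, which is why its percentages can exceed 100%. A sketch of tabulating such an item; the indicator column names are assumptions about the survey export:

    import pandas as pd

    survey = pd.read_csv("staff_survey.csv")  # hypothetical export

    # One 0/1 indicator column per meeting mode (names illustrative);
    # each mean is the share of respondents who selected that mode.
    modes = ["meets_in_person", "meets_by_phone", "meets_by_email"]
    print((survey[modes].mean() * 100).round(0))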

  48. What Does Team Information Provide that is Helpful for Quality Assurance?
  • The COSF process is intended to involve teams; this happens some of the time
  • Teams are creative in how they meet, likely due to logistical constraints
  • Checks the fidelity of the system (whether it is being used as planned)
  • If we know how teams are meeting, we can modify training to accommodate them

  49. Decision-Making Process Followed by Teams
  • Decision-making processes observed:
    • Standardized steps
    • Consensus reached by teams
    • Deferring to a leader

  50. What Steps Did Teams Use to Make Decisions?
  • Use of crosswalks (survey)
    • 59% reported that their team used a crosswalk
    • 94% of those reported using it to map assessment items and sections to COSF outcomes
  • ECO decision tree use
    • Video: 95% used the decision tree
      • 6% used it without discussing evidence (yes/no at each step)
      • Others discussed evidence at each step, then rated and documented, or discussed and documented at each step
    • Survey: 81%
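
  At its core, a crosswalk is a lookup from assessment domains or sections to the three OSEP child outcomes. A minimal sketch of that data structure; the domain-to-outcome entries below are illustrative, not an official crosswalk (real crosswalks map at the item level):

    # Lookup from assessment domain/section to OSEP child outcome.
    # Entries are illustrative only.
    CROSSWALK = {
        "personal-social": "Outcome 1: Positive social-emotional skills",
        "cognitive": "Outcome 2: Acquiring and using knowledge and skills",
        "communication": "Outcome 2: Acquiring and using knowledge and skills",
        "adaptive": "Outcome 3: Taking appropriate action to meet needs",
    }

    def outcome_for(domain: str) -> str:
        """Return the OSEP outcome a scored domain or section feeds into."""
        return CROSSWALK[domain.lower()]

    print(outcome_for("Cognitive"))  # -> Outcome 2: ...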
