
“Performance” Accountability System: Key Considerations for Accountability System Design

This article discusses key considerations in designing a performance accountability system, including goals, performance indicators, design decisions, consequences, communication, and support. It also examines tradeoffs such as simplicity versus validity and rich data versus data burden, and reviews the requirements of the SB 180 Task Force.



Presentation Transcript


  1. “Performance” Accountability System: Key Considerations for Accountability System Design. Scott Marion, Center for Assessment, August 25, 2009

  2. Perie, Park, & Klau (CCSSO, 2007) Seven core design principles: Goals; Performance Indicators; Design Decisions; Consequences; Communication; Support; System Evaluation, Monitoring, and Support

  3. Goals: What are we trying to do? It is critical to be explicit about the goals and purposes of the accountability system, as well as the uses of the results. These decisions will serve as touchstones during subsequent design decisions. Thinking about the mechanisms for how the goals, purposes, and uses will be fulfilled is also very important.

  4. Reports. Reports make the assessment results actionable: teachers and leaders do not have time to do in-depth analyses; therefore, the reports must lead to valid decisions. Designing the reports up front is a useful way to help think about accountability system design.

  5. Outcomes/Consequences. How will results be used (e.g., public reporting, rewards) to create incentives for intended behavior? How can we avoid unintended outcomes? We need to be clear about the manner in which the results will be used. We are limited in this regard by the constraints in SB 180.

  6. Utility. We argue that utility is one of the most important considerations in accountability designs. In other words, how will the system be designed so that it can most effectively fulfill its intended purposes?

  7. Tradeoffs. As the old saying goes, “there is no free lunch”: accountability system designers attempt to find the perfect balance point among multiple constraints, pressures, and goals, because we can rarely (if ever) get exactly what we want. We highlight just a few tradeoffs here because we think they are especially applicable to this work.

  8. Simplicity vs. Validity. It has been said that for every complex problem there is a simple solution… except it is wrong! We want to search for the simplest and most elegant solution possible, but the simplest is often not the fairest. Often the fairest solution can be very complex, which makes it tough to understand and communicate. We need to find the balance.

  9. Rich Data vs. Data Burden. Many have justifiably complained that systems such as NCLB focused too much on once-a-year test data in two subject areas. We want rich pictures of school quality, but we do not want to burden school personnel into becoming our full-time data collectors. We also want data that we can trust. Again, the search for balance!

  10. The NH “Performance” Accountability System

  11. The SB 180 Task Force must:
  (a) Define the performance-based accountability system to be used by schools that will ensure that the opportunity for an adequate education is maintained.
  (b) Identify performance criteria and measurements.
  (c) Establish performance goals and the relative weights assigned to those goals.
  (d) Establish the basis, taking into account the totality of the performance measurements, for determining whether the opportunity for an adequate education exists, which may include the assignment of a value for performance on each measurement.
  (e) Ensure the integrity, accuracy, and validity of the performance methodology as a means of establishing that a school provided the opportunity for an adequate education as defined in RSA 193-E:2-a.

  12. SB 180 Requirements The task force shall develop a performance-based scoring system using only the best available data and indicators which are already provided to the department and/or performance measures that schools are already required to provide the department under other state or federal law.

  13. The system may consider one or more of the following data and indicators:
  (a) Performance on state tests administered pursuant to RSA 193-C and, upon the prior approval of the department, other assessments administered at local option that are consistent with the state’s curriculum standards.
  (b) Number and percentage of pupils participating in an advanced placement course.
  (c) Number and percentage of graduating pupils going on to post-secondary education and military service.
  (d) Attendance rates.

  14. Potential indicators (continued):
  (e) Annual cumulative drop-out rates of high school pupils.
  (f) School environment indicators, such as safe schools data.
  (g) Expulsion and suspension rates, including in-school and out-of-school suspensions, which shall be reported for each school year.
  (h) Number and percentage of classes taught by highly qualified teachers.
  (i) Teacher and administrative turnover rates at the school and district levels.

  15. Goals of the “Performance” System: Provide another opportunity for schools to demonstrate adequacy. Collect and report data to assist educators in improving student achievement. Identify desirable educational practices and outcomes. Facilitate public reporting of school effectiveness to education stakeholders.

  16. Performance Indicators. Inclusion: Are students participating in the system? Achievement: Are students learning and growing? Readiness: Are students prepared for success in secondary and post-secondary education? Development: Are schools implementing effective practices to promote student achievement?

  17. Performance Indicators: Inclusion. Participation: the number of test takers divided by enrollment. Attendance: the number of FAY (full academic year) students in all grades with fewer than 15 absences for the full school year, divided by the total number of students in all grades enrolled for a full academic year.
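
To make these two rates concrete, here is a minimal sketch in Python, assuming simple head counts as inputs; the function names and example figures are illustrative, not actual NH reporting fields.

```python
# Minimal sketch of the two inclusion rates described above.
# Function names and example figures are illustrative only.

def participation_rate(test_takers: int, enrollment: int) -> float:
    """Number of test takers divided by enrollment."""
    return test_takers / enrollment

def attendance_rate(fay_low_absence: int, fay_total: int) -> float:
    """FAY (full academic year) students with fewer than 15 absences,
    divided by all FAY students across all grades in the school."""
    return fay_low_absence / fay_total

# Example: 480 of 500 enrolled students tested; 430 of 470 FAY
# students had fewer than 15 absences.
print(f"Participation: {participation_rate(480, 500):.1%}")  # 96.0%
print(f"Attendance:    {attendance_rate(430, 470):.1%}")     # 91.5%
```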

  18. Performance Indicators: Achievement. NECAP status: percent proficient and mean scale score in Reading/ELA, Writing, Math, and Science(?). NECAP growth: FTC growth metric, student growth percentiles, or other? Other local assessment options?

  19. Performance Indicators: Readiness. Postsecondary attendance; graduation rate; dropout rate. Other: AP courses and performance, concurrent enrollment, SAT, PSAT, ACT, PLAN.

  20. Performance Indicators: Development. Based on the idea that assessment data alone (outputs) are not sufficient to gauge the quality of the school or the effectiveness of the pilot. Rather, a number of specific actions and interventions (inputs) must take place to support a process that improves student achievement and promotes the theory of action.

  21. Performance Indicators: Development. Highly qualified teachers (advanced degrees, etc.); teacher/leader turnover rates; school environment indicators (school climate, safe and drug-free schools, expulsion/suspension rates). We do not assume that ‘one size fits all’ for each school. One alternative may be for schools to pick from a ‘menu’ of selections.

  22. Performance Indicators: Development. Two primary challenges: collecting the data and evaluating the data. It is important to minimize the burden on the state department and local systems; the solution must be efficient and ‘scalable’.

  23. Design Decisions. What is the unit of analysis and/or accountability? Student, class, subgroup, school, or district?

  24. Design Decisions. What measure will be reported? Attainment of a target score, scale score, domain score, or norm-referenced score (e.g., percentile)?

  25. Design Decisions. How should the measures be reported? Status: performance reported at one point in time (e.g., percent ‘proficient’). Improvement: performance reported over time for different students on the same measure (e.g., grade 3 performance in 2008 and grade 3 performance in 2009). Growth: evaluating change over time for the same student or cohort (e.g., change in percent proficient from grade 3 to grade 4). To whom should the reports be targeted? Should reports differ by stakeholder position (e.g., parents, teachers, leaders)?
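
The distinction among these three reporting lenses is easy to blur, so here is a small illustrative sketch with invented percent-proficient figures; none of these numbers come from NH data.

```python
# Contrast of status, improvement, and growth reporting, using
# invented percent-proficient figures.

# Status: performance at one point in time.
grade3_2009 = 0.62            # 2009 grade 3 cohort

# Improvement: same measure, different students, across years.
grade3_2008 = 0.55            # 2008 grade 3 cohort (different students)
improvement = grade3_2009 - grade3_2008  # +7 percentage points

# Growth: the same cohort followed over time.
cohort_as_grade3_2008 = 0.55  # this cohort tested in grade 3
cohort_as_grade4_2009 = 0.60  # the same cohort tested in grade 4
growth = cohort_as_grade4_2009 - cohort_as_grade3_2008  # +5 points

print(f"Status (grade 3, 2009): {grade3_2009:.0%}")
print(f"Improvement (grade 3):  {improvement:+.1%}")
print(f"Growth (same cohort):   {growth:+.1%}")
```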

  26. Design Decisions. What is ‘good enough’ performance? What level of performance on each assessment should be regarded as ‘meeting standard’? What percent of students should meet the target standard? How much improvement or growth should be expected? How do these decisions impact different types of schools (e.g., high status/low growth vs. low status/high growth)?

  27. Design Decisions. How can multiple indicators be combined? Conjunctive: schools must meet standard on all indicators. Disjunctive: schools must meet standard on any one element (not likely to meet SB 180). Compensatory system/index: high performance in one area can offset lower performance in another. Profile system: categories of acceptable outcomes are identified, where a value in one area may influence another.
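
As a concrete illustration of the first three combination rules, here is a hedged sketch; the indicator names, the scores, and the convention that a score of 1.0 or higher meets standard are all assumptions made for illustration.

```python
# Three ways to combine indicator results into one school-level
# decision. Names, values, and the >= 1.0 convention are hypothetical.

indicators = {"inclusion": 1.10, "achievement": 0.95,
              "readiness": 1.05, "development": 1.00}

def conjunctive(scores: dict) -> bool:
    """School must meet standard on ALL indicators."""
    return all(s >= 1.0 for s in scores.values())

def disjunctive(scores: dict) -> bool:
    """School must meet standard on ANY ONE indicator
    (noted above as unlikely to satisfy SB 180)."""
    return any(s >= 1.0 for s in scores.values())

def compensatory(scores: dict) -> bool:
    """Strength in one area offsets weakness in another:
    the mean across indicators must meet standard."""
    return sum(scores.values()) / len(scores) >= 1.0

print(conjunctive(indicators))   # False: achievement fell short
print(disjunctive(indicators))   # True:  inclusion alone suffices
print(compensatory(indicators))  # True:  mean is 1.025
```

Note how the same school passes or fails depending solely on the combination rule, which is why this design decision matters so much.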

  28. Index Example. Four assessments in the model: ELA, mathematics, science, and writing. A decision is made to value ELA and mathematics over science and writing. The index is structured such that attaining a mean index score of 100 meets expectations. Lower performance in one area can be offset by higher performance in another.
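
A numeric sketch of such an index follows; the weights and scores below are invented to show the mechanics and are not taken from any actual proposal.

```python
# Invented weights and scores illustrating a compensatory index in
# which a mean index score of 100 meets expectations.

weights = {"ela": 0.35, "math": 0.35, "science": 0.15, "writing": 0.15}
scores  = {"ela": 110, "math": 105, "science": 85, "writing": 90}
# ELA and mathematics carry more weight than science and writing.

index = sum(weights[s] * scores[s] for s in weights)
print(f"Index: {index:.1f}")  # 101.5
print("Meets expectations" if index >= 100 else "Below expectations")
# Strong ELA and math results offset the below-100 science and
# writing scores, which is the defining feature of an index model.
```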

  29. Profile Example. Four assessments in the model: ELA, mathematics, science, and writing. Three possible outcomes that are deemed acceptable are presented. The model values mathematics and ELA performance; no profile is acceptable without at least meeting standard in these areas. Writing performance can be below standard if ELA exceeds standard. Science can be below standard if math exceeds standard.
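
Here is a sketch of that profile logic, assuming each subject is scored as below, meets, or exceeds standard; the three-level encoding is an assumption made for illustration.

```python
# Profile check following the rules on the slide.
# Encoding (assumed): 0 = below, 1 = meets, 2 = exceeds standard.

def profile_acceptable(ela: int, math: int, science: int,
                       writing: int) -> bool:
    # No profile is acceptable unless ELA and math meet standard.
    if ela < 1 or math < 1:
        return False
    # Writing may be below standard only if ELA exceeds standard.
    if writing < 1 and ela < 2:
        return False
    # Science may be below standard only if math exceeds standard.
    if science < 1 and math < 2:
        return False
    return True

print(profile_acceptable(ela=2, math=1, science=1, writing=0))  # True
print(profile_acceptable(ela=1, math=1, science=1, writing=0))  # False
print(profile_acceptable(ela=1, math=2, science=0, writing=1))  # True
```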

  30. Questions for the group • What do we value, and what would we like to measure other than the SB 180 elements? • Which of these elements do we currently collect? • In what form are the data (quantitative/qualitative)? • Are the data trustworthy/reliable? • Would the data be easily “corruptible” if included in accountability? • For the elements that we do not yet collect, what would it take to collect the data? • How do we envision the relationship between the input and performance system; i.e., can we offer an acceptable alternative to what is in SB 180?
