
The Program Assessment Rating Tool (PART)


Presentation Transcript


  1. The Program Assessment Rating Tool (PART) Mary Cassell, Office of Management and Budget, April 28, 2011

  2. Overview • What is the PART? • How was it developed? • What are the components? • Quality controls • How was the PART used? • Budget • Program Improvements • Lessons learned

  3. “In God we trust… …all others, bring data.” -W. Edwards Deming

  4. Introduction • The PART was a component of the Bush Administration’s Management Agenda that focused on Budget and Performance Integration • The PART promoted efforts to achieve concrete and measurable results • The PART supported program improvements

  5. What is the Program Assessment Rating Tool (PART)? • A set of questions that evaluates program performance in four critical areas: • Program Purpose and Design • Strategic Planning • Program Management • Program Results and Accountability • A tool to assess performance using evidence • Provides a consistent, transparent approach to evaluating programs across the Federal government

  6. Why PART? • Measure and diagnose program performance • Evaluate programs in a systematic, consistent, and transparent manner • Inform agency and OMB decisions on resource allocations • Focus on program improvements through management, legislative, regulatory, or budgetary actions • Establish accountability for results

  7. How did the PART work? • Answers to questions generated scores, which were weighted and tallied into a total score. • Answers were based on evidence, evaluations, and data. • Ratings were based on total scores: Effective, Moderately Effective, Adequate, Ineffective. • Results Not Demonstrated was assigned to programs that did not have performance measures or data, regardless of overall score.
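The slide above names the rating categories but not the numeric cutoffs. The minimal Python sketch below shows one way the score-to-rating step could work; the category names and the Results Not Demonstrated override come from the slide, while the threshold values are illustrative assumptions only.

```python
# Sketch of the PART score-to-rating step described on slide 7.
# Category names and the "Results Not Demonstrated" override follow the
# slide; the numeric cutoffs are assumed for illustration (not stated there).

def part_rating(total_score: float, has_performance_data: bool) -> str:
    """Map a weighted total score (0-100) to a PART rating category."""
    if not has_performance_data:
        # Programs without performance measures or data are rated
        # Results Not Demonstrated regardless of their overall score.
        return "Results Not Demonstrated"
    if total_score >= 85:    # assumed cutoff
        return "Effective"
    if total_score >= 70:    # assumed cutoff
        return "Moderately Effective"
    if total_score >= 50:    # assumed cutoff
        return "Adequate"
    return "Ineffective"

print(part_rating(82, has_performance_data=True))   # Moderately Effective
print(part_rating(82, has_performance_data=False))  # Results Not Demonstrated
```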

  8. PART Questions and Process • Roughly 25-30 analytical questions; explanations and evidence are required • Standards of evidence hold programs to a high bar • Question weight can be tailored to reflect program specifics • Interactions between questions. • Yes/No answers in diagnostic sections. Four levels of answers in results section. • Collaborative process with agencies; OMB had the pen.

  9. How was the PART developed? • Designed by 12 OMB career staff, including one representative from each division • Piloted with about 60 programs • Pilot generated extensive input from agencies that resulted in several revisions: changes in scoring, elimination of a question about whether the program served an appropriate federal role • Conducted trial runs with research institutions • Agency roll-out: • OMB training • Agency meetings • Agency trainings • Incorporation into 2002 budget decisions and materials • Development, pilot, and revision process took about 6 months, including development of guidance and training.

  10. PART Program Types • Direct Federal • Competitive Grant • Block/Formula Grant • Regulatory Based • Capital Assets and Service Acquisition • Credit • Research and Development

  11. PART Questions • Section I: Program Purpose & Design (20%) • Is the program purpose clear? • Does the program address an existing problem or need? • Is the program unnecessarily duplicative? • Is the program free of major design flaws? • Is the program targeted effectively? • Section II: Strategic Planning (10%) • Does the program have strong long-term performance measures? • Do the long-term measures have ambitious targets? • Does the program have strong annual performance targets? • Does the program have baselines and ambitious targets? • Do all partners agree to the goals and targets? • Are independent evaluations conducted of the program? • Are budgets tied to performance goals? • Has the program taken steps to correct strategic planning deficiencies?

  12. PART Questions • Section III: Program Management (20%) • Does the program collect timely performance information and use it to manage? • Are managers and partners held accountable for program performance? • Are funds obligated in a timely manner? • Does the program have procedures (IT, competitive sourcing, etc.) to improve efficiency? • Does the program collaborate with related programs? • Does the program use strong financial management practices? • Has the program taken meaningful steps to address management deficiencies? • Additional questions for specific types of programs. • Section IV: Program Results (50%) • Has the program made adequate progress in achieving its long-term goals? • Does the program achieve its annual performance goals? • Does the program demonstrate improved efficiencies? • Does the program compare favorably to similar programs, both public and private? • Do independent evaluations show positive results?
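Slides 11 and 12 give the section weights (20%, 10%, 20%, 50%). The sketch below shows how per-section scores could roll up into the total score described on slide 7; treating each section score as a simple 0-1 fraction of points earned is an assumption, since the actual PART guidance weighted individual questions within each section.

```python
# Rolling four PART section scores into a weighted total (0-100 scale),
# using the section weights from slides 11-12. Treating each section as a
# single 0-1 score is a simplification assumed here for illustration.

SECTION_WEIGHTS = {
    "Program Purpose & Design": 0.20,
    "Strategic Planning": 0.10,
    "Program Management": 0.20,
    "Program Results": 0.50,
}

def total_score(section_scores: dict) -> float:
    """Weighted total on a 0-100 scale from per-section scores in [0, 1]."""
    return 100 * sum(
        SECTION_WEIGHTS[name] * score for name, score in section_scores.items()
    )

example = {
    "Program Purpose & Design": 1.00,  # all purpose/design questions answered Yes
    "Strategic Planning": 0.75,
    "Program Management": 0.80,
    "Program Results": 0.50,
}
print(round(total_score(example), 1))  # 68.5
```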

  13. Performance Measures, Data and Evaluations • Strong focus on performance measures. Performance measures should capture the most important aspects of a program’s mission and priorities. • Key issues to consider: 1) performance measures and targets; 2) a focus on outcomes whenever possible; and 3) annual and long-term timeframes. • Efficiency measures required • Rigorous evaluations are strongly encouraged

  14. Quality Controls • The PART was a tool used to guide a collective analysis, not a valid and reliable evaluation instrument; it therefore required other mechanisms to promote consistent application: • Guidance and standards of evidence • Training • On-going technical assistance • Consistency check • Appeals process • Public transparency

  15. How was the PART used? A Focus on Improvement • Every program developed improvement plans • Focus on findings in the PART assessments • Implementation of plans and reporting on progress • Reassessments occurred once the program had made substantive changes

  16. The Use of the PART in the Budget Process • Informed budget decisions (funding, legislative, and management) • Increased prominence of performance in the Budget • Increased accountability and focus on data and results

  17. Example: Migrant Education and the PART • Collaborative process between OMB and the program office. • Program office provided evidence to back up PART answers (such as monitoring instruments, State data, action plans, etc.) • OMB and the Department of Education (ED) met to discuss evidence • OMB and ED shared PART drafts • ED developed follow-up actions.

  18. Migrant Education PART • PART Findings: • Program is well-designed and has a good strategic planning structure • Program is well-managed • Issues relating to possible inaccuracies in the eligible student count are being addressed • States are showing progress in providing data and in improving student achievement • Results section: • Ensure all States report complete and accurate data • Continue to improve student achievement outcomes • Improve efficiencies, in particular in the migrant student records transfer system • Complete a program evaluation • Areas for Improvement and Action Steps for Migrant Education: • Complete national audit of child eligibility determinations • Implement and collect data on the Migrant Student Information Exchange (MSIX) • Use data, in particular on student achievement, to improve performance

  19. The Process: Distribution of Ratings Government-wide [chart]

  20. The Process: Department of Education Cumulative Ratings [chart]

  21. Lessons Learned • Pros • Focus on results, data, performance measurement, evaluation • Program improvements • Common analysis • Transparency • Cross-program and cross-agency comparisons between similar programs • Identification of best practices • Informed budget decisions

  22. Lessons Learned • Cons • Not consistent enough to allow trade-offs between unlike programs • Better for program improvement than accountability, unless coupled with strong evaluation • Became too burdensome • Not fully embraced by agencies or Congress
