AUTOPSY OF A FAILED EVALUATION

Presentation Transcript


  1. AUTOPSY OF A FAILED EVALUATION Examination Against The 2011 Program Evaluation Standards Daniel L. Stufflebeam 9/13/11

  2. FOCUS: • The 2011 Program Evaluation Standards

  3. THE SESSION’S PARTS ARE • A rationale for evaluation standards • A case showing the utility of The Program Evaluation Standards • The contents of The Standards • Recommendations for applying The Standards

  4. PART 1 A RATIONALE FOR EVALUATION STANDARDS

  5. STANDARDS FOR EVALUATIONS ARE • Widely shared principles for guiding and judging the conduct and use of an evaluation • Developed & approved by experts in the conduct & use of evaluation

  6. STANDARDS FOR EVALUATIONS PROVIDE • Principled Direction • Technical Advice • A Basis for Professional Credibility • A Basis for Evaluating Evaluations • A Basis for Public Accountability

  7. EVALUATORS IGNORE OR FAIL TO MEET STANDARDS TO • Their professional peril • The detriment of their clients

  8. As in the famous Enron debacle, failure to meet standards may contribute to • Lack of an impartial perspective • Erroneous conclusions • Unwarranted decisions • Cover-up of findings • Misguided decisions • Breakdown of trust • Organizational repercussions • Personal losses & tragedies • Lowered credibility for evaluators, their organizations, & the evaluation profession • Increased government controls

  9. PART 2: A CASE A UNIVERSITY’S REVIEW OF ITS GRADUATE PROGRAMS (A university group should have followed evaluation standards but didn’t.)

  10. CONTEXT WAS PROBLEMATIC • Board had voted confidence in the president (12/06) • Faculty gave president & provost low ratings (2/07) • Enrollment was declining • U. faced a fiscal crisis • Review focused on resource allocation • Morale was low

  11. REVIEW’S STATED PURPOSES: • Address a fiscal crisis over the university’s inability to support all of its programs & maintain excellence • Determine which programs are highest strategic priorities based on quality • Identify programs for increased funds

  12. SCOPE OF THE REVIEW • To be completed within 1 year • All masters and doctoral programs • Launched on 7/19/06 • 114 programs were reviewed

  13. THE REVIEW’S PLAN • Keyed to Dickeson book (chapter 5) • Data book • Program’s report • Dean’s report • Review team’s report • Appeals of review team’s report • Provost’s final report • Board’s decisions • No update of U. mission • No appeals of provost’s conclusions • No adoption of standards for reviews • Minimal participation of outside evaluators • No external metaevaluation or peer review of the review

  14. GENERAL REVIEW CRITERIA • External demand • Quality of student & program outcomes • Quality of program administration & planning • Program size, scope, & productivity • Program impact, justification, & essentiality • Opportunity analysis • Compelling program factor (features that make it unique & excellent)

  15. DEFINITION OF SUB-CRITERIA • Many • Evolved throughout the review • Caused confusion & controversy

  16. CRITERIA OMITTED FROM DICKESON’S LIST • History, development, & expectations of the program • Internal demand for the program • Quality of program inputs & processes • Revenue & other resources generated • Program costs & associated costs

  17. EVALUATION PROCEDURES • Program’s self-report • Document & data book review • Group & individual interviews • Variable protocols for ratings (1-5) • Training of review team leaders • Rating of each program by department, dean, review team, & provost • Synthesis by provost & staff

  18. REVIEW PERSONNEL • Essentially internal • Provost was both primary decision maker & de facto lead evaluator • Provost’s staff assisted the process • A program representative wrote the program’s report & sent it to department faculty, dean, & review team • Faculty input varied across programs • The dean rated the college’s programs & sent reports to the department chairs & review team (not in original plan)

  19. REVIEW PERSONNEL (continued) • Seven 7-person review teams rated designated programs & on the same day e-mailed all reports to the provost & to pertinent deans & department chairs • Review team members were mostly from outside the program’s college • Provost met with deans before finalizing decisions • Provost met with team leaders before releasing final report • An internal evaluation expert assisted

  20. FINAL REPORT • Issued on May 11, 2007 • Gave priorities for funding in each college • Announced plans to maintain 56, increase 16, merge 6, maintain/merge 17 subject to review, transfer 8, close 26, & create 6 new degrees

  21. FINAL REPORT (continued) • Gave no evidentiary basis for decisions • Referenced no technical appendix • Referenced no accessible files of supporting data, analyses, & data collection tools • Gave no rating of each program on each criterion & overall

  22. OUTCOMES • Local paper applauded the report (5/12/07) • Review evidence & link to conclusions were inaccessible to many interested parties • Professors, alumni, & others protested • President announced an appeal process (5/18/07) • Faculty voted to call for a censure of the provost (5/18/07) • Provost resigned (5/20/07) • Appeals overturned 10 planned cuts (7/14/07)

  23. OUTCOMES (continued) • Potential savings from cuts were reduced • Community watched a contentious process • Board fired the president (8/15/07) • President threatened to sue • Board awarded ex-president $530,000 severance pay (10/27/07) • Projected review of undergraduate programs was canceled, ceding that area priority by default • Reviews were scheduled to resume in 2010

  24. CLEARLY, THIS EVALUATION FAILED • No standards were required to reach this conclusion. • However, adherence to approved standards might have prevented the review’s failure.

  25. MY TAKE, ON THE PLUS SIDE: • Review was keyed to an important need to restructure programs. • There was significant faculty involvement in studying programs. • General criteria were established.

  26. HOWEVER, THERE WERE SERIOUS DEFICIENCIES. • No independent perspectives • Top evaluator & decision maker were the same • Evidence to support conclusions was not reported • Political viability was not maintained • Evidence disappeared • No independent evaluation of the review

  27. PART 3 The Program Evaluation Standards

  28. FOR A MORE SYSTEMATIC EXAMINATION OF THE CASE • Let’s see if use of The Program Evaluation Standards might have helped ensure the study’s success. • Let’s also use the case to develop a working knowledge of The Program Evaluation Standards.

  29. FIRST, SOME BACKGROUND INFORMATION

  30. THE JOINT COMMITTEE ON STANDARDS FOR EDUCATIONAL EVALUATION • Developed The Program Evaluation Standards • Includes evaluation users and experts • Was sponsored by 17 professional societies

  31. THE SPONSORS REPRESENTED • Accreditation officials • Administrators • Curriculum specialists • Counselors • Evaluators • Rural education • Measurement specialists • Policymakers • Psychologists • Researchers • Teachers • Higher education

  32. The Program Evaluation Standards • Are accredited by the American National Standards Institute • As an American National Standard • Include 30 specific standards

  33. NOW, LET’S LOOK AT • THE CONTENTS OF THE STANDARDS & • DISCUSS THEIR RELEVANCE TO THE PROGRAM REVIEW CASE

  34. THE 30 STANDARDS ARE ORGANIZED AROUND 5 ATTRIBUTES OF A SOUND EVALUATION • UTILITY • FEASIBILITY • PROPRIETY • ACCURACY • EVALUATION ACCOUNTABILITY

  35. EACH STANDARD INCLUDES CONSIDERABLE DETAIL • Label • Summary statement • Definitions • Rationale • Guidelines • Common errors to avoid • Illustrative case

  36. CAVEAT • Time permits us to deal with the 30 standards only at a general level. • You can benefit most by studying the full text of the standards.

  37. THE UTILITY STANDARDS • Require evaluations to be • Informative • Timely • Influential • Grounded in explicit values • Are intended to ensure an evaluation • Is aligned with stakeholder needs • Enables process and findings uses and other appropriate influence

  38. LABELS FOR THE UTILITY STANDARDS ARE • U1 Evaluator Credibility • U2 Attention to Stakeholders • U3 Negotiated Purposes • U4 Explicit Values • U5 Relevant Information • U6 Meaningful Processes and Products • U7 Timely and Appropriate Communicating and Reporting • U8 Concern for Consequences and Influence

  39. Let’s consider some of the specific Utility standards

  40. THE U1 EVALUATOR CREDIBILITY STANDARD STATES: • Evaluations should be conducted by qualified people who establish and maintain credibility in the evaluation context. • How well did the program review meet this standard?

  41. THE U2 ATTENTION TO STAKEHOLDERS STANDARD STATES: • Evaluations should devote attention to the full range of individuals and groups invested in the program and affected by its evaluation. • How well did the program review meet this standard?

  42. THE U4 EXPLICIT VALUES STANDARD STATES: • Evaluations should clarify and specify the individual and cultural values underpinning purposes, processes, and judgments. • How well did the program review address this standard?

  43. THE U8 CONCERN FOR CONSEQUENCES AND INFLUENCE STANDARD STATES: • Evaluations should promote responsible and adaptive use while guarding against unintended negative consequences and misuse. • Did the program review case meet this standard?

  44. OVERALL, BASED ON THIS SAMPLING OF UTILITY STANDARDS • Did the program review pass or fail the requirement for utility? • Why or why not?

  45. DID FAILURE TO MEET ANY OF THESE UTILITY STANDARDS CONSTITUTE A FATAL FLAW? • If yes, which failed standard(s) constituted a fatal flaw? • What could the provost have done to ensure that the review passed the Utility requirements?

  46. NOW, LET’S CONSIDER THE FEASIBILITY STANDARDS

  47. THE FEASIBILITY STANDARDS • Are intended to ensure that an evaluation is • Economically and politically viable • Realistic • Contextually sensitive • Responsive • Prudent • Diplomatic • Efficient • Cost effective

  48. LABELS FOR THE FEASIBILITY STANDARDS ARE • F1 Project Management • F2 Practical Procedures • F3 Contextual Viability • F4 Resource Use

  49. LET’S CONSIDER THE F2 FEASIBILITY STANDARD.

  50. THE F2 PRACTICAL PROCEDURES STANDARD STATES: • The procedures should be practical and responsive to the way the program operates. • Did the program review employ workable, responsive procedures?
