
Guidelines for Evaluation Planning: Clarifying the Evaluation Request and Responsibilities



Presentation Transcript


  1. Guidelines for Evaluation Planning: Clarifying the Evaluation Request and Responsibilities Dr. Suzan Ayers Western Michigan University (courtesy of Dr. Mary Schutten)

  2. Individuals who affect or are affected by an evaluation study • Sponsor: agency/individual who authorizes the evaluation and provides the necessary resources for its conduct • Client: agency/individual requesting the evaluation • Stakeholders: those who have a stake in the program or in the evaluation’s results • Audiences: individuals, groups, and agencies who have an interest in the evaluation and receive its results

  3. Understanding reasons for initiating evaluation • Did a problem prompt the evaluation? • Did some stakeholder demand it? • Who has the need to know? • What does s/he want to know? Why? • How will s/he use the results?

  4. It is not uncommon for clients to be uninformed about evaluation procedures and not to have thought through the ramifications • Frequently, the purpose is not clear until the evaluator has carefully read the relevant materials, observed the evaluation object, and interviewed stakeholders

  5. Questions to begin • Why is this evaluation being requested? What questions will it answer? • To what use will the evaluation findings be put? By whom? What others should receive the information? • What is to be evaluated? What does it include? Exclude? During what time period? In what settings? Who will participate?

  6. What are the essential program activities? How do they link with the goals and objectives? • How much time and money are available for the evaluation? Who can help with it? • What is the political climate and context surrounding the evaluation? Will any political factors or forces interfere with gaining meaningful and fair information?

  7. Informational uses of evaluation • Determine whether sufficient need exists to initiate a program and describe the target audience (part of needs assessment) • Assist in program planning by identifying potential program models/activities to achieve certain goals (part of needs assessment) • Describe program implementation and whether changes from the initial model have occurred (monitoring or process study) • Examine whether certain goals are being achieved at desired levels (outcome study) • Judge the overall value of a program (cost-effectiveness study)

  8. Noninformational uses • Decision postponement • Ducking responsibility (the decision is already made, but needs to look good) • Public relations (justifying the program) • Fulfilling grant requirements These uses are more common in federal or national evaluations

  9. Conditions under which evaluation studies are inappropriate • Evaluation would produce trivial information (a one-time effort or low-impact program) • Evaluation results will not be used (e.g., some D.A.R.E. program evaluations); there must be a commitment to use the results • Evaluation cannot yield useful, valid information • A bad evaluation is worse than no evaluation at all

  10. Evaluation comes too soon for the stage of the program • Premature summative evaluations are among the most insidious misuses of evaluation (e.g., evaluating a fitness program in its first six weeks will not yield meaningful information) • Motives for the evaluation are improper • Ethical considerations, “hatchet jobs” • See attributes of an ethical evaluation (http://www.wmich.edu/evalctr/jc/)

  11. Appropriateness: major steps • Use a tool called evaluability assessment • Clarify the intended program model or theory • Examine the program implementation to determine whether it matches the program model and could achieve the program goals • Explore different evaluation approaches to match the needs of stakeholders • Agree on evaluation priorities and intended uses of the study

  12. Determining evaluability • Personal interviews with stakeholders • Review existing program documentation • Site visits

  13. Who will evaluate? Does the potential evaluator have the… • ability to use methodologies and techniques needed in the study? • ability to help articulate the appropriate focus for the study? • management skills to carry out the study? • ability to usefully communicate results to audiences? • integrity to maintain proper ethical standards?

  14. Internal: program knowledge, familiarity with stakeholders and history, can continue in an advocacy role after the evaluation, quick start-up, a known quantity • External: impartial, credible, expertise, fresh look, more likely to obtain sensitive inside information, more likely to present results realistically (particularly if unpopular) and advocate change

  15. Combination • Internal evaluator provides contextual information, collects the majority of the data, and serves as advocate and support after the external evaluator is gone • External evaluator designs the evaluation, selects/develops instruments, directs data collection, organizes the report, and ensures impartiality/credibility • Best of both worlds when internal and external evaluation teams collaborate

  16. Analyzing Resources: Personnel • Can the evaluator use staff on site? • Program staff: collect data • Secretaries: type, search records • Grad students: internships, course-related work • PTA: bodies, ideas, contacts, etc. All of these can help with the evaluation at no added cost to the budget

  17. Analyzing Resources: Other Constraints • The more information that must be generated by the evaluator, the costlier the evaluation • Are existing data, records, evaluations, and other documents available? • Are needed support materials (testing programs, computer services, etc.) already in existence, or must they become part of the budget? • Time: knowing when to be ready with results is part of good planning • Limited time can lessen evaluation impact as much as limited dollars

  18. Phases of Identifying and Selecting Questions • Divergent phase = a comprehensive “laundry list” of potentially important questions and concerns [many sources; all questions are listed] • Convergent phase = evaluators select from the “laundry list” the most critical questions to be answered • Criteria and standards are developed after the convergent phase

  19. Divergent Phase Sources • Questions, concerns, values of stakeholders • Clients, sponsors, participants, affected audiences • Policy makers, managers, primary consumers, secondary consumers • What is their perception of the program? What are their questions/concerns? How well do they think it is doing?

  20. Stakeholder Interview Questions • What is your general perception of the program? What do you think of it? • What do you perceive as the purposes? • What concerns do you have about the program? Outcomes? Operations? • What major questions would you like the evaluation to answer? Why? • How could you use the information provided by these questions?

  21. Divergent Phase Sources • Use of evaluation models/approaches • Consumer-oriented: checklists and sets of criteria to help determine what to study • Expertise-oriented: standards and critiques that reflect the views of experts in the field • Adversary-oriented: look for both strengths and weaknesses of the program

  22. Professional standards, checklists, instruments, and criteria developed or used elsewhere • Standards for practice exist at both the state and national level in physical education • Views and knowledge of expert consultants • Expertise in the content area may provide a neutral and broader view • Consultants can be asked to generate a list of questions and can identify previous evaluations of similar programs

  23. Matrix for Selecting Questions Would the evaluation question… • be of interest to key audiences? • reduce present uncertainty? • yield important information? • be of continuing, not fleeting, interest? • be critical to the study’s scope? • have an impact on the course of events? • be answerable in terms of money, time, and technical resources?
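The matrix above is essentially a scoring checklist. As a minimal sketch (the criterion names, ratings, and cutoff of 5 are illustrative assumptions, not part of the original material), an evaluation team could tally how many criteria each candidate question satisfies:

```python
# Illustrative sketch of the question-selection matrix.
# Criterion names and the shortlist threshold are hypothetical.

CRITERIA = [
    "interest_to_key_audiences",
    "reduces_uncertainty",
    "yields_important_information",
    "continuing_interest",
    "critical_to_scope",
    "impacts_course_of_events",
    "answerable_within_resources",
]

def score_question(ratings: dict) -> int:
    """Count how many matrix criteria a question satisfies (True/False ratings)."""
    return sum(1 for c in CRITERIA if ratings.get(c, False))

def shortlist(questions: dict, threshold: int = 5) -> list:
    """Keep questions meeting at least `threshold` of the 7 criteria."""
    return [q for q, ratings in questions.items()
            if score_question(ratings) >= threshold]

# Example: two candidate questions rated by the team (invented ratings)
candidates = {
    "Are program goals being met?": dict.fromkeys(CRITERIA, True),
    "What color are the handouts?": {"answerable_within_resources": True},
}
print(shortlist(candidates))  # only the substantive question survives
```

In practice the ratings come from the stakeholder interviews and document review described earlier, not from the evaluator alone.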

  24. Convergent Phase • Sit down with sponsor and/or client and review the laundry list and the items marked as “doable” • Reduce the list via consensus • Provide the new list with a short explanation indicating why each is important and share with stakeholders

  25. Criteria and Standards • Developed to reflect the degree of difference that would be considered meaningful enough to adopt the new program • Absolute standard = a defined level is either met or not met • State departments of education typically require absolute standards • Learn the range of expectations from stakeholders and determine standards from that • Relative standard = comparison to other groups or standards
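The two standard types above amount to two different comparison rules. A minimal sketch, assuming invented scores and thresholds purely for illustration:

```python
# Hypothetical sketch of the two standard types; all figures are invented.

def meets_absolute_standard(score: float, threshold: float) -> bool:
    """Absolute standard: a defined level is either met or not met."""
    return score >= threshold

def meets_relative_standard(score: float, comparison_scores: list) -> bool:
    """Relative standard: compare to another group (here, its mean)."""
    mean = sum(comparison_scores) / len(comparison_scores)
    return score >= mean

program_score = 72.0
print(meets_absolute_standard(program_score, 70.0))            # True: cutoff of 70 is met
print(meets_relative_standard(program_score, [65.0, 80.0, 90.0]))  # False: below the peer mean (~78.3)
```

The same program outcome can thus pass an absolute standard yet fail a relative one, which is why each evaluation question needs its own agreed-upon standard.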

  26. Criteria and Standards • Be flexible • Allow new questions, criteria, and standards to emerge • Each question needs its own standards and criteria Remember, the goal of this step is to lay the foundation for a meaningful and useful evaluation
