
Evaluation Planning II: Setting Boundaries and Analyzing the Evaluation Context


Presentation Transcript


  1. Evaluation Planning II: Setting Boundaries and Analyzing the Evaluation Context
     Dr. Suzan Ayers, Western Michigan University (courtesy of Dr. Mary Schutten)

  2. Four considerations
     • Identifying evaluation audiences
     • Setting boundaries on whatever is evaluated
     • Analyzing evaluation resources
     • Analyzing the political context

  3. 1. Audience Identification
     • An evaluation is adequate only if it collects information from, and reports to, all legitimate evaluation audiences
     • Primary audience: the sponsor and client
     • Secondary audiences: depend on how the evaluator defines constituents
     • It is common to define the audience too narrowly
     • See Figure 11.1 (p. 202)
     • Return to the list of audiences periodically
     • Knowing who will use the results, and how, is key to outlining the study

  4. Potential Secondary Audiences
     • Policy makers
     • Managers
     • Program funders
     • Representatives of program employees
     • Community members
     • Students and their parents (or other program clients)
     • Retirees
     • Representatives of influence groups

  5. 2. Setting the Boundaries
     • Starting point: a detailed description of the program being evaluated
     • Program description: describes the critical elements of the program (goals, objectives, activities, target audiences, physical setting, context, personnel)
     • The description must be thorough enough to convey the program’s essence

  6. Characterizing the Evaluand
     • What problem was the program designed to correct?
     • Of what does the program consist?
     • What is the program’s setting and context?
     • Who participates in the program?
     • What is the program’s history? Its duration?

  7. Characterizing the Evaluand (continued)
     • When and under what conditions is the program implemented?
     • Are there unique contextual events (contract negotiations, budget cycles, elections, etc.) that may distort the evaluation?
     • What resources (human, material, time) are consumed by the program?
     • Has there been a previous evaluation?

  8. Program Theory
     • A specification of what must be done to achieve the desired goals, what other impacts may be anticipated, and how these goals and impacts would be generated (Chen, 1990)
     • Serves as a tool for:
       • Understanding the program
       • Guiding the evaluation
     • Evaluators must understand the assumptions that link the problem to be resolved with the program’s actions and characteristics, and those actions and characteristics with the desired outcomes

  9. Helpful in developing program theory (Rossi, 1971)
     1. Causal hypothesis: links the problem to a cause
     2. Intervention hypothesis: links program actions to that cause
     3. Action hypothesis: links the program activities with reduction of the original problem
     Sample problem: declining fitness levels in children
     • Causal hypothesis?
     • Intervention hypothesis?
     • Action hypothesis?
     (An illustrative sketch of how these hypotheses chain together follows below.)
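As one way to make the sample problem concrete, here is a minimal sketch (in Python) of how the three hypothesis types can be chained from problem to cause to program activity. The `ProgramTheory` structure and the specific hypothesis statements are illustrative assumptions for the declining-fitness example, not content from the slides.

```python
# Minimal illustrative sketch: hypothetical answers for the sample problem,
# showing how the three hypothesis types (Rossi, 1971) chain together.
from dataclasses import dataclass


@dataclass
class ProgramTheory:
    problem: str
    causal_hypothesis: str        # links the problem to a presumed cause
    intervention_hypothesis: str  # links program actions to that cause
    action_hypothesis: str        # links program activities to reduction of the problem


# Hypothetical content for the sample problem (assumed, for illustration only)
fitness_example = ProgramTheory(
    problem="Declining fitness levels in children",
    causal_hypothesis=(
        "Children accumulate too little moderate-to-vigorous physical "
        "activity during the week"
    ),
    intervention_hypothesis=(
        "Daily structured PE and active-recess programming will increase "
        "weekly moderate-to-vigorous activity"
    ),
    action_hypothesis=(
        "Increased activity from the program will raise children's measured "
        "fitness levels over the school year"
    ),
)

if __name__ == "__main__":
    print(fitness_example)
```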

  10. Methods for Describing the Evaluand
     • Descriptive documents
       • Program documents, proposals for funding, publications, minutes of meetings, etc.
     • Interviews
       • Stakeholders, all relevant audiences
     • Observations
       • Observe the program in action to get a “feel” for what is really going on
       • Often reveal the difference between how the program runs and how it is supposed to run

  11. Challenge of balancing different perspectives
     • Minor differences may reflect stakeholder values or positions and can be informative
     • Major differences require that the evaluator attempt to achieve some consensus description of the program before initiating the evaluation
     • Re-describing the evaluand as it changes: changes may be due to
       • Responsiveness to feedback
       • Implementation not quite aligned with the designers’ vision
       • Natural historical evolution of the evaluand

  12. 3. Analyzing Evaluation Resources: Budget
     • Cost-free evaluation: the cost savings realized through the evaluation may pay for the evaluation over time (a worked example follows below)
     • If budget limits are set before the evaluation process begins, they will affect the planning decisions that follow
     • Often the evaluator has no input into the budget
     • Offer two or three levels of service (Chevy vs. BMW)
     • Budgets should remain somewhat flexible so the evaluation can pursue new insights that emerge during the process
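To illustrate the cost-free evaluation idea with a worked example, here is a minimal sketch; the cost and savings figures are assumed for illustration only and do not come from the slides.

```python
# Hypothetical figures (assumptions, not from the source):
evaluation_cost = 20_000   # one-time cost of conducting the evaluation
annual_savings = 8_000     # yearly savings realized by acting on the findings

# Years until the savings identified by the evaluation cover its cost
payback_years = evaluation_cost / annual_savings
print(f"Evaluation pays for itself in about {payback_years:.1f} years")
# -> Evaluation pays for itself in about 2.5 years
```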

  13. Analyzing Resources: Personnel
     • Can the evaluator use “free” staff on site?
       • Program staff could collect data
       • Secretaries could type and search records
       • Graduate students could help through internships or course-related work
       • PTA volunteers
     • It is key that the evaluator orient, train, and quality-control such volunteers to maintain the evaluation’s integrity
     • Supervision and spot-checking are useful practices
     • Task selection is essential to maintain the study’s validity and credibility

  14. Analyzing Resources: Technology, Other Resources, and Constraints
     • The more information that must be generated by the evaluator, the costlier the evaluation
     • Are existing data, records, evaluations, and other documents available?
     • Newer technology allows less expensive means of data collection
       • Web-based surveys, e-mail, conference calls, posting final reports on websites
     • Time (avoid setting unrealistic timelines)

  15. 4. Analyzing the Political Context
     • Politics begin with the decision to evaluate and influence the entire evaluation process
     • Who stands to gain or lose the most from different evaluation scenarios?
     • Who has the power in this setting?
     • How is the evaluator expected to relate to different groups?
     • From which stakeholders will cooperation be required? Are they willing to cooperate?
     • Who has a vested interest in the outcomes?
     • Who will need to be informed along the way?
     • What safeguards need to be formalized (e.g., IRB approval)?

  16. Variations Caused by the Evaluation Approach Used
     • Variations in the evaluation plan will occur based on the approach taken by the evaluator
     • Each approach has strengths and limitations
     • Review Table 9.1 for the characteristics of each
     • Relying on a single approach tends to be limiting

  17. To Proceed or Not?
     • Based on information about the context, program, stakeholders, and resources, make a ‘go/no-go’ decision
     • Chapter 10 lists conditions under which evaluation is inappropriate:
       • The evaluation would produce trivial information
       • The evaluation results will not be used
       • The evaluation cannot yield useful, valid information
       • The evaluation is premature for the stage of the program
       • The motives for the evaluation are improper
     • Ethical considerations: utility, feasibility, propriety, accuracy
