Participant-oriented Evaluation Approaches: Stake’s Countenance
Emily Howard
Program Evaluation and Policy Analysis

Responsive Evaluation
Grew out of a dislike for mechanical and preordinate evaluation methods in the late 1960s.
Characteristics include:
1. Depends on inductive reasoning
2. Uses a multiplicity of data
3. Does not follow a standard plan
4. Records multiple rather than single realities
(Fitzpatrick, Sanders, & Worthen, 2004)
Quick Vocabulary Lesson

Antecedent: A condition existing prior to instruction that may relate to outcomes (inputs, resources, etc.). Example: teacher background.
Transaction: The successive engagements, or dynamic encounters, that constitute the process of instruction (activities, processes, etc.). Example: behavioral interactions.
Outcomes: The effects of the instructional experience, including observed and unintentional outcomes. Example: teacher performance.
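To make these three categories concrete, here is a minimal Python sketch (mine, not from the original slides) of a description matrix that pairs an intended entry with an observed entry for each category; all names and example strings are illustrative.

# Hypothetical representation of Stake's description matrix:
# each category (antecedents, transactions, outcomes) pairs what
# was intended with what the evaluator actually observed.
from dataclasses import dataclass

@dataclass
class Cell:
    intent: str       # the planned condition, activity, or effect
    observation: str  # what was actually recorded

description_matrix = {
    "antecedents": Cell(
        intent="Teachers arrive with a basic science background",
        observation="Backgrounds varied widely across participants",
    ),
    "transactions": Cell(
        intent="Hands-on data collection in the field",
        observation="Field trips ran, but lab time was cut short",
    ),
    "outcomes": Cell(
        intent="Teachers adopt the new instructional strategies",
        observation="Confidence rose; classroom adoption was partial",
    ),
}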
Stake and his Countenance
The ultimate test of an evaluation’s validity is the extent to which it increases the audience’s understanding of the entity that was evaluated.
Responsive evaluators stay in continuous communication with stakeholders.
Stresses the importance of responding to the realities of the program and the concerns of participants rather than relying on preconceptions.
Uninterested in formal objectives and formal data collection.
What Does it Do?

Affords the evaluator the information needed to analyze the congruence between what was intended and what was observed (see the sketch after this list).
Attempts to reflect the complexity of the program as realistically as possible.
Has great potential for generating new insights and theories about the field and the program it evaluates.
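Picking up the congruence point above, a hypothetical continuation of the matrix sketch: a helper that asks, category by category, whether intent and observation agree. The judge callback stands in for the evaluator's qualitative call; nothing here is drawn from the actual case.

# Hypothetical congruence analysis over the description_matrix
# defined in the earlier sketch. judge(intent, observation) -> bool
# stands in for the evaluator's qualitative judgment.
def congruence_report(matrix, judge):
    return {
        category: {
            "intended": cell.intent,
            "observed": cell.observation,
            "congruent": judge(cell.intent, cell.observation),
        }
        for category, cell in matrix.items()
    }

# Example call with a deliberately crude stand-in judge that only
# checks surface equality; a real judgment is a human reading.
report = congruence_report(description_matrix, lambda i, o: i == o)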
Advantages

Evaluators look at the needs of those whom the program serves.
Case Background

Evaluation of a Chesapeake Watershed Ecology course.
Purpose: “Evaluate an environmental education professional development course using Stake’s Countenance Model as the organizational framework.”
The course was designed to educate teachers about research and instructional strategies used to investigate community environmental issues.
The course included laboratory procedures, data collection trips, and data analysis.
Teachers reported:
1. Enhanced professional confidence
2. Not enough time to study and reflect
3. Administrative barriers to implementing what they learned
Evaluation Methodology

Data Collection Instruments:
3. Teacher opinion survey
4. Expert opinion questionnaire
5. Attendance records
6. Background information
7. Teacher journals
8. Instructor journal
Criterion levels were established to judge discrepancies between what was intended and what was observed to occur.
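One way to picture such a criterion level, with invented numbers (the case's actual standards are not reproduced here): fix a tolerated shortfall in advance, then judge the observed discrepancy against it.

# Hypothetical criterion-level check: the discrepancy between an
# intended and an observed value is compared against a pre-set
# tolerance. All values below are illustrative only.
def judge_discrepancy(intended, observed, tolerance):
    shortfall = intended - observed
    return "meets standard" if shortfall <= tolerance else "discrepancy noted"

# e.g., intended 100% attendance, observed 85%, tolerated shortfall 20%
print(judge_discrepancy(intended=1.00, observed=0.85, tolerance=0.20))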
The table shows the outstanding characteristics of the course: it compares intents to observations and describes the judgment standards and the judgment of the evaluator.
Evaluation Results & Summary

Results of Evaluation:
1. Teachers were familiar with basic concepts but not advanced techniques.
2. Established the importance of ties between perceived resource availability, class participation, and curricular choices.
3. Linked the knowledge gains to the improved professional confidence expressed by the teachers.
Quality of the Case Study

The case study did not tackle a complex issue, so the technique is hard to judge.
The tool seemed well suited to the case; in education, evaluation should be participant-oriented.
Would other techniques have been more or less helpful?
Is the technique more than the matrix, and is an evaluator necessary?
Did not see the voice of the evaluator; judgments were largely a result of participant experience and rating.
Does the evaluator do more than facilitate? Does the evaluator make “big picture” observations?
Some of the judgments could possibly have been culled from the survey results as well.
Would different techniques have yielded different results?