Ensure long-term success of educational programs with impact evaluations that go beyond short-term results. Explore keys to lasting effects, causal inferences, community impact, and sustainability.
Will the magic last? • Even if an innovative educational program, a “magic bullet”, has “worked”, there is no guarantee it will continue to “work” or even be used in the future. • What kinds of evidence are needed to know that a Program makes a long-term impact?
Will the magic last? • Long-term follow-up with the participants • Strength of the effects • Administrative stability / turnover • Administrative support / demands • Resources to continue the Program at all • Resources to continue the Program in its most powerful form
CIPPI Model • Context • Input • Process • Product • Impact • The goal is to make INFORMED judgments about Program merit or value.
The Nature of Evaluation Tasks • The goal is to make INFORMED judgments about Program merit or value. • Making those judgments requires thorough answers to questions such as: • How long do the effects last? • What caused the effects in the first place? • Can the Program continue?
Impact Evaluation • The central focus of an Impact Evaluation is to extend beyond Product Evaluation, with its emphasis on the accomplishment of specific, short-term objectives, to the long-term Impact of the Program. • If the Product Evaluation has demonstrated that the Program is meeting its objectives, Impact Evaluation may focus on higher-order questions.
Higher Order Questions • Lasting Impact • Breadth and Depth of the Impact • Causal Inferences • Attributing Change to Training Programs • Community Impact • Sustainability
Lasting Impact • Is it reasonable to expect the Program effects to persist past Program participation? • Are there resources available to include follow-up in the evaluation plan? • This kind of study may take place after short-term studies have shown Program effects.
Lasting Impact • Have the participants been followed after completion of the Program? • How long do the effects of the Program last? • What does the Program expect / believe / claim will happen for participants in the long run?
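A lasting-impact question like those above can be made concrete with a simple persistence check. The sketch below is illustrative only: the scores, the assumed baseline, and the 12-month window are hypothetical, not data from any actual Program.

```python
# A minimal sketch of a lasting-impact check. All numbers are hypothetical:
# outcome scores at Program exit and again 12 months later for the same
# participants, compared against an assumed pre-Program baseline mean.
from statistics import mean

post_test = [72, 68, 80, 75, 77, 70, 83, 69]   # scores at Program exit
follow_up = [70, 65, 78, 74, 73, 66, 81, 67]   # same participants, 12 months later
baseline = 60.0                                 # assumed pre-Program mean

gain_at_exit = mean(post_test) - baseline
gain_at_follow_up = mean(follow_up) - baseline

# Share of the original gain still present at follow-up:
# 1.0 = fully retained, 0.0 = fully faded.
persistence = gain_at_follow_up / gain_at_exit
print(f"Gain at exit: {gain_at_exit:.2f}; gain at follow-up: {gain_at_follow_up:.2f}")
print(f"Persistence of effect: {persistence:.2f}")
```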
Breadth and Depth of the Impact • Are Program participants able to show the benefits of participation in multiple settings, or across various aspects of their lives? • For example, if children who have participated in the Program exhibit more positive social behaviors at school, do they do so at home as well?
Causal Inferences • What causal inferences can be made about the effects of the Program from the Product Evaluation evidence? • Can the results of the Product Evaluation be attributed to the Program and only the Program? • Have the results been replicated across service delivery cycles?
Causal Inferences • A controlled, experimental, comparative study is necessary if the stakeholders are interested in causality. • Cost considerations • Design feasibility considerations – control group, random assignment, etc.
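To show what the comparative analysis behind a causal claim might look like, here is a minimal sketch. It assumes a randomized control group and hypothetical scores; the standardized mean difference (Cohen's d) is one common way to express the strength of the effects mentioned earlier.

```python
# A minimal sketch of a treatment-vs-control comparison, assuming random
# assignment. Scores are hypothetical, not from any actual Program.
from math import sqrt
from statistics import mean, stdev

treatment = [78, 82, 75, 88, 80, 77, 85, 79]   # Program participants
control   = [70, 74, 69, 72, 75, 68, 73, 71]   # randomly assigned non-participants

def cohens_d(a, b):
    """Standardized mean difference using the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_sd = sqrt(((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
                     / (na + nb - 2))
    return (mean(a) - mean(b)) / pooled_sd

d = cohens_d(treatment, control)
print(f"Treatment mean {mean(treatment):.1f} vs. control mean {mean(control):.1f}")
print(f"Effect size (Cohen's d): {d:.2f}")   # by convention, 0.8+ reads as large
```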
Change Due to Training • If the evaluation focuses on the results of training or professional development, it becomes especially difficult to connect observed changes and effects back to the training. • Why?
Change Due to Training • Administrative support • Administrative expectations for implementation • Technical assistance and follow-up • Interactions between the new and innovative strategies from training and what was already in place • Implementation fidelity
Change Due to Training • If the stakeholders want causal inferences made about the effects of training, all four of Kirkpatrick’s levels of evaluation need to be used. • Reaction • Learning • Behavior • Results
Community Impact • Is the Program reaching all of the targeted population? • Has the Program brought about any positive changes in the community context within which it operates? • Does the community know about / trust the Program and its benefits?
Community Impact • To what extent is the Program effective in communicating with the targeted population about Program effects and opportunities to participate? • How effective are the recruitment, marketing, and reporting strategies of the Program?
Sustainability • Is the Program sustainable? • What resources will be needed to continue, expand, and strengthen the Program? • Can the Program be replicated in another service delivery cycle? • Can the Program be transported to other settings?
Evaluation Strategies • The same methods as Product Evaluation • Tracking Program participants beyond the treatment • Obtain informed consent to track at the beginning • Multiple Informants – parent surveys, etc. • Experimental methods • Kirkpatrick model
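As one way to operationalize the tracking strategy above, the sketch below models a participant record with an enrollment-time consent flag and observations from multiple informants across follow-up waves. All field names, waves, and values are illustrative assumptions, not a prescribed schema.

```python
# A minimal sketch of a longitudinal tracking record. Field names, waves,
# and informant types are illustrative assumptions only.
from dataclasses import dataclass, field

@dataclass
class Observation:
    wave: str        # e.g., "exit", "6-month", "12-month"
    informant: str   # e.g., "teacher", "parent", "self-report"
    score: float

@dataclass
class Participant:
    participant_id: str
    consented_to_follow_up: bool          # obtained at enrollment
    observations: list = field(default_factory=list)

p = Participant("P-001", consented_to_follow_up=True)
p.observations.append(Observation("exit", "teacher", 74.0))
p.observations.append(Observation("12-month", "parent", 71.0))

# Only participants who consented at enrollment are tracked past treatment.
follow_up_pool = [p] if p.consented_to_follow_up else []
print(f"{len(follow_up_pool)} participant(s) available for follow-up analysis")
```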
Summative Conclusions • Make a summary judgment about what Program outcomes can be expected to last. • Help the Program revisit its mission, goals, and objectives. In light of the cumulative weight of the evaluation evidence, are there any suggested revisions?