
Overview of Program Evaluation


Presentation Transcript


  1. Bringing Your Program Full Circle: Evaluation Tips and Techniques. 2008 National Air Quality Conference. Elizabeth Schmitz, KY Division for Air Quality; Julie Ernst, University of MN Duluth

  2. Overview of Program Evaluation

  3. What is Program Evaluation? The systematic collection of information about the activities, characteristics, and outcomes of programs to make judgments about the program, improve program effectiveness, and/or inform decisions about future programming. (Patton, 1997).

  4. Why evaluate? • Understand what is working and what isn’t (program improvement) • Make sound decisions (should the program be continued, scaled back, discontinued, enhanced?) • Justify programs, explain accomplishments • Promote programs, products, and services • Gain funding • Guide program development

  5. Importance of “Use” • The idea of using evaluation data to improve a program or make decisions is central to evaluation • Evaluations should be conducted with a specific use for, and user of, the evaluation in mind

  6. Types of Evaluation based on Purpose (Intended Use) • Front-End: to guide program development (Is this program needed? How should it be designed? What should the program outcomes be?); Used by those developing the program • Formative: to guide program improvement (What is working? What needs to be improved? How can it be improved?); Generally used internally; Often occurs in early stages of program development • Summative: to guide decisions about the program’s future; Used internally and externally by key decision-makers (program staff, supervisors, funders); Often occurs later in program development

  7. Formative v. Summative: “When the cook tastes the soup, that’s formative evaluation; when the guest tastes it, that’s summative evaluation.” (Robert Stake in Patton, 1997)

  8. Check for Understanding What type of evaluation do the following examples describe? • After developing a series of potential messages, the APCD held focus groups to determine what message was most likely to move Louisville residents to take action that would benefit air quality.

  9. Check for Understanding • DAQ is currently designing an online survey that will be emailed to all 6-12 grade teachers in Kentucky. The results of this survey will assess teacher needs in order to guide development of a unit of study. • Pre- and post-tests are routinely used to evaluate the learning gains of workshop participants, and tallied at the end of a year for an overall picture of program efficacy.
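The pre/post tally described above is simple arithmetic. As a rough illustration only, the minimal sketch below computes each participant's gain and the average for the year; the score lists and variable names are hypothetical placeholders, not data from the DAQ workshops.

```python
# Minimal sketch: tallying pre/post-test learning gains across a year's workshops.
# The scores below are hypothetical placeholders, not actual program data.

pre_scores = [12, 15, 9, 14, 11]    # pre-test scores for one year's participants
post_scores = [18, 19, 14, 17, 16]  # matching post-test scores, same order

gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
average_gain = sum(gains) / len(gains)

print(f"Individual gains: {gains}")
print(f"Average learning gain for the year: {average_gain:.1f} points")
```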

  10. The Evaluation Process • Focus your evaluation • Develop your evaluation plan • Develop data collection tools • Collect data • Analyze data and interpret results • Communicate and use the results to improve the program or make decisions

  11. Focusing your Evaluation

  12. Focusing Your Evaluation A. Identify the purpose for your evaluation (clarify uses and users) For example: The purpose of this evaluation is to determine which EE programs support the mission of the KY DAQ, in order for the Director to make summative decisions regarding which programs to continue and which to suspend.

  13. Focusing Your Evaluation Questions to consider: • Who are your program stakeholders? • Why are you considering evaluating your program? • Who are your evaluation stakeholders? • Who is the primary intended user of your evaluation? • Specifically, how will the results be used?

  14. Focusing Your Evaluation • Identify the purpose for your evaluation (clarify uses and users) • Describe your program: introducing the logic model!

  15. What is a Logic Model? • Diagram that summarizes key elements of a program in a way that shows the relationships among program elements • Shows the relationship between what we put in, what we do, and what results; describes the sequence of events thought to bring about benefits or change

  16. Everyday Logic Model: Hungry → Gather ingredients → Cook and eat → Feel satisfied

  17. Logic Model: Situation → Inputs → Outputs → Outcomes

  18. SITUATION • The conditions that give rise to the program • What needs to be done? • What do our stakeholders want done? • What are our priorities?

  19. INPUTS • What we invest • Resources and contributions • Staff, volunteers, time, money, materials, equipment, technology, partners, facilities

  20. OUTPUTS • What we do and who we reach • Activities (training, recruitment, workshops, etc.) and Products (activity guide, exhibit, curriculum, poster, etc.) • People we reach (visitors, citizens, participants, students)

  21. OUTCOMES • The results • Learning: Awareness, Knowledge, Attitudes, Skills, Opinions, Motivations • Action: Behavior, Decision-making, Social action, Policies • Ultimate Impact: Social, Economic, and Environmental Conditions

  22. Outputs v. Outcomes • Output (Activity) driven: Teens volunteered an average of 10 hours over the summer in community service projects. • Outcome (Impact) driven: Teens learn how to identify and address a community need. Teens feel more responsible for their community. • Outcomes answer: SO WHAT? What difference does the program make?

  23. Logic Model – 2 other pieces: Situation → Inputs → Outputs → Outcomes, plus External Factors

  24. External Factors • Context in which the program is situated and external conditions that influence the success of the program, such as: politics, policies, demographics, economics, culture, and the biophysical environment

  25. Logic Model – 2 other pieces: Situation → Inputs → Outputs → Outcomes, plus External Factors and Assumptions

  26. Underlying Assumptions • Beliefs we have about the program and the way we think it will work: the participants, the way the program will operate, how resources will be used • Faulty assumptions lead to poor results: Are your assumptions realistic and sound?
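Before the next check for understanding, it can help to see all six logic model pieces side by side. The minimal sketch below simply records them as a plain data structure, using the everyday "hungry" example from the earlier slide; the specific field values beyond that example are illustrative assumptions, not part of the original model.

```python
# Minimal sketch: a logic model captured as a plain data structure,
# using the everyday "hungry" example. Field values are illustrative only.

logic_model = {
    "situation": ["hungry"],
    "inputs": ["ingredients", "time", "kitchen equipment"],
    "outputs": ["gather ingredients", "cook and eat"],
    "outcomes": ["feel satisfied"],
    "external_factors": ["what food happens to be in the pantry"],
    "assumptions": ["the cook knows how to prepare the meal"],
}

for component, items in logic_model.items():
    print(f"{component}: {', '.join(items)}")
```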

  27. Check for Understanding • 13 counties may face non-attainment designation as a result of the new 8-hour ozone standard. • Teachers applied their new air quality understanding in the classroom. • The Office of Energy Policy awarded a grant of $350 for the purchase of CFLs. • The Clean Air for KY program reached 4,000 students. • Increasing students’ knowledge about air quality through school-based outreach will encourage students to take action, like turning off the lights, at home.

  28. Focusing Your Evaluation • Identify the purpose for your evaluation (clarify uses and users) • Describe your (EE) program • Consider logistics • Available staff for the evaluation • Timeframe • Money/other resources available • Contextual or other external factors that may affect the evaluation process

  29. Developing your Evaluation Plan

  30. Evaluation Plan

  31. Evaluation Questions: 2 Phases • 1. Generate a list of potential evaluation questions. • 2. Narrow your list - would the evaluation question: • Be of interest to the primary intended user? • Provide information that addresses the intended use for the evaluation results? • Contribute information that is not already known? • Be of continuing interest? • Be feasible, in terms of time, money, and skill? • Issues can emerge that require new or revised questions; be flexible, yet do not chase every new or interesting direction that emerges

  32. Indicators • Evidence or information that represents the phenomena of interest: What would indicate this program objective/outcome has been achieved? What does “success” look like? • Help you know something; they are usually specific and measurable • For each aspect you want to measure, ask: What would it look like? What kind of information is needed?

  33. Determine Sources of Information (Who will provide the data?) • Participants • Non-participants • Key informants (parents, teachers, previous participants) • Program staff, administrators, or partners • Program documents (logs, records, minutes of meetings)

  34. Determine Data Collection Tools • Tool = what you use to collect data • Choice of tool dependent on: • Intended users of and use for evaluation • Evaluation question, indicator, and source of information previously identified • Amount of time and money • Skill and philosophy of evaluator • Weighing of advantages and disadvantages

  35. Determine Data Collection Tools • Types of Data (Information) that Can Be Collected: • Qualitative – Descriptive, narrative, rich in explanation. Often collected from a smaller set of participants. Depth • Quantitative – Numerical measurement. Often collected from a larger set of participants. In some cases, can be generalizable to a population. Breadth

  36. Design When would you need to collect data & from whom if you want to show: • gain or change in participants’ knowledge? • participants “outperform” another group? • participants have the desired characteristic, behavior, or knowledge after your program? • changes in participants over time? • results can be attributed to your program in a causal sense?

  37. Sampling (Who/how many?) • A sample is a subgroup of a larger group (population) • Sampling refers to the method used to select the people, classrooms, counties, etc. to study • Sampling decisions are based on population size, what you want to know, and the resources available. • First questions to ask: What is the population of interest and is sampling needed? If the population is small, you likely will include all its members.
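When the population of interest is too large to include everyone, a simple random sample is one common approach. The minimal sketch below draws such a sample; the roster of teacher addresses and the sample size are hypothetical, not an actual DAQ mailing list.

```python
# Minimal sketch: drawing a simple random sample from a population list.
# The roster here is hypothetical, not an actual mailing list.
import random

population = [f"teacher_{n}@example.org" for n in range(1, 501)]  # 500 hypothetical teachers

sample_size = 50
sample = random.sample(population, sample_size)  # simple random sample, no repeats

print(f"Selected {len(sample)} of {len(population)} teachers to survey.")
```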

  38. Developing Data Collection Tools

  39. Developing Data Collection Tools • Interviews or surveys? • Interview: Open-ended questions designed to elicit thoughts, feelings, experiences, and stories from respondents; no response options are provided; Qualitative • Survey: a list of stable and primarily closed-ended questions; Quantitative

  40. Interviews • Create an Interview Guide • How Many Respondents? • Iterative Process

  41. Conducting Interviews • Be consistent with questions and cues! • Reassure participants that you will protect their identity • Begin and end by thanking participants • Request permission to tape • Hold in neutral, private territory where the respondent will be comfortable

  42. Focus Groups • Group interviews that encourage participants to build on each other’s responses • Guidelines for developing interviews apply to focus groups as well • Last roughly 1-2 hours and involve 6-10 people

  43. Surveys • Process large amounts of data • Best for generalizing results to a larger population • Can be used for planning, formative, and summative types of evaluation • Always pilot test your surveys!

  44. Collect Data

  45. Collect Data • First, review your logic model, evaluation focus, and planning matrix to ensure that you are on target with the data you collect • Look for existing data that may meet your needs before designing a new data collection protocol
