
Program Evaluation, Research Design, and the Logic Model


Presentation Transcript


  1. Program Evaluation, Research Design, and the Logic Model. Research Methods for Public Administrators, Dr. Gail Johnson. Adapted, with additional material, by Dr. Mario Rivera for PADM 552, Designing Applied Research.

  2. Multiple Methods—Mixed Methods • Combination and integration of quantitative and qualitative methods • Neither is inherently better • The realities of the situation rule • Each works well in some situations, less well in others • Arguably, all research is qualitative at its inception (choosing and defining the research question) and at its end (interpreting and evaluating findings); most program evaluations and other applied research involve multi-method triangulation (triangulation of both method and data) • Quantitative and qualitative data collection are often used together—mixed-methods approaches: • Available data with surveys • Surveys with observations • Pre-post data analysis (for instance, using t-tests; see the sketch below) • Surveys with focus groups
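A minimal sketch of the pre-post idea, assuming hypothetical before-and-after scores for the same participants and using a paired t-test from Python's scipy.stats; the variable names and values are illustrative only, not part of the original slides:

```python
# Minimal sketch of a pre-post analysis with a paired t-test (hypothetical data).
from scipy import stats

# Hypothetical job-readiness scores for the same eight participants,
# measured before and after a training program.
pre = [52, 60, 45, 70, 58, 63, 49, 55]
post = [61, 66, 50, 74, 65, 70, 53, 60]

# Paired (related-samples) t-test: did scores change significantly after training?
t_stat, p_value = stats.ttest_rel(post, pre)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```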

  3. The Hypothesis Dilemma Hypotheses are designed to express relationships between or among variables. They are testable propositions that formally state theoretical expectations about those relationships. If this is the nature of your topic or question, a hypothesis can add to your research. However, if your topic is more descriptive, exploratory, or analytical, generating a hypothesis may not be appropriate. A hypothesis may not be appropriate if: • You have not identified and articulated a particular theoretical construct • You do not have a set of defined variables • Your question turns on phenomenological description • Your question entails an analytical review of the problem posed • Your question involves, in whole or in part, the study of a cultural group • You will both engage in and research the program in question

  4. What to Evaluate? • Projects: a single intervention in one location, or a single project implemented in several locations. • Programs: an intervention comprising various activities or projects intended to contribute to a common goal. • Organizations: multiple intervention programs delivered by an organization. While seldom done, it is possible and sometimes desirable to evaluate an entire organization, or a complex of programs that essentially defines an entire organization—e.g., the curriculum of a graduate program, its organizational core—without necessarily looking at all operational elements of the organization.

  5. When to Evaluate? • Before the program starts: • To improve design • During implementation: • To improve implementation • To identify barriers to be removed • To capture lessons learned about implementation • To assess a critical element of a program under review, for instance a risk-management training component of an equal opportunity office in a public agency.

  6. When to Evaluate • Mid-term evaluation • Relevance, effectiveness, efficiency • Lessons learned: a management tool • Impact evaluation (versus outcome evaluation) • Either at the end of the project or a few years after the program has been operating: assessing a mature program • Can also look at: effectiveness, efficiency, early signs of impact and sustainability • Lessons learned for future projects or programs • Distinction between short- and medium-term outcomes or results versus longer-term impacts. An impact is the net, long-term effect or complex of effects ensuing from a program. Gross impacts are affected by other causes, other programs, and secular changes (e.g., in attitudes toward smoking). Even if one can gauge gross impacts—and the longer after the program, the greater the exogenous influences—the question is whether one can isolate and evaluate net program impacts (see the sketch below).
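As a back-of-the-envelope illustration of the gross versus net distinction, the following sketch applies a simple difference-in-differences style calculation; the numbers and the availability of a comparison group are hypothetical assumptions, not findings from the slides:

```python
# Hypothetical sketch: separating net program impact from gross change
# by subtracting the change observed in a comparison group.

# Average outcome (e.g., employment rate, %) before and after the program period.
program_before, program_after = 40.0, 62.0        # program participants
comparison_before, comparison_after = 41.0, 50.0  # similar non-participants

gross_impact = program_after - program_before           # includes exogenous influences
secular_change = comparison_after - comparison_before   # change attributable to other causes
net_impact = gross_impact - secular_change              # change plausibly attributable to the program

print(f"Gross: {gross_impact:.1f}, Secular: {secular_change:.1f}, Net: {net_impact:.1f}")
```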

  7. Why Is Evaluation Useful? • Feedback • Accountability • Learning • Improvement • Results • Testing underlying assumptions or theory (change model or theory of change, action model or action theory) • Funding decisions—publicizing and disseminating program results

  8. Evaluation Questions

  9. Types of Evaluations • Auditing: accounting for money • Is the money being spent according to plan? • Efficiency and effectiveness. • Monitoring: measuring implementation and results • Is the intervention producing the intended results? • Process: measuring operations and service delivery • Are there problems in service delivery?

  10. Types of Program Evaluations • Feasibility evaluations • Before the program begins • Intended to improve program design • Evaluability assessments • Assesses potential usefulness of the evaluation • Used to test out different strategies for conducting an evaluation • What is doable given the situation?

  11. Evaluability Assessment • Helps to define the actual objectives, implementation and management of a program. • The actual objectives may differ from those initially planned. • Determines the coherence of the program: are goals, activities, program infrastructure linked?

  12. Evaluability Assessment Key steps in the process: • Interview key program staff to identify the actual program mission, goals, objectives, and activities. • Site visits to observe and get a sense of what is going on. • May include interviews with key stakeholders. • Observe program delivery.

  13. Evaluability Assessment • Reach agreement as to: • Whether to conduct the evaluation. • Scope and objectives of the evaluation. • The decision could be to not conduct the evaluation.

  14. Evaluability Assessment: Challenges • Key components of the program may not be well defined: • Lack of agreement on program objectives. • Lack of clear, measurable indicators of performance and/or impact. • The target group may not be clearly defined. • The delivery system is poorly articulated.

  15. Types of Program Evaluations • Formative evaluations • During implementation • Feedback about operations and processes • Used to make mid-course corrections

  16. Definition: Performance Monitoring • Performance monitoring: • the continuous process of collecting and analyzing data to compare how well a project, program, or policy is being implemented against expected results. • Performance measurement tells you what has occurred, while program evaluation explores why. Program evaluation requires performance data but subjects them to interpretation, eventuating in judgments of value—evaluative decisions, or evaluation. • Traditional vs. newer forms of assessment: • Traditional focus on inputs, activities, and outputs. • Contemporary emphasis on if-then causal questions, incorporating change and action modeling.

  17. Types of Evaluation: Monitoring • Ongoing review: • On time • On budget • On target • Linked with ongoing management • Measured against established baselines • Indicators of progress toward targets (see the sketch below)
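A minimal sketch, assuming hypothetical indicators, baselines, and targets, of how monitoring data might be tracked as percent progress toward targets; the indicator names and values are invented for illustration:

```python
# Hypothetical sketch: tracking monitoring indicators against baselines and targets.
indicators = [
    # (name, baseline, current value, target)
    ("Participants enrolled", 0, 180, 250),
    ("Budget spent ($K)", 0, 320, 400),
    ("Training sessions delivered", 0, 36, 48),
]

for name, baseline, current, target in indicators:
    # Percent of the distance from baseline to target achieved so far.
    progress = (current - baseline) / (target - baseline) * 100
    print(f"{name}: {progress:.0f}% of target")
```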

  18. Types of Program Evaluations • Summative Evaluations • At the end of the program or after the program has been running long enough to achieve its goals (with “mature” programs) • Identify lessons learned • Other issues: unintended outcomes, program sustainability, program efficiency, costs and benefits • Sometimes called impact evaluations and ex-post evaluations

  19. Program Evaluation • Summative Evaluation Question: Do Public Programs Work? • Implied cause-effect relationship • Did the program cause a desired outcome? • Performance-based: Focus on outcomes, results, impacts, goal achievement.

  20. Differences • Formative Evaluations • Project Monitoring • Best Suited to Early Years of Implementation • Key Question: • Are we doing things right? • Have we hired the right people with the right skills? • Have we marketed the program effectively? • Have we met our strategic objectives? • Have we spent our money according to our plan?

  21. Differences • Summative Evaluations • Measuring Results or Impacts • A longer time before results or impacts are visible • Key Question: • Are we doing the right thing? • This gets back to the theory or underlying assumptions of the program: • We can do an excellent job at training people but if the problem derives from larger structural economic issues, a training program, no matter how well implemented, may show little result.

  22. Participatory Evaluation • Responsibility for planning, implementing, evaluating and reporting is shared with all stakeholders. • A partnership based on dialogue and negotiation.

  23. Participatory vs. Traditional Evaluation • Participatory: participant focus and ownership; focus on learning; flexible design; rapid appraisal methods; outsiders are facilitators • Traditional: donor focus and ownership; focus on accountability and judgment; predetermined design; formal methods; outsiders are evaluators

  24. Participatory Evaluation Participants: • Engage in group discussions • Conduct interviews • Conduct field workshops • Analyze data and develop conclusions • Write the report • Disseminate the evaluation results

  25. Participatory Evaluation How is it done? • No single right way • Commitment to the principles of participation and inclusion • Core assumption: those closest to the situation have valuable and necessary information • Develop strategies to build trust and honest communication • Information sharing and decision-making • Create “even tables,” where stakeholders participate as equals

  26. Participatory Evaluation • Benefits: • Increased credibility of results • Results are more likely to be used • Increased buy-in, less resistance • Increased sustainability • A tool for empowering the affected populations • More flexibility in approaches

  27. Participatory Evaluation • Challenges: • Time consuming • Clarifying roles, responsibilities, and process • Requires skilled facilitation • Just-in-time training • No predetermined evaluation plan • May be seen as less objective

  28. Participatory Evaluation • Is it the right approach for you? • Is there a need for: • An independent outside judgment? • Technical information? • Will stakeholders want to participate? • Is there sufficient agreement among the stakeholders so they can work together?

  29. Rapid Assessments Described as: “fairly-quick and fairly-clean” as opposed to “quick and dirty” or “long and clean” studies.

  30. Rapid Assessments Uses • Diagnostic evaluations: • processes, problems and causes of problems. • Trouble-shooting: • Why isn’t a program working as expected? • What can be done to improve the program?

  31. Rapid Assessments Data Collection • Observe: Patterns, land use, behavior • Converse: Talk with people, listen to their concerns and views; interviews, meetings. • Record: Write everything down. • Locate: Existing data: reports, records, maps, prior studies.

  32. Principles of Rapid Assessments • Process: • Don’t rush when gathering information • Probe and explore • Listen rather than speak • Be unimposing, open, non-judgmental • Seek out the people most likely to be overlooked and find out what concerns them

  33. Rapid Assessments: No Fixed Definition • Intended to do evaluations quickly while obtaining reasonably accurate and useful information • Uses a systematic strategy to obtain just essential information • Focus is on practical issues Sources: FAO.Org/docrep/

  34. Principles of Rapid Assessments • Optimize trade-offs: quantity, relevance, accuracy, and timeliness. • Planned, but with the ability to pursue serendipity. • Triangulation: use more than one technique and source of information. • Face-to-face, on-site learning. • Work from a general blueprint that is adapted as more information is obtained: use multiple methods, improvisation, cross-checking. • Source: FAO.Org/docrep/.

  35. Rapid Assessments • Is it the right approach for you? • The study is important, but no major decisions will be based on it. • Not controversial, so face validity is sufficient. • Limited time and resources are available. • Information is already available or requires in-depth understanding of process.

  36. Social Assessment • Social assessment is a process that brings relevant social information into the decision-making process for program design, implementation, monitoring and evaluation. • It assists in forming key outcome measures to be used in evaluation.

  37. Social Assessment • Tools and approaches include: • Stakeholder analysis • Gender analysis • Participatory rapid appraisal • Observation, interviews, focus groups • Mapping, analysis of tasks, wealth ranking • Workshops (e.g., objective-oriented project planning)

  38. Working with Logic Models • Visualize a program in context • Systems approach, within an environment • Identify the relationships between various components • Identify cause and effect • Identify key assumptions

  39. Models: Cause and Effect: Did the program cause something to happen? (Diagram: Education → Employment)

  40. Hierarchy of Objectives • Sequencing (diagram, bottom to top): Unemployed → Job Training → Increased Income → Improved Quality of Life → Reduced Poverty

  41. Logic Models • The focus is on results or impacts rather than inputs and activities • We are not training people just for the sake of training people • We believe that if we train the chronically unemployed, then their quality of life will be improved and poverty will decrease • Our goal is to reduce poverty • Also called the Program Outcome Model or Measuring for Results • Remember, models are not reality; avoid reification. • What makes a good model?

  42. Elements of the Logic Model • Inputs: what resources are used • University inputs: budget, number of faculty, number of staff, number of buildings, number of classrooms • Activities: what the program does • University activities: teaching, research, and service

  43. Elements of the Logic Model • Outputs: the services or products produced • University outputs: number of students who graduate, number of articles and books published by faculty • Outcomes: what happened; the immediate results • Graduates are sought after, get good jobs, and become active alumni who donate big bucks • Faculty are well known, obtain big grants, and enhance the rating of the university

  44. Elements of the Logic Model • Impacts: the “so what”: larger, long-term results, usually tied to program goals. • A more informed and engaged citizenry, preserved democratic institutions, future leaders; faculty research contributes to knowledge.

  45. Logic Model Logical Connections: • Inputs are used to carry out activities • Activities lead to outputs • Outputs lead to one or more outcomes • Outcomes lead to impacts

  46. Logic Model: Training Program • Inputs (resources): money, staff, supplies, mentors • Activities: training programs, dress-for-success coaching, interview coaching, resume assistance • Outputs (products): number of graduates per training session, % graduation rate • Outcomes (benefits, changes): increased skills, % obtain jobs, % obtain high-paying, quality jobs, increased self-esteem • Impacts (goals): increased income, self-sufficiency, family stability, reduction in poverty
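To make the input → activity → output → outcome → impact chain explicit, the following sketch encodes the slide's training-program logic model as a simple Python data structure; it is purely illustrative, and the class and field names are assumptions for illustration rather than part of the original model:

```python
# Illustrative sketch: the training-program logic model as a simple data structure.
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    inputs: list = field(default_factory=list)      # resources used
    activities: list = field(default_factory=list)  # what the program does
    outputs: list = field(default_factory=list)     # products or services produced
    outcomes: list = field(default_factory=list)    # immediate benefits and changes
    impacts: list = field(default_factory=list)     # long-term results tied to goals

training_program = LogicModel(
    inputs=["money", "staff", "supplies", "mentors"],
    activities=["training sessions", "dress-for-success coaching",
                "interview coaching", "resume assistance"],
    outputs=["number of graduates per session", "% graduation rate"],
    outcomes=["increased skills", "% obtaining jobs",
              "% obtaining high-paying, quality jobs", "increased self-esteem"],
    impacts=["increased income", "self-sufficiency",
             "family stability", "reduction in poverty"],
)

# Print the chain in its logical order.
for stage in ("inputs", "activities", "outputs", "outcomes", "impacts"):
    print(f"{stage.capitalize()}: {', '.join(getattr(training_program, stage))}")
```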

  47. Complex effects chains in partnered programs call for network logic models: multiple partners (Partners 1, 2, 3, etc.) contribute to shared common outcomes, which creates attribution difficulties as well as transparency and accountability challenges. Mario Rivera

  48. Participatory evaluation • Participatory evaluation is complex, requiring a correspondingly complex evaluative approach that can adequately deal with complex causality and interaction • The case study is one approach to evaluation that can capture such complexity, through “thick description.” Mario Rivera

  49. Logic Models • The focus is on results or impacts rather than inputs and activities, although all of these are specified, along with indicators and measures. For example, you are in effect saying that • We are not training people just for the sake of training people. • We believe if we train the chronically unemployed, then they might gain meaningful and sustainable employment. Their quality of life will be improved thereby, and with enough such results from this and other efforts poverty will decrease. • Our strategic goal is to help improve that quality of life and reduce poverty—these are the anticipated or hoped-for program impacts. • Also called the Program Outcome Model, Measuring for Results, etc. Mario Rivera

  50. Logic Models • A logic model is a systematic and visual way to present and share your understanding of the relationships among the resources you have to operate your program, the activities you plan to undertake, and the changes or results you hope to achieve • Provides stakeholders with a road map describing the sequence of related events connecting the need for the planned program with the program’s desired results • A program design and planning tool • A program implementation tool, as the core of a focused management plan • Program evaluation and strategic reporting: presents program information and progress toward goals, telling the story of the program’s commitments and successes. Mario Rivera
