
Building Evaluation Capacities: Which Way Forward?






Presentation Transcript


  1. Building Evaluation Capacities: Which Way Forward? Krishna D. Rao, Public Health Foundation of India, July 23, 2010

  2. The Project Cycle
  • Identify needs: What focus areas? Who should benefit? What needs? How: formative research, needs assessment, KAP studies, expert opinion.
  • Plan: set project objectives; decide how to achieve them; plan implementation; budget; prepare an M&E plan for measuring project performance.
  • Implement project activities.
  • Evaluate performance and impact: Were objectives achieved? Were there implementation problems?

  3. The M&E framework (logic model). Source: Bertrand, J., OpenCourseWare, Johns Hopkins University.

  4. M&E: some definitions
  • Evaluation measures how well program activities have met program objectives and/or the extent to which changes in program outcomes can be attributed to the program (source: MEASURE Evaluation 2007).
  • Impact evaluation attempts to measure the contribution of the program to any observed changes in the program outcomes, i.e. to demonstrate cause and effect.
  • It typically involves collecting data at the beginning and end of the project.
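A minimal sketch (hypothetical numbers, assuming NumPy is available) of the baseline/endline data collection just described: the raw change in an outcome between project start and end. Note that this change alone does not attribute anything to the program; that is the job of the designs on the following slides.

```python
import numpy as np

# Hypothetical outcome (e.g., % of children immunized) measured in the
# same program districts at baseline and again at project end.
baseline = np.array([52.0, 48.0, 61.0, 55.0, 49.0])
endline  = np.array([64.0, 59.0, 70.0, 66.0, 58.0])

# Raw pre-post change: what a simple before/after comparison measures.
change = endline - baseline
print("Mean change:", change.mean())

# This change mixes the program effect with everything else that happened
# over the same period; attribution requires a comparison group.
```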

  5. Impact Evaluation
  • Program managers, policy makers, and donors want to know:
  • Did this program make a difference or achieve its objectives?
  • How large a difference did the program make?
  • Did everyone get the same benefit from the program?
  • Did the program cause the desired/observed change?

  6. [Figure: program outcome plotted against time, from program start to program midpoint or end, with the gap labeled "Program Impact". The evaluation question: how much of this change in the outcome is due to the program?]

  7. Causality and Experimental Designs. Counterfactual (adjective): relating to or expressing what has not happened or is not the case. In impact evaluation, the counterfactual is what would have happened to program participants in the absence of the program; in non-random designs it must be approximated by a comparison group.

  8. Experimental study design: random allocation. The idea behind randomization is to ensure that all groups (experimental and control) have the same characteristics. This allows the control group to behave like the experimental group, but without the intervention.
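A minimal simulation (illustrative only, assuming NumPy) of why random allocation works: both arms end up with the same expected characteristics, so a simple difference in mean outcomes estimates the program effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# A background characteristic (age) that influences the outcome.
age = rng.normal(35, 10, n)

# Random allocation: each person is equally likely to be in either arm,
# so the experimental and control groups are comparable on average.
treated = rng.integers(0, 2, n).astype(bool)
print("Mean age, experimental:", round(age[treated].mean(), 1))
print("Mean age, control:     ", round(age[~treated].mean(), 1))

# Simulated outcome with a true program effect of 5 units.
outcome = 0.3 * age + 5 * treated + rng.normal(0, 2, n)

# With comparable groups, the difference in means is an unbiased
# estimate of the program effect (it should come out close to 5).
print("Estimated effect:", round(outcome[treated].mean() - outcome[~treated].mean(), 2))
```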

  9. Experimental study design
  • Not possible to use for typical health programs where whole populations are targeted for coverage.
  • Political issues (e.g. people in the non-intervention group might want the intervention).
  • Generalizability (external validity) is low: results may be different under 'normal' (i.e. non-experimental) conditions.
  • Can be unethical in some cases.
  • When randomization is not feasible, it may be possible to construct a comparison group that is similar to the intervention group, so that we can make valid conclusions about program effects, i.e. quasi-experiments.

  10. Impact Evaluation: non-equivalent control group. In this design, the experimental and control groups can have different characteristics, since there is no random allocation. To get an unbiased estimate of the program effect, these differences need to be controlled for in the analysis.
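A minimal sketch (simulated data; assuming NumPy, pandas, and statsmodels are installed) of controlling for group differences in the analysis: when the intervention group differs systematically from the comparison group, a raw difference in means is biased, while a regression that adjusts for the differing characteristic recovers an estimate close to the true effect.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000

# Non-equivalent groups: poorer areas are more likely to get the program,
# and income also affects the outcome, so the groups are not comparable.
income = rng.normal(100, 20, n)
treated = (income + rng.normal(0, 20, n) < 100).astype(int)
outcome = 0.2 * income + 5 * treated + rng.normal(0, 2, n)
df = pd.DataFrame({"outcome": outcome, "treated": treated, "income": income})

# Raw difference in means is biased by the income difference.
raw = df.loc[df.treated == 1, "outcome"].mean() - df.loc[df.treated == 0, "outcome"].mean()
print("Raw difference:", round(raw, 2))

# Controlling for the observed difference recovers the true effect (5).
fit = smf.ols("outcome ~ treated + income", data=df).fit()
print("Adjusted effect:", round(fit.params["treated"], 2))
```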

  11. Does this experimental/quasi-experimental approach to evaluating program impact produce useful lessons for policy and practice?

  12. The Realist Evaluation approach (Pawson and Tilley 1997) disagrees with this:
  • For policy purposes it is not very useful to ask "What works?", i.e. the question a typical impact evaluation seeks to answer.
  • It is more useful to ask "What works, for whom, and under what circumstances?"
  • Knowledge of how interventions produce varying impacts in different circumstances enables policy makers to better decide which policy to implement where.

  13. Realist evaluation stresses four key linked concepts for explaining and understanding programs:
  • (1) Mechanism: mechanisms describe what it is about programs that brings about any effects (expected and unexpected).
  • A 'program theory' is needed to postulate a series of hypotheses about how, and the ways in which, the program might have an effect on outcomes.

  14. (2) Context: features of the conditions in which programs are introduced that are relevant to the operation of the program mechanisms.
  • Contextual thinking is used to address the issues of 'for whom' and 'in what circumstances' a program will work.
  • What is contextually significant may relate not only to location but also to systems of interpersonal and social relationships, and even to biology, technology, and economic conditions, e.g. management, worker motivation.

  15. (3) Outcome patterns: the intended and unintended outcomes of programs, resulting from the activation of different mechanisms in different contexts.
  • Because of relevant variations in context, and in the mechanisms thereby activated, any program is liable to have mixed outcome patterns.
  • The verdict on a program rests not on one single outcome but on a range of output and outcome measures.
  • (4) Context-mechanism-outcome pattern configurations (CMOCs): realist evaluation develops and tests CMOC conjectures empirically, i.e. how programs activate mechanisms, amongst whom, and in what conditions, to bring about alterations in behavioral, event, or state regularities.
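One way to make the CMOC idea concrete in code: represent each context-mechanism-outcome conjecture as a structured record that the evaluation then tests empirically. The class and the example conjectures below are purely illustrative, not part of Pawson and Tilley's formulation.

```python
from dataclasses import dataclass

@dataclass
class CMOC:
    context: str    # for whom / in what circumstances
    mechanism: str  # what it is about the program that produces change
    outcome: str    # the outcome pattern expected if the conjecture holds

# Hypothetical conjectures for a health-worker incentive program.
conjectures = [
    CMOC(context="facilities with supportive supervision",
         mechanism="incentives raise worker motivation",
         outcome="higher service volume"),
    CMOC(context="severely understaffed facilities",
         mechanism="incentives cannot offset workload",
         outcome="little or no change in service volume"),
]

for c in conjectures:
    print(f"In {c.context}: {c.mechanism} -> {c.outcome}")
```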

  16. How is realist evaluation different?
  • No particular preference for either quantitative or qualitative methods: there is merit and usefulness in both approaches, so capacity in both is important.
  • To develop program theories, various sources are used: documents, program architects, practitioners, previous evaluation studies, and the social science literature.
  • Stakeholders are regarded as key sources for eliciting program theory and for providing data on how the program works, i.e. it uses internal program knowledge.
  • This is a significant departure from the way evaluation is typically conducted: it involves all stakeholders in the process and so will be perceived as participatory rather than imposed.

  17. THANK YOU

  18. Causality and Experimental Designs. For individual i, the treatment effect is the difference between the two potential outcomes: Treatment effect = Y(i,1) − Y(i,0), where Y(i,1) is the outcome (e.g. Y = health status) if treated/exposed, and Y(i,0) is the outcome if not treated/not exposed.
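A minimal simulation (hypothetical numbers, assuming NumPy) of the potential-outcomes notation on this slide: each individual i has an outcome Y(i,1) if treated/exposed and Y(i,0) if not, and the treatment effect is their difference. In real data only one of the two is observed per person, which is why the earlier designs construct a comparison group to stand in for the missing counterfactual.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5

# Potential outcomes for each individual i (e.g., Y = health status):
# y1[i] = Y(i,1), outcome if treated/exposed
# y0[i] = Y(i,0), outcome if not treated/not exposed
y0 = rng.normal(50, 5, n)
y1 = y0 + rng.normal(4, 1, n)

# Individual treatment effect: Y(i,1) - Y(i,0)
effect = y1 - y0
print("Individual effects:", np.round(effect, 1))
print("Average effect:   ", round(effect.mean(), 1))

# In practice each person is either treated or not, so only one of the
# two potential outcomes is ever observed; the other is counterfactual.
treated = rng.integers(0, 2, n).astype(bool)
observed = np.where(treated, y1, y0)
print("Observed outcomes:", np.round(observed, 1))
```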
