Presentation Transcript


  1. FUNDAMENTALS OF EVALUATION Part 2: Addressing Constraints and Weaknesses in Evaluation Design May 23, 2012 YALE-GRIFFIN PREVENTION RESEARCH CENTER David L. Katz, MD, MPH Valentine Yanchou Njike, MD, MPH Jesse Reynolds, MS

  2. REVIEW: Specific Evaluation Steps • Engage stakeholders • Describe the program • Focus the evaluation design • Gather credible evidence • Justify conclusions • Ensure use and share lessons learned • Adapted from Joint Committee on Standards for Educational Evaluation. Program evaluation standards: how to assess evaluations of educational programs. 2nd ed. Thousand Oaks, CA: Sage Publications, 1994.

  3. REVIEW: PICO Model • PICO Model • Population • Who is the targeted population? • Is there a primary health problem? • Are there relevant characteristics such as age, sex, or race/ethnicity? • Intervention • What does the intervention consist of? • Comparison • Is there a comparison or control group? • Outcome • What do you expect to see improve or change? • How is this measured? • Is it operationally defined?

  4. Addressing Constraints and Weaknesses in Evaluation Design • Defining the Program Theory Model • Choosing the Best Design • Critical Analyses and Comparisons • Evaluation Scenarios • Overcoming Evaluation Challenges

  5. Who uses Evaluation? • Two main users: • Evaluation practitioners • Managers, funding agencies and external consultants • The evaluation may start at: • The beginning of the project • After the project is fully operational • During or near the end of project implementation • After the project is finished

  6. Defining the Program Theory Model • Defining and testing critical assumptions is an essential (but often ignored) element of program theory models.

  7. Defining the Program Theory Model • All programs are based on a set of assumptions (hypotheses) about how the project’s interventions should lead to desired outcomes. • Sometimes this is clearly spelled out in project documents. • Sometimes it is only implicit, and the evaluator needs to help stakeholders articulate the hypothesis through a theoretical model.

  8. Defining the Program Theory Model • Before an evaluation can be conducted… • It is necessary to identify the explicit or implicit theory or logic model that underlies the design upon which a project was based. • What are the expected outcomes (if applicable)?

  9. Defining the Program Theory Model • An important function of an impact evaluation is… • To test the hypothesis that the project’s interventions and outputs contributed to the desired outcomes. • With attention always paid to the external factors that the project assumed would prevail in order for the interventions to lead to sustainable impact.

  10. Defining the Program Theory Model • Defining the program theory or logic model is good practice for any evaluation. • Strategic Planning Framework • Results Based Accountability • S.M.A.R.T. • It is especially important when… • Budget, time, and other constraints are an issue. • It prioritizes what the evaluation needs to focus on.

  11. Defining the Program Theory Model • If possible or applicable… • Begin with an initial review of what a project did with regard to planning. • Was the logic sound? • Was the project able to do what was needed to achieve the desired impact?

  12. Defining the Program Theory Model • If the theoretical model was clearly articulated in the project plan, it can be used to guide the evaluation. • If not, the evaluator needs to construct it based on: • Reviews of project documents • Discussions with the project implementing agency, project participants, and other stakeholders

  13. Defining the Program Theory Model • In most cases… • This is an iterative process in which the design of the model evolves as more is learned during the course of the evaluation.

  14. Defining the Program Theory Model • In addition to articulating the internal cause-effect theory on which a project was designed… • A theoretical model should also identify: • Socioeconomic characteristics of the affected population groups. • Conditions that affect the target community • Economic • Political • Organizational • Psychological • Environmental

  15. Choosing the Best Design from the Available Options • Based on: • An understanding of client information needs • Required level of rigor • What is possible given the constraints • The evaluator and client need to determine what evaluation design is required and possible under the circumstances.

  16. Choosing the Best Design from the Available Options • When does the evaluation ‘begin’? • At the start of the project • During project implementation • At the end of the project

  17. Choosing the Best Design from the Available Options • When will the evaluation end? • A one-time evaluation conducted while the project is being implemented (most commonly for the midterm review) • Ending at approximately the same time as the project (end-of-project evaluation and report) • Continuing after the project ends (longitudinal or ex-post evaluation)

  18. Choosing the Best Design from the Available Options • What type of comparison will be used? • There are three main options: • A randomized design in which individuals, families, groups, or communities are randomly assigned to the project and control groups • A comparison group selected to match as closely as possible the characteristics of the project group • No type of control or comparison group
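To make the first option above concrete, here is a minimal Python sketch of randomly assigning units to project and control groups. The community names, the number of units, and the 50/50 split are assumptions for illustration only, not part of the original slides.

```python
import random

# Hypothetical evaluation units; in practice these could be individuals,
# families, groups, or communities, as the slide notes.
communities = [f"community_{i:02d}" for i in range(1, 21)]

random.seed(2012)            # fixed seed so the assignment is reproducible and auditable
random.shuffle(communities)  # randomize the order of units

midpoint = len(communities) // 2
project_group = communities[:midpoint]   # receive the intervention
control_group = communities[midpoint:]   # serve as the counterfactual

print("Project group:", project_group)
print("Control group:", control_group)
```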

  19. Choosing the Best Design from the Available Options • Does the design include process evaluation? • Even if an evaluation is focused on measuring sustainable changes in the conditions of the target population, it needs to identify what most likely led to those changes. • That includes an assessment of the quality of a project’s implementation process. • Whether it made a plausible contribution to any measured impact.

  20. Choosing the Best Design from the Available Options • Are there preferences for the use of specific approaches? • Quantitative? • Qualitative? • Mixed Methods?

  21. Choosing the Best Design from the Available Options • For quantitative evaluations it is possible to select among the 7 most common evaluation designs (noting the trade-offs when using a simpler design). • For qualitative evaluations the options will vary depending on the type of design.

  22. Choosing the Best Design from the Available Options • Longitudinal Quasi-experimental • Quasi-experimental (pre+post, with ‘control’) • Truncated Longitudinal • Pre+post of project; post-only comparison • Pre+post of project; no comparison • Post-test only of project and comparison • Post-test only of project participants
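As an illustration of the second design in the list (pre+post with a comparison group), the sketch below computes a simple difference-in-differences estimate from made-up group means. The outcome scale and all numbers are hypothetical, and difference-in-differences is offered here as one common analysis for this design rather than the presenters' prescribed method.

```python
# Hypothetical mean outcome scores for a pre+post design with a comparison group.
pre_project, post_project = 42.0, 55.0        # project group, before and after
pre_comparison, post_comparison = 41.0, 46.0  # comparison group, before and after

change_project = post_project - pre_project            # 13.0
change_comparison = post_comparison - pre_comparison   # 5.0

# Difference-in-differences: the change in the project group beyond the
# change the comparison group experienced over the same period.
did_estimate = change_project - change_comparison      # 8.0
print(f"Estimated project effect (difference-in-differences): {did_estimate:.1f}")
```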

  23. Choosing the Best Design from the Available Options • Depending upon the design, some of the options might include: • Reducing the number of units studied (communities, families, schools) • Reducing the number of case studies or the duration and complexity of the cases. • Reducing the duration or frequency of observations.

  24. Kinds of Analysis and Comparisons Critical to the Evaluation • It is useful to think of three kinds of evaluation: • Exploratory or research evaluations • Small-scale quasi-experimental or Qualitative designs • Large-scale impact assessment

  25. Kinds of Analysis and Comparisons Critical to the Evaluation • Exploratory or research evaluations • The purpose is to assess whether the basic project concept and approach “works.” • This is often used when a new type of service is being piloted or when an existing service is to be provided in a new way or to reach new target groups.

  26. Kinds of Analysis and Comparisons Critical to the Evaluation • Exploratory or research evaluations • Examples of the key evaluation questions include the following: • Do the new teaching methods get a positive response from the schools, students, and parents and is there initial evidence of improved performance? • If poor women are given loans, are they able to use the money to start or expand a small business?

  27. Kinds of Analysis and Comparisons Critical to the Evaluation • Small-scale quasi-experimental or Qualitative designs • Assess whether there is evidence that the project is producing significant effects on the target population. • Some designs include a comparison group, whereas others use a more general comparison with similar communities through focus groups.

  28. Kinds of Analysis and Comparisons Critical to the Evaluation • Small-scale quasi-experimental or Qualitative designs • Questions of attribution are addressed but in a less rigorous way than for large-scale impact assessments • What would have been the condition of the project group if the project had not taken place?
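One common way to approximate that counterfactual without randomization is to match each participant to a similar non-participant. The sketch below pairs hypothetical participants with non-participants on a single baseline score using nearest-neighbor matching without replacement; the names, scores, and use of one matching variable are illustrative assumptions.

```python
# Hypothetical baseline scores; in practice matching would use several
# characteristics (age, income, community type, etc.), not just one.
participants = {"A": 52, "B": 60, "C": 47}
non_participants = {"X": 50, "Y": 61, "Z": 45, "W": 58}

matches = {}
available = dict(non_participants)
for name, score in participants.items():
    # Pick the still-unmatched non-participant with the closest baseline score.
    best = min(available, key=lambda candidate: abs(available[candidate] - score))
    matches[name] = best
    del available[best]  # match without replacement

print(matches)  # {'A': 'X', 'B': 'Y', 'C': 'Z'}
```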

  29. Kinds of Analysis and Comparisons Critical to the Evaluation • Small-scale quasi-experimental or Qualitative designs • Some of the critical questions might include the following: • Are the intended project beneficiaries (e.g. individuals, families, schools, communities) better off as a result of the project?

  30. Kinds of Analysis and Comparisons Critical to the Evaluation • Small-scale quasi-experimental or Qualitative designs • Some of the critical questions might include the following: • How confident are we that the observed changes were caused by the project and not by external factors such as improvements in the local economy? • Would the project be likely to have similar effects in other areas if it were replicated? Where would it work well and less well, and why?

  31. Kinds of Analysis and Comparisons Critical to the Evaluation • Small-scale quasi-experimental or Qualitative designs • Some of the critical questions might include the following (cont): • Which contextual factors (economic, political, institutional, environmental, and cultural) affect success? (See Chapter 9 for a discussion of contextual factors and mediator variables). • Who did and who did not benefit from the project?

  32. Kinds of Analysis and Comparisons Critical to the Evaluation • Large-scale impact assessment • The purpose is to… • Assess, with greater statistical rigor, how large an effect (defined numerically in terms of percentage or quantitative change) has been produced, and who does and does not benefit. • Ideally, the evaluation should use a mixed-method approach integrating Quantitative and Qualitative methods.

  33. Kinds of Analysis and Comparisons Critical to the Evaluation • Large-scale impact assessment • Critical questions might include the following: • What impacts (high-level sustainable effects) has the project produced? The emphasis is on “how much” and not just “what.” • What is the quality of the services (compared with other programs, to expected standards, and in the opinion of the target groups)?

  34. Kinds of Analysis and Comparisons Critical to the Evaluation • Large-scale impact assessment • Critical questions might include the following (cont.): • Are the project effects statistically significant “beyond a reasonable doubt”? • Who has benefited most and least, and are there any groups that have not benefited at all or who are worse off? • What are the intervening variables (e.g., socioeconomic characteristics of the project groups, cultural factors affecting participation) that influence the magnitude of impacts?
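To illustrate the "statistically significant beyond a reasonable doubt" question, the sketch below runs a two-proportion z-test on hypothetical counts of participants with an improved outcome in the project and comparison groups. All counts are invented for illustration, and only the Python standard library is used.

```python
from math import sqrt
from statistics import NormalDist

project_success, project_n = 180, 300        # hypothetical: 60% improved in project group
comparison_success, comparison_n = 135, 300  # hypothetical: 45% improved in comparison group

p1 = project_success / project_n
p2 = comparison_success / comparison_n
pooled = (project_success + comparison_success) / (project_n + comparison_n)

# Standard two-proportion z-test with a pooled variance estimate.
se = sqrt(pooled * (1 - pooled) * (1 / project_n + 1 / comparison_n))
z = (p1 - p2) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-tailed p-value

print(f"difference = {p1 - p2:.3f}, z = {z:.2f}, p = {p_value:.4f}")
```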

  35. Evaluation Scenarios • Typical scenarios: • No (or at least inadequate) baseline or comparison group data has been collected. • The program developer was not considering evaluation at the planning and implementation stages. • Time pressures and budget constraints • Challenging expectations of stakeholders • Clients have prior expectations for what the evaluation findings will say. • Many stakeholders do not understand evaluation, distrust the process, or even see it as a threat (dislike of being judged).

  36. Evaluation Scenarios • An adequate budget but lack of data • No comparable baseline data and/or inability to include comparison group in evaluation design. • A limited budget but plenty of time • National evaluation teams may not have the resources to bring in foreign expertise or to conduct large scale sample surveys – but they may have plenty of time to use qualitative methods and small-scale longitudinal studies. • An adequate budget but limited time • This is often the situation when external evaluators are contracted to work under tight deadlines and with limited time in the field.

  37. Evaluation Scenarios • Evaluator(s) not brought in until near the end of the project • For political, technical, or budget reasons: • There was no baseline survey • Project implementers did not collect adequate data on project participants at the beginning or during the life of the project. • It is difficult to collect data on comparable control groups

  38. Evaluation Scenarios • The evaluation team is called in early in the life of the project • For budget, political, or methodological reasons. • The ‘baseline’ was a needs assessment, not comparable to the eventual evaluation. • It is not possible to collect baseline data on a comparison group.

  39. Overcoming Eval Challenges • Quality Control Goals • Achieve maximum possible evaluation rigor within the limitations of a given context. • Identify and control for methodological weaknesses in the evaluation design.

  40. Overcoming Eval Challenges • Quality Control Goals • Negotiate with clients trade-offs between desired rigor and available resources. • Presentation of findings must recognize methodological weaknesses and how they affect generalization to broader populations.

  41. Overcoming Eval Challenges • Addressing Budget Constraints • Clarify client information needs • Simplify the evaluation design • Look for reliable secondary data • Review sample size • Reduce costs of data collection and analysis

  42. Overcoming Eval Challenges • Understanding Client Information Needs • Typical questions clients want answered: • Is the project achieving its objectives? • Are all sectors of the target population benefiting? • Are the results sustainable? • Which contextual factors determine the degree of success or failure?

  43. Overcoming Eval Challenges • Understanding Client Information Needs • A full understanding of client information needs can often reduce the types of information collected and the level of detail and rigor necessary. • However, this understanding could also increase the amount of information required!

  44. Overcoming Eval Challenges • Simplify Evaluation Design • Rationalize Data Needs • Review all data collection instruments and cut out any questions not directly related to the objectives of the evaluation.

  45. Overcoming Eval Challenges • Look for Reliable Secondary Sources • Planning studies • Project administrative records • Government departments • Other NGOs • Universities / research institutes • Mass media

  46. Overcoming Eval Challenges • Look for Reliable Secondary Sources • Assess the relevance and reliability of sources for the evaluation with respect to: • Coverage of the target population • Time period • Relevance of the information collected • Reliability and completeness of the data • Potential biases

  47. Overcoming Eval Challenges • Seek Ways to Reduce Sample Size • Accepting a lower level of precision significantly reduces the required number of interviews: • To test for a 5% change in proportions requires a maximum sample of 1086 • To test for a 10% change in proportions requires a maximum sample of 270
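A sketch of the calculation behind this trade-off, using a standard two-group sample-size formula for comparing proportions with conventional assumptions (two-tailed alpha of 0.05, 80% power, worst-case baseline proportion of 0.5). The slide's exact figures reflect the presenters' own assumptions about power, baseline, and test type, so the numbers produced below illustrate the pattern rather than reproduce the slide.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(detectable_change, baseline=0.5, alpha=0.05, power=0.80):
    """Approximate sample size per group to detect a given absolute
    change in a proportion relative to the baseline."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value, two-tailed test
    z_beta = NormalDist().inv_cdf(power)           # value corresponding to desired power
    p1, p2 = baseline, baseline + detectable_change
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / detectable_change ** 2)

print("5-point change: ", n_per_group(0.05))   # much larger sample needed
print("10-point change:", n_per_group(0.10))   # roughly a quarter of the above
```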

  48. Overcoming Eval Challenges • Seek Ways to Reduce Sample Size • Accept a lower level of statistical precision • Use a one-tailed statistical test • Reduce the number of levels of disaggregation of the analysis
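Building on the sketch above, a rough illustration of how much a one-tailed test shrinks the required sample: the critical value drops from about 1.96 to about 1.64, and required sample size scales with the squared sum of the critical and power z-values. The alpha and power values are the same conventional assumptions as before.

```python
from statistics import NormalDist

z_two_tailed = NormalDist().inv_cdf(1 - 0.05 / 2)  # ~1.96
z_one_tailed = NormalDist().inv_cdf(1 - 0.05)      # ~1.64
z_power = NormalDist().inv_cdf(0.80)               # ~0.84

# Sample size is proportional to (z_alpha + z_beta)^2, so the reduction is:
reduction = 1 - ((z_one_tailed + z_power) / (z_two_tailed + z_power)) ** 2
print(f"Approximate sample-size reduction from a one-tailed test: {reduction:.0%}")
```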

  49. Overcoming Eval Challenges • Reducing Costs of Data Collection and Analysis • Use self-administered questionnaires • Reduce length and complexity of instrument • Use direct observation

  50. Overcoming Eval Challenges • Reducing Costs of Data Collection and Analysis • Obtain estimates from focus groups and community forums • Key informants • Participatory assessment methods • Multi-methods and triangulation
