
Evaluation Basics


Presentation Transcript


  1. Evaluation Basics Anita Singh, PhD Family Programs Evaluation Branch Chief Office of Research and Analysis Food and Nutrition Service, USDA

  2. Why Evaluate? • To obtain ongoing, systematic information about a project • Project management (includes project refinement and planning) • Project efficiency • Project accountability

  3. Types of Evaluation • Formative • Process • Outcome • Impact

  4. Formative Research • Typically occurs when an intervention is being developed • Results are used in designing the intervention • Results are informative, not definitive • Examples: focus groups, literature reviews, etc.

  5. Process Evaluation • Tracks the actual implementation (e.g., delivery, resources) • Used to determine whether the intervention was delivered as designed • Helps identify barriers to implementation and strategies to overcome them

  6. Outcome Evaluation • Addresses whether anticipated changes occurred in conjunction with the intervention • Example: pre-/post-intervention test of nutrition knowledge • Indicates the degree of change, but is not conclusive evidence
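A pre-/post-intervention comparison like the one above is often summarized with a paired test. The sketch below is purely illustrative: the scores, sample size, and use of Python/SciPy are assumptions, not part of the presentation.

```python
# Minimal sketch of a pre-/post-intervention outcome comparison.
# All scores are hypothetical; assumes the same participants were tested twice.
from scipy import stats

pre_scores = [12, 15, 9, 14, 11, 13, 10, 16]    # nutrition knowledge before
post_scores = [14, 18, 11, 15, 14, 15, 12, 19]  # same participants after

mean_change = sum(b - a for a, b in zip(pre_scores, post_scores)) / len(pre_scores)
result = stats.ttest_rel(post_scores, pre_scores)  # paired t-test

print(f"Mean change: {mean_change:.2f} points "
      f"(t = {result.statistic:.2f}, p = {result.pvalue:.3f})")
# A significant change shows the outcome moved with the intervention, but,
# as the slide notes, it is not conclusive evidence of cause and effect.
```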

  7. Impact Evaluation • Allows one to conclude authoritatively that the observed outcomes are due to the intervention • Can draw cause and effect conclusions by isolating the intervention from other factors that might contribute to the outcome.

  8. Planning for an Impact Evaluation IS THE INTERVENTION EVALUABLE? • What are the objectives? • What is the expected size of the impact? • Why, how and when is the intervention expected to achieve the objectives? • Will the intervention be implemented as intended?

  9. Planning for an Impact Evaluation • Build on available research • Consider study design – use of “experimental design” versus “quasi-experimental design” – cost and resource considerations

  10. Design Considerations • Experimental – strongest type of design; supports cause-and-effect conclusions; uses random assignment; cost considerations • Quasi-experimental – does not use random assignment; can include a control group; may include multiple groups and/or multiple waves of data collection • Other: program evaluations – observational studies/surveillance data
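As a concrete illustration of the random assignment that distinguishes an experimental design, here is a minimal sketch; the participant IDs, group sizes, and use of Python are assumptions for illustration only.

```python
# Minimal sketch of simple random assignment, the defining feature of an
# experimental design. Participant IDs and group sizes are hypothetical.
import random

participants = [f"P{i:03d}" for i in range(1, 41)]  # 40 hypothetical participants

random.seed(42)                # fixed seed so the assignment can be reproduced
random.shuffle(participants)   # randomize the order

half = len(participants) // 2
treatment_group = participants[:half]   # receives the intervention
control_group = participants[half:]     # does not receive the intervention

print(f"Treatment n = {len(treatment_group)}, Control n = {len(control_group)}")
```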

  11. Planning for an Impact Evaluation • Use “SMART” objectives (Specific, Measurable, Achievable, Relevant, Time-bound) • Choose measures that fit the intervention • Ensure protection of human subjects

  12. As the Intervention Begins • Collect impact data at start-up, before the intervention has reached steady state • Follow up as interest and resources allow • After the intervention: report the findings • Use the findings

  13. What is Social Marketing? It is a: • Systematic and strategic planning process • Social or behavior change strategy • Total package of strategies carefully chosen based on characteristics of the target audience • Process that uses strategies from commercial marketing

  14. Social Marketing is Not: • Just advertising or communication • A media campaign • Reaching everyone • A fast process • A theory

  15. Basic Principles of Social Marketing • Behavior change • Audience orientation • Audience segmentation • Exchange • Competition • Marketing mix (the 4 P’s: product, price, place, promotion)

  16. Why Evaluate Social Marketing? • Support continuing improvement • Establish effect and inform program accountability

  17. Challenges for Evaluation • Assessing the extent of exposure • Measuring intermediate outcomes • Interventions are often less intense, so designs and measurement tools need to be sensitive to small changes in behavior • Providing timely feedback to inform continual improvements (Source: Hersey et al., 1999)

  18. Steps for Program Evaluation • Engage stakeholders • Describe the Program (e.g. develop a Logic Model; develop a conceptual framework) • The next 2 slides present an example of developing a logic model (Source: UW Extension)

  19. Simplest form of a logic model: INPUTS → OUTPUTS → OUTCOMES (Source: University of Wisconsin-Extension, Program Development and Evaluation)

  20. A bit more detail: INPUTS (program investments – what we invest) → OUTPUTS (activities – what we do; participation – who we reach) → OUTCOMES (short-, medium-, and long-term – what results). SO WHAT? What is the VALUE? (Source: University of Wisconsin-Extension, Program Development and Evaluation)

  21. Steps for Program Evaluation • Identify evaluation questions • Formative testing • Develop data collection and analysis plan – select appropriate measures • Analyze and interpret the data • Use and disseminate the findings

  22. Social Marketing Planning Process • A structured approach to developing and implementing a program or intervention for voluntary behavior change • Six phases: 1. Problem description, 2. Formative research, 3. Strategy development, 4. Intervention design, 5. Evaluation, 6. Implementation (Source: CDC’s Social Marketing for Nutrition and Physical Activity Online Course)

  23. Evaluating the Impact of Social Marketing • Challenges in evaluating behavioral impact • Incorrectly ascribing impact – contamination issues • Use of inappropriate measures of change e.g. focusing on general long-term changes versus intermediate changes and specific behaviors

  24. Measurement Selection Includes • Knowing the information needs • Understanding the campaign rationale • Basing measures on the theoretical model for behavior change • Selecting the approach (e.g., mail survey, phone, in-person interview, records) – reliability, response rate, cost • Selecting measurement tools – validity and reliability of instruments

  25. Study Design • Use of comparison sites • Helps rule out alternative explanations that could otherwise account for observed results • Strengthens internal validity

  26. Study Design • Timing of data collection • Pre or baseline • After implementation • Post Campaign

  27. Study Design – Sample Size • Statistical power – based on the amount of change that could be expected • Once the desired magnitude of change has been established, select/calculate a sample size with the statistical power to determine whether the change is due to the intervention and not random chance (see Hersey et al.)
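The slides defer the details to Hersey et al.; as a minimal illustrative sketch, a sample-size calculation for a two-group comparison of proportions could look like the following. The 30%-to-40% change, 80% power, 5% alpha, and use of Python's statsmodels package are assumptions, not figures from the presentation.

```python
# Minimal sketch of a sample-size calculation for a two-group comparison of
# proportions (e.g., percent of the audience meeting a behavior target).
# Effect size, power, and alpha are illustrative assumptions.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Suppose the behavior is expected to rise from 30% (comparison) to 40% (intervention)
effect_size = proportion_effectsize(0.40, 0.30)

n_per_group = NormalIndPower().solve_power(
    effect_size=effect_size,
    power=0.80,   # 80% chance of detecting a change of this size
    alpha=0.05,   # 5% risk of declaring a change when there is none
)
print(f"Approximate participants needed per group: {round(n_per_group)}")
```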

  28. Sample Size Determination – Which one would require a larger sample?

  29. Sample Size depends on: • Difference that is expected to be detected • Measurement tool • Study design – cross-sectional versus longitudinal study

  30. Other Considerations • Response rate – the higher the response rate, the greater the likelihood that the sample is representative of the study population • Example: a survey with 30 percent completed versus 80 percent completed
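A small worked example of why the response rate matters, assuming respondents and non-respondents differ on the behavior of interest; all rates are hypothetical.

```python
# Minimal sketch of non-response bias: the survey only sees respondents,
# so the lower the response rate, the further the estimate can drift from
# the population value. All rates are hypothetical.
true_rate_respondents = 0.60      # behavior rate among people who complete the survey
true_rate_nonrespondents = 0.40   # behavior rate among people who do not

for response_rate in (0.30, 0.80):
    observed = true_rate_respondents  # what the survey reports (respondents only)
    population = (response_rate * true_rate_respondents
                  + (1 - response_rate) * true_rate_nonrespondents)
    print(f"Response rate {response_rate:.0%}: survey estimate {observed:.0%}, "
          f"population value {population:.0%} (bias {observed - population:+.0%})")
```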

  31. Other Considerations • Low response rates raise analysis issues such as “intention to treat” • Intention-to-treat analyses are done to avoid the effects of crossover and drop-out, which may break the randomization to the treatment groups in a study • Intention-to-treat analysis provides information about the potential effects of treatment policy rather than the potential effects of a specific treatment
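A minimal sketch contrasting an intention-to-treat analysis with a per-protocol analysis; the group labels, completion flags, and outcome scores are invented for illustration.

```python
# Minimal sketch of intention-to-treat (ITT) versus per-protocol analysis.
# Records are hypothetical: (assigned_group, completed_as_assigned, outcome_score).
participants = [
    ("treatment", True, 7), ("treatment", True, 8), ("treatment", False, 5),
    ("control", True, 5), ("control", True, 6), ("control", False, 6),
]

def mean_outcome(rows):
    return sum(score for _, _, score in rows) / len(rows)

# ITT: keep every participant in the group they were randomized to
itt_treatment = [r for r in participants if r[0] == "treatment"]
itt_control = [r for r in participants if r[0] == "control"]

# Per-protocol: keep only those who completed as assigned (breaks the randomization)
pp_treatment = [r for r in itt_treatment if r[1]]
pp_control = [r for r in itt_control if r[1]]

print(f"ITT difference:          {mean_outcome(itt_treatment) - mean_outcome(itt_control):.2f}")
print(f"Per-protocol difference: {mean_outcome(pp_treatment) - mean_outcome(pp_control):.2f}")
```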

  32. Other Considerations • Selection bias: the sample is not truly representative of the study population • Repeated interviews • Sample attrition • If the attrition rate is high, compare pre/baseline scores of non-dropouts with dropouts • May need to adjust for differences
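The attrition check described above (comparing baseline scores of dropouts and non-dropouts) could look like the following sketch; the scores are hypothetical and the use of Python/SciPy is an assumption.

```python
# Minimal sketch of an attrition check: do participants lost to follow-up
# differ at baseline from those who stayed? All scores are hypothetical.
from scipy import stats

baseline_completers = [14, 15, 12, 16, 13, 15, 14]  # stayed through follow-up
baseline_dropouts = [10, 11, 13, 9, 12]             # lost to follow-up

result = stats.ttest_ind(baseline_completers, baseline_dropouts)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
# A meaningful baseline difference suggests attrition bias; the analysis may
# need to adjust for it (e.g., weighting or covariate adjustment).
```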

  33. Other Considerations • Seasonal effects – e.g., fresh fruit and vegetable consumption

  34. Summary • Evaluation can provide valuable, ongoing systematic information about a project • Common evaluation features across delivery types • Choice of features and evaluation type(s) will be driven by your information needs

  35. Social Marketing Resources CDC online course • http://www.cdc.gov/nccdphp/dnpa/socialmarketing/training/basics/index.htm (basics) • http://www.cdc.gov/nccdphp/dnpa/socialmarketing/training/phase5/index.htm (evaluation) Evaluating Social Marketing in Nutrition: A Resource Manual by Hersey et al. http://www.fns.usda.gov/oane/MENU/Published/nutritioneducation/Files/evalman-2.PDF

  36. Evaluation Online Resources • Nutrition Education: Principles of Sound Impact Evaluation, FNS, Sept. 05 http://www.fns.usda.gov/oane/menu/Published/NutritionEducation/Files/EvaluationPrinciples.pdf • Building Capacity in Evaluating Outcomes – UW Extension, Oct 08 http://www.uwex.edu/ces/pdande/evaluation/bceo/index.html

  37. Resources continued • WK Kellogg Foundation Evaluation Handbook, Jan 98 http://www.wkkf.org/default.aspx?tabid=75&CID=281&NID=61&LanguageID=0 • Developing a logic model: Teaching and training guide: E. Taylor-Powell and E. Henert; UW Extension Feb 08 http://www.uwex.edu/ces/pdande/evaluation/evallogicmodel.html
