
Evaluability Assessment, Formative & Summative Evaluation



Presentation Transcript


  1. Evaluability Assessment, Formative & Summative Evaluation. Laura C. Leviton, Ph.D., Senior Advisor for Evaluation

  2. Definitions • Summative evaluation: designing and using evaluation to judge merit • Formative evaluation: designing and using evaluation to improve the intervention • Evaluability assessment: assessing whether the intervention is ready to be managed for results, what changes are needed to do so, and whether evaluation would contribute to improved performance

  3. Resources • Wholey, Hatry & Newcomer, Handbook of Practical Program Evaluation, Wiley, 2010 • Leviton et al., "Evaluability assessment to improve public health," Annual Review of Public Health 31:213-234, 2010 • Leviton, Kettel Khan & Dawkins, New Directions for Evaluation, No. 125, January 2010; Chapter 3 has templates and procedures

  4. Evaluations are Often Handed to Us “Here, evaluate this.”

  5. Some of the Interventions Just Aren’t Very Good “Here, evaluate this.”

  6. Why? • Perhaps 95% of interventions are not fully developed at the time of evaluation. • There are also ongoing problems with measurement, design, and analysis. • Wilson and Lipsey's 2001 review of 319 meta-analyses examined the proportion of effect-size variance associated with study features: study methods accounted for about as much variance as features of the intervention itself, with the biggest method-related sources being the research design, the operationalization of the dependent variable, and sampling error. (A toy simulation of this variance partition follows.)
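To make "variance accounted for" concrete, here is a minimal toy simulation in Python. It is not Wilson and Lipsey's data or analysis: every variable name, coefficient, and feature is invented. The point is only to show how method features (design, operationalization of the dependent variable) can explain as much effect-size variance as a substantive intervention feature, with residual noise standing in for sampling error.

```python
# Toy simulation (invented data, not Wilson & Lipsey's analysis): partition
# variance in study effect sizes between "method" features and an
# "intervention" feature by comparing R^2 from two separate OLS fits.
import numpy as np

rng = np.random.default_rng(0)
n = 319  # number of studies; matches the review's count purely for flavor

# Hypothetical study features
design = rng.integers(0, 2, n)   # 1 = randomized design (a method feature)
dv_op = rng.integers(0, 2, n)    # how the dependent variable is operationalized
dose = rng.normal(0.0, 1.0, n)   # intervention intensity (substantive feature)

# Simulated effect sizes: the two method features jointly contribute about as
# much variance as the intervention feature; the noise term is sampling error.
d = 0.3 + 0.40 * design + 0.45 * dv_op + 0.30 * dose + rng.normal(0.0, 0.3, n)

def r_squared(X, y):
    """R^2 from an ordinary least-squares fit of y on X (intercept included)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

print("R^2, method features:       %.2f" % r_squared(np.column_stack([design, dv_op]), d))
print("R^2, intervention feature:  %.2f" % r_squared(dose[:, None], d))
```

Both fits recover an R^2 near one third here, which is the slide's point: knowing how a study was done can predict its effect size about as well as knowing what the intervention was.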

  7. Don’t Rush to Summative Evaluation • It’s just not cost-effective. • Good outcome studies are the culmination of careful work. • Start with formative evaluation, or better yet, evaluability assessment.

  8. Steps in Evaluability Assessment • Involve intended users of evaluation information • Clarify the intended intervention • Explore intervention reality • Reach agreement on needed changes in activities or goals • Explore alternative evaluation designs • Agree on evaluation priorities and intended uses of information.

  9. Too Linear – The numbered steps above suggest a one-way sequence, but in practice it works like this: the process is cyclic, not linear. (A sketch of the cycle follows.)
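As a hypothetical illustration of slides 8 and 9 together: the step names below are copied from slide 8, but the loop-until-agreement logic is an invented sketch, not a procedure prescribed by the deck. It simply shows the steps repeating until stakeholders agree, rather than running once in order.

```python
# Hypothetical sketch: the six assessment steps from slide 8, revisited as a
# cycle. The loop-until-agreement logic is illustrative, not from the source.
STEPS = [
    "Involve intended users of evaluation information",
    "Clarify the intended intervention",
    "Explore intervention reality",
    "Reach agreement on needed changes in activities or goals",
    "Explore alternative evaluation designs",
    "Agree on evaluation priorities and intended uses of information",
]

def assess(agreement_reached, max_cycles=5):
    """Walk the steps repeatedly until stakeholders agree (or we give up)."""
    for cycle in range(1, max_cycles + 1):
        for step in STEPS:
            print(f"cycle {cycle}: {step}")
        if agreement_reached(cycle):
            return cycle  # agreement closes the loop
    return None  # no agreement yet: the program is not ready to be evaluated

# Example: agreement emerges only on the second pass through the cycle.
assess(lambda cycle: cycle >= 2)
```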

  10. Early Steps

  11. Middle Steps

  12. Logic Models

  13. Theory of Change

  14. Later Steps

  15. What Now? • Intervention development • Data collection to inform improvements • When to do summative (outcome) studies: the logic model or theory of change is sharpened and agreed to; the model looks like the reality, and vice versa; it’s plausible to achieve the outcome(s); and formative evaluation indicates the intermediate steps are being accomplished. (A sketch of this readiness gate follows.)
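The four readiness criteria on this slide can be read as an all-or-nothing gate. Here is a minimal sketch under that reading: the criterion labels paraphrase the slide's bullets, while the ready_for_summative function and its names are hypothetical illustrations, not part of the original deck.

```python
# Hypothetical readiness gate distilled from slide 15. The criteria paraphrase
# the slide; the function and identifiers are invented for illustration.
READINESS_CRITERIA = (
    "logic model or theory of change sharpened and agreed to",
    "model matches intervention reality, and vice versa",
    "achieving the outcome(s) is plausible",
    "formative evaluation shows intermediate steps accomplished",
)

def ready_for_summative(status):
    """True only when every criterion holds; otherwise keep developing."""
    return all(status.get(criterion, False) for criterion in READINESS_CRITERIA)

status = dict.fromkeys(READINESS_CRITERIA, True)
status["achieving the outcome(s) is plausible"] = False
print(ready_for_summative(status))  # False: stay formative for now
```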

  16. Measurement. Nick Barber, 15th February 2013

  17. Evaluation of Technologies. Nick Barber, 15th February 2013

  18. When do we measure the effectiveness of this system?

  19. Structured approach to IT evaluation (after Cornford)

  20. Evaluation of Technologies. Nick Barber, 15th February 2013
