
Evaluability Assessments: Achieving Better Evaluations, Building Stronger Programs


Presentation Transcript


  1. Evaluability Assessments: Achieving Better Evaluations, Building Stronger Programs. Nicola Dawkins, PhD, MPH, ICF Macro

  2. Project Team • Robert Wood Johnson Foundation: Laura Leviton, PhD • Centers for Disease Control and Prevention: DNPAO, Laura Kettel Khan, PhD; DASH, Leah Robin, PhD, and Seraphine Pitt Barnes, PhD, MPH, CHES; DACH/PRC, Jo Anne Grunbaum, EdD • Centers for Disease Control and Prevention Foundation: Danielle Jackson, MPH, John Moore, PhD, RN, and Holly Wethington, PhD • Macro International Inc.: David Cotton, PhD, MPH, Nicola Dawkins, PhD, MPH, Karen Cheung, MPH, Mary Ann Hall, MPH, Thearis Osuji, MPH, and Starr Rice, BA • The findings and conclusions presented are those of the authors and do not necessarily represent the official position of the agencies.

  3. Will Discuss Today • Introduction to EA, compared with full evaluation • Purpose of the Early Assessment project • A unique process for conducting multiple EAs • Steps in the project • Results • Insights and conclusions

  4. Evaluability Assessment • Assesses: • Underlying program logic • Current state of program implementation • Feasibility of conducting rigorous outcomes-focused evaluation or other sorts of evaluation

  5. Is the Intervention Promising? • Does the intervention have program design integrity and realistic, achievable goals? • Is the intervention implemented as intended and at an appropriate developmental level? • To answer the evaluation questions: (1) Is there a feasible design? (2) Are data available or feasible to collect? • Yes at every step: evaluable intervention • No at any step: assist in improvement of program design, implementation, and evaluation characteristics

  6. CDC Framework for Program Evaluation (steps in a cycle) • Engage stakeholders • Describe the program • Focus the evaluation design • Gather credible evidence • Justify conclusions • Ensure use and share lessons learned

  7. Evaluability Steps Compared to CDC’s Evaluation Framework • Engage stakeholders → Involve stakeholders and intended users • Describe the program → Clarify program intent; determine program implementation • Focus the evaluation design → Work with stakeholders to prioritize key evaluation questions • Gather credible evidence → Explore designs and measurements • Justify conclusions; ensure use and share lessons learned → Agree on intended uses

  8. Multiple EA Example • Convene a panel of experts to identify and review potential environmental programs and policies • Assess environmental programs and policies’ readiness for evaluation • Synthesize findings and share promising practices with the field • Develop a network of public health and evaluation professionals with the skills to conduct evaluability assessments

  9. Unique Systematic Screening and Assessment (SSA) Method: steps, with their inputs and products • 1. CHOOSE priorities (focus guidance) • 2. SCAN environmental interventions (inputs: nominations, existing inventories, descriptions; product: brief descriptions) • 3. REVIEW AND IDENTIFY interventions that warrant evaluability assessment (inputs: expert review panel, distributed network of practitioners/researchers; product: list of interventions) • 4. EVALUABILITY ASSESSMENTS of priority interventions (product: report on each intervention) • 5. REVIEW AND RATE interventions for promise/readiness for evaluation (input: expert review panel; products: ratings and reports) • 6. USE information (communicate with all stakeholders: constructive feedback, plan for rigorous evaluation) • 7. SYNTHESIZE what is known (product: report of intervention and evaluation issues)

  10. Systematic Process

  11. Systematic Process Cont’d • Expert panel selected 26 programs and policies using these criteria: • Potential impact • Innovativeness • Reach • Acceptability to stakeholders • Feasibility of implementation • Feasibility of adoption • Sustainability • Generalizability/transportability • Staff/organization capacity for evaluation

  12. Selected Programs and Policies (Year 1) • 7 After School/3 Daycare Programs • 5 programs: PA time, nutritious snacks • 4 programs: PA time, nutrition education • 1 policy: PA, nutrition, TV screen time • 10 Food Access Programs • 5 farmers’ markets • 3 supermarket or corner store programs • 2 restaurant programs • 6 School District Local Wellness Policies • All selected addressed PA and nutrition

  13. Evaluability Assessment • Review of documents • Draft logic model • 2-3 day site visit • Interviews: program description, logic model, staffing, funding, sustainability, evaluation activities • Observations • TA/debriefing session • Reports and recommendations • Follow-up TA call with CDC experts

  14. Readiness for Evaluation • Review of site visit reports identified classifications: • Ready for stand-alone outcome evaluation • Appropriate for cluster evaluation • Theoretically sound but needing further development • Technical assistance needed in specific areas

  15. Results for Year 1 • Expert panel determined: • 14 ready for stand-alone outcome evaluation • 2 best suited for cluster evaluation • 3 theoretically sound but needing further development • 6 needing TA in specific areas

  16. Results for Year 1, Cont’d • Dissemination of results from Year 1 • Full evaluation planned for New York City Daycare Policy

  17. Discovering Practice-Based Evidence • The SSA Method builds the evidence base through practice-based evidence • Year 1: 282 nominations → 26 EAs → 9 high potential impact, ready for evaluation

  18. Year 2 • Completed EAs of 27 initiatives

  19. Discovering Practice-Based Evidence • The SSA Method builds the evidence base through practice-based evidence • Year 2: 176 nominations → 27 EAs → 11 high potential impact, ready for evaluation

  20. Key Lessons Learned • Use an expert panel for diverse perspectives • Solicit nominations broadly to maximize return • Include programs/policies beyond the start-up phase to ensure implementation • Centralize oversight for methodological integrity • Provide technical assistance as an incentive to sites

  21. Recap: It’s a Process • 1. Choose priorities for the scan • 2. Scan environmental programs & policies • 3. Review and identify those that warrant evaluability assessment • 4. Evaluability assessment of programs & policies • 5. Review and rate for promise and readiness for evaluation • Use information: • Position for rigorous evaluation • Feedback to innovators • Cross-site synthesis

  22. Overview of General EA vs SSA Method • What is the same? • Review documents • Discuss with stakeholders • Develop logic model • Iterate the process • Determine what can be evaluated • What is different? • EA as one component of a process of discovery • SSA Method explicitly provides feedback to innovators • SSA Method provided insights on clusters of projects • SSA Method helped identify policies and programs worthy of further attention

  23. The Cost-Savings Factor • Of 458 innovations nominated across both years: • 174 met criteria for inclusion; • 53 were selected for evaluability assessments; • 20 were of high potential impact and ready for stand-alone evaluation. • Yet all of the nominations were viewed as important by stakeholders. • If all of them had undergone full evaluation, there would have been only about a 4% chance (20 of 458) that any given evaluation would be likely to conclude success (see the arithmetic sketch below).
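The funnel arithmetic behind the 4% figure can be made explicit. Below is a minimal sketch in Python, assuming only the counts reported on these slides; the script and its variable names are illustrative and not part of the original project.

```python
# Screening funnel counts taken from the slides above (Years 1 and 2 combined).
nominations  = 282 + 176   # 458 innovations nominated across both years
met_criteria = 174         # met the criteria for inclusion
assessed     = 26 + 27     # 53 evaluability assessments completed
ready        = 9 + 11      # 20 high impact and ready for stand-alone evaluation

# Yield at each stage of the funnel, relative to all nominations.
for label, count in [("met criteria", met_criteria),
                     ("assessed", assessed),
                     ("ready for evaluation", ready)]:
    print(f"{label}: {count} ({count / nominations:.1%} of nominations)")

# If every nomination had been fully evaluated, only ~4% would have been
# likely to demonstrate success; equivalently, ~23 evaluations per success.
print(f"Evaluations per promising program: about {nominations / ready:.0f}")
```

Under these assumed counts the overall yield is about 4.4%, roughly one promising, evaluation-ready program per 23 nominations, which is consistent with Conclusion 1's point that without screening one would need at least 20 evaluations to discover one likely success.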

  24. Conclusion 1 Without a systematic process, one would need to conduct at least 20 evaluations to discover 1 that might be successful. The process is cost-effective for funders and decision makers. It reduces uncertainty about evaluation investments.

  25. Conclusion 2 • Innovators found the process very helpful. • Evaluability assessment plays a program development role.

  26. Conclusion 3 • Themes and issues emerged for clusters of policies and programs. • Evaluability assessments can be configured to cast new light on • developments in the field • families or clusters of policies and programs

  27. Impact on the Field of Prevention • "Translating practice into evidence" • A new method of topic selection and program identification • Researchers were highly engaged by learning about practice • Stimulated discussion of new research agendas

  28. Nicola Dawkins, NDawkins@ICFI.com
