
Public Health and Evaluation Applications


Presentation Transcript


  1. Public Health and Evaluation Applications Module 10

  2. Overview • Surveillance of Public Health Outcomes • Focusing an Evaluation of Systems Change • Evaluation Tools for Systems Change

  3. Quality Indicators and SPC: Public Health Applications • Improving Health in the Community: A Role for Performance Monitoring (IOM, 1997) • Lead States in Public Health Quality Improvement: A Multistate Learning Collaborative (Robert Wood Johnson Foundation) • Extensive use of SPC in patient safety.

  4. Quality Indicators and SPC: Public Health Applications Hospital epidemiology is the classical setting for SPC in epidemiology.

  5. Other Areas with Wide Epidemiological Application • Infectious Disease (outbreak detection) • Syndromic Surveillance • Detection of Bioterrorism
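
Outbreak detection and syndromic surveillance typically apply a control-chart rule to a stream of case counts and alarm when the counts depart from a historical baseline. Below is a minimal sketch of a one-sided tabular CUSUM, assuming weekly counts, a Poisson-style baseline, and illustrative values for the allowance k and decision threshold h (none of these numbers come from the module):

```python
import numpy as np

def cusum_alarm(counts, mu0, k=0.5, h=4.0):
    """One-sided CUSUM on standardized weekly counts.

    counts : observed case counts (array-like)
    mu0    : in-control (baseline) mean count
    k      : allowance (slack) in standard-deviation units
    h      : decision threshold in standard-deviation units
    """
    sigma0 = np.sqrt(mu0)           # Poisson assumption: variance == mean
    s = 0.0
    alarms = []
    for c in counts:
        z = (c - mu0) / sigma0      # standardize this week's count
        s = max(0.0, s + z - k)     # accumulate upward departures only
        alarms.append(s > h)        # flag when the cumulative sum crosses h
    return np.array(alarms)

# Illustrative data: stable weeks followed by a simulated outbreak.
weekly = [12, 9, 11, 13, 10, 12, 18, 22, 27, 31]
print(cusum_alarm(weekly, mu0=11.0))
```

The CUSUM accumulates small sustained excesses over the baseline, which is why it is favoured for slow-onset outbreaks that a single-point Shewhart rule would miss.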

  6. Environmental health and injury prevention can also make powerful use of these methods.

  7. How to focus an evaluation • Describe the purpose • Identify the users of the evaluation results • Identify the questions the evaluation should answer • Describe the methods to answer these questions • Create agreements on who will do what to evaluate the initiative

  8. Think-Pair-Share: Evaluating Improvement Initiatives • You have been asked to lead an evaluation team for a national improvement collaborative. • 10 clinics have agreed to empanel their patients to care teams, experiment with small tests of change, and meet together every month to share lessons learned.

  9. Think-Pair-Share: Evaluating Improvement Initiatives • List 3 questions you would want the evaluation to answer. • How would you answer these three questions?

  10. Evaluation Questions for Multi-site Systems Change Interventions • Naïve—Does it work? • Less Naïve—Can it work? • Ideal—What works for whom under what circumstances?

  11. Consequences • Non-linear response • Cannot isolate the effects of the intervention • Depends too much on the changing context in which the intervention occurred

  12. The classic evaluation design

  13. How is the control group selected in the classic design?

  14. Does this design answer the question: What works for whom under what circumstances?

  15. The realist evaluation design

  16. Richer evaluation design (figure; legend: * Qualitative, ** Quantitative, *** Mixed)

  17. Even richer evaluation design (figure; legend: * Qualitative, ** Quantitative, *** Mixed)

  18. Tools for richer and even richer designs • Meta-analysis of before-after (beginning-ending) results • “Dose-Response” • Sensitivity analysis of the forest chart • Regression of outcome on degree of implementation • Mixed-method evaluation • SPC set up for evaluation • Time series analysis

  19. Meta-analysis of studies
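
The meta-analysis step pools the site-level before-after results into the single “All Sites” estimate that appears on the forest chart. A minimal sketch of fixed-effect (inverse-variance) pooling in Python; the site differences and standard errors below are illustrative stand-ins, not data from the collaborative:

```python
import numpy as np

# Illustrative site-level results: before-after difference and its SE.
diffs = np.array([0.08, 0.15, 0.02, 0.11, 0.06])
ses   = np.array([0.04, 0.05, 0.03, 0.06, 0.04])

w = 1.0 / ses**2                      # inverse-variance weights
pooled = np.sum(w * diffs) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))

# 95% confidence interval for the pooled "All Sites" estimate.
lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled = {pooled:.3f} (95% CI {lo:.3f} to {hi:.3f})")
```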

  20. Forest Chart of IPC Sites

  21. Assessing “Dose-Response” and circumstances that predict success with sensitivity analysis • Plot the forest chart again. • Retain only sites with greater than 50 percent of patients empanelled (or any other variable of interest). • Compare results with the original forest chart. • Look at the effect the restriction has on the “All Sites” estimate.
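
A minimal sketch of that sensitivity analysis, reusing the same inverse-variance pooling: restrict to the high-empanelment sites and compare the pooled estimate with the all-sites value. The 50 percent cutoff mirrors the slide; the empanelment fractions and effect estimates are illustrative:

```python
import numpy as np

def pool(diffs, ses):
    """Fixed-effect (inverse-variance) pooled estimate and SE."""
    w = 1.0 / np.asarray(ses) ** 2
    return np.sum(w * diffs) / np.sum(w), np.sqrt(1.0 / np.sum(w))

diffs = np.array([0.08, 0.15, 0.02, 0.11, 0.06])
ses   = np.array([0.04, 0.05, 0.03, 0.06, 0.04])
empanelled = np.array([0.80, 0.65, 0.30, 0.55, 0.45])  # fraction of patients

keep = empanelled > 0.50                 # retain high-implementation sites only
all_est, _ = pool(diffs, ses)
sub_est, _ = pool(diffs[keep], ses[keep])
print(f"All sites: {all_est:.3f}  |  >50% empanelled: {sub_est:.3f}")
```

If the restricted pooled estimate is noticeably larger, that is evidence of a dose-response relationship between implementation and outcome.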

  22. Assessing “dose-response” with regression: Y = α + βx, where Y = difference between ending and beginning rates; α = intercept; x = percent of patients empanelled; β = regression coefficient for x.

  23. Assessing “dose-response” with regression: Y = α + β1x1 + β2x2, where Y = difference between ending and beginning rates; α = intercept; x1 = percent of patients empanelled; β1 = regression coefficient for x1; x2 = self-management support (1 = yes, 0 = no); β2 = regression coefficient for x2.
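
A minimal sketch of fitting this two-predictor model by ordinary least squares with numpy; the site-level outcomes, empanelment percentages, and self-management indicator below are illustrative:

```python
import numpy as np

# Illustrative site-level data.
y  = np.array([0.08, 0.15, 0.02, 0.11, 0.06])   # ending minus beginning rate
x1 = np.array([0.80, 0.65, 0.30, 0.55, 0.45])   # percent of patients empanelled
x2 = np.array([1.0, 1.0, 0.0, 1.0, 0.0])        # self-management support (1/0)

X = np.column_stack([np.ones_like(x1), x1, x2]) # design matrix with intercept
coef, *_ = np.linalg.lstsq(X, y, rcond=None)    # ordinary least squares fit
alpha, beta1, beta2 = coef
print(f"alpha={alpha:.3f}, beta1={beta1:.3f}, beta2={beta2:.3f}")
```

Here β1 estimates the change in outcome per unit increase in empanelment, holding self-management support fixed, which is the “dose-response” question in regression form.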

  24. Mixed-Method Evaluation (figure): Qualitative (Stories) and Quantitative (Statistics)

  25. Mixed-Method Evaluation • Quantitative: assess change before/after; compare to a control group; compare outcomes across multiple intervention sites; test hypotheses generated by the qualitative work. • Qualitative (in-depth interviews, focus groups): rich description of implementation; rich description of context (barriers, facilitators); compare high performers with low performers on the qualitative results; generate hypotheses for the differences in performance.

  26. Using control limits from the pre-intervention baseline.
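
A minimal sketch of this evaluation set-up using an individuals (XmR) chart: the centre line and 3-sigma limits are computed from the pre-intervention months only, then frozen and extended over the intervention period, so post-intervention points falling outside the limits signal special-cause change. The monthly rates and the six-month baseline are illustrative:

```python
import numpy as np

rates = np.array([4.1, 3.8, 4.4, 4.0, 4.2, 3.9,   # pre-intervention baseline
                  3.7, 3.2, 3.0, 2.8, 2.6, 2.5])  # post-intervention
n_base = 6

base = rates[:n_base]
mr = np.abs(np.diff(base))              # moving ranges within the baseline
centre = base.mean()
sigma = mr.mean() / 1.128               # XmR sigma estimate (d2 for n = 2)
ucl, lcl = centre + 3 * sigma, centre - 3 * sigma

# Points beyond the frozen limits signal special-cause change.
signals = (rates > ucl) | (rates < lcl)
print(f"CL={centre:.2f}, UCL={ucl:.2f}, LCL={lcl:.2f}")
print("signal:", signals)
```

Freezing the limits at the baseline is what turns the chart from a monitoring tool into an evaluation tool: the question becomes whether the post-intervention process still behaves like the baseline process.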

  27. Concluding Quote from Dr. Don Berwick: “Many assessment techniques developed in engineering and used in quality improvement—statistical process control, time series analysis, simulations, and factorial experiments—have more power to inform about mechanisms and contexts than do RCTs, as do ethnography, anthropology, and other qualitative methods. For these specific applications, these methods are not compromises in learning how to improve; they are superior.” Donald Berwick. JAMA. 2008;299(10):1182-1184.
