
Performance and outcomes measurement



  1. Performance and outcomes measurement Andrew Auerbach MD MPH Associate Professor of Medicine UCSF Division of Hospital Medicine ada@medicine.ucsf.edu

  2. Introduction • Housekeeping • Course overview • Foundational concepts in performance and outcome assessment • Structural and management measures • Process measures • Outcome measurement

  3. Introduction • Ways in which outcomes and performance measurements are used: • Research: • ‘Outcomes research’ and ‘health services research’ • Comparative effectiveness research • Implementation research • Operational/Policy • Quality improvement • Public reporting

  4. Housekeeping • Andy Auerbach • ada@medicine.ucsf.edu • 415-502-1412 (office) • 415-443-6670 (pager) • Ashok Krishnaswami • outcomeskrish@gmail.com • Homework due before class. Leave at Olivia’s desk.

  5. Housekeeping • Each session: • Will have 1-2 readings • A homework assignment (none due today) • Grades • Based on homework + final exam + participation in final class (quality debate)

  6. Curriculum

  7. The method (model) behind the madness Donabedian A. Evaluating the quality of medical care. Milbank Mem Fund Q. 1966;44(3):Suppl:166-206.

  8. More detailed model • Donabedian A. JAMA 1988;260:1743-8 • [Diagram: Structure → Process → Outcomes. Structure: community, delivery system, provider, and population characteristics. Process: health care providers (technical, care, and interpersonal processes); public & patients (access, equity, adherence). Outcomes: health status, functional status, satisfaction, mortality, cost.]

  9. Because in the end your analyses will look like this…. Measure = Predictor + confounders + error term
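A minimal sketch of what such an analysis might look like in practice, fitting the slide's model as a risk-adjusted logistic regression with statsmodels; the data file and variable names (mortality, hospital_volume, age, severity) are hypothetical:

    # Sketch of the slide's model: measure = predictor + confounders + error.
    # All variable names and the data file are hypothetical.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("hospital_discharges.csv")  # assumed analytic data set

    # Logistic regression: the measure (mortality) as a function of the
    # predictor of interest (hospital volume) plus patient-level confounders;
    # the error term is implicit in the binomial likelihood.
    model = smf.logit("mortality ~ hospital_volume + age + severity", data=df).fit()
    print(model.summary())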

  10. Evaluative models can be divided similarly • Structurally focused models: • Research: Compare different systems of care in terms of clinical outcomes • AKA health services research • Quality improvement/policy: Incent specific characteristics • E.g., Leapfrog measures

  11. Evaluative models can be divided similarly • Process-focused models: • Research: Compare treatments in terms of outcomes • Adherence research, variations in care delivery, educational research • Use of ‘Surrogate outcomes’ with proven or a priori connection to outcomes • Comparative effectiveness research • Quality improvement/policy: Hospitalcompare.org

  12. Evaluative models can be divided similarly • Outcomes focused models • Research: Dartmouth atlas • Policy/QI: National Surgical Quality Improvement Program

  13. What are some key features of outcomes/performance research? • What sorts of data are used? • What sorts of study designs does it encompass? • How do we differentiate it from cost-effectiveness?

  14. Features of outcomes/performance research • Generally not experimental in nature • Generally uses preexisting data sets • Prone to many forms of confounding and bias • ‘Effectiveness’, not causality

  15. Outcome variations based on structure • Beds • Availability of testing on site • Teaching hospital • Volume of cases seen • Closed ICU • Number of primary care providers • Proximity to hospital or ED

  16. Other structural factors • Four important domains in management • Strategic • Cultural • Technical • Structural Shortell SM, Bennett CL, Byck GR. Assessing the impact of continuous quality improvement on clinical practice: what it will take to accelerate progress. Milbank Q. 1998;76(4):593-624, 510.

  17. Management factors required for quality improvement • Strategic: • Is there understanding of which processes and conditions are most important to the organization? • Cultural • Beliefs, values, norms, and behaviors in the organization that inhibit or support QI work • Increasing interest in ‘accountability’, in the context of individual report cards and incentives

  18. Management factors required for quality improvement • Technical • Training: Do people have the skills necessary to carry out QI efforts? • Information support: Do people have the information required to manage quality? • Structural • Are mechanisms available that facilitate learning and the dissemination of what is learned?

  19. DDx of quality problems Shortell SM, Bennett CL, Byck GR. Assessing the impact of continuous quality improvement on clinical practice: what it will take to accelerate progress. Milbank Q. 1998;76(4):593-624, 510.

  20. Process of care-focused research

  21. Structure of care initiatives • Leapfrog Group • Purchaser-catalyzed organization founded in 1998 • Collected and reported hospital data in 2001 • Encourages change through public reports and purchasers • 4 Leaps • Computerized Physician Order Entry (CPOE) • Evidence-Based Hospital Referral (EHR) • ICU Physician Staffing (IPS): staffing ICUs with doctors who have special training in critical care medicine • Leapfrog Safe Practices Score: based on NQF measures

  22. 2008 Leapfrog survey • Leap 1: Computerized order entry • Leap 2: Evidence-based referral volumes • Reported: volumes of cases • Some estimates of case mix • Geometric mean length of stay • Mortality (if publicly reported)

  23. Leapfrog 2008

  24. Leapfrog 2008

  25. Leapfrog 2001 and 2008 • Transition from primarily structural measures to more process focus – why? • Some measures hard to implement due to resource limitations • Closed ICU – national intensivist shortage • CPOE • Some goals hard to achieve because of secular trends • In CABG, national rates of the procedure are falling • Effects of volume are being subsumed by other national initiatives

  26. Leapfrog and outcomes • Full adoption of Leapfrog measures associated with lower mortality in AMI • Independent hospitals less likely to have full adoption • Jha A. Jt Comm J Qual Patient Saf. 2008 Jun;34(6):318-25. Ford. Health Care Manage Rev. 2008 Jan-Mar;33(1):13-20.

  27. Big problem: How do you measure structure? • Site visits • Voluntary surveys • Of which people? Patients? Physicians on the front line? Executives? Mystery shoppers?

  28. Process-focused performance measurement • General criteria for an optimal process measure • Must target a common condition • Measurable • Definable ‘optimal’ patient population • Requires evidence linking the process to improved outcomes • Advantages: • Generally, processes are more common than outcomes → statistical advantages • Provide a clear performance target for clinicians • Maybe: no risk adjustment needed once optimal patients are defined

  29. Process-focused performance measurement • Disadvantages • Medications, devices generally preferred process targets • Documented practices or interpersonal/contextual factors less so • ‘Tyranny of the RCT’ • Focus on only a few diseases, patient populations

  30. Process-focused performance comparisons – the major caveat • Biases • In observational data, people who get a treatment are fundamentally different from those who don’t (illustrated in the sketch below)
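This confounding by indication can be made concrete with a small simulation (purely illustrative; the numbers are not from the slides): sicker patients are more likely to be treated, so a naive comparison makes a genuinely protective treatment look harmful.

    # Illustrative simulation of confounding by indication (hypothetical numbers).
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200_000
    severity = rng.uniform(0, 1, n)                  # illness severity (a confounder)
    treated = rng.random(n) < 0.2 + 0.6 * severity   # sicker patients get treated more

    # True data-generating model: treatment *lowers* the log-odds of death.
    p_death = 1 / (1 + np.exp(-(-3 + 5 * severity - 0.5 * treated)))
    died = rng.random(n) < p_death

    # Naive comparison: treated mortality looks higher despite the protective effect.
    print("mortality, treated:  ", round(died[treated].mean(), 3))   # ~0.42
    print("mortality, untreated:", round(died[~treated].mean(), 3))  # ~0.32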

  31. Process of care initiatives • Surgical Care Improvement Project • Grew out of the Oklahoma Medicare PRO in the late 1990s, focused initially on surgical infection prevention • Voluntary public reporting 2002-03 • Medicare 2% withhold for non-participation in 2004

  32. SCIP • Measures used in surgery • Appropriate drug choice • Antibiotics within 1 hour of incision • Discontinuation within 24 hours • VTE prevention practices • Appropriate orders written • Appropriate therapy received in timely fashion • In cardiac surgery: glucose < 200 mg/dL at 6 a.m. on the first postoperative day

  33. Shortcomings of process measures • Cautionary tale about SCIP • Few data, as yet, proving that adherence to individual measures is associated with improved outcomes in any surgery • Growing sense that even process measures need risk adjustment

  34. Measuring processes • Where do you collect process measures from?

  35. Shortcomings of process measures • Ceiling effects • What do we do when everyone is at 100%? • Weighting schemes for measures thought to be more important • Incent continued excellence (e.g., number of months at 100%) • ‘All or none’ measurement • Only ‘adherent’ if all quality measures are met (see the sketch below)
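To see why ‘all or none’ scoring is a stricter standard, a tiny sketch with hypothetical data: every individual measure can look good while the composite lags well behind.

    # Item-level vs. 'all or none' composite adherence (hypothetical data).
    import numpy as np

    # Rows = patients, columns = quality measures (1 = measure met).
    care = np.array([
        [1, 1, 1],
        [1, 1, 0],
        [1, 0, 1],
        [1, 1, 1],
    ])

    print("Per-measure rates:", care.mean(axis=0))         # [1.0, 0.75, 0.75]
    print("All-or-none rate: ", care.all(axis=1).mean())   # 0.5 (2 of 4 patients)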

  36. Should quality be an all or none proposition? Auerbach A. Ann Intern Med. 2009;151.

  37. Public reporting: Does it improve care? • 45 papers reviewed, 27 since 1999 • Few (or no) studies examining the impact of reporting at the patient or provider level • Receipt of care processes • Outcomes • Where data are reported, moderate effect at the system (care plan, hospital) level • In the case of NY CABG reporting – shift toward fewer high-risk cases being performed • No data about how public reporting influences the quality structure per se • Ann Intern Med. 2008;148:111-123.

  38. Does paying for performance improve care? • Natural experiment of 255 hospitals in P4P matched to 406 control hospitals • Hospitals in the top decile on a composite measure of quality for a given year received a 2% bonus in addition to the usual reimbursement rate. • Hospitals in the second decile received a 1% bonus. • Bonuses averaged $71,960 per year and ranged from $914 to $847,227. N Engl J Med 2007;356:486-96.

  39. Between 2.6% and 4.1% improvement in quality of care with P4P – but everyone improved

  40. Do process measures need risk adjustment? • Accounting for factors outside the site’s (or provider’s) control may be important • Proportion of patients whose first HbA1c is elevated • Proportion of patients with morbid obesity • This is somewhat controversial • Important caveat: risk adjustment should not adjust away gaps related to disparities in care (e.g., gender, age, race) – see the sketch below
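A hedged sketch of what such adjustment might look like (all variable names and the data file are hypothetical): clinical case-mix factors enter the model, while demographic variables tied to disparities are deliberately left out so that gaps in care are not adjusted away.

    # Risk-adjusting a process measure for factors outside the site's control.
    # Variable names and the data file are hypothetical; note that gender,
    # age, and race are intentionally NOT included as adjusters.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("diabetes_panel.csv")  # assumed patient-level panel data
    model = smf.logit(
        "a1c_controlled ~ baseline_a1c + morbid_obesity + C(site)", data=df
    ).fit()
    print(model.summary())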

  41. Outcomes • Most performance measurement will include a clinical outcome as the dependent variable • What are the major issues when you start comparing peoples’ (or systems’) performance in terms of their patients’ mortality, satisfaction, etc?

  42. Outcomes focused initiatives • Northern New England Cardiovascular Cooperative • Society of Thoracic Surgeons • Vermont Oxford Network (neonatal) • ACC PCI registries • VA NSQIP • Private Sector NSQIP

  43. Common elements • Collection of clinical risk-adjustment and outcomes data via chart abstraction • NSQIP – via nurses • Robust risk adjustment and benchmarking (see the O/E sketch below) • Some collaborative function via a central steering group and regional or site directors • Implicit or explicit agreements to undertake site visits or audits of outliers
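Registries like these commonly benchmark sites by comparing observed events to the number expected under the risk model; a minimal observed-to-expected (O/E) ratio sketch with hypothetical data (the slides do not specify the exact computation):

    # Minimal observed-to-expected (O/E) benchmarking sketch (hypothetical data).
    import pandas as pd

    df = pd.DataFrame({
        "site":    ["A", "A", "B", "B"],
        "died":    [1, 0, 0, 0],
        "p_death": [0.10, 0.30, 0.05, 0.15],  # predicted risk from the adjustment model
    })

    observed = df.groupby("site")["died"].sum()
    expected = df.groupby("site")["p_death"].sum()
    print(observed / expected)  # O/E > 1: worse than expected; < 1: better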

  44. VA NSQIP • Marked reduction in mortality for commonly performed procedures in the VA • Overall mortality fell from 3% in 1995 to 1% in 2005 • Consistent themes of high performers: • Protocols, pathways, and guidelines with emphasis on standardization • Interdisciplinary focus on systems improvement

  45. VA NSQIP • Not just data: • Annual reports from all participants, attendance at all group activities • PRN reports from sites in the upper 20% of mortality (i.e., worse mortality) • Mandatory site visits to the worst performers (upper 5%) • Aggressive efforts to identify poor-performing physicians

  46. Caveats of outcomes models • Risk adjustment mandatory • Data thought to be ‘better’ are costly to collect • NSQIP (private sector) – about $40-50/case • $50,000/year subscription fee, plus the cost of a data-collection nurse (1-2 FTE) = $150-250K/year investment • Low event rates limit power (see the sketch below) • 2005 data suggest that <60% of hospitals did enough CABG cases to provide an adequate sample size for comparing mortality
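The power limitation can be illustrated with a quick two-proportion power calculation; the 4% vs. 2% mortality rates and 200 annual cases per hospital are assumed for illustration, not taken from the slides.

    # Rough power check for detecting a mortality difference between two
    # hospitals (all numbers assumed for illustration).
    from statsmodels.stats.proportion import proportion_effectsize
    from statsmodels.stats.power import NormalIndPower

    effect = proportion_effectsize(0.04, 0.02)  # Cohen's h for 4% vs. 2% mortality
    power = NormalIndPower().power(
        effect_size=effect, nobs1=200, ratio=1.0, alpha=0.05
    )
    print(f"Power: {power:.2f}")  # roughly 0.2 – far below the usual 0.80 target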

  47. Advantages • High face validity • Truly the ‘end product’ • Culturally concordant and most impactful • Often contain specialty-specific complications • Tailoring to the needs of participating clinicians further maximizes buy-in

  48. Conclusions • Performance measurement can include structures, processes, or outcomes as comparators • Structural factors have strong connections to management theory, sociology, industrial design/IT

  49. Conclusions • Performance measurement can include structures, processes, or outcomes as comparators • Process related factors generally represent treatments physicians administer • Comparing these treatments in terms of outcomes = Comparative effectiveness research

  50. Conclusions • Outcomes • The clearest ‘performance measure’, though not the entire picture • Risk adjustment, statistical methods paramount
