
Measuring Patients’ Experiences With Individual Physicians: Are We Ready for Primetime?



Presentation Transcript


  1. Measuring Patients’ Experiences With Individual Physicians: Are We Ready for Primetime? ___________________________________________________________________________ Dana Gelb Safran, ScD The Health Institute Institute for Clinical Research and Health Policy Studies Tufts-New England Medical Center Presented at: Academy Health Annual Research Meeting San Diego, CA 7 June 2004 Commonwealth Fund and Robert Wood Johnson Foundation

  2. Focusing on Physicians • Survey-based measurement of patients’ experiences with individual physicians is not new. • What’s new: efforts to standardize measurement and the potential for public reporting. • The IOM report Crossing the Quality Chasm gave “patient-centered care” a front row seat. • Methods and metrics have been honed through 15 years of research and several recent large-scale demonstration projects. • But putting these measures to use raises many questions about feasibility and value.

  3. Ambulatory Care Experiences Survey Project • Statewide demonstration project in Massachusetts • Collaboration: • 6 Payers • 6 Physician Network Organizations • Massachusetts Medical Society • Massachusetts Health Quality Partners • Testing the feasibility and value of measuring patients’ experiences with individual primary care physicians and practices • Primary impetus: plans seeking to standardize surveys • IOM “Chasm” report further propelled the work

  4. Principal Questions of the Statewide Pilot • What sample size is needed for highly reliable estimate of patients’ experiences with a physician? • What is the risk of misclassification under varying reporting frameworks? • Is there enough performance variability to justify measurement? • How much of the measurement variance is accounted for by physicians as opposed to other elements of the system (practice site, network organization, plan)?

  5. Sampling Framework • Eastern MA: Tufts, BCBSMA, HPHC, Medicaid • Central MA: BCBSMA, Fallon, Medicaid • Western MA: BCBSMA, HNE, Medicaid • Physician network organizations PNO1–PNO6 distributed across the three regions • Site and physician counts across regions: 10 sites / 37 physicians, 23 sites / 35 physicians, and 34 sites / 143 physicians • Both commercially insured and Medicaid patients were sampled in parts of the framework; only commercially insured patients in the others

  6. Measures from the Ambulatory Care Experiences Survey (ACES) • Organizational Access • Trust • Continuity (longitudinal; visit-based) • Interpersonal Treatment • Comprehensiveness (whole-person orientation; health promotion/patient empowerment) • Communication • Integration (team; specialists; lab)

  7. Sample Size Requirements for Varying Physician-Level Reliability Thresholds [table not reproduced in the transcript]
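
The sample-size table itself is not reproduced in the transcript. The conventional way to relate the number of patients sampled per physician to physician-level reliability is the Spearman-Brown relation, reliability = n·ICC / (1 + (n − 1)·ICC), where ICC is the physician-level intraclass correlation. The sketch below uses illustrative ICC values (the pilot's actual ICCs are not given in the transcript) to show how sample-size thresholds of this kind can be derived; it also checks, under those assumed ICCs, the slide 17 observation that 45 patients per physician yields reliabilities of roughly .7–.85.

```python
import math

def physician_level_reliability(n_patients: int, icc: float) -> float:
    """Spearman-Brown: reliability of a physician's mean score based on
    n patient responses and the physician-level intraclass correlation."""
    return n_patients * icc / (1.0 + (n_patients - 1) * icc)

def patients_needed(target_reliability: float, icc: float) -> int:
    """Smallest number of patients per physician whose mean score reaches
    the target reliability (inverse of the Spearman-Brown relation)."""
    n = target_reliability * (1.0 - icc) / (icc * (1.0 - target_reliability))
    return math.ceil(n)

# Illustrative ICC values only -- not figures reported in the pilot.
for icc in (0.05, 0.08, 0.11):
    for target in (0.70, 0.80, 0.90):
        print(f"ICC={icc:.2f}, target reliability={target:.2f}: "
              f"{patients_needed(target, icc)} patients per physician")

# Consistency check with slide 17: at 45 patients per physician, ICCs in
# roughly the 0.05-0.11 range give reliabilities of about 0.70-0.85.
print(round(physician_level_reliability(45, 0.05), 2),
      round(physician_level_reliability(45, 0.11), 2))
```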

  8. What is the Risk of Misclassification? • Not simply 1 − αMD • Depends on: • Measurement reliability (αMD) • Proximity of score to the cutpoint • Number of cutpoints in the reporting framework
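
A rough way to see how these three factors interact is a normal-approximation sketch: treat a physician's observed score as the true score plus measurement error whose standard error is SD·√(1 − αMD), and ask how much of that error distribution falls across the cutpoint. All numbers below (the 0–100 scale score SD, the cutpoint, the distances) are illustrative assumptions, not figures from the pilot.

```python
from statistics import NormalDist

def misclassification_risk(observed_score: float,
                           cutpoint: float,
                           reliability: float,
                           sd_scores: float) -> float:
    """Approximate probability that a physician whose observed score falls on
    one side of a single cutpoint belongs on the other side, under a normal
    model of measurement error with SEM = SD * sqrt(1 - reliability)."""
    sem = sd_scores * (1.0 - reliability) ** 0.5
    z = abs(observed_score - cutpoint) / sem
    # Probability mass of the error distribution crossing the cutpoint.
    return 1.0 - NormalDist().cdf(z)

# Illustrative values only: cutpoint at 65 on the 0-100 scale, SD of 8 points.
for alpha_md in (0.7, 0.8, 0.9):
    for distance in (1.0, 3.0, 6.0):
        risk = misclassification_risk(65.0 + distance, 65.0, alpha_md, sd_scores=8.0)
        print(f"alpha_MD={alpha_md}, {distance:.0f} points above the cutpoint: "
              f"risk ~ {risk:.2f}")
```

The pattern matches the slide's point: risk falls as reliability rises, but near the cutpoint it stays substantial at any plausible reliability.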

  9. Risk of Misclassification at Varying Distances from the Benchmark and Varying Levels of Measurement Reliability (αMD) [chart not reproduced in the transcript]

  10. Certainty and Uncertainty in Classification: Comparison with a Single Benchmark • Physicians are classified as significantly above or significantly below a single benchmark set at the 50th percentile score (65 on the 0–100 scale) • The area of uncertainty around the benchmark spans 6.3 points at αMD=0.7, 4.9 points at αMD=0.8, and 3.26 points at αMD=0.9

  11. Certainty and Uncertainty in Classification: Cutpoints at 10th & 90th Percentile • Physicians are classified into bottom, middle, and top tiers using cutpoints at the 10th percentile (score 53) and 90th percentile (score 76) on the 0–100 scale • The area of uncertainty around each cutpoint spans 6.3 points at αMD=0.7, 4.9 points at αMD=0.8, and 3.26 points at αMD=0.9
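
For a tiered framework like this one, classification with an explicit area of uncertainty can be sketched as follows. The cutpoint scores (53 and 76) are taken from the slide; the half-width of 4.9 points corresponds to the slide's αMD = 0.8 value, and the example scores are illustrative.

```python
def classify_with_uncertainty(score: float, cutpoints: tuple, half_width: float) -> str:
    """Assign a score to a tier defined by the cutpoints, and flag it as
    'uncertain' if it lies within +/- half_width of any cutpoint
    (the area of uncertainty around each boundary)."""
    tier = sum(score > c for c in cutpoints)        # 0 = bottom tier
    uncertain = any(abs(score - c) <= half_width for c in cutpoints)
    label = f"tier {tier + 1} of {len(cutpoints) + 1}"
    return label + (" (uncertain)" if uncertain else "")

# Cutpoints at the 10th and 90th percentile scores shown on the slide.
for s in (50, 55, 65, 74, 80):
    print(s, "->", classify_with_uncertainty(s, (53.0, 76.0), half_width=4.9))
```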

  12. Variability Among Physicians (Communication) [histogram not reproduced: x-axis MD mean score (%), y-axis number of doctors]

  13. Variability Across Practice Sites (Communication) [chart not reproduced: group mean scores (%) for practice sites in the Eastern, Central, and Western regions, showing each group's mean score and the 25th–75th percentile range of group scores]

  14. Variability Among Physicians within Sites (Communication) [chart not reproduced: site and MD mean scores for Sites A-1 through A-4, showing each site's mean score, each MD's mean score, the 25th–75th percentile range of MD scores, and the 25th–75th percentile range of site scores]

  15. Allocation of Explainable Variance: Doctor-Patient Interactions [chart not reproduced: covers the Communication, Patient Trust, Health Promotion, Interpersonal Treatment, and Whole-Person Orientation measures]

  16. Allocation of Explainable Variance: Organizational/Structural Features of Care
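
The fitted variance-components model behind these two slides is not shown in the transcript. A minimal method-of-moments sketch of the underlying idea, splitting total variance in a measure into a between-physician component and a within-physician (patient) component, is below; the data are synthetic, the physician effect size is invented for illustration, and the roughly balanced design is a simplification relative to a full multilevel model with plan, network, and site terms.

```python
import numpy as np

def physician_variance_share(scores_by_physician) -> float:
    """One-way random-effects (method-of-moments) decomposition of variance
    into between-physician and within-physician (patient) components.
    Each element of scores_by_physician is one physician's patient-level
    scores; returns the between-physician share (an ICC-style quantity)."""
    k = len(scores_by_physician)
    n = np.mean([len(s) for s in scores_by_physician])  # average patients per physician
    ms_between = n * np.var([s.mean() for s in scores_by_physician], ddof=1)
    ms_within = np.mean([np.var(s, ddof=1) for s in scores_by_physician])
    var_between = max((ms_between - ms_within) / n, 0.0)
    return var_between / (var_between + ms_within)

# Synthetic illustration: 20 physicians, 45 patients each, modest physician effect.
rng = np.random.default_rng(0)
data = [rng.normal(loc=rng.normal(75, 3), scale=12, size=45) for _ in range(20)]
print(f"between-physician share of variance: {physician_variance_share(data):.2f}")
```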

  17. Summary and Implications • With sample sizes of 45 patients per physician, most survey-based measures achieved physician-level reliability of .7-.85. • With a 3-level reporting framework, risk of misclassification is low – except at the boundaries, where risk is high irrespective of measurement reliability. • Individual physicians and practice sites accounted for the majority of system-related variance on all measures. • Within sites, variability among physicians was substantial.

  18. Summary and Implications (cont’d) • The feasibility of obtaining highly reliable measures of patients’ experiences with individual physicians and practices has been demonstrated. • The merits and value of moving quality measurement beyond health plans and network organizations are clear. • By adding these aspects of care to our nation’s portfolio of quality measures, we may reverse declines in interpersonal quality of care.

  19. Certainty and Uncertainty in Classification: Multiple Cutpoints • Physicians are classified into bottom, 2nd, 3rd, and top tiers using cutpoints at the 10th percentile (score 53), 50th percentile (score 65), and 90th percentile (score 76) on the 0–100 scale • The area of uncertainty around each cutpoint spans 6.3 points at αMD=0.7, 4.9 points at αMD=0.8, and 3.26 points at αMD=0.9
