
Using Client Satisfaction Surveys as Quality and Performance Measures



Presentation Transcript


    1. Using Client Satisfaction Surveys as Quality and Performance Measures Scott Miyake Geron, Ph.D. Boston University School of Social Work POMP Grantee Training December, 2001 Good afternoon, my name is Scott Miyake Geron. I'm an Associate Professor of Social Welfare Policy and Research at the Boston University School of Social Work. Indeed, as Bob has noted, this is an exciting time in quality assessment research and supported outcomes measurement. Today I will discuss some of the strategies and approaches for designing consumer-centered systems of care, which I define as those systems that endeavor to determine, and strive to honor, the values and preferences of the consumer in all phases of service delivery [promoting the highest level of independence possible, consistent with their capacities and preferences for care]. I will then briefly describe two projects: (1) a project conducted for the PDA Waiver Program of PCA to enhance consumer decision-making capacity; and (2) my research to develop a psychometrically sound client satisfaction measure for home care.

    2. The Way We Were (and Sometimes Still Are) Professional hubris that we know better than consumers what the important dimensions of care are. High satisfaction ratings generated by poor measures and inappropriate use of good measures. Failure to test for validity and reliability.

    3. Where Are We Now? We can develop brief, valid, and reliable measures of home care satisfaction that address areas of service important to consumers. Home care satisfaction differs from acute care or office-based satisfaction. A client satisfaction measure becomes a consumer-based measure of quality when it is based on the consumer's point of view.

    4. Home Care Satisfaction Measure (HCSM) Homemaker Services (HCSM-HM13) Home Health Aide Services (HCSM-HHA13) Home Delivered Meals (HCSM-MS11) Grocery Service (HCSM-GS10) Care Management Service (HCSM-CM13)

    6. Psychometric Properties Field test sample (N=228): > 60 years of age, English or Spanish as primary language, in receipt of services for at least 6 months, and cognitively intact. Reliability: HCSM test-retest correlations were large and significant, ranging from .68 to .88. Validity: correlations of HCSM satisfaction scores with independent ratings of satisfaction were large and significant, ranging from .49 to .71. Other: client characteristics (age, gender, race, and ADLs) had little impact on HCSM scores, but there was some evidence of response sets.
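The test-retest reliability check described above amounts to correlating the same clients' satisfaction scores from two administrations of the instrument. A minimal sketch, using hypothetical scores (the arrays below are invented for illustration, not the study's data):

```python
import numpy as np

# Hypothetical HCSM satisfaction scores for the same clients,
# measured at two time points several weeks apart
time1 = np.array([4.2, 3.8, 4.5, 3.1, 4.0, 3.6, 4.8, 2.9])
time2 = np.array([4.0, 3.9, 4.4, 3.3, 4.1, 3.4, 4.7, 3.0])

# Test-retest reliability: Pearson correlation between the two administrations
r_test_retest = np.corrcoef(time1, time2)[0, 1]
print(f"test-retest r = {r_test_retest:.2f}")
```

The same correlation machinery applies to the validity check, substituting independent ratings of satisfaction for the second administration.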

    7. What Have We Learned? Gender, race, health status, and psychological well-being do not appear to influence satisfaction scores Older persons of color view quality of long-term care services in much the same way as white older adults Satisfaction measures are robust with respect to mode of administration

    9. Often the Food is So Bad I Do Not Eat It

    10. Turning Performance Measurement into Performance Management • What does the information say? Is it understandable? Did it fit with our experience? • So what? What does the information tell us about our performance? • Now what? What do we need to do next to change/maintain our performance?

    11. Revised 1999 HCSM Benchmarks

    12. Satisfaction Benchmark The benchmark score for a service is the average satisfaction score for all agencies providing that service. Benchmark boundary scores are based on statistical conventions for interpreting differences in scores, and sample sizes were selected to provide confidence that difference scores are not due to random error. There is no simple criterion for determining the importance of a difference between an individual agency's score and the benchmark, so we used Cohen's conventions to interpret differences, based on fractions of the standard deviation (SD) of a measure. Our criterion of .35 SD is halfway between a small effect size (.20 SD) and a medium effect size (.50 SD).
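The benchmarking logic above can be sketched in a few lines: average the agency scores to get the benchmark, then flag agencies whose difference from it exceeds the .35 SD criterion. The agency names and scores are hypothetical, and the SD here is taken across agency means for simplicity (the study's SD may instead be that of the underlying measure):

```python
import numpy as np

# Hypothetical mean HCSM satisfaction scores for agencies providing one service
agency_scores = {"Agency A": 3.8, "Agency B": 3.4, "Agency C": 4.3,
                 "Agency D": 3.7, "Agency E": 3.6}

scores = np.array(list(agency_scores.values()))
benchmark = scores.mean()      # benchmark = average score across all agencies
sd = scores.std(ddof=1)        # sample SD across agency means
threshold = 0.35 * sd          # criterion: .35 SD, between small (.20) and medium (.50)

for name, score in agency_scores.items():
    diff = score - benchmark
    if diff > threshold:
        flag = "above benchmark"
    elif diff < -threshold:
        flag = "below benchmark"
    else:
        flag = "within benchmark range"
    print(f"{name}: {score:.2f} ({flag})")
```

In practice the flag would only be reported when the agency's sample size is large enough that a .35 SD difference is unlikely to be random error, as the slide notes.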

    13. Graphical Profile of HCSM Results

    14. Displaying Comparative Case Management Outcomes

    15. Item Results

    16. Summary Program and Policy Uses Information about quality of services. Statistically interpretable information. General or service-specific satisfaction. Domain- or dimension-specific satisfaction. Evaluation of change over time. HOW IS SATISFACTION RELATED TO QUALITY? Technical dimensions of quality are service specific, but satisfaction assessments can reflect any dimension the consumer considers important, service-related or not. Quality judgments can be formed without experience, but satisfaction is purely experiential. Satisfaction is broader than technical quality: technical quality is one of the factors consumers weigh in making satisfaction judgments. When a satisfaction measure taps areas of care that consumers consider important, it is a consumer-based measure of service quality. BUT REMEMBER: satisfaction assessments should be used as a relative measure; used alone, the results can easily be misinterpreted and misused. The appropriate use and interpretation of satisfaction results is to compare an individual provider's results to those of other providers of the same services. Satisfaction assessments, while obviously important, should not serve as the sole criterion of quality care or outcomes.
