
PA 509-Quality Control in Healthcare Second Semester 1439/ 1440 Mohammed S. Alnaif, Ph.D.






Presentation Transcript


  1. King Saud University, College of Business Administration, Department of Health Administration - Master’s Program. PA 509-Quality Control in Healthcare, Second Semester 1439/1440. Mohammed S. Alnaif, Ph.D. E-mail: alnaif@ksu.edu.sa

  2. Measuring and Improving Patient Experience of Care Honest criticism is hard to take, particularly from a relative, a friend, an acquaintance, or a stranger. Franklin P. Jones

  3. Measuring and Improving Patient Experience of Care According to Gerteis and colleagues (1993), quality in health care has two dimensions: • Technical excellence: the skill and competence of health professionals and the ability of diagnostic or therapeutic equipment, procedures, and systems to accomplish what they are meant to accomplish, reliably and effectively. • Subjective experience: in health care, it is quality in this subjective dimension that patients experience most directly—in their perception of illness or well-being and in their encounters with health care professionals and institutions, i.e., the experience of illness and health care through the patient’s eyes.

  4. Measuring and Improving Patient Experience of Care Quality in health care has two dimensions: • Health care professionals and managers are often uneasy about addressing this “soft” subject, given the hard, intractable, and unyielding problems of financing, access, and clinical effectiveness in health care. • But the experiential dimension of quality is not trivial. It is the heart of what patients want from health care—enhancement of their sense of well-being and relief from their suffering. • Any health care system, however it may be financed or structured, must address both aspects of quality to achieve legitimacy in the eyes of those it serves.

  5. Measuring and Improving Patient Experience of Care Quality in health care has two dimensions: • Patient satisfaction or patient experience-of-care surveys are the most common method used to evaluate quality from the patient’s perspective. • The Picker Institute set out to explore patients’ needs and concerns, as patients themselves define them, to inform the development of new surveys that could be linked to quality improvement efforts to enhance the patient’s experience of care.

  6. Measuring and Improving Patient Experience of Care Quality in health care has two dimensions: Through extensive interviews and focus groups with diverse patients and their families, the research program defined eight dimensions of measurable patient-centered care: • Access to care • Respect for patients’ values, preferences, and expressed needs • Coordination of care and integration of services • Information, communication, and education • Physical comfort • Emotional support and alleviation of fear and anxiety • Involvement of family and friends • Transition and continuity.

  7. Measuring and Improving Patient Experience of Care Quality in health care has two dimensions: • An important design feature of these survey instruments is the use of a combination of reports and ratings to assess patients’ experiences within important dimensions of care, their overall satisfaction with services, and the relative importance of each dimension in relation to satisfaction. • In focus groups of healthcare managers, physicians, and nurses that were organized to facilitate the design of “actionable” responses, complaints about the difficulty of interpreting patients’ satisfaction ratings came up repeatedly.

  8. Measuring and Improving Patient Experience of Care Quality in health care has two dimensions: • Clinicians and managers expressed well-founded concern about the inherent bias in ratings of satisfaction and asked for more objective measures describing what did and did not happen from the patient’s perspective. • The end result has been the development of questions that enable patients to report their care experiences. • For example, a report-style question asks, “Did your doctor explain your diagnosis to you in a way you could understand?” instead of “Rate your satisfaction with the quality of information you received from your doctor.”

  9. Measuring and Improving Patient Experience of Care Quality in health care has two dimensions: • Collecting patient experience-of-care data is becoming a standard evaluation measure in the education and certification of medical, nursing, and allied health students. • The Accreditation Council for Graduate Medical Education has incorporated extensive standards into its requirements for residency training that focus on the doctor–patient relationship, and the American Board of Internal Medicine is piloting patient experience-of-care surveys for incorporation into the recertification process for Board-certified physicians.

  10. Measuring and Improving Patient Experience of Care Using Patient Feedback for Quality Improvement • Although nationally standardized instruments and comparative databases are essential for public accountability and benchmarking, measurement for the purposes of monitoring quality improvement interventions does not necessarily require the same sort of standardized data collection and sampling.

  11. Measuring and Improving Patient Experience of Care Using Patient Feedback for Quality Improvement • Many institutions prefer more frequent feedback of results (e.g., quarterly, monthly, weekly), with more precise, in-depth sampling (e.g., at the unit or clinic level) to target areas that need improvement. • Staff usually are eager to obtain data frequently, but the cost of administration and the burden of response on patients must be weighed against the knowledge that substantial changes in scores usually take at least a quarter, if not longer, to appear in the data.

  12. Measuring and Improving Patient Experience of Care Survey Terminology • Familiarity with terms describing the psychometric properties of survey instruments and methods for data collection can help an organization choose a survey that will provide it with credible information for quality improvement. • There are two different and complementary approaches to assessing the reliability and validity of a questionnaire:

  13. Measuring and Improving Patient Experience of Care Survey Terminology Two different and complementary approaches: • Cognitive testing, which bases assessments on feedback from interviews with people who are asked to react to the survey questions; and • Psychometric testing, which bases assessments on the analysis of data collected by the questionnaire. • Much of the standardization of survey instruments and processes has occurred as a result of the Consumer Assessment of Healthcare Providers and Systems (CAHPS) surveys.

  14. Measuring and Improving Patient Experience of Care Survey Terminology Two different and complementary approaches: • The cognitive testing method provides useful information on respondents’ perceptions of the response task, how respondents recall and report events, and how they interpret specified reference periods. • It also helps identify words that can be used to describe healthcare providers accurately and consistently across a range of consumers (e.g., commercially insured, Medicaid, fee for service, managed care; lower socioeconomic status, middle socioeconomic status; low literacy, high literacy).

  15. Measuring and Improving Patient Experience of Care Survey Terminology Two different and complementary approaches: • For example, in the cognitive interviews to test CAHPS, researchers learned that parents did not think pediatricians were primary care providers. • They evaluated the care they were receiving from pediatricians in the questions about specialists, not primary care doctors. • Survey language was amended to ask about “your personal doctor,” not “your primary care provider,” as a result of this discovery.

  16. Measuring and Improving Patient Experience of Care Validity • In conventional use, the term validity refers to the extent to which an empirical measure accurately reflects the meaning of the concept under consideration. • In other words, validity refers to the degree to which the measurement made by a survey corresponds to some true or real value. For example, a bathroom scale that always reads 185 pounds is reliable, but it is not valid if the person does not weigh 185 pounds.

  17. Measuring and Improving Patient Experience of Care Validity There are different types of validity • Face validity is the agreement between empirical measures and the mental images associated with a particular concept. • Does the measure look valid to the people who will be using it? • A survey has face validity if it appears on the surface to measure what it has been designed to measure.

  18. Measuring and Improving Patient Experience of Care Validity There are different types of validity • Construct validity is based on the logical relationships among variables (or questions) and refers to the extent to which a scale measures the construct, or theoretical framework, it is designed to measure (e.g., satisfaction). • Valid questions should have answers that correspond to what they are intended to measure. • Researchers measure construct validity by testing the correlations between different items and other established constructs.

  19. Measuring and Improving Patient Experience of Care Construct validity • Because there is no objective way of validating answers to the majority of survey questions, researchers can assess answer validity only through their correlations with other answers a person gives. • We would expect high convergent validity, or strong correlation, between survey items such as waiting times and overall ratings of access. • We would expect discriminant validity, or little correlation, between patient reports about coordination of care in the emergency department (ED) and the adequacy of pain control on an inpatient unit.
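These expectations can be checked directly by correlating item scores. Below is a minimal sketch using plain Pearson correlation; the respondent data are invented purely for illustration, not drawn from any real survey:

```python
from math import sqrt

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical item scores for eight respondents (1-5 scale, invented data).
wait_time   = [5, 4, 4, 2, 1, 3, 5, 2]   # reported ease of being seen quickly
access_rate = [5, 5, 4, 2, 1, 3, 4, 2]   # overall rating of access
pain_ctrl   = [3, 1, 5, 4, 2, 5, 1, 4]   # unrelated inpatient pain-control item

convergent = pearson(wait_time, access_rate)    # same construct: expect high r
discriminant = pearson(wait_time, pain_ctrl)    # different construct: expect low r
print(f"convergent r = {convergent:.2f}, discriminant r = {discriminant:.2f}")
```

A high convergent coefficient and a near-zero discriminant coefficient are the pattern a valid instrument should show on toy data like this.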

  20. Measuring and Improving Patient Experience of Care Content validity refers to the degree to which a measure covers the range of meanings included within the concept. • A survey with high content validity would represent topics related to satisfaction in appropriate proportions. • For example, we would expect an inpatient survey to have a number of questions about nursing care, but we would not expect a majority of the questions to ask about telephone service in the patient’s room.

  21. Measuring and Improving Patient Experience of Care Criterion validity refers to whether a newly developed scale is strongly correlated with another measure that already has been demonstrated to be highly reliable and valid. Criterion validity can be viewed as how well a question measures up to a gold standard. For example, if you wanted to ask patients about the interns and residents who cared for them, you would want to be sure that patients could distinguish between staff and trainee physicians. You could measure the criterion validity of questions that ask about the identity of physicians by comparing patients’ answers to hospital records.

  22. Measuring and Improving Patient Experience of Care Discriminant validity is the degree of difference between survey results when the scales are applied in different settings. Survey scores should reflect differences among different institutions, where care is presumably different. Discriminant validity is the extent to which groups of respondents who are expected to differ on a certain measure do in fact differ in their answers.

  23. Measuring and Improving Patient Experience of Care Reliability • Reliability is a matter of whether a particular technique applied repeatedly to the same object yields the same results each time. The reliability of a survey instrument is initially addressed during the questionnaire development phase. • An instrument is reliable if consistency across respondents exists (i.e., the questions mean the same thing to every respondent). • This consistency will ensure that differences in answers can be attributed to differences in respondents or their experiences

  24. Measuring and Improving Patient Experience of Care Reliability • Instrument reliability, or the reliability of a measure, refers to the stability and equivalence of repeated measures of the same concept. • In other words, instrument reliability is the consistency of the answers people give to the same question when they are asked it at different points in time, assuming no real changes have occurred that should cause them to answer the questions differently

  25. Measuring and Improving Patient Experience of Care Reliability • Thus, instrument reliability is also the degree to which respondents answer survey questions consistently in similar situations. • Inadequate wording of questions and poorly defined terms can compromise reliability. • The goal is to ensure (through pilot testing) that questions mean the same thing to all respondents.

  26. Measuring and Improving Patient Experience of Care Reliability • The test–retest reliability coefficient is a method to measure instrument reliability. • This method measures the degree of correspondence between answers to the same questions asked of the same respondents at different points in time. • If there is no reason to expect the information to change (and the methodology for obtaining the information is correct), the same responses should result at all points in time. • If answers vary, the measurement is unstable and thus unreliable.

  27. Measuring and Improving Patient Experience of Care Reliability • Internal consistency is the inter-correlation among a number of different questions intended to measure (or reflect) the same concept. • The internal consistency of a measurement tool may be assessed using Cronbach’s alpha reliability coefficient. • Cronbach’s alpha tests the internal consistency of a model or survey. Sometimes called a scale reliability coefficient, Cronbach’s alpha assesses the reliability of a rating summarizing a group of test or survey answers that measure some underlying factor (e.g., some attribute of the test taker).
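Cronbach’s alpha can be computed directly from raw item scores with the standard formula alpha = k/(k-1) x (1 - sum of item variances / variance of total scores). A minimal sketch, using a made-up four-item scale and hypothetical answers from five respondents:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """items: one list of scores per question, all over the same respondents."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]        # per-respondent totals
    item_var = sum(pvariance(scores) for scores in items)   # sum of item variances
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Hypothetical 4-item "communication" scale, five respondents, 1-5 answers.
scale = [
    [4, 5, 3, 2, 5],
    [4, 4, 3, 2, 5],
    [5, 4, 2, 3, 4],
    [4, 5, 3, 2, 5],
]
print(f"alpha = {cronbach_alpha(scale):.2f}")
```

Because these invented items move together across respondents, alpha comes out high; items that measure unrelated things would drag it down.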

  28. Measuring and Improving Patient Experience of Care Reliability Readability of Survey Instruments • The readability of survey questions has a direct effect on the reliability of the instrument. • Unreliable survey questions use words that are ambiguous and not universally understood. • No simple measure of literacy exists. • The spelling and grammar checker in common word-processing software can calculate the Flesch-Kincaid index for any document, including questionnaires.

  29. Measuring and Improving Patient Experience of Care Reliability Readability of Survey Instruments • The Flesch-Kincaid index (Flesch 1948) is a formula that uses sentence length (words per sentence) and complexity, along with the number of syllables per word, to derive a number corresponding to grade level. • Documents containing shorter sentences with shorter words have lower Flesch-Kincaid scores.
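The grade-level formula is 0.39 x (words per sentence) + 11.8 x (syllables per word) - 15.59. The sketch below uses a crude vowel-group heuristic to count syllables (real readability tools count syllables more carefully), and the two questions are invented examples:

```python
import re

def fk_grade(text):
    """Flesch-Kincaid grade level with a crude vowel-group syllable count."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

short = "Did your doctor explain things clearly?"
long_q = ("Please characterize the comprehensibility of the diagnostic "
          "information communicated by your attending physician.")
print(f"short: grade {fk_grade(short):.1f}, long: grade {fk_grade(long_q):.1f}")
```

The short, plain question scores at roughly elementary-school level, while the jargon-heavy version scores far beyond it, which is exactly the gap that makes survey questions unreliable across literacy levels.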

  30. Measuring and Improving Patient Experience of Care Weighting Survey Results • Weighting of scores is frequently recommended if members of a (patient) population have unequal probabilities of being selected for the sample. • If necessary, weights are assigned to the different observations to provide a representative picture of the total population. • The weight assigned to a particular sample member should be the inverse of its probability of selection.

  31. Measuring and Improving Patient Experience of Care Weighting Survey Results • Weighting should be considered when an unequal distribution of patients exists by discharge service, nursing unit, or clinic. • When computing an overall score for a hospital or a group of clinics with an unequal distribution of patients, weighting by probability of selection is appropriate. • The probability of selection is estimated by dividing the number of patients sampled by the total number of patients.
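A minimal sketch of that calculation, with hypothetical discharge counts, sample sizes, and mean scores: each respondent’s score is weighted by the inverse of the stratum’s selection probability before the overall score is computed.

```python
# Hypothetical strata: (service, patients discharged, patients surveyed, mean score)
strata = [
    ("medical",    1200, 300, 78.0),
    ("surgical",    800, 300, 85.0),
    ("childbirth",  400, 300, 92.0),
]

total_patients = sum(discharged for _, discharged, _, _ in strata)

weighted_sum = 0.0
for service, discharged, sampled, score in strata:
    p_selection = sampled / discharged    # e.g. 300 / 1200 = 0.25 for medical
    weight = 1 / p_selection              # each respondent stands for 1/p patients
    weighted_sum += score * sampled * weight

overall = weighted_sum / total_patients                     # population-weighted mean
naive = sum(score for *_, score in strata) / len(strata)    # unweighted mean of strata
print(f"weighted = {overall:.1f}, unweighted = {naive:.1f}")
```

With equal sample sizes drawn from unequal services, the unweighted average overstates the overall score because it gives the small, high-scoring childbirth service the same influence as the large medical service.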

  32. Measuring and Improving Patient Experience of Care Weighting Survey Results • When the probability of selection of patients from different services or units is equal, patients from different services or units will be represented in the sample in the same proportion they occur in the population. • If the probability of selection of patients from different hospitals or medical groups is different, the sample size for different hospitals or medical groups will vary according to the number of total discharges from each.

  33. Measuring and Improving Patient Experience of Care Weighting Survey Results • Similarly, presenting results stratified by service, unit, or clinic provides an accurate and representative picture of the total population. For example, the most straightforward method for comparing units to an overall score is to compare medical units to all medical patients, surgical units to all surgical patients, and childbirth units to all childbirth patients.

  34. Measuring and Improving Patient Experience of Care Weighting Survey Results • The weighting issue also arises when comparing hospitals or clinics within a system. • If the service case mix is similar, we can compare by hospital without accounting for case-mix difference. • If service case mix is not similar across institutions, scores should be weighted before comparisons are made among hospitals. • Alternatively, comparisons could be made at the service level.

  35. Measuring and Improving Patient Experience of Care Response Rates • Low response rates compromise the internal validity of the sample. • Survey results based on response rates of 30 percent or less may not be representative of patient satisfaction (at that institution). • Even when a representative sample is chosen, certain population groups are more likely to self-select out of the survey process. • An expected (and typical) response bias is seen in all mailed surveys. • For example, young people and insured patients are less likely to respond to mailed surveys.

  36. Measuring and Improving Patient Experience of Care Response Rates An optimal response rate is necessary to have a representative sample; therefore, boosting response rates should be a priority. Methods to improve response rates include • Making telephone reminder calls for certain types of surveys; • Using the Dillman (1978) method, a three-wave mailing protocol designed to boost response rates; • Ensuring that telephone numbers or addresses are drawn from as accurate a source as possible; and • Offering incentives appropriate for the survey population (e.g., drugstore coupons, free parking coupons).

  37. Measuring and Improving Patient Experience of Care Survey Bias • Bias refers to the extent to which survey results do not accurately represent a population. • Conducting a perfectly unbiased survey is impossible. • Considering potential sources of bias during the survey design phase can minimize its effect. • The potential biases in survey results should be considered as well.

  38. Measuring and Improving Patient Experience of Care Sampling Bias • All patients who have been selected to provide feedback should have an equal opportunity to respond. • Any situation that makes certain patients less likely to be included in a sample leads to bias. • For example, patients whose addresses are outdated or whose phone numbers are obsolete or incomplete in the database are less likely to be reached. Up-to-date patient lists are essential. • Survey vendors also can minimize sampling bias through probability sampling—that is, giving all patients who meet the study criteria an opportunity to be included in the sample.
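Probability sampling in its simplest form is a simple random sample, which gives every eligible patient the same chance of selection. A sketch with a hypothetical patient list (the names and counts are invented):

```python
import random

# Hypothetical discharge list; every eligible patient must have an equal
# chance of ending up in the sample.
eligible = [f"patient_{i:04d}" for i in range(1, 1001)]

rng = random.Random(42)                 # fixed seed so the draw is reproducible
sample = rng.sample(eligible, k=150)    # simple random sample, without replacement
print(len(sample), "patients sampled from", len(eligible))
```

Sampling without replacement from a complete, up-to-date list is what gives each patient the equal selection probability the slide describes; a stale address file silently removes patients from `eligible` before the draw ever happens.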

  39. Measuring and Improving Patient Experience of Care Nonresponse Bias • In every survey, some people agree to be respondents but do not answer every question. • Three categories of patients selected to be in the sample do not actually provide data: • Patients whom the data collection procedures do not reach, so they never have a chance to answer questions; • Patients who are asked to provide data but refuse to do so; and • Patients who are asked to provide data but are unable to perform the task required of them.

  40. Measuring and Improving Patient Experience of Care Administration Method Bias or Mode effects • The way a survey is administered inevitably introduces bias. • Comparison of data that have been collected using different modes of administration (e.g., mail and telephone) will reveal differences that are either real or the result of different modes of administration. • An instrument that produces comparable data regardless of mode effect introduces no bias.

  41. Measuring and Improving Patient Experience of Care Administration Method Bias or Mode effects • For example, patients who are not literate or do not have a mailing address are excluded from mail surveys. • People who do not have phones introduce bias in telephone surveys. • In face-to-face interviews, interviewers can influence respondents by their body language and facial expressions. • In surveys conducted at the clinic or hospital, respondents may be reluctant to answer questions candidly. • A combination of methods, such as phone follow-up to mailed surveys or phone interviews for low-literacy patients, can reduce some of these biases.

  42. Measuring and Improving Patient Experience of Care Proxy-response Bias • Studies comparing self-reports with proxy reports do not consistently support the hypothesis that self-reports are more accurate than proxy reports. • However, conclusions drawn from studies in which responses were verified using hospital and physician records show that, on average, • Self-reports tend to be more accurate than proxy reports and • Health events are under-reported in both populations. In terms of reporting problems with care, most studies comparing proxy responses to patients’ responses show that proxies tend to report more problems with care than patients do.

  43. Measuring and Improving Patient Experience of Care Recall Bias • Typically, patients receive questionnaires from two weeks to four months after discharge from the hospital. • This delay raises concern about the reliability of the patient’s memory. • Memory studies have shown that the more significant the hospitalization and the condition, the better the patient is able to recall health events. • For ambulatory surveys, patients should be surveyed as soon after the visit or event as possible.

  44. Measuring and Improving Patient Experience of Care Case-Mix Adjustment • Case-mix adjustment accounts for the different types of patients in institutions. • Adjustments should be considered when hospital survey results are being released to the public. • The characteristics commonly associated with patient reports on quality of care are: • Patient age (i.e., older patients tend to report fewer problems with care) and • Discharge service (e.g., childbirth patients evaluate their experience more favorably than do medical or surgical patients; medical patients report the most problems with care).
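One common way to implement case-mix adjustment is direct standardization: re-weight each institution’s service-level scores to a shared reference case mix, so that differences in patient mix do not drive the comparison. A sketch with hypothetical hospitals, scores, and mix shares (all values invented):

```python
# Reference case mix (hypothetical): share of patients by discharge service.
reference_mix = {"medical": 0.5, "surgical": 0.3, "childbirth": 0.2}

# Hypothetical service-level mean scores for two hospitals.
hospital_scores = {
    "A": {"medical": 75.0, "surgical": 82.0, "childbirth": 90.0},
    "B": {"medical": 76.0, "surgical": 83.0, "childbirth": 91.0},
}

def adjusted(scores, mix):
    """Directly standardized score: service scores weighted by a common case mix."""
    return sum(scores[svc] * share for svc, share in mix.items())

for name, scores in hospital_scores.items():
    print(name, round(adjusted(scores, reference_mix), 1))
```

Because both hospitals are scored against the same mix, a hospital that happens to treat many childbirth patients (who tend to rate care favorably) no longer gains an automatic advantage over one dominated by medical patients.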

  45. Measuring and Improving Patient Experience of Care Scope and use of Patient experiences in Healthcare Customer Service and Patient Satisfaction • Healthcare organizations should pay attention to customer service for several reasons. • First, better service translates into higher satisfaction for the patient and, subsequently, for the employer who pays most of the bills. • Second, as in any other service industry, a satisfied (and loyal) member or patient creates value over the course of a lifetime.

  46. Measuring and Improving Patient Experience of Care Scope and use of Patient experiences in Healthcare Customer Service and Patient Satisfaction • In the context of healthcare, this value may manifest itself in the form of repeat visits, trusting relationships, and positive word of mouth. • A dissatisfied member or patient, on the other hand, generates potential new costs. • Third, existing patients and members are an invaluable source of information healthcare organizations can use to learn how to improve what they do and reduce waste by eliminating services that are unnecessary or not valued.

  47. Measuring and Improving Patient Experience of Care Scope and use of Patient experiences in Healthcare Customer Service and Patient Satisfaction • Exhibit 9.1 depicts the relationship between satisfaction and loyalty. • Individuals who are the most satisfied have the highest correlation to loyalty to a product, service, or provider (the zone of affection). • Conversely, individuals who are the most dissatisfied have the highest correlation to abandonment of their current service, product, or provider (the zone of defection). • The zone of indifference reflects the greatest percentage of people who are neither highly satisfied (loyal) nor highly dissatisfied (disloyal).

  48. Measuring and Improving Patient Experience of Care Customer Service and Patient Satisfaction • Healthcare organizations also need to pay attention to customer service because service quality and employee satisfaction go hand in hand. • When employee satisfaction is low, satisfied patients are almost impossible to find. • Employees often are frustrated and angry about the same issues that bother patients and members: chaotic work environments, poor systems, and ineffective training. • The real cost of high turnover may not be the replacement costs of finding new staff but the expenses associated with lost organizational knowledge, lower productivity, and decreased customer satisfaction.

  49. Measuring and Improving Patient Experience of Care Achieving Better Customer Service Experts on delivering superior customer service suggest that healthcare organizations adopt the following set of principles: • Hire service-savvy people. Aptitude is everything; people can be taught technical skills. • Establish high standards of customer service. • Help staff hear the voice of the customer. • Remove barriers so that staff can serve customers. • Design processes of care to reduce patient and family anxiety and thus increase satisfaction. • Help staff cope better in a stressful atmosphere. • Maintain a focus on service.
