
Presentation Transcript


  1. Session # B5 October 17, 2015 Primary Care Behavioral Health Clinical and Process Outcomes: Program Evaluation in the Department of Defense Anne C. Dobmeyer, PhD, ABPP, Chief Psychologist, PCBH Directorate, Deployment Health Clinical Center Christopher L. Hunter, PhD, ABPP, DoD Program Manager for Behavioral Health in Primary Care Jennifer L. Bell, MD, Associate Director, PCBH Directorate, Deployment Health Clinical Center Collaborative Family Healthcare Association 17th Annual Conference October 15-17, 2015 Portland, Oregon U.S.A.

  2. Faculty Disclosure The presenters of this session have NOT had any relevant financial relationships during the past 12 months.

  3. Learning Objectives At the conclusion of this session, the participant will be able to: • Identify two process metrics useful to assess as part of a primary care behavioral health (PCBH) program evaluation • Identify two outcome metrics useful to assess as part of a PCBH program evaluation • Describe one barrier to PCBH program evaluation in large medical systems and one strategy for addressing the barrier

  4. Bibliography / References • Bryan, C. J., Corso, M. L., Corso, K. A., Morrow, C. E., Kanzler, K. E., & Ray-Sannerud, B. (2012). Severity of mental health impairment and trajectories of improvement in an integrated primary care clinic. Journal of Consulting and Clinical Psychology, 80, 396-403. • Hunter, C. L., & Goodie, J. L. (2012). Behavioral health in the Department of Defense Patient-Centered Medical Home: History, finance, policy, work force development, and evaluation. Translational Behavioral Medicine, 2, 355-363. • Hunter, C. L., Goodie, J. L., Dobmeyer, A. C., & Dorrance, K. A. (2014). Tipping points in the Department of Defense's experience with psychologists in primary care. American Psychologist, 69(4), 388-398. • Peek, C. J., Cohen, D. J., & deGruy, F. V. (2014). Research and evaluation in the transformation of primary care. American Psychologist, 69, 430-442. • Ray-Sannerud, B. N., Dolan, D. C., Morrow, C. E., Corso, K. A., Kanzler, K. E., Corso, M. L., & Bryan, C. J. (2012). Longitudinal outcomes after brief behavioral health intervention in an integrated primary care clinic. Families, Systems, & Health, 30, 60-71.

  5. Learning Assessment • A learning assessment is required for CE credit. • A question and answer period will be conducted at the end of this presentation.

  6. Disclaimer The views expressed are those of the authors and do not reflect the official policy of the Department of Defense (DoD), the United States Public Health Service (USPHS) or the U.S. Government.

  7. Acknowledgements The authors would like to acknowledge Justin Curry, PhD, who also contributed to this presentation.

  8. Overview • Primary Care Behavioral Health (PCBH) program evaluation in the Department of Defense (DoD) • Context • Evaluation plan • Early outcomes • Barriers and solutions • Application to other settings: Monitoring and evaluation (M&E) • Overview of M&E concepts • Monitoring and evaluating local innovations/programs • Potential areas to monitor/evaluate • Worksheet and Discussion

  9. Context: PCBH in the DoD • DoD Military Health System (MHS) serves 3.3 million beneficiaries in military facility primary care (PC) clinics¹ • DoD policy requires that all PC clinics with more than 3,000 enrollees have a full-time behavioral health consultant (BHC) using the PCBH model¹ • 313 BHCs; 313 clinics • Program evaluation is facilitated by use of the DoD Electronic Health Record (EHR) • Some data (demographics, visit type and length, procedure coding, diagnosis) can be extracted from the EHR • A separate PCBH module in the EHR allows extraction of further data • ¹Hunter, C. L., Goodie, J. L., Dobmeyer, A. C., & Dorrance, K. A. (2014). Tipping points in the Department of Defense's experience with psychologists in primary care. American Psychologist, 69(4), 388-398.

  10. Evaluation Plan • Excerpted from: Hunter, C. L., & Goodie, J. L. (2012). Behavioral health in the Department of Defense Patient-Centered Medical Home: history, finance, policy, work force development, and evaluation. Translational Behavioral Medicine, 2, 355-363.

  11. Evaluation Plan (cont’d) • Excerpted from: Hunter, C. L., & Goodie, J. L. (2012). Behavioral health in the Department of Defense Patient-Centered Medical Home: history, finance, policy, work force development, and evaluation. Translational Behavioral Medicine, 2, 355-363.

  12. Early Outcomes • Monitoring period: Fiscal Year 2015 Quarter 1 • Oct 1, 2014 – Dec 31, 2014 • Patient scores also gathered from the prior 3 months (Jul – Sep 2014) • Quantity and penetration of BHC services • BHCs saw 23,068 unique patients during the 3-month monitoring period • This represents 1.66% of all patients seen in primary care during the monitoring period • BHCs conducted 38,855 unique encounters • This represents 1.20% of all primary care encounters
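For readers adapting these penetration metrics to their own clinic data, a minimal sketch follows, assuming a pandas DataFrame with one row per primary care encounter. The column names ("patient_id", "provider_type") are hypothetical placeholders, not fields from the DoD EHR.

```python
import pandas as pd

def penetration_rates(encounters: pd.DataFrame) -> dict:
    """Share of primary care patients/encounters touched by a BHC.

    Expects one row per encounter, with hypothetical columns
    'patient_id' and 'provider_type' (e.g., 'BHC' or 'PCP').
    """
    bhc = encounters[encounters["provider_type"] == "BHC"]
    return {
        "unique_bhc_patients": bhc["patient_id"].nunique(),
        # % of all PC patients who saw a BHC (e.g., the 1.66% above)
        "patient_penetration": bhc["patient_id"].nunique()
        / encounters["patient_id"].nunique(),
        # % of all PC encounters delivered by a BHC (e.g., the 1.20% above)
        "encounter_penetration": len(bhc) / len(encounters),
    }
```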

  13. Early Outcomes: Fidelity to Model • Approximately half of BHC patients were seen only once (during a 6-month window) • 54.7% • 12,612 patients • Nearly all BHC patients were seen 4 or fewer times • 92.8% • 21,403 patients • Very few BHC patients were seen 9 or more times • 1.2% • 277 patients

  14. Early Outcomes: Fidelity to Model (cont'd) • Feedback given to PCP at every visit (Goal = 100%) • 54.5% • 12,573 patients • Questions raised • Is feedback occurring, but not documented in the appropriate place? • Are there differences based on Service? • Yes: range from 42% to 75% across Services • Suggests variability in training and leadership

  15. Early Outcomes: Fidelity to Model (cont'd) • Required outcome measure administered (Goal = 100%) • 46.9% • 10,820 patients • Questions raised • Did a policy change affect the outcome? (The required outcome measure changed from the DUKE to the BHM-20 at the start of the monitoring period) • Is the measure being administered but not documented in the appropriate location? • Are there differences based on Service? • Yes: range from 30% to 65% across Services • Suggests variability in training and leadership

  16. Early Outcomes: Clinical • BHM-20 Global Mental Health Scale • Composite score including life satisfaction, psychological symptoms, social relations, and life functioning • Compared initial and last BHM-20 administrations (paired-samples t-test) • Only 2,667 patients (11.6%) had both 2 or more appointments and 2 or more administrations of the BHM-20 • Not a true random sample; difficult to generalize • Statistically significant improvements on the BHM-20 Global Mental Health scale were seen for patients with at least two appointments: t(2,666)=17.04, p<.000001 • Time 1 M=2.65, SD=0.72 • Time 2 M=2.87, SD=0.75
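A minimal sketch of this first-versus-last comparison, using SciPy's paired-samples t-test. The data below are simulated to roughly mirror the reported means; they are not the evaluation data.

```python
import numpy as np
from scipy import stats

# Simulated first/last BHM-20 Global Mental Health scores for 2,667
# patients, loosely matching the reported means (2.65 -> 2.87).
rng = np.random.default_rng(0)
first = rng.normal(2.65, 0.72, size=2667)
last = first + rng.normal(0.22, 0.60, size=2667)

# Paired-samples t-test: the same patients measured at two time points.
t_stat, p_value = stats.ttest_rel(last, first)
print(f"t({len(first) - 1}) = {t_stat:.2f}, p = {p_value:.2g}")
```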

  17. Early Outcomes: Clinical • Depression scores measured by the PHQ-9 • Categorized patients into clinical outcome trajectories based on the reliable change index (RCI) • Restricted sample to those who had a depression-related diagnosis, 2 or more appointments, and at least 2 PHQ-9 scores (n=554) • 44.8% of patients (n=248) with a depression-related diagnosis demonstrated reliable improvement on the PHQ-9 • Anxiety scores measured by the GAD-7 • Categorized patients into clinical outcome trajectories based on the RCI • Restricted sample to those who had an anxiety-related diagnosis, 2 or more appointments, and at least 2 GAD-7 scores (n=566) • 40.3% of patients (n=228) with an anxiety-related diagnosis demonstrated reliable improvement on the GAD-7
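A minimal sketch of the Jacobson-Truax reliable change index used to build these trajectories. The baseline SD and test-retest reliability below are illustrative assumptions, not the values used in the DoD evaluation.

```python
import math

def reliable_change_index(pre: float, post: float,
                          sd_baseline: float, reliability: float) -> float:
    """Jacobson-Truax RCI: raw change divided by the SE of the difference."""
    se_measurement = sd_baseline * math.sqrt(1 - reliability)
    se_difference = math.sqrt(2) * se_measurement
    return (post - pre) / se_difference

# Example: PHQ-9 drops from 15 to 8; SD and reliability are assumed values.
rci = reliable_change_index(pre=15, post=8, sd_baseline=5.0, reliability=0.84)
print(f"RCI = {rci:.2f}")                     # -2.47
print("reliable improvement:", rci <= -1.96)  # lower PHQ-9 = better
```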

  18. Early Outcomes: Clinical • Post-traumatic stress scores measured by the PCL • Categorized patients into clinical outcome trajectories based on the RCI • Restricted sample to those who had a PTSD diagnosis, 2 or more appointments, and at least 2 PCL scores (n=21) • Note the very small sample size! Not a true random sample; difficult to generalize • 61.9% of patients (n=13) with a PTSD diagnosis demonstrated reliable improvement on the PCL

  19. Barriers and Solutions • What goes into the EHR does not always come out • Data entered into a non-mineable field cannot be easily retrieved! • (And of course, data that is never entered can never be retrieved) • Work with IT to improve EHR options for data mining • Train BHCs and PCPs on where and what to document • Periodic (e.g., quarterly) monitoring may not capture change over time when visits are spaced • Sequential monthly (versus weekly) appointments may not be captured in a data pull over one quarter • Consider expanding the time range (e.g., 6 months) • When the mean number of visits is low (between 1 and 2), assessing clinical change becomes a challenge! • Look for alternate ways of assessing impact • Consider ways to obtain follow-up data later in time (e.g., assessment measures administered at future PCP appointments)

  20. Barriers and Solutions (cont'd) • Large data sets will contain inaccuracies (over and over again) • Plan time and an approach for substantial data scrubbing • Incorporate regular updates of new and departed personnel • Large data sets will be unwieldy to use • Identify and purchase statistical software with the capability to manage large sets of data • Large data sets will yield statistical significance (even when change is not clinically meaningful or reliable) • Determine reliable change thresholds for each measure, when possible • Evaluate for clinically meaningful change
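A small simulation illustrating the last point: with a large enough sample, a trivial mean change typically reaches statistical significance, which is why the slide recommends pairing p-values with reliable or clinically meaningful change criteria. The data are simulated, and paired Cohen's d is used here as one common effect-size check, not the DoD's method.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 50_000
pre = rng.normal(10.0, 5.0, size=n)
post = pre + rng.normal(0.1, 5.0, size=n)  # average shift of only 0.1 points

# With n this large, even a 0.1-point shift typically yields p < .05,
# while the within-subject effect size stays near d = 0.02 (negligible).
t_stat, p_value = stats.ttest_rel(post, pre)
diff = post - pre
cohens_d = diff.mean() / diff.std(ddof=1)
print(f"p = {p_value:.3g}, d = {cohens_d:.3f}")
```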

  21. Barriers and Solutions (cont’d) • Not all providers diagnose and code the same way • E.g., Consider the number of different diagnoses that may be used for presentation of depressive symptoms • Clearly define full range of acceptable codes for data pulls • Large data sets include disparate problem areas, diverse patient populations, and varying degrees of severity • This is a strength but also a challenge • Meaningful clinical change for sub-populations can get lost in larger data • Would we expect PHQ-9 scores to change for everyone? • Would we expect BHM-20 psychological health scores to change for those seen for health behavior change? • Run analyses on distinct sub-groups of interest, based on demographics, problem area/diagnosis, severity, etc.
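One way to "clearly define the full range of acceptable codes" is to keep the code set as an explicit, shared artifact rather than leaving it to each analyst's judgment. The sketch below uses common ICD-9/ICD-10 depression codes as examples; they are illustrative, not the DoD's official list.

```python
# Agreed-upon depression-related diagnosis code prefixes (illustrative).
DEPRESSION_CODES = {
    "F32", "F33",   # ICD-10: major depressive disorder, single/recurrent
    "F34.1",        # ICD-10: persistent depressive disorder (dysthymia)
    "296.2", "296.3", "300.4", "311",  # ICD-9 equivalents
}

def is_depression_related(dx_code: str) -> bool:
    """True if a diagnosis code falls in the agreed-upon depression set."""
    return any(dx_code.startswith(prefix) for prefix in DEPRESSION_CODES)

print(is_depression_related("F33.1"))  # True
print(is_depression_related("F41.1"))  # False (generalized anxiety)
```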

  22. Application to Other Settings: Monitoring and Evaluation

  23. Monitoring & Evaluation Concepts • Both monitoring and evaluation (M&E) involve the collection of data • M&E is not research (although many research methods and practices are used) • Does not seek to make generalizable statements • Is not intended to contribute to a scientific literature • Accepts a lower standard of evidence than research • M&E is a management tool that collects and uses information to inform decision-making around… • Innovation implementation • Innovation performance • Innovation impact

  24. Monitoring & Evaluation Definitions • Monitoring • A continuous effort targeting processes and intermediate outcomes • Most useful for measuring implementation factors that influence quality or practice fidelity • What are the critical components of your innovation that, if not done correctly or on time, will threaten the desired impact? These are the things you need to monitor! • Evaluation • A more periodic effort (usually once or twice over the life of a project) • Assesses impact of an innovation and seeks to derive lessons-learned • How will you know that your innovation has been successful and how will you understand what contributed to or limited that success? These are questions that evaluations seek to answer!

  25. Measurement Burden • Information from M&E efforts informs decision-making • But information comes at a cost (time and resources) • Data collection • Data management • Data analysis • Measurement burden • These factors represent both an opportunity cost (i.e., time spent NOT doing patient care) and a threat to data validity • What to measure and when? • Think about what kinds of decisions you need to make and how often you need to make them • Think about what information you need to inform your decisions • Measure only what you need to – don't try to measure everything

  26. Monitoring & Evaluation The Science of monitoring and evaluation is measurement; the Art of M&E is efficiency.

  27. Monitoring & Evaluating Local Innovations • Define the problem • Identify what needs to be changed • Be concrete and specific • Analyze the current state • Who is involved? • What are the current practices? • What structures, policies, practices are currently limiting performance? • Define the desired end-state • Envision what the situation should look like after the problem has been resolved • Again, be concrete and specific • Write a SMART Goal

  28. Monitoring & Evaluating Local Innovations • Define the innovation • Map out the resources required and the steps that need to be taken to logically move you from the current state to your desired end-state • [Slide diagram: a logic model linking resources to steps (1a, 1b, 2), intermediate outcomes, and the end-state]

  29. Monitoring & Evaluating Local Innovations • Identify critical steps and outcomes in your innovation model • These are the things that need to be monitored • [Slide diagram: the same logic model of resources, steps, intermediate outcomes, and end-state]

  30. Monitoring & Evaluating Local Innovations • Define the measurement strategy • For each critical step and intermediate outcome... • Determine if information collection mechanisms already exist • If not, determine how information will be collected • Questionnaires/Surveys • Counts • Interviews • Focus Groups • Determine who will collect the information • Determine how often the information will be collected and analyzed • For the desired end-state… • Revisit your SMART Goal and determine how you will assess whether or not you’ve met your goal • Define success in terms of degree of change in the measure you will use • Measure the current state BEFORE you initiate the innovation

  31. Monitoring & Evaluation: Some Things to Keep in Mind • Complex innovations will have more complex M&E systems • Favor sufficiency over comprehensiveness in choosing measurement strategies • A 500-item survey may give you the best information • Asking your staff a couple of questions during a meeting or conducting an After Action Review (AAR) may give you sufficient information • Ensure sufficiency and relevance of your measurement strategies • Simple methods are better, BUT… • This is only true if the simple method is sufficient to meet your information needs • Be intentional in your data collection, especially when using qualitative approaches (e.g., AARs or informal staff interviews) • Follow your information collection plan (follow the schedule, ask all the questions that you have mapped out)

  32. Monitoring & Evaluation: Additional Things to Keep in Mind For the most part, information collection in the service of quality assurance and program evaluation is exempt from research and survey regulations. However, large-scale or sensitive information collection strategies could trigger requirements for research or survey determinations. Consult with your local IRB whenever in doubt.

  33. Broad Categories to Monitor/Evaluate • Patient outcomes • Clinic-level or system outcomes • Team outcomes • Cost outcomes • Clinical variables • Process variables

  34. Potential Areas to Monitor/Evaluate

  35. Patient Outcomes: Clinical • Behavioral health symptom improvement • Scores on symptom-specific behavioral health measures • Patient Health Questionnaire-9 (PHQ-9 for depression) • Generalized Anxiety Disorder-7 (GAD-7) • PTSD Checklist (PCL) • Insomnia Severity Index (ISI) • Scores on broad-based behavioral health measures • DUKE Health Profile • Behavioral Health Measure-20 (BHM-20) • Functioning or quality of life improvement • DUKE Health Profile • BHM-20

  36. Patient Outcomes: Clinical • Health behavior improvement • Successful tobacco cessation • Decrease in at-risk alcohol use • Increase in adherence to medication as prescribed • Medical outcome improvement • BMI decrease • A1C decrease • Improved lipids • Decreased chronic pain intensity and interference (PEG-3)

  37. Patient Outcomes: Process • Patient satisfaction • % satisfied with quality of care • % satisfied with convenience of care • % who believe that BHC helped them with their concern • % of patients who would recommend seeing a BHC to others • Patient engagement • % of patients who felt involved in treatment decision-making • % of patients who came back for a second BHC appointment (when recommended) • % of patients who complete expected number of appointments as part of a clinical pathway (e.g., 4 appointments for tobacco cessation)

  38. Practice-Level Outcomes: Process • Access to care • Patient wait times • Same day access • Proportion of eligible patients referred • Proportion of referred patients who kept BHC appointment • Utilization rates of BHC • # unique patients seen • # patient encounters • # same day patient encounters • Average # visits per patient • Utilization rates of BHC for particular conditions or issues • Average appointments per workday

  39. BHC Practice Fidelity • Appointment length • % of BHC appointments that were < 30 minutes • Feedback to PCP • % of appointments with documentation that feedback was provided to the PCP • Use of outcome measures • % of appointments in which the required outcome measure was documented • Population-based care • % of patients seen for more than 4 appointments (per episode of care)
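A minimal sketch of how these fidelity percentages could be computed from an appointment-level table, assuming a pandas DataFrame with hypothetical column names. The thresholds follow the slide (30-minute visits, 4 appointments per episode).

```python
import pandas as pd

def fidelity_metrics(appts: pd.DataFrame) -> dict:
    """Fidelity percentages from one row per BHC appointment.

    Hypothetical columns: 'patient_id', 'duration_min', and boolean
    'pcp_feedback_documented' / 'outcome_measure_documented'.
    """
    visits_per_patient = appts.groupby("patient_id").size()
    return {
        "pct_under_30_min": (appts["duration_min"] < 30).mean() * 100,
        "pct_pcp_feedback": appts["pcp_feedback_documented"].mean() * 100,
        "pct_outcome_measure": appts["outcome_measure_documented"].mean() * 100,
        "pct_over_4_visits": (visits_per_patient > 4).mean() * 100,
    }
```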

  40. Cost-Related Outcomes • Reduction in referrals to specialty care • Reduction in use of emergency care services • Reduction in inpatient hospitalization rates

  41. Team Factors: Barriers to Referral • PCP perceptions of barriers to referral • Not sure how to refer • Don’t want to interrupt BHC • No time to talk to patient about BHC referral • Patient unlikely to benefit • Forget to refer • Patient refuses

  42. Team Factors: Communication • Team communication • (Existence of) daily team huddles • (Existence of) regular provider meetings including BHCs and PCPs • Ease of communicating between team members • Staff satisfaction with communication • Frequency (and helpfulness) of “curbside consults” • Frequency of face-to-face PCP feedback after BHC appointment • Use of shared care plans in EHR

  43. Team Factors: BHC Satisfaction • BHC satisfaction with core aspects of their PCBH work • Seeing many patients • Rapid pace; same-day availability • Providing services for a wide range of referral problems • Giving PCPs feedback about patients • Providing group services • Working with PCPs to develop new programs • Belief that BHC services are helpful to their patients

  44. Team Factors: PCP Satisfaction/Attitudes • PCP satisfaction with BHC services • Satisfaction with PCP's and patients' access to BHC • Perception of how helpful the BHC is to the PCP and patients • Rating of quality of BHC services • Assessment of the extent to which the BHC meets both the PCP's and patients' needs • Overall satisfaction with BHC services • PCP attitudes towards the BHC • Belief that the BHC is value added • Belief that the BHC is competent • Belief that a BHC knows how to deal with various conditions • Belief that the BHC is capable of learning about and treating a patient with a condition unfamiliar to the BHC • Belief that the PCBH model can and will work

  45. Discussion and Worksheet

  46. Q & A/Summary • What are two process metrics useful to assess as part of a primary care behavioral health (PCBH) program evaluation? • What are two outcome metrics useful to assess as part of a PCBH program evaluation? • What is one barrier to PCBH program evaluation in large medical systems? What is a potential strategy for addressing the barrier?

  47. Session Evaluation Please complete and return the evaluation form to the classroom monitor before leaving this session. Thank you!
