
Area 4 SHARP Face-to-Face Conference Phenotyping Team – Centerphase Project


Presentation Transcript


  1. Area 4 SHARP Face-to-Face Conference Phenotyping Team – Centerphase Project Assessing the Value of Phenotyping Algorithms June 30, 2011

  2. Topics
  • Centerphase Background
  • Project Overview
  • Hypothesis
  • Research Design
  • Results to Date
  • Next Steps

  3. Centerphase: Background
  • CENTERPHASE SOLUTIONS, INC. is a technology-driven services company formed through a collaboration with Mayo Clinic in 2010
  • The goal is to leverage electronic medical records (EMRs) and clinical expertise from academic medical centers and other research sites to address a broad array of healthcare opportunities
  • The initial focus is to support enhanced design, planning and execution of clinical trials
  • Future areas include comparative effectiveness, pharmacoeconomics, compliance and epidemiological studies
  • Centerphase’s role on the Phenotyping Team is to evaluate the effectiveness (cost and time) of using phenotyping algorithms for identifying patient cohorts

  4. It’s About Speed AND Accuracy…

  5. Hypothesis The development of phenotyping algorithms and tools can reduce the time and cost associated with identifying patient cohorts for multiple secondary uses, including clinical trials and care management, while maintaining or enhancing quality.

  6. Approach
  • Choose a use case that can provide valuable insights into a real-world application
  • Develop a phenotyping methodology (“flowchart”) to identify the patient cohort
  • Generate a random sample of patients from the Mayo EMR system based on ICD-9 code
  • Conduct the algorithm-driven and manual processes in parallel on the sample
  • Compare the time, cost and accuracy of results from the algorithm-driven process to the manual process (a sketch of the result-set comparison follows this list)
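To make the accuracy comparison concrete, here is a minimal sketch (in Python, which the deck does not specify) that treats the manual result set as the reference standard and reports false negatives, false positives and sensitivity for the algorithm-driven result set. The patient IDs and the compare_result_sets helper are hypothetical, not part of the project.

```python
# Minimal sketch of the accuracy comparison, assuming each process yields a set
# of patient IDs flagged "RED" (high risk). All names and example values are
# hypothetical; the manual result set is treated as the reference standard.

def compare_result_sets(manual_red: set, algorithm_red: set) -> dict:
    """Compare algorithm-driven results against the manual reference set."""
    true_positives = algorithm_red & manual_red    # flagged by both processes
    false_negatives = manual_red - algorithm_red   # missed by the algorithm
    false_positives = algorithm_red - manual_red   # flagged only by the algorithm
    sensitivity = len(true_positives) / len(manual_red) if manual_red else float("nan")
    return {
        "true_positives": len(true_positives),
        "false_negatives": len(false_negatives),
        "false_positives": len(false_positives),
        "sensitivity": sensitivity,
    }

# Example mirroring the initial 50-chart results reported later in the deck:
# 10 patients flagged manually, 8 of them also flagged by the algorithm.
manual = {f"pt{i:02d}" for i in range(1, 11)}     # hypothetical patient IDs
algorithm = {f"pt{i:02d}" for i in range(1, 9)}   # hypothetical patient IDs
print(compare_result_sets(manual, algorithm))     # sensitivity = 0.8, 2 false negatives
```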

  7. Initial Use Cases
  Diabetes is a growing epidemic in this country: 25.8 million people (8.3% of the population) have diabetes, and last year there were 1.9 million new cases in the population aged 20 years or older (CDC).
  [Chart: Diagnosed and Undiagnosed Diabetes. Source: 2005–2008 National Health and Nutrition Examination Survey]
  Type II Diabetes Mellitus (T2DM) accounts for 90–95% of all adult cases of diabetes.
  Multi-stage phenotype representing a combined adaptation of:
  • The eMERGE Northwestern T2DM algorithm for clinical trial selection and
  • The Group Practice Reporting Option (GPRO) measures as defined under NCQA for population management under the Southeast Minnesota Beacon project

  8. Use Cases
  Case 1: Care Management – Identify all high-risk patients in a pool of 500 cases
  Case 2: Clinical Trial – Identify patients that are good candidates for a study

  9. Phenotype Methodology
  Starting population: patients with a T2DM ICD-9 code
  eMERGE algorithm for T2DM:
  • Screen 1: Age
  • Screen 2: Medications
  • Patients passing Screens 1–2 are identified as T2DM patients
  Beacon criteria for categorizing patient risk (Screen 3: Labs & Vitals):
  • Blood glucose: HbA1c ≥ 9
  • Cholesterol: LDL > 130
  • Blood pressure: Systolic > 160 & Diastolic > 100
  • If ANY of the most recent values exceeds allowable levels OR ANY of these elements has not been captured in the measurement period, the patient is classified as high risk, or “RED” (a sketch of this rule follows)
  Output: patient cohort of high-risk (“RED”) patients
  Note: All screens are based on a two-year measurement period, 1/1/09 – 12/31/10
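The Screen 3 (Beacon) classification described above can be expressed as a short rule. The sketch below is a hypothetical Python rendering, not the project's actual implementation; the PatientLabs structure and its field names are assumptions, and a missing value is treated as "not captured in the measurement period."

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical container for a patient's most recent values within the
# two-year measurement period (1/1/09 - 12/31/10); None means "not captured".
@dataclass
class PatientLabs:
    hba1c: Optional[float]      # most recent HbA1c (%)
    ldl: Optional[float]        # most recent LDL cholesterol (mg/dL)
    systolic: Optional[float]   # most recent systolic blood pressure (mmHg)
    diastolic: Optional[float]  # most recent diastolic blood pressure (mmHg)

def is_red(labs: PatientLabs) -> bool:
    """Screen 3 per the slide: RED if ANY value exceeds its allowable level
    OR ANY element was not captured during the measurement period."""
    if None in (labs.hba1c, labs.ldl, labs.systolic, labs.diastolic):
        return True  # missing measurement -> high risk by definition
    if labs.hba1c >= 9:
        return True  # blood glucose out of range
    if labs.ldl > 130:
        return True  # cholesterol out of range
    if labs.systolic > 160 and labs.diastolic > 100:
        return True  # blood pressure out of range (slide uses "&", read as AND)
    return False

# Example: a missing LDL alone is enough to classify the patient as RED.
print(is_red(PatientLabs(hba1c=7.2, ldl=None, systolic=128, diastolic=82)))  # True
```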

  10. Research Design
  Randomly generate ONE sample set of patient records from the database, based on T2DM ICD-9 codes from at least 2 visits during the measurement period (a sketch of the sampling query follows).
  The sample patient records then feed two parallel processes, each applying Screens 1–3 to produce a patient result set:
  • Manual process: a study coordinator (SC) conducts a manual review of patient charts, and monitors activity time
  • Algorithm-driven process: a programmer develops and runs an algorithm to query the records, and monitors development and run time
  Finally, compare the time, cost and accuracy of the results.
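A minimal sketch of the sampling step, assuming a hypothetical visit-level extract with one row per patient visit and diagnosis code. The table layout, column names, and the 250.x0/250.x2 ICD-9 pattern used to stand in for T2DM codes are all illustrative assumptions, not taken from the project.

```python
import pandas as pd

# Hypothetical visit-level extract: one row per (patient_id, visit_date, icd9_code),
# with visit_date as a datetime or ISO-formatted string column.
T2DM_PATTERN = r"^250\.\d[02]$"   # illustrative stand-in for T2DM ICD-9 codes
MEASUREMENT_START, MEASUREMENT_END = "2009-01-01", "2010-12-31"

def sample_t2dm_patients(visits: pd.DataFrame, n: int = 500, seed: int = 0) -> pd.Series:
    """Randomly sample patients with T2DM ICD-9 codes on >= 2 visits
    during the measurement period."""
    in_period = visits["visit_date"].between(MEASUREMENT_START, MEASUREMENT_END)
    is_t2dm = visits["icd9_code"].str.match(T2DM_PATTERN)
    qualifying = visits[in_period & is_t2dm]
    visit_counts = qualifying.groupby("patient_id")["visit_date"].nunique()
    eligible = visit_counts[visit_counts >= 2].index.to_series()
    return eligible.sample(n=min(n, len(eligible)), random_state=seed)
```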

  11. Validation and Evaluation Process
  Step 1: “Dry Run” on 20 Charts (completed)
  • Review each chart for the manual and algorithm processes to identify any screening errors
  • Confirm approaches are consistent
  • Refine procedures as appropriate
  Step 2: 500 Charts (underway)
  • Start with 50; review results and adjust if necessary
  • Complete manual reviews
  • Collect time, cost and patient result sets
  • Conduct data queries
  • Analyze results and evaluate / compare performance of methods

  12. And How Did We Do…

  13. Initial Results
  • 50 charts reviewed
  • Manual process*
    • Identified 10 “Red” (high-risk) patients
    • Required 11.5 total hours**
  • Algorithm-driven process*
    • Identified 8 “Red” patients
    • All 8 were identified in the manual process
    • Missed 2 patients (false negatives)
    • Required 7.4 hours**
  • For the purposes of this presentation, the following analysis extrapolates these results to evaluate the impact on 500 patients (one way such a projection could be computed is sketched below). Actual findings will be reported upon completion of the 500 charts.
  * Currently evaluating accuracy of both manual and algorithm-driven processes
  ** Includes time for manual validation of all Red charts
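The deck does not show how the 50-chart figures were extrapolated to 500 charts. The sketch below illustrates one plausible projection, under the assumption that manual review time scales linearly with chart count while the algorithm-driven process has a one-time development component plus a smaller per-chart validation component. Only the 11.5 and 7.4 hour totals come from the slide above; the fixed/per-chart split is an assumed placeholder, so the printed savings figure is illustrative only and need not match the percentages reported on the following slides.

```python
# Sketch of projecting 50-chart timing results to 500 charts, assuming manual
# review time scales linearly with chart count while the algorithm-driven
# process has a fixed development component plus per-chart validation effort.
# Only the 11.5 and 7.4 hour totals come from the slide; the 5.0-hour fixed
# component is an assumed placeholder.

def project_hours(total_hours_50: float, fixed_hours: float, n_charts: int) -> float:
    """Project total hours to n_charts, treating fixed_hours as one-time work
    and the remainder of the 50-chart total as per-chart effort."""
    per_chart = (total_hours_50 - fixed_hours) / 50
    return fixed_hours + per_chart * n_charts

manual_500 = project_hours(11.5, fixed_hours=0.0, n_charts=500)   # all effort is per chart
algorithm_500 = project_hours(7.4, fixed_hours=5.0, n_charts=500) # 5.0 fixed hours is assumed
savings = 1 - algorithm_500 / manual_500                          # illustrative only
print(f"manual: {manual_500:.0f} h, algorithm: {algorithm_500:.0f} h, "
      f"time savings: {savings:.0%}")
```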

  14. Preliminary Analysis: Case 1 – Care Management (Extrapolated to 500 Charts based upon the Initial 50 Charts)
  [Chart: extrapolated hours and costs for the manual vs. algorithm-driven processes – Algorithm 90% Faster, Algorithm 80% Less Costly]
  Note: Costs and hours reflect time for secondary manual validation of all Red charts identified through both processes

  15. Preliminary Analysis: Case 2 – Clinical Trials (Extrapolated to 500 Charts based upon the Initial 50 Charts)
  Note: Costs and hours reflect time for secondary manual validation of all Red charts identified through both processes
  Preliminary comparison of the algorithm-driven to the manual process:
  • 80% fewer charts to review
  • Over 30 hours saved
  • Almost 50% cost savings
  [Chart: Manual Method vs. Algorithm Method]

  16. Preliminary Conclusions and Next Steps
  Initial takeaways (if the extrapolated results are validated):
  • Applying algorithms to identify subsets of patients can save time and be cost effective
  • Algorithms can be most effective when searching for larger numbers of patients
  • More work is needed to evaluate relative accuracy
  [Diagram repeated from Slide 10: sample patient records feeding the manual and algorithm-driven processes, each applying Screens 1–3 to produce a patient result set]
  Next steps:
  • Complete review of the 500 charts
  • Document results in a white paper or manuscript
