
A Consensus on Measuring Quality in Emergency Medicine?



Presentation Transcript


  1. A Consensus on Measuring Quality in Emergency Medicine? Peter Cameron, President, International Federation of Emergency Medicine; Director, Centre for Research Excellence in Patient Safety, Monash University

  2. Why do we need to measure? • Without measurement, we don’t know whether there is a problem • whether an intervention is making a difference • whether we are better or worse than our peers • or where to focus effort for improvement • HAVE ALL THE EFFORTS TO IMPROVE QUALITY MADE ANY DIFFERENCE?

  3. Do we know what a high-performing Emergency Department is? • There is virtually no agreement between the various measures • None appears better than opinion • Have we progressed since the 19th century?

  4. Perverse behaviour is common • Setting targets can cause this

  5. We know what a bad ED looks like…

  6. Despite the “gloom” regarding credible measurement • There are many quality initiatives around the world • Big variations in emphasis: revenue raising vs meeting community demand, time targets, audits… • BUT many common themes

  7. IFEM • The only global umbrella organisation for EM • Given the importance of quality and safety, IFEM’s involvement is essential • Very powerful if we can reach consensus…

  8. Framework for measuring quality and safety? • Structure • i.e. what physical and human resources are available and how the health service is organised • Usually measured by accreditation • Process • How the health service functions • Easiest to measure, but not always related to outcome • Can distract from the real issues – but… • Outcome • This is fundamental • Yet we have little risk-adjusted data on this

  9. Dimensions of Quality • Access • Interminable measures • Safety?? • Effectiveness?? • Efficiency • Hard to measure when outcomes are unknown? • Appropriateness?? • Acceptability • Patient satisfaction surveys??

  10. Data can drive change • Witness the “Breakthrough” collaboratives and PDSA cycles, which rely on data • Other industries show the same: road tolls…

  11. What is worse than no data?? • Bad data… • It is commonly said: just collect SOME data, at least it will get them moving… (in which direction??) • Data must be fit for purpose • Data can be used to screen/monitor populations • If data are to be used for benchmarking and engaging clinicians, they must be credible

  12. Bad data • Disengages clinicians • Breaks reputations • Misinforms policy makers and the community • Distorts activity • Provides perverse incentives • Distorts funding • GOOD DATA CAN ALSO BE USED BADLY…

  13. How do we get good data? • Good data are fit for purpose • You must know why you are collecting the data • i.e. what is the question?

  14. What sources are there for measurement? • Case sampling • Unit audits/chart reviews etc. • Routine databases • Incident monitoring • Mortality reviews – preventable deaths • Emergency Department/admissions data • Registries • Prospective cohorts/trials • Other?? • E.g. video audit (Fitzgerald, Trauma reception project, Arch Surg 2011)

  15. Random chart audits • Can be used for specific projects • Check compliance • Less value for benchmarking • Bias, especially hindsight bias, is very hard to avoid… • Not rigorous data collection • High staff cost… • Good for identifying issues • But you don’t know whether you have improved treatment, or how you compare • The Harvard (Brennan) and Australian (Wilson) studies on medical error • Limited usefulness for comparisons (sampling error alone makes small audits noisy; see the sketch below)
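
One reason random chart audits have limited usefulness for comparisons is simple sampling error: at typical audit sizes, the uncertainty around any measured compliance rate is wide. A minimal sketch, with all numbers invented and using the standard normal-approximation interval:

```python
# Why small chart audits are poor for benchmarking: the uncertainty
# around the audited proportion is wide. All numbers are invented.
import math

def compliance_ci(compliant: int, sampled: int, z: float = 1.96):
    """Normal-approximation 95% CI for a compliance proportion."""
    p = compliant / sampled
    half = z * math.sqrt(p * (1 - p) / sampled)
    return p, max(0.0, p - half), min(1.0, p + half)

# A 50-chart audit finding 40 compliant: point estimate 80%, but the
# interval spans roughly 69% to 91%.
print(compliance_ci(40, 50))
```

At 50 charts the interval spans more than 20 percentage points, which is rarely narrow enough to separate one ED from another.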

  16. Routinely collected data • Convenient • Cheap • But • What if it is misleading?

  17. How has routinely collected data been used? • Usually based on admissions data • HSMR (hospital standardised mortality ratio; see the sketch below) • Largely discredited at a hospital level • Clinical indicators • Little validation and little credibility • Presently being implemented in Australia… • Screening • Limited event screening • e.g. return to theatre • e.g. readmission • Low-mortality DRGs • May cause more work than benefit • Population monitoring • Can be useful for overall trends • But beware
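
For readers unfamiliar with it, the HSMR is observed deaths divided by expected deaths, scaled by 100, where "expected" comes from a casemix risk-adjustment model. A minimal Python sketch; the ratio and scaling are standard, but the patient-level risk figures below are invented and stand in for a real risk model:

```python
# Illustrative sketch only: HSMR = 100 * observed / expected deaths.
# The flat 4% risk per admission is an invented stand-in for a
# proper risk-adjustment model.

def hsmr(deaths_observed: int, risk_probs: list[float]) -> float:
    """Hospital Standardised Mortality Ratio.

    risk_probs: model-predicted probability of death for each
    admission; their sum is the expected deaths for this casemix.
    """
    expected = sum(risk_probs)
    return 100.0 * deaths_observed / expected

# Hypothetical hospital: 1,000 admissions, 32 deaths, casemix risks
# summing to 40 expected deaths -> HSMR 80 (fewer deaths than
# expected); ~100 means "as expected". The slide's point stands:
# at hospital level such figures are easily confounded.
risks = [0.04] * 1000
print(hsmr(32, risks))   # 80.0
```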

  18. Routinely collected data • Can be made more useful with linkage • E.g. lab data • E.g. deaths registry • Triangulation of sources • But data interpretation and privacy remain issues • A sketch of a simple linkage follows
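
As an illustration of what linkage buys, here is a minimal pandas sketch joining ED attendances to a deaths registry. The column names and the deterministic key are hypothetical; real linkage is usually probabilistic and tightly privacy-controlled, as the slide notes.

```python
# Sketch of deterministic record linkage between ED attendances and
# a deaths registry. Column names ("link_key", "attend_date",
# "death_date") are invented for illustration.
import pandas as pd

ed = pd.DataFrame({
    "link_key":    ["a1", "a2", "a3"],
    "attend_date": pd.to_datetime(["2011-01-05", "2011-02-10", "2011-03-01"]),
})
deaths = pd.DataFrame({
    "link_key":   ["a2"],
    "death_date": pd.to_datetime(["2011-02-17"]),
})

linked = ed.merge(deaths, on="link_key", how="left")
# 7-day mortality after attendance, visible only through linkage;
# unmatched rows have NaT and compare as False.
days_to_death = (linked["death_date"] - linked["attend_date"]).dt.days
linked["died_within_7d"] = days_to_death.le(7)
print(linked)
```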

  19. Incident monitoring • Good for identifying issues • Should not be used for quantitative data • Numbers depend on the degree to which events are reported, NOT on absolute frequency… (see the simulation below) • Sentinel events? • Not a way to measure health systems • Rare and, by definition, extreme • Role of RCAs??
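
The reporting-rate point can be made with a toy simulation, all numbers invented: two units with identical true incident rates produce very different reported counts purely because of reporting culture.

```python
# Toy illustration: two units with the SAME true incident count can
# show very different reported counts because of reporting culture.
import random

random.seed(0)
true_incidents = 200                      # identical in both units

def reported(n_incidents: int, reporting_rate: float) -> int:
    # Each incident is reported independently with this probability.
    return sum(random.random() < reporting_rate for _ in range(n_incidents))

unit_a = reported(true_incidents, 0.60)   # strong reporting culture
unit_b = reported(true_incidents, 0.15)   # weak reporting culture
print(unit_a, unit_b)   # roughly 120 vs 30: unit A looks "less safe"
```

Any league table built on such counts ranks reporting culture, not safety.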

  20. Mortality reviews • Preventable deaths • Good for identifying issues • Should not be used quantitatively • As in trauma… • Death audits have been shown to engage clinicians • But may divert attention from common and important issues • Remember: most “bad” medicine does not result in death

  21. Data collected for purpose?

  22. Registries • Australia intends to commence a national portal for clinical quality registries • Registries are important for measuring whether processes and outcomes are improving

  23. Background and Rationale: How a Registry Works • [Diagram: Hospitals 1, 2, 3, 4, etc., using identical collection methods and identical definitions, feed central data collation under a governance process, with quality control and systematic outcome assessment] • A sketch of the collation step follows
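
The diagram’s core requirement, identical definitions and collection methods at every site with quality control at the centre, can be made concrete in code. This is a minimal illustration, not any real registry’s schema; all field names and codes are invented:

```python
# Minimal sketch of a registry's "identical definitions" rule: every
# hospital submits records conforming to ONE shared schema, and the
# central collation rejects anything that deviates. Field names and
# the outcome code set are invented for illustration.
from dataclasses import dataclass

VALID_OUTCOMES = {"alive", "dead"}        # shared definition across sites

@dataclass(frozen=True)
class RegistryRecord:
    hospital_id: str
    case_id: str
    age: int
    outcome: str                          # must use the shared code set

def collate(records: list[RegistryRecord]) -> list[RegistryRecord]:
    """Central data collation with basic quality control."""
    clean = []
    for r in records:
        if r.outcome not in VALID_OUTCOMES or not (0 <= r.age <= 120):
            continue                      # quality control: reject record
        clean.append(r)
    return clean

pooled = collate([
    RegistryRecord("hosp1", "c1", 54, "alive"),
    RegistryRecord("hosp2", "c2", 71, "deceased"),  # wrong code: rejected
])
print(len(pooled))   # 1
```

Keeping the shared definitions in one place is what makes cross-hospital benchmarking meaningful.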

  24. Application of Clinical Registries • Expensive to develop & maintain • Principal rationale is outcome improvement • Best limited to: • high-cost, high-significance procedures • known variation in outcomes or practice • an economic case for improved outcomes • In practice: • clinical procedures • rare and/or acute illness • drugs & devices

  25. Value of Clinical Registries • Ancillary • credentialing • compliance with guidelines • facilitates clinical research, clinical trials • If population based • access to care • monitoring disease incidence or trends in practice • Fundamental • information to improve outcomes especially: • identification & exploration of clinical variation • benchmarking & quality improvement • long-term safety monitoring (drugs and devices)

  26. Benefits from a Registry • Monitoring • Access to care • Appropriateness of pre & post-hospital care • Quality of care • Benchmarking • Improve outcomes by stimulating competition • Identify variation in outcome (& explore ways to improve) • Safety • Determine medium and long-term safety of new procedures and devices • Cost Benefits • Reduction in costs associated with morbidity and mortality • Platform for Research

  27. The Economic Case • Cardiac angioplasty and stenting • Known variation in outcome • Poor results may lead to death and chronic disability (cardiac failure, angina) • A registry provides the capacity to benchmark and improve outcomes • Renal transplantation • Poor outcomes with transplantation lead to increased dialysis • Cost difference: transplantation $10K p.a., dialysis $50K p.a. (see the arithmetic below) • The ANZDATA registry has contributed substantially to improved outcomes of renal transplantation
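
At the quoted figures the arithmetic is straightforward: every patient kept on a functioning transplant rather than dialysis saves about $40K a year. A trivial sketch; the per-patient costs are the slide’s figures, while the number of preserved grafts is invented:

```python
# Cost arithmetic from the slide's quoted figures (illustrative only).
DIALYSIS_PA = 50_000       # $ per patient per year (slide's figure)
TRANSPLANT_PA = 10_000     # $ per patient per year (slide's figure)

def annual_saving(grafts_preserved: int) -> int:
    """Yearly saving if registry-driven improvement keeps this many
    patients on a functioning transplant instead of dialysis."""
    return grafts_preserved * (DIALYSIS_PA - TRANSPLANT_PA)

print(annual_saving(25))   # 1_000_000: 25 preserved grafts ~ $1M/yr
```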

  28. Victorian Trauma system • Appropriate patients to appropriate hospitals • 30% reduction in in-hospital mortality over 5 years

  29. Key Features of Registry Data • Minimal • Epidemiologically sound • Prospective • ‘All or none’, i.e. no cherry-picking • Linkable* • Identifiable* • (* when needed for determination of delayed outcomes) • Registries are a data spine: additional data may be sought from limited samples over limited time for specific additional studies to answer specific questions

  30. Engaging the clinical community • Need to demonstrate the registry’s ability to improve patient care • Publish methodology/findings • Where possible, integrate into clinical practice • Good governance structures • Data should be reversibly anonymised • Incentives • Funded forums to discuss outcomes/research stemming from registries

  31. Other Credible Data • Registries cover only a small part of our practice • Trauma/cardiac/stroke… • Systematically collected audits? • Very important for generic issues such as pain relief, hand washing, etc.

  32. Conclusion • Qualitative data sources are useful for driving change • Mortality reviews • Incident reports • Sentinel events • Certain processes can be measured using routine data sources • Accurate risk-adjusted outcomes, and measurement of improvement over time, require data collected for purpose and interpreted for purpose

  33. Conclusion • A flexible approach is needed, depending on data sources and resourcing • Basic structures must be in place • Standard process measures must be agreed • Improvements in risk-adjusted outcomes should be the goal! • This must be done within a framework and according to basic principles to enable comparisons
