
USING GUIDELINES TO LEVERAGE ACTION AND IMPROVE QUALITY



Presentation Transcript


  1. All KT questions can be answered by using a guideline paradigm. USING GUIDELINES TO LEVERAGE ACTION AND IMPROVE QUALITY. Hurray for guidelines! Practice guidelines: an exemplar of all that is good and effective and promising in KT. Guidelines: the cure for the ills of the health care system.

  2. Objectives. Illustrate the role of guidelines as a KT model and as a tool in quality improvement. Advancements in the science of guidelines are de facto advancements in KT, and, in turn, de facto advancements in improving quality in health care. AGREE II.

  3. Practice guidelines… the original definition. Clinical Practice Guidelines (CPGs): systematically developed statements to assist provider and patient decisions about appropriate health care for specific clinical circumstances. Systematic vs. arbitrary; statements that assist and enable vs. dictates or formulas; a range of stakeholders.

  4. Practice guidelines…contemporary Systematically developed statements, informed by research evidence, values and local/regional circumstances to assist fair decisions and judgments about [health care] at the clinical, management and policy levels. Browman, Brouwers, Fervers, and Sawka, 2009

  5. What can we ask? How can we help health care providers make better care decisions with their patients? How can we help administrators and clinical managers make better decisions regarding how care is organized in their center, hospital or region? How can we help government make better decisions that will ensure the most effective care options and strategies are available to a population?

  6. The knowledge-to-action cycle (diagram). KNOWLEDGE CREATION (centre funnel): Knowledge Inquiry → Knowledge Synthesis → Knowledge Products/Tools, with knowledge tailored at each phase. Action cycle (surrounding ring): Identify Problem; Identify, Review, Select Knowledge; Adapt Knowledge to Local Context; Assess Barriers to Knowledge Use; Select, Tailor, Implement Interventions; Monitor Knowledge Use; Evaluate Outcomes; Sustain Knowledge Use.

  7. The same knowledge-to-action cycle diagram, now with CPGs marked on it as knowledge products/tools.

  8. Think innovatively… Guidelines are not simply things or products; the guideline enterprise is not linear; philosophy, methods, and processes fully integrate key KT principles; creators of knowledge = users of knowledge; development itself is an implementation strategy.

  9. Philosophy. An evidence-based approach incorporating consensus from experts: an integrated philosophy. It provides a formal and explicit methodology; ensures appropriate content experts are at the table; and builds in processes and opportunities to interpret the evidence and judge its applicability in specific clinical situations, systems, and contexts. Social and scientific engagement = KT in ACTION.

  10. Philosophy in action: creating cultures, fostering communities of practice, and enhancing capacity to be receptive to evidence, to understand evidence, and to apply evidence to improve cancer control. Practice guidelines are the points around which that culture can evolve and be sustained.

  11. Philosophy in action. Aim to have users of knowledge be the developers of knowledge: enhance capacity; provide a normative model; contextualize evidence properly; increase acceptance and adoption of recommendations; build accountability.

  12. PEBC: Evidence-based Advice Cycle (diagram, with an Evidence-based Advice Panel at its centre): topic selection (explicit question) → method selection (systematic review, environmental scan, adaptation, consensus) → draft report (evidence; expert interpretation and consensus; draft recommendations) → external review (clinicians; administrators; system and policy) → final report (evidence; expert interpretation and consensus; description of the external review; final recommendations) → dissemination and publication → post-development implementation → monitoring and updating → updated recommendations, feeding back into the cycle.

  13. The knowledge-to-action cycle again (diagram): the KNOWLEDGE CREATION funnel (Knowledge Inquiry → Knowledge Synthesis → Knowledge Products/Tools) surrounded by the action cycle (Identify Problem; Identify, Review, Select Knowledge; Adapt Knowledge to Local Context; Assess Barriers to Knowledge Use; Select, Tailor, Implement Interventions; Monitor Knowledge Use; Evaluate Outcomes; Sustain Knowledge Use).

  14. The PEBC Evidence-based Advice Cycle overlaid with the knowledge-to-action phases: topic and method selection align with Identify Problem and Identify, Review, Select Knowledge; drafting and external review (clinicians, administrators, system and policy) align with Adapt Knowledge to Local Context and Assess Barriers to Knowledge Use; dissemination, publication, and post-development implementation align with Select, Tailor, Implement Interventions; monitoring and updating of recommendations align with Monitor Knowledge Use and Evaluate.

  15. Key take-home messages… so far. Guidelines are a means to an end, not an end in themselves. Think of the larger KT agenda. Use philosophy, process, and social engagement as a strategy to facilitate dissemination and application. Users of knowledge = creators of knowledge.

  16. I know what you are saying to yourself: Melissa, I am convinced, but… What do we actually need to do to create a quality guideline that integrates these principles? How should we communicate this? How can we distinguish between well and poorly developed guidelines?

  17. The AGREE Enterprise

  18. The AGREE Enterprise • AGREE Collaboration: a tool to evaluate clinical practice guidelines • "Quality of clinical guidelines" is the confidence that: 1. the potential biases of guideline development have been addressed adequately, and 2. the recommendations are both internally and externally valid, and are feasible for practice • Also addresses the reporting of information

  19. The AGREE Enterprise • Methods • Definition of quality of reporting • Theoretically driven • Beta I version, tested and refined • Beta II version, tested and refined • Deliverable: • AGREE Version 1.0 + Training Manual

  20. The AGREE Enterprise • 6 domains • 23 items • 4-point response scale • Scope and purpose • Stakeholder involvement • Rigour of development • Clarity of presentation • Applicability • Editorial independence
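AGREE-style appraisal aggregates item ratings into standardized domain scores: sum the appraisers' item ratings within a domain, then scale the total between the minimum and maximum possible scores. A minimal sketch of that calculation for the original 4-point scale (the function name and sample ratings are illustrative, not taken from the presentation):

```python
def scaled_domain_score(ratings, scale_min=1, scale_max=4):
    """Standardize one domain's score to a 0-100 scale.

    `ratings` holds one list per appraiser, with one rating per item
    in the domain (illustrative helper, not an official API).
    """
    n_appraisers = len(ratings)
    n_items = len(ratings[0])
    obtained = sum(sum(r) for r in ratings)           # total of all ratings
    minimum = scale_min * n_items * n_appraisers      # lowest possible total
    maximum = scale_max * n_items * n_appraisers      # highest possible total
    return 100 * (obtained - minimum) / (maximum - minimum)

# Two appraisers rating a 3-item domain on the 4-point scale:
print(round(scaled_domain_score([[3, 4, 2], [4, 4, 3]]), 1))  # → 77.8
```

Under the 7-point scale introduced later in the research program, only `scale_max` would change.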

  23. The AGREE Enterprise • Understanding the perspectives of various key stakeholders • Improve measurement properties • Rating scale • Can the AGREE discriminate between guidelines of varying quality? • Is there a role for a short scale? • What is the impact of using the AGREE on global ratings of CPGs?

  24. AGREE Next Steps Research Consortium 1Melissa Brouwers, 2George Browman, 3Jako Burgers, 4Francoise Cluzeau, 5Dave Davis, 6Gene Feder, 7Béatrice Fervers, 8Ian Graham, 9Jeremy Grimshaw, 1Steven Hanna, 1Michelle Kho, 10Peter Littlejohns, 1Julie Makarski, 11Louise Zitzelsberger 1 McMaster University, Hamilton, ON Canada 2 British Columbia Cancer Agency, Victoria, BC Canada 3 Dutch Institute for Healthcare Improvement, Netherlands 4 St. George’s University of London UK 5 Association of American Medical Colleges, Washington, DC United States 6 Bart’s and the London, Queen Mary’s School of Medicine and Dentistry, London UK 7 Fédération Nationale des Centres de Lutte Contre le Cancer 8 Canadian Institutes of Health Research, Ottawa, ON Canada 9 Ottawa Health Research Institute, Ottawa, ON Canada 10 National Institute for Health and Clinical Excellence, London UK 11 Canadian Partnership Against Cancer

  25. AGREE II: Overview of Research Program
  2003: Original AGREE Instrument (23 items, 6 domains, 4-point response scale).
  STUDY 1. Tools: the Modified AGREE Instrument, or M-AGREE (original AGREE items; new 7-point response scale; original supporting documentation, i.e., the Training Manual and User's Guide); the Global Rating Scale (GRS), a new instrument; outcome measures. Objectives: apply the instruments (performance); assess the instruments' usefulness, feasibility, and areas for improvement; compare the performance of the M-AGREE to the GRS; find evidence to favor the design of tailored abridged versions; explore reliability.
  STUDY 2. Tools: the AGREE II beta version, or B-AGREE II (original AGREE items; 7-point response scale; revised supporting documentation, i.e., a new User's Manual); outcome measures. Objectives: validity of items; usefulness of the User's Manual instructions (appropriate, easy to apply, facilitate distinguishing low- and high-quality guidelines).

  26. Overview of Research Conditions

  27. Recruitment design (diagram). Recruitment goal: 192 participants across three user types (researchers/developers, clinicians, policy makers) and disease sites (cancer, cardiovascular, critical care), allocated across experimental conditions A–J.

  28. Results 1 – Profiling M-AGREE USEFULNESS Items were rated as useful. No differences in the RATINGS as a function of user type. Significant differences in the RANKS as a function of user type. Recommendations to modify all items and domains. Recommendations to delete 10 items.

  29. Results 1 – Profiling M-AGREE • PERFORMANCE • Significant differences in M-AGREE item ratings as a function of user type. • No significant differences in outcome measures as a function of user type. • Predicting outcomes (recommend, use, overall quality) • Domains (except EI) significant predictors • User type not significant predictors • User type X domain interactions not significant predictors

  30. Results 2 – Profiling the GRS USEFULNESS Items were rated as useful. No differences in the RATINGS as a function of user. Significant differences in the RANKS as a function of user type. Recommended modifications to all items.

  31. Results 2 – Profiling the GRS • PERFORMANCE • Only 1 significant difference in GRS ratings as a function of user. • No significant differences in outcome measures as a function of user type. • Predicting outcomes (recommend, use, overall quality) • Items significant predictors • User type not significant predictors • User type X item interactions not significant predictors

  32. Results 3 – GRS vs. M-AGREE GRS + M-AGREE vs. GRS Impact on GRS usefulness scores? GRS item quality ratings? Outcome measures? No differences emerged with any comparison.

  33. Results 3 – GRS vs. M-AGREE Relationship between GRS, M-AGREE and the outcomes measures. Correlation range of M-AGREE Total and Outcome Measures: 0.57 to 0.77 Correlation range of GRS Total and Outcome Measures: 0.74 to 0.99 Correlation between GRS and M-AGREE Scores = 0.74
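The ranges above are Pearson correlations between instrument totals and the outcome measures. A brief sketch of how such a coefficient is computed (the score values below are hypothetical, not the study's data):

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two paired score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))   # co-deviation
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))          # spread of x
    sy = math.sqrt(sum((b - my) ** 2 for b in y))          # spread of y
    return cov / (sx * sy)

# Hypothetical instrument totals and outcome ratings for five guidelines:
totals = [62, 75, 48, 88, 70]
outcomes = [5, 6, 3, 7, 5]
print(round(pearson_r(totals, outcomes), 2))  # → 0.98
```

A value near 1, like the GRS range of 0.74 to 0.99, indicates that higher instrument totals track higher outcome ratings almost linearly.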

  34. Results 4 – M-AGREE Measurement Properties

  35. Interpretation 1 USEFULNESS Items and domains from both instruments were assessed as useful. No evidence to direct the development of tailored abridged versions of the M-AGREE; that agenda was not pursued. GRS qualitative feedback: provide more detail, as the M-AGREE does.

  36. Interpretation 1 PERFORMANCE M-AGREE more sensitive than GRS at differentiating guideline quality as a function of user type. HOWEVER, both effective at predicting outcomes linked to guideline adoption and application. M-AGREE does not influence perceptions or performance of GRS.

  37. Interpretation 2 M-AGREE M-AGREE items and domains useful. M-AGREE domains predict important outcome measures associated with implementation. Internal consistency of domains using the 7-point scale is good. Inter-rater reliability using the 7-point scale is promising. LOTS of feedback provided to improve items and instructions. The M-AGREE continues to have a lead place not only in evaluation but also in development and reporting standards.

  38. Interpretation 2 GRS GRS items useful. GRS items predict important outcome measures. GRS likely has a place for “quick” evaluation.

  39. AGREE II: Overview of Research Program (slide 25 repeated as a bridge into Study 2): 2003 original AGREE Instrument (4-point response scale); Study 1 (M-AGREE with new 7-point scale, plus the new GRS); Study 2 (B-AGREE II beta version, 7-point response scale, revised User's Manual).

  40. Rationale PREVIOUSLY Researchers chose guidelines believed to reflect a range of quality; the AGREE instrument served as measurement tool AND object of study. METHODOLOGICAL CONFOUNDING Differences could be attributed to: differences in guideline topic, intervention, or organization; differences in the criteria used to select candidate guidelines. OVERCOMING THE CONFOUNDS Purposefully design high- and low-quality guideline content.

  41. Overview of Research Conditions

  42. Results 1 MULTIVARIATE ANALYSIS OF VARIANCE Significant main effect for quality: excerpts designed to be of high quality were rated more favorably than excerpts designed to be of low quality. UNIVARIATE ANALYSES Excerpts designed to be of high quality were rated more favorably than excerpts designed to be of low quality; statistically significant differences for 18 of the 21 comparisons. ITEMS NOT MANIPULATED No significant difference between conditions.

  43. Results 2 FEEDBACK SURVEY Instructions are appropriate: mean range = 5.43 to 6.43. Instructions are easy to apply: mean range = 5.21 to 6.27. Application of the instructions will discriminate between good- and poor-quality guidelines: mean range = 5.21 to 6.21. Plus open-ended feedback.

  44. Interpretation B-AGREE As predicted, the AGREE items are able to discriminate between high quality and low quality guideline content. Construct validity established. B-AGREE User’s Manual Well accepted by stakeholders.

  45. From Original AGREE to AGREE II • Data about items. • - usefulness and performance • Data about instrument measurement properties • - reliability with 7-point scale • - construct validity • Data about user’s manual • - Considerable feedback about how to improve, update, and refine items and instructions. • Three rounds of refinement and modification with research team.

  46. AGREE II: Key Differences 1 PURPOSE Use as a tool for guideline development, reporting and evaluation. SCALE 7-point to align with test construction norms and standards. Anchors defined. USER’S MANUAL Complete redesign and restructure – all aspects built in. Description. Where to look. How to rate (criteria and considerations).

  47. AGREE II: Key Differences 2 DOMAINS AND ITEMS 6 domains, using the same labels as the original. 12 items unchanged. 10 items updated (new language, greater clarity, more precision). 1 item deleted and integrated into a different item as an example. 1 item moved to a different domain. 1 item added to "Rigour of Development".

  48. NEW RESEARCH A3: Application, Action, Appropriateness. Stream One: conduct a randomized trial to test different training interventions to facilitate the successful uptake and implementation of AGREE II. Stream Two: AGREE II targets the methodological processes related to guideline development and uptake; it does not focus on the clinical validity or appropriateness of recommendations. Our objective is to develop a complementary tool that targets this goal.

  49. Final take-home messages… AGREE II is the new standard for guideline development, reporting, and evaluation. It incorporates key elements of the knowledge-to-action cycle. It integrates elements relevant to methodological rigor and social engagement.
