

  1. An Evaluation of a Methodology for Specification of Clinical Guidelines at Multiple Representation Levels Student: Erez Shalom Supervisors: Prof. Yuval Shahar, Dr. Meirav Taieb-Maymon

  2. Talk Roadmap: • Background • Methods • Results • Conclusions • Future Directions

  3. Clinical Guidelines • Textual documents describing “state of the art” patient management • A powerful method to standardize and improve the quality of medical care • Usually specify diagnostic and/or therapeutic procedures

  4. Subsections describing patient diagnosis and treatment

  5. The Need for Automation of Clinical Guidelines • Automatic support provides: • Visual specification • Search and retrieval • Application of a GL • Retrospective quality assurance • However, most GLs are text-based and electronically inaccessible at the point of care

  6. The Required Infrastructure • A machine-comprehensible GL representation ontology (e.g., the Asbru ontology) • Runtime GL application and QA tools • A preliminary engine, namely Spock, was already developed in our lab [Young, 2005] • Support for gradual structuring of the GL (from text to executable code)

  7. The Structuring Process From the guideline as a text document to the guideline as a tree of plans. The process involves 2 main types of knowledge: • Procedural knowledge – e.g., Regimen A: administer the two medications (Doxycycline and Cefotetan) in parallel • Declarative knowledge – e.g., 2 g IV
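To make this distinction concrete, here is a minimal sketch in Python (with hypothetical class and field names that are not part of the original slides) of a guideline fragment represented as a tree of plans, where the ordering of sub-plans captures procedural knowledge and a dosage expression captures declarative knowledge:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Plan:
    """One node in the guideline's tree of plans (illustrative structure)."""
    name: str
    ordering: str = "sequential"                                # procedural knowledge: how sub-plans combine
    declarative: Dict[str, str] = field(default_factory=dict)   # declarative knowledge, e.g., dosage
    sub_plans: List["Plan"] = field(default_factory=list)

# Regimen A: administer the two medications in parallel (procedural knowledge);
# "2 g IV" is an example of declarative knowledge attached to one sub-plan.
regimen_a = Plan(
    name="Regimen A",
    ordering="parallel",
    sub_plans=[
        Plan(name="Cefotetan", declarative={"dose": "2 g IV"}),
        Plan(name="Doxycycline"),                               # declarative details omitted here
    ],
)

def print_tree(plan: Plan, indent: int = 0) -> None:
    """Walk the plan tree, printing each plan with its knowledge."""
    print(" " * indent + f"{plan.name} [{plan.ordering}] {plan.declarative}")
    for sub in plan.sub_plans:
        print_tree(sub, indent + 2)

print_tree(regimen_a)
```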

  8. Sample GL Modeling Methods

  9. The Hybrid Representation Model Gradual structuring of the GL, by the expert physician and the knowledge engineer, using increasingly formal representation levels • Implemented as part of the Digital Electronic Guideline Library (DeGeL) • Used within the URUZ GL markup tool
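As a rough illustration only, the hybrid model is usually described as moving through four increasingly formal formats; the sketch below encodes them as an enumeration (the exact level names are an assumption based on the DeGeL literature, not taken from this slide):

```python
from enum import Enum

class RepresentationLevel(Enum):
    """Increasingly formal GL representation levels in the hybrid model
    (level names assumed from the DeGeL literature)."""
    FREE_TEXT = 1        # the original guideline document
    SEMI_STRUCTURED = 2  # text labeled with the ontology's knowledge roles (markup)
    SEMI_FORMAL = 3      # adds control structures and structured expressions
    FORMAL = 4           # machine-executable specification (e.g., Asbru)

# Example: the expert physician typically works up to the semi-formal level,
# while the knowledge engineer completes the formal level.
print(list(RepresentationLevel))
```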

  10. Asbru: the Underlying Guideline-Representation Ontology Includes semantic Knowledge Roles (KRs) organized in KR classes, such as: • Conditions KR class (e.g., the filter condition and the abort condition) • Plan-body KR class for the GL's control structures (e.g., sequential, concurrent, and repeating combinations of actions or sub-guidelines) • The GL's Goals KR class (e.g., process and outcome intentions) • Context KR class of the activities in the GL (e.g., actors, clinical context)
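The sketch below (illustrative Python, not an official Asbru schema) simply groups the knowledge roles named on this slide into their KR classes, which is the kind of lookup a markup tool needs when an EP selects a role:

```python
# Grouping of the Asbru knowledge roles (KRs) named above into KR classes;
# an illustrative subset only, not a complete Asbru schema.
ASBRU_KR_CLASSES = {
    "Conditions": ["filter-condition", "abort-condition"],
    "Plan-body":  ["sequential", "concurrent", "repeating"],   # control structures
    "Goals":      ["process-intention", "outcome-intention"],
    "Context":    ["actors", "clinical-context"],
}

def kr_class_of(knowledge_role: str) -> str:
    """Return the KR class to which a given knowledge role belongs."""
    for kr_class, roles in ASBRU_KR_CLASSES.items():
        if knowledge_role in roles:
            return kr_class
    raise KeyError(f"Unknown knowledge role: {knowledge_role}")

print(kr_class_of("filter-condition"))   # -> Conditions
```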

  11. URUZ (I): Specification of Declarative Knowledge The expert physician selects the “filter condition” knowledge role

  12. URUZ (cont’d): Specification of Procedural Knowledge The expert physician decomposes the GL into a tree of plans

  13. GL Specification: Core Issues • Collaboration between Expert Physicians (EPs) and Knowledge Engineers (KEs) • Incremental Specification • Treatment of Multiple Ontologies • Distributed Collaboration and Sharing • Text-Based Source • Knowledge Conversion

  14. Several unresolved issues: • Definition of the necessary steps for the GL specification process • Optimal use of EPs and KEs in the process • Evaluation is crucial for quantifying markup quality • To achieve high-quality markups there is a need for: • An overall process of guideline specification • A complete evaluation methodology

  15. Talk Roadmap • Background • Methods • Results • Conclusions • Future Directions

  16. The Overall Process of Guideline Specification The activities in the markup process include three main phases: 1) Preparations before the markup activities 2) Actions during the markup activities 3) Activities after the markup

  17. A Methodology for Specification of Clinical Guidelines Creating a consensus is a crucial, mandatory step before markup

  18. The Importance of Using an Ontology-Specific Consensus (OSC) • An OSC is a structured document that schematically describes the clinical directives of the GL • Described using the semantics of the specification ontology • Prevents disagreement and a great deal of variability among the EPs

  19. Methodology for Creation of an OSC • The OSC is created in an iterative fashion by performing the following steps: • First, we create a preliminary structure of the clinical pathway • The KE adds procedural knowledge (control structures) • The KE adds declarative concepts for each defined step • After several iterations of steps 2 and 3, an OSC is formed
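A minimal sketch of this iterative loop, assuming a simple dictionary-based consensus structure and hypothetical step names (none of which appear in the original methodology):

```python
# A simplified sketch of the iterative OSC-construction loop described above.
# The consensus structure and step names are illustrative assumptions.

def build_preliminary_pathway() -> list:
    """Step 1: outline the clinical pathway as a list of named steps."""
    return [{"step": name, "procedural": {}, "declarative": {}}
            for name in ("Diagnosis", "Treatment")]              # placeholder step names

def add_procedural_knowledge(consensus: list) -> None:
    """Step 2: the KE adds control structures to each step."""
    for step in consensus:
        step["procedural"]["control_structure"] = "sequential"   # e.g., sequential / parallel

def add_declarative_concepts(consensus: list) -> None:
    """Step 3: the KE adds declarative concepts (conditions, dosages) to each step."""
    for step in consensus:
        step["declarative"].setdefault("filter-condition", None)

consensus = build_preliminary_pathway()
for _ in range(2):                        # iterate steps 2 and 3 until the OSC stabilizes
    add_procedural_knowledge(consensus)
    add_declarative_concepts(consensus)
print(consensus)
```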

  20. The second stage in forming a consensus

  21. Evaluation Design The design considered several specific criteria: • Amount of expertise • The acquired knowledge domain • The Ontology-Specific Consensuses • The gold-standard markup for each GL • The markups for each GL • The evaluation of the markups

  22. Evaluation Design (cont’d) • Three GLs in different domains were used: • Pelvic Inflammatory Disease (PID) • Chronic Obstructive Pulmonary Disease (COPD) • Hypothyroidism (HypoThyrd)

  23. Evaluation Design (cont’d)

  24. Research Questions • Is markup by EPs feasible? • Is there a difference between EPs editing the same GL, and between the same EPs editing different GLs? • Is there a difference between the KRs across all EPs? • Is there a difference in the number of errors when using different OSCs?

  25. Evaluation of Markups • Subjective measures – questionnaires were administered to assess the EPs’ attitudes regarding the specification process • Objective measures – on two scales (compared to the GS): * Completeness of the markup * Correctness of the markup

  26. The Objective Measures - Completeness

  27. The Objective Measures - Correctness • Clinical Measure (CM) – measures the clinical correctness of the content • Asbru Semantic Measure (ASM) – measures the semantic correctness of the content (Asbru semantics in our case)

  28. Resolution of the Measures A Mean (weighted) Quality Score (MQS) is computed for: • GLs – to find common trends within a GL and across all GLs • EPs – to find trends between the markups of the EPs within the same GL and between GLs • KRs – to find trends in a specific KR type, and common trends across KRs and KR classes, within one markup, one GL, and all GLs
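The slides name the measure but not its formula; the sketch below shows one plausible way a mean (weighted) quality score could be computed over the KRs of a markup. The weighting scheme and the score scale are assumptions (the [-1, 1] correctness scale is taken from the results slide later on):

```python
def mean_quality_score(scores: dict, weights: dict) -> float:
    """Weighted mean of per-KR quality scores (illustrative formula).

    `scores` maps each knowledge role to its quality score (e.g., on a
    [-1, 1] correctness scale, or {0, 1} for completeness); `weights` maps
    each KR to its relative importance.
    """
    total_weight = sum(weights[kr] for kr in scores)
    return sum(scores[kr] * weights[kr] for kr in scores) / total_weight

# Example: correctness scores of three KRs in one markup, equally weighted.
scores  = {"filter-condition": 1.0, "plan-body": 0.5, "intentions": -0.5}
weights = {"filter-condition": 1.0, "plan-body": 1.0, "intentions": 1.0}
print(round(mean_quality_score(scores, weights), 2))   # -> 0.33
```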

  29. The Objective Measures – Types of Errors • General errors are classified into two types, and thus into two corresponding scales: • Clinical errors: • Clinical content not accurate • Clinical semantics not well specified • Clinical content not complete • Asbru semantic errors: • Asbru semantic content not accurate • Asbru semantic content not well specified • The content does not include mappings to standard terms • Necessary knowledge is not defined in the guideline when it should be

  30. The Objective Measures – Types of Errors (cont’d) • Specific errors are defined for each KR type, for example: • Conditions/Intentions KRs: • There are no And/Or operators between the different criteria • Simple Action plan-body type: • Has no text content describing the plan • Has no single atomic action semantics with a clear specification and description of the action to be performed • Plan Activation plan-body type: • Plan name is not defined • Defined plan does not exist in DeGeL
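For evaluation, each scored element can be tagged with an error scale and an error type; the mapping below simply encodes the general error types listed on slide 29 (a simplified, illustrative encoding, not the thesis’ actual evaluation schema):

```python
# The two error scales and their general error types, as listed on slide 29;
# per-KR specific errors (slide 30) would extend this mapping.
ERROR_TYPES = {
    "clinical": [
        "content not accurate",
        "semantics not well specified",
        "content not complete",
    ],
    "asbru": [
        "content not accurate",
        "content not well specified",
        "missing mappings to standard terms",
        "necessary knowledge not defined in the guideline",
    ],
}

def count_errors_per_scale(observed: list) -> dict:
    """Count observed (scale, error_type) pairs per scale."""
    counts = {scale: 0 for scale in ERROR_TYPES}
    for scale, error_type in observed:
        if error_type not in ERROR_TYPES[scale]:
            raise ValueError(f"Unknown {scale} error: {error_type}")
        counts[scale] += 1
    return counts

print(count_errors_per_scale([("clinical", "content not complete"),
                              ("asbru", "missing mappings to standard terms")]))
```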

  31. The Markup-Evaluation Tool

  32. The Markup-Evaluation Tool (Cont’d)

  33. The Markup-Evaluation Tool (Cont’d)

  34. Talk Roadmap • Background • Methods • Results • Conclusions • Future Directions

  35. Results – Subjective Measures • Non-significant correlation between results 1 and 2 • Significant correlation between results 3 and 4

  36. Results – Objective Measures (number of specified plans; measures summary) • Mean completeness of 91% for all EP markups • All EP markups had a significant (P<0.05) proportion of scores of 1, higher than 0.33 (some even higher than 0.75) • Conclusion: markup by EPs is feasible
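The slides do not state which proportion test was used; as an illustration only, the sketch below runs an exact one-sided binomial test of whether the proportion of completeness scores equal to 1 exceeds 0.33, on hypothetical counts:

```python
from math import comb

def binomial_upper_pvalue(successes: int, n: int, p0: float) -> float:
    """Exact one-sided binomial test: P(X >= successes) under H0: p = p0."""
    return sum(comb(n, k) * p0 ** k * (1 - p0) ** (n - k)
               for k in range(successes, n + 1))

# Hypothetical example: 18 of 20 knowledge roles in one markup received a
# completeness score of 1; test whether the true proportion exceeds 0.33.
p_value = binomial_upper_pvalue(successes=18, n=20, p0=0.33)
print(f"P = {p_value:.2e}")   # far below 0.05 in this illustrative example
```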

  37. Results – Difference between EPs For each of the following issues, the difference was assessed as significant (P<0.05) or non-significant (P>0.05): • Difference in the proportions of the completeness measure between EPs editing the same GL • Difference in the correctness measure between EPs editing the same GL, in most GLs (except the Hypothyroidism GL) • Difference in the correlation measure between EPs editing the same GL, in most GLs (except the PID GL) • Difference in the correctness measure between different GLs edited by the same EPs Key findings: • Any EP can perform markup with high completeness • There is wide variability between the EPs in the correctness measure, with a range of [0.13, 0.58] on a scale of [-1, 1]

  38. Results – Difference between KRs There was a significant difference (P<0.05) between homogeneous groups of KRs: EPs have more difficulty structuring procedural KRs than declarative ones

  39. Results – Types of Errors The differences in total errors between the three GLs were highly significant in a proportion test (P<0.001). The more detailed and structured the OSC was, the lower the total number of errors committed by the EPs for each KR

  40. Talk Roadmap • Background • Methods • Results • Conclusions • Future Directions

  41. Four main aspects: • Creation of an Ontology-Specific Consensus (OSC) • The essential aspects that need to be taught to support the specification process by EPs • The medical and computational qualifications needed for specification • The characteristics of the KA tool needed for this kind of specification

  42. Creation of an Ontology-Specific Consensus (OSC) • Should be made as detailed as possible, including all relevant procedural and declarative concepts • The OSC is independent of the specification tool • The OSCs should be saved in an appropriate digital library for re-use and sharing

  43. The Essential Aspects That Need to Be Taught to Support the Specification Process by EPs • Creating an OSC and performing the markups are two different tasks, which require teaching two different aspects • Teach the “difficult” KRs, in particular the procedural KRs • A short test should be administered before the EPs perform markups • A help manual and a small simulation of marking up a GL should be included in the teaching session

  44. The Medical and Computational Qualifications Needed for Specification • Senior EPs and KEs should work together on the tasks of selecting a GL for specification and creating the OSC • Any EP (senior, non-senior, or a general physician) can completely structure the GL's knowledge in a semi-formal representation • To specify it correctly, a more available EP should be selected, perhaps from among residents, interns, or even students

  45. The Characteristics of the KA Tool Needed for This Kind of Specification • A robust, graphical, highly usable framework is needed • More intuitive, graphical, user-friendly interfaces should be used for acquiring the “difficult” KRs, especially the procedural ones • Need to bridge the gap between the initial structuring by the EP and the full semantics of the specification language • GESHER: A Graphical Framework for Specification of Clinical Guidelines at Multiple Representation Levels

  46. Limitations and Advantages of the Research • Small number of EPs and GLs; but, in fact, 196 sub-plans and 326 KRs in total were structured by all of the EPs together in all markups • Lack of careful measurement of the required time; however, the results are more realistic, since the interaction with most of the EPs took place in their own "playground"

  47. Talk Roadmap • Background • Methods • Results • Conclusions • Future Directions

  48. GESHER’s Main Features • User-friendly graphical client application • Supports specification at multiple representation levels • Supports multiple specification languages (GL ontologies) • Accesses centralized resources such as DeGeL and a knowledge base

  49. GESHER: Semi-Structured Level

  50. GESHER (II): Semi-Formal Widgets
