
Document Standards Faced and Lessons Learned



  1. Document Standards Faced and Lessons Learned Calvin Beebe, Tom Oniki, Hongfang Liu, Kyle Marchant

  2. Data Normalization Process Adopt a hybrid agile process: combining both top-down and bottom-up approaches • Identify gaps through normalizing real EMR data • Modify components to close the gaps • Evaluate the CEM results • Iteratively improve

  3. Overview • What the real EMR data looks like • by Calvin Beebe • Modeling - Lessons learned • by Tom Oniki • Process - Lessons learned • by Hongfang Liu • Database - Lessons learned • by Kyle Marchant

  4. Overview • What the real EMR data looks like • by Calvin Beebe • Modeling - Lessons learned • by Tom Oniki • Process - Lessons learned • by Hongfang Liu • Database - Lessons learned • by Kyle Marchant

  5. Source Inputs • HL7 V2.x - Pharmacy Orders • CDA R1 - Clinical Documents • CCD R1 - Continuity of Care Documents

  6. HL7 2.x Encoding Rules • Message formats prescribed in the HL7 encoding rules consist of data fields that are of variable length and separated by a field separator character. • Rules describe how the various data types are encoded within a field and when an individual field may be repeated. • The data fields are combined into logical groupings called segments. • Each segment begins with a three-character literal value that identifies it within the message. • Segments may be defined as required or optional and may be permitted to repeat. • Individual data fields are found in the messages by their position within their associated segments.

  7. HL7 2.x Pharmacy Encoded Order (Sample)
MSH|^~\&||116|||201105181535||RDE^O01|20110518153507018|P|2.2
PID|||87654321^^^^MPIID||PATIENT^OUR^TEST||1955-11-26.00:00:00|M||W|200 First St.^^Anytown^MN^55905|||||||
PV1||E|DSCH^DSCH|2||4321|6789^Doctor^Best^D.|8888^PHYSICIAN^NOT^CHOSEN|^^^~^^^~^^^~^^^|ERS||||1|3|N|6789^Doctor^Best^D.|I|87654321|FRANKLIN BLUE CROSS||||||||||||||||3|||AV|||||201105032316|201105081600|||||||5432^DOCTOR^NEXT^J
ORC|XO|5|||CURRENT||||201105181535|EHRX-987654321|987654321|6789^Doctor^Best^D.^|MS~$V752
RXE|^BID &0800,1730^^201105181730|070008003001^METFORMIN(GLUCOPHAGE), TABLET (1000 MG)|1000||MG|TABLET||||0||||987654321|||||||||||||^HOLD X 48 HOURS AFTER CT 3/18
RXR|ORAL
ZHX||||||||||||||||||O|201105181530
ZRX|||0|||||0
Z-segments are site-specific.
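The encoding rules above can be illustrated on the sample message. This is a minimal sketch, not a full HL7 v2.x parser: it splits segments by line and fields by the `|` separator, and ignores component (`^`), repetition (`~`), and escape (`\`) handling except for the two fields it pulls out.

```python
# Minimal illustration of HL7 v2.x encoding rules: each segment is a line,
# fields are separated by '|', and the first field is the three-character
# segment identifier. Components (^), repeats (~), and escapes (\) are
# deliberately not handled -- this is a sketch, not a parser.

def parse_hl7(message: str) -> dict:
    """Group segments by their three-character identifier."""
    segments = {}
    for line in message.strip().splitlines():
        fields = line.split("|")
        segments.setdefault(fields[0], []).append(fields)
    return segments

# Abbreviated version of the sample pharmacy order above.
sample = "\n".join([
    "MSH|^~\\&||116|||201105181535||RDE^O01|20110518153507018|P|2.2",
    "PID|||87654321^^^^MPIID||PATIENT^OUR^TEST||1955-11-26.00:00:00|M",
    "RXE|^BID &0800,1730^^201105181730|070008003001^METFORMIN(GLUCOPHAGE),"
    " TABLET (1000 MG)|1000||MG",
])

msg = parse_hl7(sample)
# Fields are located by position within their segment (PID-3, RXE-2).
patient_id = msg["PID"][0][3].split("^")[0]  # first component of PID-3
drug = msg["RXE"][0][2].split("^")[1]        # drug text component of RXE-2
print(patient_id, drug)
```

Looking up `msg["PID"][0][3]` shows how fields are found purely by position, which is why optional fields still occupy empty `||` slots in the sample.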

  8. Segments in Sample • Reference: Chapters 2, 3 & 4 of the HL7 Version 2.x Standards

  9. Patient Context in CDA Documents Both CDA R1 & R2 support patient identification within the header of the document.

  10. CDA Narrative Documents • CDA narrative documents support a structured header with narrative content in each section. • The Medications Section, for example, is based on a simple HTML-like syntax.

  11. CDA Narrative Content • Both CDA R1 & CDA R2 support, and in fact require, narrative content. • Both support Section codes, which can be used to identify sections of interest. • CDA R1 documents contain only narrative, while CDA R2 documents may also contain clinical statements based upon RIM classes. • For documents with narrative sections, content normalization requires the use of Natural Language Processing.
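A simplified sketch of locating a section of interest by its Section code and pulling out the narrative that an NLP pipeline would then have to normalize. The fragment below is a stripped-down, namespace-free stand-in: real CDA R2 documents live in the `urn:hl7-org:v3` namespace, and the LOINC section code shown is illustrative.

```python
# Simplified sketch: find a section by its code and extract the narrative
# text. Real CDA R2 requires namespace-aware parsing (urn:hl7-org:v3);
# this stand-in fragment omits namespaces for brevity.
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<ClinicalDocument>
  <component><structuredBody>
    <section>
      <code code="10160-0" displayName="History of medication use"/>
      <text>Metformin 1000 mg tablet, twice daily.</text>
    </section>
  </structuredBody></component>
</ClinicalDocument>
""")

# Section codes identify sections of interest; the <text> element holds
# the human-readable narrative that content normalization must process.
for section in doc.iter("section"):
    code = section.find("code").get("code")
    narrative = section.find("text").text.strip()
    print(code, narrative)
```

In a CDA R1 document this narrative is all there is; in CDA R2 the same section may additionally carry machine-readable `entry` elements alongside the text.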

  12. CCD Document – Medication Entry Template IDs specifying constraints

  13. CCD Document – Medication Entry continued Supply reference with quantity dispensed.

  14. Summary • Various sources were utilized to obtain medication information. • Legacy applications still leverage HL7 2.x messages to convey order information between systems. • CDA narrative and structured documents are coming on-line, conveying snapshots of a patient's current state and medications.

  15. Overview • What the real EMR data looks like • by Calvin Beebe • Modeling - Lessons learned • by Tom Oniki • Process - Lessons learned • by Hongfang Liu • Database - Lessons learned • by Kyle Marchant

  16. Lessons Learned - Modeling • Open tools would be a great contribution to interoperability. Examples: • mapping terminology, e.g., local codes to LOINC/HL7/SNOMED • mapping models, e.g., HL7 messages/CDA documents to CEMs, CEMs to ADL, etc. • generating sample instances • communicating information • browsers • generating documentation

  17. Lessons Learned - Modeling • Documentation is essential – we didn’t produce enough of it. But . . .

  18. Lessons Learned - Modeling • Documentation is essential – we didn’t produce enough of it. But . . . • It’s hard to communicate (verbally or in written word) the intricacies and complexities needed to make an effort like this work (much less keep information up-to-date)

  19. Lessons Learned - Modeling • “One model fits all” won’t work • Clinical Trials (e.g., CDISC CSHARE) vs Secondary Use (e.g., SHARPn) • Proprietary EMR (e.g., GE Qualibria) and Open Secondary Use (e.g., SHARPn) • value set differences

  20. Lessons Learned - Modeling • The root of all modeling questions: Precoordination vs. postcoordination and what to store in the model instance vs. leave in the terminology • Clinical drug vs. drug name/form/strength/ route • LOINC code vs lab test/method • Display names • Drug classes

  21. Overview • What the real EMR data looks like • by Calvin Beebe • Modeling - Lessons learned • by Tom Oniki • Process - Lessons learned • by Hongfang Liu • Database - Lessons learned • by Kyle Marchant

  22. Lessons Learned - Process • The design of the pipeline needs to be flexible enough to accommodate all kinds of changes (agile) • UIMA is a nice architecture • Configurable • Model-driven • e.g., taking an XSD specification of a CEM and translating it into UIMA types • Seamless integration with the NLP pipeline

  23. Lessons Learned - Process • Diverse input formats • Structured - semantics may differ across institutions • Need to understand the data • Unstructured - there is a gap between the semantics of free text and the semantics of standards • Semantics in free text may be at a coarser granularity or paraphrased into different expressions

  24. Lessons Learned - Process • Different requirements for different use cases - the normalization tasks are the same, but the critical fields differ by use case • Medication Rec (ClinicalDrug - critical) • Phenotyping (Gender, Race, AdministrativeDiagnosis, Lab, …)

  25. Lessons Learned - Process • Too many standards to choose from when implementing HL7 standards • Mapping from local codes to standard value sets - non-trivial • Versioning of standards is crucial • Do not assume the mapping will be trivial even if the EMR data has already adopted the same standard as the SHARPn value sets

  26. Lessons Learned - Process • Different granularities between CEMs and original structures • Dose Strength “50-mg” • NotedDrug CEM: Unit=MG Value=50 • Inference • TakenDoseUpperLimit needs to be inferred from TakenDoseLowerLimit
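The granularity mismatch above can be sketched in code: the source carries a single dose-strength string like "50-mg", while the NotedDrug CEM wants a separate Value and Unit, and a single stated dose implies equal lower and upper limits. The field names mirror the slide; the parsing rule itself is an assumption for illustration, not the project's actual normalizer.

```python
# Sketch of the CEM granularity mapping described above. The splitting
# rule (number, optional space/hyphen, unit letters) is an illustrative
# assumption; field names (Value, Unit, TakenDose*) follow the slides.
import re

def normalize_dose(dose: str) -> dict:
    """Split a dose-strength string like '50-mg' into CEM-style fields."""
    m = re.fullmatch(r"(\d+(?:\.\d+)?)[\s-]*([A-Za-z]+)", dose.strip())
    if not m:
        raise ValueError(f"unrecognized dose strength: {dose!r}")
    cem = {"Value": float(m.group(1)), "Unit": m.group(2).upper()}
    # Inference step: when only one dose is stated, the upper limit is
    # taken to equal the lower limit.
    cem["TakenDoseLowerLimit"] = cem["Value"]
    cem["TakenDoseUpperLimit"] = cem["TakenDoseLowerLimit"]
    return cem

print(normalize_dose("50-mg"))   # Value=50.0, Unit=MG
```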

  27. Overview • What the real EMR data looks like • by Calvin Beebe • Modeling - Lessons learned • by Tom Oniki • Process - Lessons learned • by Hongfang Liu • Database - Lessons learned • by Kyle Marchant

  28. Database – CEM to DB mapping • Relational structure for the Demographics data worked well • This provided a nice view into the patient's data without requiring much knowledge of the Patient CEM structure. • Allowed for both adds and updates • Was not intended to be a full MPI, but does meet the minimal need of linking a given patient's records for a given institution.

  29. Database – CEM to DB mapping • XML sample data for the Clinical CEMs • XML samples proved very valuable for validation against the XSDs and for providing an initial set of test messages to channels. • The more complete these records were, the more useful they proved. • Assumed all mappings to code sets were done prior to arrival on the CEM-to-DB channel.

  30. Database – CEM to DB mapping • Code re-use across the Clinical CEM channels proved very useful • Standardized CEM DB structure - IndexData, SourceData, and PatientData • Common patient "matching" code used • Currently only supports Add, but Updates are now possible thanks to the recently added SourceSystemID

  31. Database – CEM to DB mapping • Issues and Challenges • Date formatting was one example of needing to understand how the data was being received and used. • Field-level storage vs. storing the full XML - Tradeoff - Decided to always store the full XML - May need additional relational fields on clinical CEMs for better search support

  32. Database – CEM to DB mapping • A few miscellaneous items • Mirth support for XML, HL7, etc. proved very useful for traversal of structures in code and for field validations. • Supporting Add and Update for the Patient (base) record was useful. • Always create the Patient base record first, regardless of whether a Patient or Clinical CEM is the first CEM received by the DB.
