
Multi-site Validation of an Emergency Ultrasound Image Rating Scale—A Pilot Study

Samuel H. F. Lam, MD, RDMS [1]; Mila Felder, MD; Christopher Kerwin, MD [1]; P. John Konicki, DO, RDMS [1]; Martha Villalba, MD, RDMS [1]; Michael Lambert, MD, RDMS [1]; Rebecca Roberts, MD [2]

[1] Department of Emergency Medicine, Advocate Christ Medical Center; [2] Department of Emergency Medicine, Cook County (Stroger) Medical Center



Abstract:

Background: Bedside ultrasound (BUS) is commonly performed to facilitate emergency department patient care. Currently, BUS competency is largely defined by the number of exams completed. We hypothesize that quantifying BUS image attributes is a more accurate indicator of BUS competency. To date, there has been no widely accepted standard for measuring BUS image attributes.

Objectives: To introduce and report preliminary testing of a 3-component, 8-point BUS image rating scale (URS).

Methods: Gallbladder BUS was selected as the test case. Twenty de-identified BUS image sets were forwarded electronically to 16 reviewers at 6 U.S. training sites. Each rated the BUS sets using the pilot URS. The URS rated "Landmarks (L)" from 1-5, "Image Quality (Q)" from 1-3, and "Annotation (A)" from 1-2, for a "Total (T)" score range of 3-10. Raters also decided whether each BUS set would be "Clinically Useful" (yes or no) (U).

Results: Among the 13 faculty raters, experience averaged 7.8 years and 60 images reviewed per week. Among all 16 raters, the mean scores were 2.93 (L), 2.12 (Q), 1.62 (A), and 6.68 (T), respectively. Kendall's correlation coefficients were 0.55 (L), 0.57 (Q), 0.26 (A), 0.63 (T), and 0.45 (U). All URS elements correlated significantly with Clinical Usefulness (p < 0.001). The Spearman correlation coefficients between Clinical Usefulness and the scoring elements were 0.62 (L), 0.50 (Q), 0.40 (A), and 0.66 (T). The correlation coefficient between each reviewer and the entire group ranged from 0.31 to 0.69.

Conclusion: These results suggest that development of a valid URS is feasible. The higher correlations for Landmarks and Total Score may be an artifact of the wider scale ranges or the more explicit training for Landmarks. Next steps: widen the scale ranges to remove the difficulty of having only 2-3 choices, expand URS training, and add organ systems.

Introduction:

Bedside ultrasound (BUS) is commonly utilized by emergency physicians to expedite patient care. It is also an essential component of the US emergency medicine residency training core curriculum [1-3, 6]. Currently, BUS competency is largely defined by the number of images acquired and submitted for review by a trainee or practicing physician [4-5, 7]. This standard has also been adopted by many emergency departments throughout the US for credentialing and granting of BUS privileges. While this "fixed number" approach to BUS competency was forged by expert consensus and is easy to measure and implement, it has never been validated on a large scale. Defining competency by the number of expert-reviewed images alone also seems rigid from an educational point of view, because it does not account for individual variation in skill acquisition or for the clinical utility of the acquired images. We hypothesize that measuring BUS image attributes is a more accurate indicator of BUS competency. In our opinion, a technically adequate BUS study should entail adequate anatomical landmarks, good image quality, and adequate views with image annotations, in this order of importance. It should also help facilitate decision-making in the clinical setting. Based on these assumptions, we developed a 3-component, 8-point BUS image rating scale (URS). The current study reports preliminary testing of this URS at training sites across the U.S.

Methods:

• This was a prospective, multi-site educational study evaluating the agreement among expert image reviewers applying a URS developed by 2 of the investigators (SL, ML). The study was approved by the Advocate Health Care Institutional Review Board.
• Our URS consisted of 3 components: anatomical landmarks, image quality, and image annotations, with numerical values assigned to each as a measurement tool (Fig. 1).
• Gallbladder imaging was selected as the test case. Twenty BUS patient portfolios (consisting of both still images and clips) were selected from the Advocate Christ Medical Center Emergency Department BUS archive. Anonymized, numbered images, along with detailed rating instructions, were then forwarded electronically to individual image reviewers throughout the US.
• Included were 3 institutions on the East Coast, 2 in the Midwest, and 1 on the West Coast, all with existing emergency medicine residencies and emergency ultrasound fellowships.
• All image reviewers were expert emergency ultrasonographers qualified to review BUS images at their respective emergency departments/institutions.
• The reviewers were asked to rate the ultrasound images according to our devised URS, and to indicate whether the submitted images would facilitate decision-making in the clinical setting.
• All reviewers' responses were blinded to investigators, and data were analyzed anonymously.

Fig. 1. Pilot Ultrasound Rating Scale (URS)

Landmarks
1 - Unclear anatomical location. Landmarks for application absent.
2 - Anatomical location identifiable. Landmarks for application inadequate to identify potential pathology.
3 - Anatomical location adequate. Landmarks for application adequate. Additional views/details desirable but unlikely to compromise diagnostic accuracy.
4 - Anatomical location evident. Landmarks for application evident. Additional views/details unnecessary.
5 - Anatomical location clearly evident. Landmarks for application clearly evident. Clear views/details.

Image Quality
1 - Poor overall image gain, contrast, resolution, and depth.
2 - Adequate overall image gain, contrast, resolution, and depth.
3 - Optimal overall image gain, contrast, resolution, and depth.

Annotations
1 - Insufficient: image itself cannot clarify application, or text annotation lacking.
2 - Sufficient: image itself clarifies application, or text annotations supportive.

Total possible score: 5 + 3 + 2 = 10 (range 3-10)
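As a concrete illustration of how the scale tallies, the rubric can be encoded as a small scoring function. This sketch is not part of the study; the field names and validation logic are hypothetical, and only the component ranges and the Total = Landmarks + Image Quality + Annotations arithmetic come from Fig. 1.

    # Hypothetical encoding of the pilot URS; field names are illustrative.
    URS_RANGES = {"landmarks": (1, 5), "image_quality": (1, 3), "annotations": (1, 2)}

    def total_score(rating):
        """Sum the three URS components after checking each against its range."""
        for component, (lo, hi) in URS_RANGES.items():
            if not lo <= rating[component] <= hi:
                raise ValueError(f"{component} must be between {lo} and {hi}")
        return sum(rating[c] for c in URS_RANGES)

    # Example: a mid-range gallbladder image set scores 3 + 2 + 2 = 7.
    print(total_score({"landmarks": 3, "image_quality": 2, "annotations": 2}))

A worst-case set scores 1 + 1 + 1 = 3 and a perfect set 5 + 3 + 2 = 10, matching the 3-10 Total range above.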
Results:

• Sixteen reviewers (13 attending physicians, 3 fellows) took part in the study.
• Among attending physician reviewers, experience ranged from 2 to 15 years and from 5 to 300 scans reviewed per week, averaging 7.8 years and 60 images per week.
• Among all 16 raters, the mean scores for Landmarks, Image Quality, and Annotation were 2.93, 2.12, and 1.62, respectively. The mean total score across all 20 images for the 16 reviewers was 6.68; individual averages ranged from 4.38 to 8.38.
• All URS elements correlated significantly with Clinical Usefulness (p < 0.001).
• The Spearman correlation coefficient between each reviewer and the entire group ranged from 0.31 to 0.66 in the fellow group and from 0.48 to 0.69 in the attending group (Fig. 2). The majority of attendings fell within the 0.6-0.7 range, irrespective of BUS image review experience.

Table 1. Kendall's coefficients of correlation

                        All raters (n=16)   Attendings (n=13)   Fellows (n=3)
Landmarks               0.55                0.64                0.60
Image Quality           0.57                0.56                0.73
Annotations             0.26                0.25                0.44
Total Score             0.63                0.69                0.67
Clinical Usefulness     0.45                0.48                0.62

Table 2. Spearman's correlation coefficients between Clinical Usefulness and the scoring elements

Landmarks               0.62
Image Quality           0.50
Annotation              0.40
Total Score             0.66

Fig. 2. Mean Spearman correlation with other reviewers by experience (y-axis: Spearman correlation coefficient with other reviewers; x-axis: experience in years)
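The poster does not state which statistical software produced Tables 1 and 2, so the following is only an illustrative sketch of the two measures reported above: Kendall's coefficient for multi-rater agreement (computed here as Kendall's W, the coefficient of concordance over a raters-by-image-sets score matrix, omitting the ties correction) and Spearman's rho. The data, array shapes, and variable names are toy assumptions, not study data.

    import numpy as np
    from scipy.stats import rankdata, spearmanr

    def kendalls_w(scores):
        """Kendall's W for an (m raters x n items) score matrix.
        W = 12 * S / (m**2 * (n**3 - n)), where S is the sum of squared
        deviations of the per-item rank sums; ties correction omitted."""
        m, n = scores.shape
        ranks = np.apply_along_axis(rankdata, 1, scores)  # rank each rater's scores
        rank_sums = ranks.sum(axis=0)                     # rank sum per image set
        s = ((rank_sums - rank_sums.mean()) ** 2).sum()
        return 12 * s / (m ** 2 * (n ** 3 - n))

    # Toy data: 16 raters x 20 image sets of Total scores (3-10).
    rng = np.random.default_rng(0)
    totals = rng.integers(3, 11, size=(16, 20))
    print(kendalls_w(totals))

    # Spearman's rho between mean Total score and the share of raters
    # voting each image set "Clinically Useful" (toy yes/no votes).
    useful_share = rng.integers(0, 2, size=(16, 20)).mean(axis=0)
    rho, p = spearmanr(totals.mean(axis=0), useful_share)
    print(rho, p)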
Conclusions:

• Our results suggest that development of a valid URS is feasible.
• If validated on a large scale, our URS has the potential to be incorporated into many future research projects, and could have a significant impact in the field of emergency medicine in terms of BUS quality assurance, competency, and education.
• The higher correlations for Landmarks and Total Score may be an artifact of the wider scale ranges or the more explicit training for Landmarks.
• Based on our pilot study results, we plan to modify the URS to expand the range of scores in each subcategory, in order to remove the difficulty of having only 2-3 choices in some cases.
• For the next phase of the study, we will expand the reviewer panel and include BUS of other anatomical areas. In addition, preparations are being made for a future study that will track trainee progress on the modified URS over time, in order to further verify the content validity of our scale.

References:

1. Moore CL, Gregg S, Lambert M. Performance, training, quality assurance, and reimbursement of emergency physician-performed ultrasonography at academic medical centers. J Ultrasound Med 2004; 23(4): 459-66.
2. Counselman FL, et al. The status of bedside ultrasonography training in emergency medicine residency programs. Acad Emerg Med 2003; 10(1): 37-42.
3. Witting MD, Eurle BD, Butler KH. A comparison of emergency medicine ultrasound training with guidelines of the Society for Academic Emergency Medicine. Ann Emerg Med 1999; 34(5): 604-9.
4. American College of Emergency Physicians. ACEP emergency ultrasound guidelines--2008. Ann Emerg Med 2009; 53(4): 550-70.
5. Mateer J, Plummer D, et al. Model curriculum for physician training in emergency ultrasonography. Ann Emerg Med 1994; 23: 95-102.
6. Hockberger RS; Core Content Task Force II. The model of the clinical practice of emergency medicine. Acad Emerg Med 2001; 8: 860-81.
7. American Institute of Ultrasound in Medicine. Training guidelines for physicians who evaluate and interpret diagnostic ultrasound examinations. Available at http://www.aium.org/Publications/Statements.aspx. Accessed 10/20/2010.
