
Automatic Recognition of Surgical Motions Using Statistical Modeling for Capturing Variability


Presentation Transcript


  1. Automatic Recognition of Surgical Motions Using Statistical Modeling for Capturing Variability. Carol E. Reiley1, Henry C. Lin1, Balakrishnan Varadarajan2, Balazs Vagvolgyi1, Sanjeev Khudanpur2, David D. Yuh3, Gregory D. Hager1. 1Engineering Research Center for Computer-Integrated Surgical Systems and Technology, The Johns Hopkins University; 2Center for Language and Speech Processing, The Johns Hopkins University; 3Division of Cardiac Surgery, The Johns Hopkins Medical Institutions. MMVR, January 31st, 2008

  2. Introduction • Our Goal • Automatically segment and recognize core surgical motion segments (surgemes) • Capture the variability of a surgeon’s movement techniques using statistical methods

  3. Introduction • Given a surgical task, a single user tends to use similar movement patterns (Lin et al., MICCAI 2005)

  4. Introduction • Different users demonstrate greater variability when completing the same surgical task • Our goal is to distinguish core surgical motions from erroneous/unintentional motion

  5. Related Work • Prior work focuses on surgical metrics for skill evaluation • High level (applied force and motion) • Low level (motion data) • Our work aims to automatically identify fundamental motions • Examples of low-level surgical modeling: MIST-VR, Imperial College's ICSAD; high-level surgical modeling: University of Washington's Blue Dragon

  6. Our Approach • Surgeme: an elementary portion of surgical motion, e.g., reaching for the needle, positioning the needle, pulling the suture with the left hand

  7. Motion Vocabulary

  Label  Description
  A      Reach for Needle (gripper open)
  B      Position Needle (holding needle)
  C      Insert Needle/Push Needle Through Tissue
  D      Move to Middle With Needle (left hand)
  E      Move to Middle With Needle (right hand)
  F      Pull Suture With Left Hand
  G      Pull Suture With Right Hand*
  H      Orient Needle With Two Hands
  I      Right Hand Assisting Left While Pulling Suture*
  J      Loosen Up More Suture*
  K      End of Trial, Idle Motion

  *Added based on observed variability of technique

  8. Our Approach • Signal Processing → Feature Processing → Classification/Modeling → Extraction of Structure
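As a concrete illustration of the feature-processing stage, the sketch below normalizes each kinematic channel and then stacks neighboring frames so local temporal dynamics are visible to a frame-level classifier. This is a minimal, hypothetical sketch under assumed conventions (per-trial normalization, a symmetric context window); the function names and context width are illustrative, not the authors' actual pipeline.

```python
def normalize(frames):
    """Scale each feature to zero mean and unit variance across one trial.
    frames: list of equal-length lists of kinematic values per time step."""
    n = len(frames)
    dim = len(frames[0])
    means = [sum(f[d] for f in frames) / n for d in range(dim)]
    stds = []
    for d in range(dim):
        var = sum((f[d] - means[d]) ** 2 for f in frames) / n
        stds.append(var ** 0.5 or 1.0)  # guard against constant channels
    return [[(f[d] - means[d]) / stds[d] for d in range(dim)] for f in frames]

def stack_context(frames, width=1):
    """Concatenate each frame with its +/- width neighbors to capture
    local temporal dynamics before classification."""
    out = []
    for t in range(len(frames)):
        ctx = []
        for k in range(t - width, t + width + 1):
            k = min(max(k, 0), len(frames) - 1)  # clamp at trial edges
            ctx.extend(frames[k])
        out.append(ctx)
    return out
```

A projection step (e.g., LDA, as on the later classification slide) would then reduce these stacked vectors before modeling.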

  9. Data Collection • The da Vinci Surgical Robot System (image courtesy of Intuitive Surgical) • Recorded parameters at 23 Hz (patient and master side): joint angles and velocities; end-effector position, velocity, and orientation • High-quality stereo vision • With the increasing use of robotics in surgical procedures, a new wealth of data is available for analysis.

  10. Experimental Study • Users had varied levels of experience • Each user performed five trials • Each trial consisted of a four-throw suturing task

  Subject  Medical Training  Da Vinci Training  Hrs
  1        -                 -                  10-15
  2        -                 -                  100+
  3        X                 X                  100+
  4        -                 X                  100+
  5        -                 X                  <10
  6        -                 X                  <10
  7        -                 -                  <1

  11. Classification Methods • Linear Discriminant Analysis (LDA) with Single Gaussian • LDA + Gaussian Mixture Model (GMM) • 3-state Hidden Markov Model (HMM) • Maximum Likelihood Linear Regression (MLLR) • Supervised • Unsupervised
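The HMM classifiers above segment a trial by finding the most likely hidden-state sequence for the observed kinematic frames, typically via Viterbi decoding. A minimal pure-Python decoder sketches that step; the log-probability inputs are placeholders for whatever trained models supply them (e.g., the 3-state-per-surgeme HMMs named on the slide), and this is an illustrative sketch, not the authors' implementation.

```python
def viterbi(log_emit, log_trans, log_init):
    """Most likely state sequence for one trial.
    log_emit[t][s]: log-likelihood of frame t under state s
    log_trans[p][s]: log transition probability p -> s
    log_init[s]:    log initial probability of state s"""
    n_states = len(log_init)
    # Best log-score of any path ending in each state at the current frame.
    score = [log_init[s] + log_emit[0][s] for s in range(n_states)]
    back = []  # backpointers, one list of predecessors per frame
    for t in range(1, len(log_emit)):
        new, ptr = [], []
        for s in range(n_states):
            best = max(range(n_states), key=lambda p: score[p] + log_trans[p][s])
            ptr.append(best)
            new.append(score[best] + log_trans[best][s] + log_emit[t][s])
        score = new
        back.append(ptr)
    # Trace back from the best final state.
    path = [max(range(n_states), key=lambda s: score[s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    path.reverse()
    return path
```

With sticky self-transitions, decoding naturally smooths frame-level labels into contiguous surgeme segments.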

  12. Results • Percent classifier accuracy (average) • Leave-one-trial-out (per user) cross-validation • MLLR not applicable
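The leave-one-trial-out protocol above can be sketched as a simple loop: for each held-out trial, train on the remaining trials and score the held-out one, then average. The `train_fn`/`test_fn` callables are illustrative placeholders for any of the classifiers listed earlier, not names from the original work.

```python
def leave_one_trial_out(trials, train_fn, test_fn):
    """Average held-out accuracy over all leave-one-trial-out folds.
    trials:   list of trial datasets for one user
    train_fn: builds a model from a list of trials (placeholder)
    test_fn:  returns accuracy of a model on one trial (placeholder)"""
    accuracies = []
    for i in range(len(trials)):
        train = trials[:i] + trials[i + 1:]  # all trials except the i-th
        model = train_fn(train)
        accuracies.append(test_fn(model, trials[i]))
    return sum(accuracies) / len(accuracies)
```

The later leave-one-user-out analysis is the same loop with users, rather than trials, as the folds.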

  13. Results • Example comparison of classifier output against manual segmentation

  14. Results • We repeated the analysis, this time leaving one user out • Supervised: Surgeme start/stop events manually defined • Unsupervised: Surgeme start/stop events automatically derived

  15. Conclusions • Preliminary results show the potential for identifying core surgical motions • User variability has a significant effect on classification rates • Future work: • Use contextual cues from video data • Filter class decisions (e.g., majority vote) to eliminate class jumping • Apply to data from live surgery (e.g., prostatectomy)

  16. Acknowledgements • Intuitive Surgical • Dr. Chris Hasser • This work was supported in part by: • NSF Grant No. 0534359 • NSF Graduate Research Fellowship

  17. References
