
Automatic Detection and Segmentation of Robot-Assisted Surgical Motions

THE PROBLEM
• Need to assess and quantify technical surgical skill objectively.
• Surgical training is traditionally an interactive and slow process.
• Lack of fully documented surgical procedures that can be quickly mined or stored.

THE SOLUTION
• Recognize the elementary motions that occur in a simplified surgical task.
• Robot motion data from users with varying da Vinci (Intuitive Surgical) robot experience were analyzed.
• The task was divided into functional modules, and statistical methods such as linear discriminant analysis (LDA) and a probabilistic Bayes classifier were applied (a code sketch follows at the end of this transcript).

THE RESULTS
• Linear discriminant analysis is a robust tool for reducing and separating surgical motions into a space more conducive to gesture recognition.
• Achieved >90% recognition rates on 15 datasets.
• In one test, 72-dimensional feature vectors were reduced to 3 dimensions with 6 motion classes while still achieving nearly 90% recognition.

FIGURE CAPTIONS
• The result of LDA reduction with m=6 and d=3: the expert surgeon's motions (left) separate more distinctly than the less experienced surgeon's (right).
• A video frame of the suturing task used for this study.
• Left: the results of grouping the motion categories and varying the dimension of the projected space; in the second column, the number of unique integers indicates the number of motion categories, and the position of each integer indicates which motions belong to that category. Right: the results of testing a recognition model trained on both expert and intermediate surgeons, with the class grouping set to 12345566, t=10, and s=2; note that expert and intermediate data were recognized similarly.
• Functional block diagram of the system used to recognize elementary surgical motions in this study.
• A plot of the Cartesian positions of the left master, coded by surgical gesture, during performance of a suturing task; the left plot is from an expert surgeon, the right from a less experienced surgeon.

PEOPLE INVOLVED
• Graduate Students: Henry Lin, Todd E. Murphy
• Engineering Faculty: Gregory D. Hager, Ph.D., Izhak Shafran, Ph.D., Allison M. Okamura, Ph.D.
• Medical Faculty: David D. Yuh, M.D.

REFERENCES
• Henry Lin et al., "Automatic Detection and Segmentation of Robot-Assisted Surgical Motions," submitted to MICCAI, 2005.

SUPPORTED BY
• NSF Engineering Research Center for Computer Integrated Surgical Systems and Technology
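The poster does not include code, but the pipeline it describes (LDA reduction of high-dimensional motion features followed by a probabilistic Bayes classifier) can be sketched as follows. This is a minimal illustration under assumptions, not the authors' implementation: the data here are synthetic stand-ins for the da Vinci motion features, scikit-learn is an assumed dependency, and GaussianNB is used as a generic stand-in for the unspecified Bayes classifier. The 72 features, 6 gesture classes, and 3-dimensional projection follow the numbers quoted in THE RESULTS.

```python
# Minimal sketch (not the authors' code): LDA reduction of motion features
# followed by a simple Bayes classifier. Synthetic data stands in for the
# da Vinci motion measurements; 72 features, 6 gesture classes, and a 3-D
# projection mirror the example quoted on the poster.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_features, n_classes = 3000, 72, 6

# Synthetic stand-in for per-time-step motion features (positions, velocities, ...)
class_means = rng.normal(scale=2.0, size=(n_classes, n_features))
y = rng.integers(0, n_classes, size=n_samples)
X = class_means[y] + rng.normal(size=(n_samples, n_features))

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Step 1: LDA projects the 72-D features into a 3-D space that separates the
# gesture classes (d = 3, m = 6 in the poster's notation).
lda = LinearDiscriminantAnalysis(n_components=3)
Z_train = lda.fit_transform(X_train, y_train)
Z_test = lda.transform(X_test)

# Step 2: a probabilistic Bayes classifier in the reduced space. GaussianNB
# (Gaussian class-conditional densities with diagonal covariance) is an
# illustrative stand-in; the poster does not specify the density model used.
clf = GaussianNB().fit(Z_train, y_train)
accuracy = clf.score(Z_test, y_test)
print(f"Per-sample gesture recognition accuracy: {accuracy:.3f}")
```

One consistency check worth noting: with m = 6 classes, LDA yields at most m - 1 = 5 discriminant directions, so the 3-dimensional projection reported on the poster is well within that limit.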
