

Medical Robot Vision Augmentation—A Prototype

Abhilash Pandya¹, Ph.D. Cand., Mohammad Siadat², Ph.D. Cand., Zhengmao Ye¹, Ph.D., Prasad Manda¹, M.S., Greg Auner¹, Ph.D., Lucia Zamorano³, M.D., Michael Klein⁴, M.D.
Wayne State University: ¹Electrical and Computer Engineering Department, ²Computer Science Department, ³Neurosurgery Department, ⁴Children's Hospital of Michigan
Contact: apandya@ece.eng.wayne.edu, (313) 577-9921

Motivation:
With the use of robotics, surgeons can direct a robot to precise pre-planned locations within the brain (e.g., the ISS Neuromate, Figure 1A) or use hand controllers to guide robotic surgeries (the Computer Motion, Inc. Zeus, Figure 1B). Both are in use at our facilities. These systems are potential targets for an advanced visualization technology: Augmented Reality (AR). An AR system combines the real scene viewed by the robot with a virtual scene (a 3D segmentation) generated by the computer, augmenting the view with additional information. We have developed a prototype in which a camera is mounted on the end-effector of a Microscribe passive arm and the camera's view is superimposed with anatomical structures correctly registered to the patient. This system allows the surgeon an "X-ray" view from any robotic trajectory. We discuss our prototype and the errors involved in generating an AR scene.

Figure 1A: Neuromate (ISS), a robot used for neurosurgery.
Figure 1B: Zeus (Computer Motion), used for robotic surgeries.

Robotic Tracking of Camera:
One of the main problems for accurate augmentation is determining the exact location and orientation of the CCD array inside the camera relative to the end-effector of the tracking device (in this case, the robot). In Figure 2A, the forward-kinematics solution (i.e., the coordinates of the robot's end-effector in terms of the base coordinates) is defined as the concatenation of the individual joint matrices.
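Using the transform names from the poster's diagram, and assuming the convention that each \(T_{X\text{-}Y}\) maps points expressed in frame \(Y\) into frame \(X\) (the poster does not state its convention explicitly), the concatenation reads:

\[
T_{b\text{-}ee} \;=\; T_{b\text{-}j1}\, T_{j1\text{-}j2}\, T_{j2\text{-}j3}\, T_{j3\text{-}ee}
\]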
Each individual transform specifies how one joint frame is related to the next; their product defines the position and orientation of the end-effector in the base coordinate system. An additional step is needed to compute the transformation between the CCD camera coordinate system and the end-effector. This is done with an image-processing technique in which camera parameter estimation yields a transformation from a viewed calibration pattern to the camera. A computation using the base-to-end-effector transform then produces the needed object-to-CCD transform. This computed relationship (see Figure 2B) allows the actual camera coordinates to be aligned with the coordinate system of the virtual camera.

Figure 2A: Computation of the base-to-end-effector transform.
Figure 2B: Computation of the object-to-CCD transform.

Object registration, the process of determining the exact location of the objects of interest, must also be performed.
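Under the same assumed convention, the two computations can be written as a chain (a sketch; the grouping is ours, not stated on the poster): calibration against the viewed pattern gives \(T_{p\text{-}ccd}\) (the poster's \(T_{PC}\)), the pattern's pose in the base frame gives \(T_{b\text{-}p}\) (the poster's \(T_{BP}\)), and object registration gives \(T_{obj\text{-}base}\), so that

\[
T_{ee\text{-}ccd} \;=\; T_{b\text{-}ee}^{-1}\, T_{b\text{-}p}\, T_{p\text{-}ccd},
\qquad
T_{obj\text{-}ccd} \;=\; T_{obj\text{-}base}\, T_{b\text{-}ee}\, T_{ee\text{-}ccd}.
\]

As an illustration only, the composition is a few lines of matrix algebra. All names below are ours, and the identity placeholders stand in for values that the real system would read from the arm's joint encoders, camera calibration, and registration:

```python
import numpy as np

# Placeholder 4x4 homogeneous transforms (identity for illustration);
# in the real system the joint transforms come from the arm's encoders,
# T_p_ccd from camera calibration against the viewed pattern, and
# T_obj_base from pair-point object registration.
T_b_j1 = T_j1_j2 = T_j2_j3 = T_j3_ee = np.eye(4)
T_b_p = T_p_ccd = T_obj_base = np.eye(4)

# Forward kinematics: end-effector pose in base coordinates.
T_b_ee = T_b_j1 @ T_j1_j2 @ T_j2_j3 @ T_j3_ee

# One-time "hand-eye" step: CCD pose relative to the end-effector.
T_ee_ccd = np.linalg.inv(T_b_ee) @ T_b_p @ T_p_ccd

# Per-frame transform used to align the virtual camera with the real one.
T_obj_ccd = T_obj_base @ T_b_ee @ T_ee_ccd
```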
Augmented Reality Scene Data Generation:
Based on a CT scan of a plastic phantom with simulated objects of interest glued inside, the needed 3D models were created (Figure 3). Once the position and orientation of the camera are known, an AR scene can be generated.

Figure 3: From CT scan to 3D models for the AR scene.

Results/Conclusions:
In computer graphics, AR is achieved by aligning the virtual camera with the actual camera and the virtual objects with their corresponding actual objects. Texture mapping and 3D rendering are used to visualize and overlay the virtual segmented objects on the input video. Camera distortion must be considered for an accurate view of the scene, and video-correction algorithms are available that account for the major sources of camera distortion, namely radial, decentering, and thin-prism distortion.
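The poster does not specify the correction model used; one common first-order parameterization covering exactly these three sources, with radial coefficient \(k_1\), decentering coefficients \(p_1, p_2\), and thin-prism coefficients \(s_1, s_2\), for normalized image coordinates \((x, y)\) and \(r^2 = x^2 + y^2\), is

\[
\begin{aligned}
\delta_x &= k_1 x r^2 + p_1\,(r^2 + 2x^2) + 2 p_2 x y + s_1 r^2,\\
\delta_y &= k_1 y r^2 + p_2\,(r^2 + 2y^2) + 2 p_1 x y + s_2 r^2.
\end{aligned}
\]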
Figure 4 shows the real-time augmentation of the objects of interest: objects within the phantom are augmented in real time from any viewing direction.

Figure 4: Using the Microscribe. A real-time augmentation of objects within a phantom is achieved from any direction.

The average system error was measured at 4.5 mm; it includes the error of the Microscribe passive arm (0.87 mm), the error of the CT scan (2 mm slices), and the errors of camera calibration, pair-point object registration, and other individual sources. The system produces good results; further improvement in error can be achieved with higher-quality cameras and better registration algorithms. Our plan is to evaluate this technique with human-factors studies to ascertain whether it is truly beneficial in helping surgeons optimize their surgeries.