
3D Motion Determination Using µIMU And Visual Tracking


Presentation Transcript


  1. 14 May 2010. 3D Motion Determination Using µIMU And Visual Tracking. Supervised by Prof. Li Lam Kin Kwok, Mark. Centre for Micro and Nano Systems, The Chinese University of Hong Kong.

  2. Outline • Brief summary of previous works • Detail of Visual Tracking System (VTS) • Perspective Camera Model • Procedure of Pose Estimation • Current Results of VTS • Conclusion • Future Plan

  3. Previous Works • Implemented the Harris corner finding algorithm • Automatically finds good features • Improved the performance of the LK tracking method • Reduced the noise generated by inconsistent lighting • Gathered information about high-speed cameras (>60 fps)
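The corner-finding and tracking steps summarised above correspond to standard building blocks available in OpenCV. The following is a minimal sketch of that combination (Harris-based feature selection followed by pyramidal LK tracking); the input file name and all parameter values are illustrative assumptions, not the settings used in this work.

```python
# Minimal sketch: Harris-based corner selection + pyramidal LK tracking.
# File name and parameter values are illustrative assumptions.
import cv2
import numpy as np

cap = cv2.VideoCapture("video.avi")          # hypothetical input sequence
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Good features selected with the Harris corner measure.
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100, qualityLevel=0.01,
                              minDistance=10, useHarrisDetector=True, k=0.04)

lk_params = dict(winSize=(21, 21), maxLevel=3,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Pyramidal LK optical flow: track the corners from the previous frame.
    new_pts, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None, **lk_params)
    pts = new_pts[status.ravel() == 1].reshape(-1, 1, 2)
    if len(pts) == 0:
        break                                # all features lost
    prev_gray = gray

cap.release()
```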

  4. Previous Works

  5. Detail of Visual Tracking System: Select ROI from captured image → Extract good features (Harris algorithm) → Motion tracking (LK tracking method) → Pose estimation → Position and orientation (camera coordinate) → Coordinate transformation → Final pose of camera (world coordinate)

  6. Perspective Camera Model. {W}: World Coordinate, {I}: Image Coordinate, {C}: Camera Coordinate. C: optical center, f: focal length, li: distance between the 3D feature points and the optical center, Pi: 3D feature points on the square grid, pi: corresponding 2D projected image points. (Figure: square-grid target and its projection p1..p4 onto the image plane.)

  7. Perspective Camera Model • Relationship between an image point and a 3D scene point. (Figure: pinhole projection of a scene point with camera coordinates (cX, cZ) through the optical center onto image coordinate cx on the image plane at focal length f.)
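The projection relation illustrated on this slide is the pinhole (perspective) model. A reconstruction in the slide's notation, assuming the scene point has camera coordinates (cX, cY, cZ) and the image plane lies at focal length f:

```latex
\[
  \frac{{}^{c}x}{f} = \frac{{}^{c}X}{{}^{c}Z}
  \quad\Longrightarrow\quad
  {}^{c}x = f\,\frac{{}^{c}X}{{}^{c}Z},
  \qquad
  {}^{c}y = f\,\frac{{}^{c}Y}{{}^{c}Z}.
\]
```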

  8. Pose Estimation Procedure • Step 1: Calibration and Measurement: calibrate the camera (obtain the interior parameters) from the target image and measure the target dimensions • Step 2: Recover the pose of the camera (with respect to the camera coordinate): calculate the distance between the target and the camera • Step 3: Recover the transformation matrix between the camera and world coordinates • Step 4: Transform the coordinates to the world coordinate to obtain the final pose

  9. Pose Estimation (Step 1) • Use a square pattern (with known dimensions) to calibrate the camera
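One common way to obtain the interior parameters from a planar pattern of known dimensions is OpenCV's calibration routine. The sketch below uses a chessboard pattern as a stand-in for the square pattern on the slide; the pattern size, square size, and image file names are assumptions, not the actual calibration setup.

```python
# Hedged sketch: intrinsic calibration from a planar pattern with OpenCV.
# Pattern size, square size, and file names are assumptions.
import glob
import cv2
import numpy as np

pattern_size = (7, 6)                  # inner corners per row / column (assumed)
square_size = 0.025                    # 25 mm squares (assumed)

# 3D coordinates of the pattern corners in the target's own plane (Z = 0).
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2) * square_size

obj_points, img_points, img_size = [], [], None
for fname in glob.glob("calib_*.png"):   # hypothetical calibration images
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    img_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# K holds the interior parameters: focal lengths (fx, fy) and principal point (u0, v0).
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, img_size, None, None)
print("Camera matrix K:\n", K)
```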

  10. Pose Estimation (Step 2) • Image to Camera Coordinate Transformation: the image coordinates (ui, vi) of the points p1..p4 are expressed in the camera coordinate {C} using the image principal point (uo, vo) and the focal length f. (Figure: image plane with projected points p1..p4, optical axis, and camera frame {C}.)
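A hedged reconstruction of the image-to-camera transformation this slide describes, assuming square pixels and a focal length f expressed in pixels: each image point (ui, vi) defines a viewing direction (unit vector) in {C}, and the corresponding 3D point lies along that direction at the unknown distance di.

```latex
\[
  {}^{c}\mathbf{u}_i \;=\;
  \frac{1}{\sqrt{(u_i-u_0)^2 + (v_i-v_0)^2 + f^2}}
  \begin{bmatrix} u_i - u_0 \\ v_i - v_0 \\ f \end{bmatrix},
  \qquad
  {}^{c}\mathbf{P}_i = d_i\,{}^{c}\mathbf{u}_i .
\]
```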

  11. Pose Estimation (Step 2) • Areas of the triangles formed by the target corners are given (from the known target dimensions, Step 1) • Volumes of the tetrahedra formed by the optical center C and these triangles • Use the unit vector cui to represent cPi. (Figure: tetrahedra with apex C and base corners cP1..cP4; h is the distance from C to the target plane.)
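The equations on this slide were images and are not in the transcript; the following is a reconstruction consistent with the surrounding slides. Each tetrahedron with apex C and a triangle of target corners as base has a volume expressible both as base area times height (with h common to all tetrahedra, since the corners are coplanar) and as a scalar triple product of the viewing rays:

```latex
\[
  V_{jkl} \;=\; \tfrac{1}{3}\,A_{jkl}\,h
  \;=\; \tfrac{1}{6}\,d_j d_k d_l\,
        \bigl|\,{}^{c}\mathbf{u}_j \cdot \bigl({}^{c}\mathbf{u}_k \times {}^{c}\mathbf{u}_l\bigr)\bigr| .
\]
```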

  12. Pose Estimation (Step 2) • Use vectors to calculate the volume (scalar triple product of the cPi) • Express d2, d3, d4 as functions of d1. (Figure: same configuration as the previous slide.)
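Since h is the same for every tetrahedron, ratios of volumes equal ratios of the known base areas, which yields each distance as a function of d1. This is a reconstruction of the missing equations, in the spirit of the quadrangular-target solution of [1]:

```latex
\[
  \frac{V_{234}}{V_{134}} = \frac{A_{234}}{A_{134}}
  \quad\Longrightarrow\quad
  d_2 \;=\; d_1\,\frac{A_{234}}{A_{134}}\,
      \frac{\bigl|{}^{c}\mathbf{u}_1 \cdot ({}^{c}\mathbf{u}_3 \times {}^{c}\mathbf{u}_4)\bigr|}
           {\bigl|{}^{c}\mathbf{u}_2 \cdot ({}^{c}\mathbf{u}_3 \times {}^{c}\mathbf{u}_4)\bigr|},
\]
```

and analogously for d3 and d4.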

  13. Pose Estimation (Step 2) • Use a line segment s1k (a known side of the target) to compute the squared distance • Use the parametric representation and simplify to solve for d1. (Figure: side s12 between cP1 and cP2, viewed from C along cu1 and cu2.)
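A reconstruction of the squared-distance step, using the known side s12 of the target and the ratio r2 = d2/d1 from the previous slide:

```latex
\[
  s_{12}^{2}
  = \bigl\| d_1\,{}^{c}\mathbf{u}_1 - d_2\,{}^{c}\mathbf{u}_2 \bigr\|^{2}
  = d_1^{2}\bigl(1 + r_2^{2} - 2\,r_2\,{}^{c}\mathbf{u}_1\!\cdot{}^{c}\mathbf{u}_2\bigr)
  \quad\Longrightarrow\quad
  d_1 = \frac{s_{12}}{\sqrt{1 + r_2^{2} - 2\,r_2\,{}^{c}\mathbf{u}_1\!\cdot{}^{c}\mathbf{u}_2}} .
\]
```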

  14. Pose Estimation (Step 2) • Substitute d1 into the expressions for d2, d3, d4 and compute cPi = di · cui to obtain the 3D coordinates of the feature points
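Putting Step 2 together, the following is a minimal numeric sketch for the special case of a square target with corners listed in order around the boundary (so all base triangles have equal area and the area ratios cancel); function and variable names are mine, and the intrinsics are assumed to be a principal point (u0, v0) and a focal length f in pixels.

```python
# Hedged sketch of Step 2: recover the 3D corner coordinates of a square
# target of known side length from their image points, following the
# reconstruction above.  Names and assumptions are illustrative.
import numpy as np

def unit_rays(uv, u0, v0, f):
    """Viewing directions cu_i for image points uv (N x 2)."""
    rays = np.column_stack([uv[:, 0] - u0, uv[:, 1] - v0, np.full(len(uv), f)])
    return rays / np.linalg.norm(rays, axis=1, keepdims=True)

def recover_points(uv, u0, v0, f, side):
    """3D corner coordinates cP_i (4 x 3) in the camera frame {C}."""
    u = unit_rays(uv, u0, v0, f)                       # cu_1 .. cu_4
    triple = lambda a, b, c: abs(np.dot(a, np.cross(b, c)))
    # For a square with corners ordered around the boundary, every triangle
    # of three corners has the same (half-square) area, so the area ratios
    # cancel and only the triple products of the rays remain.
    r2 = triple(u[0], u[2], u[3]) / triple(u[1], u[2], u[3])   # d2 / d1
    r3 = triple(u[0], u[1], u[3]) / triple(u[2], u[1], u[3])   # d3 / d1
    r4 = triple(u[0], u[1], u[2]) / triple(u[3], u[1], u[2])   # d4 / d1
    # The known side length s_12 between adjacent corners 1 and 2 fixes d1.
    d1 = side / np.sqrt(1 + r2**2 - 2 * r2 * np.dot(u[0], u[1]))
    d = d1 * np.array([1.0, r2, r3, r4])
    return d[:, None] * u                              # cP_i = d_i * cu_i
```

For a general quadrangular target the known triangle areas would enter the distance ratios as shown on slide 12.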

  15. Pose Estimation (Step 3) • The transformation matrix wTo (object to world) is given • The transformation matrix oTc (camera to object) can be obtained from Step 2. {W}: World Coordinate, {O}: Object Coordinate, {C}: Camera Coordinate. (Figure: frames {W}, {O}, {C} linked by wTo and oTc.)

  16. Pose Estimation (Step 4) • The final pose of the camera, wTc, can be solved. (Figure: frames {W}, {O}, {C} with the transforms wTo, oTc, and the resulting wTc.)
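A hedged note on the composition, assuming the convention that aTb maps homogeneous coordinates from frame {b} to frame {a}:

```latex
\[
  {}^{w}T_{c} \;=\; {}^{w}T_{o}\;{}^{o}T_{c},
  \qquad
  {}^{o}T_{c} \;=\; \bigl({}^{c}T_{o}\bigr)^{-1},
\]
```

where cTo is the target-to-camera transform recovered in Step 2 and wTo is the given pose of the target (object) in the world frame.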

  17. Current Results of VTS • Experimental Setup. (Figure: experimental setup with a webcam, the feature target, the motion direction, a ruler, and a recording computer.)

  18. Current Results of VTS

  19. Conclusion • The accuracy depends heavily on the detected image points • Increase the image resolution (currently 640 × 480 pixels) • Use optimization methods to increase accuracy • e.g. a Gauss-Newton line-search method
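The Gauss-Newton line-search idea mentioned above can be sketched generically as follows; this is not the authors' implementation, and the residual function, numeric Jacobian, and step-size constants are illustrative assumptions. In this setting, residual(x) could for example stack the differences between the measured image points and the points reprojected from the pose parameters x.

```python
# Generic Gauss-Newton refinement with a backtracking line search,
# sketched as one possible way to refine an estimated pose by minimising
# a residual such as the reprojection error.  All constants are assumptions.
import numpy as np

def gauss_newton(residual, x0, iters=20, eps=1e-6):
    """Minimise 0.5 * ||residual(x)||^2 starting from x0."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = residual(x)
        # Numeric Jacobian of the residual (forward differences).
        J = np.empty((r.size, x.size))
        for j in range(x.size):
            dx = np.zeros_like(x)
            dx[j] = eps
            J[:, j] = (residual(x + dx) - r) / eps
        # Gauss-Newton step: least-squares solution of J p = -r.
        p, *_ = np.linalg.lstsq(J, -r, rcond=None)
        # Backtracking line search on the residual norm.
        alpha, cost = 1.0, 0.5 * r @ r
        while alpha > 1e-4:
            r_new = residual(x + alpha * p)
            if 0.5 * r_new @ r_new < cost:
                break
            alpha *= 0.5
        x = x + alpha * p
        if np.linalg.norm(alpha * p) < 1e-10:
            break
    return x
```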

  20. Future Plan • Develop this method and test the performance • Try to fuse the data with the µIMU data • Develop the optimization method after finishing data fusion

  21. References [1] M. A. Abidi and T. Chandra, “A new efficient and direct solution for pose estimation using quadrangular targets: algorithm and evaluation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 17, no. 5, pp. 534-538, 1995. [2] M. A. Abidi and T. Chandra, “Pose estimation for camera calibration and landmark tracking,” IEEE International Conference on Robotics and Automation, 1990. [3] D. A. Forsyth and J. Ponce, Computer Vision: A Modern Approach, Prentice Hall, 2003.

  22. Thanks for your attention
