
KinectFusion: Real-Time Dense Surface Mapping and Tracking


Presentation Transcript


  1. KinectFusion: Real-Time Dense Surface Mapping and Tracking, by Richard A. Newcombe et al. Presented by: Boaz Petersil

  2. Motivation • Augmented reality • Robot navigation • 3D model scanning • Etc.

  3. Related Work • Tracking (& sparse mapping): bundle adjustment (offline), PTAM • Tracking & dense mapping: KinectFusion, DTAM (which needs only an RGB camera!)

  4. Challenges • Tracking the camera precisely • Fusing and de-noising measurements • Avoiding drift • Running in real time • Using low-cost hardware

  5. Proposed Solution • Fast optimization for tracking, made possible by the high frame rate • A global framework for fusing data • Interleaving tracking & mapping • Using Kinect to get depth data (low cost) • Using GPGPU to get real-time performance (low cost)

  6. How does Kinect work? (figure-only slide; a sketch of the principle follows below)
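
Kinect v1 projects a fixed infrared dot pattern and recovers per-pixel depth by triangulating the pattern's observed shift. A minimal sketch of that triangulation relation; the constants here are illustrative assumptions, not the sensor's actual calibration:

    # Depth from disparity by triangulation -- the principle behind
    # Kinect v1's structured-light ranging. focal_px and baseline_m are
    # illustrative assumptions, not the sensor's real calibration.
    def depth_from_disparity(disparity_px, focal_px=580.0, baseline_m=0.075):
        """Depth is inversely proportional to the pattern's pixel shift."""
        return focal_px * baseline_m / disparity_px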

  7. Method

  8. Tracking • Finding the camera pose is the same as fitting the frame's depth map onto the model

  9. Tracking – the ICP algorithm • ICP = Iterative Closest Point • Goal: fit two 3D point sets • Problem: what are the correspondences? • KinectFusion's chosen solution: 1. start from the previous frame's pose estimate; 2. project the model onto the camera; 3. take as correspondences the points with the same pixel coordinates; 4. find a new transform T by least squares; 5. apply T, and repeat steps 2–5 until convergence (a code sketch follows below)
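
A minimal sketch (in NumPy) of one tracking iteration as described above: projective data association plus a linearized point-to-plane least-squares solve. The function name, the intrinsics K, and the pixel subsampling are illustrative assumptions; the paper runs an equivalent solve on the GPU, with the model points and normals raycast from the TSDF.

    import numpy as np

    def icp_step(T, depth, model_pts, model_nrm, K):
        """One projective-association, point-to-plane ICP step.
        T: 4x4 camera-to-world pose guess; depth: HxW depth map (0 = hole);
        model_pts/model_nrm: HxWx3 raycast model points/normals (world frame);
        K: 3x3 pinhole intrinsics."""
        H, W = depth.shape
        fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
        A, b = [], []
        for v in range(0, H, 4):                 # subsample pixels for speed
            for u in range(0, W, 4):
                z = depth[v, u]
                if z == 0:
                    continue
                # back-project the frame pixel, move it into the world frame
                p = (T @ np.array([(u - cx) * z / fx,
                                   (v - cy) * z / fy, z, 1.0]))[:3]
                # projective association: pair with the model point rendered
                # at the same pixel coordinates
                q, n = model_pts[v, u], model_nrm[v, u]
                if not np.isfinite(q).all():
                    continue
                # point-to-plane residual, small-angle linearization:
                # n . (p + w x p + t - q), unknowns x = (w, t)
                A.append(np.concatenate([np.cross(p, n), n]))
                b.append(n @ (q - p))
        x, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
        w, t = x[:3], x[3:]
        dT = np.eye(4)                           # small-angle rotation update
        dT[:3, :3] = [[1, -w[2], w[1]], [w[2], 1, -w[0]], [-w[1], w[0], 1]]
        dT[:3, 3] = t
        return dT @ T      # apply the increment; iterate until convergence

Because each iteration only refines an already-close pose (see the next slide), a handful of iterations suffice.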

  10. Tracking – the ICP algorithm • Assumption: the frame and the model are roughly aligned • This holds because of the high frame rate

  11. Mapping • Mapping is fusing depth maps when the camera poses are known • Problems: measurements are noisy, and depth maps have holes in them • Solution: use an implicit surface representation; fusing = estimating the surface from all relevant frames

  12. Mapping – surface representation • The surface is represented implicitly, using a Truncated Signed Distance Function (TSDF) stored on a voxel grid • The number D in each cell is the voxel's signed distance to the surface, so the surface is the D = 0 level set (see the grid sketch below)
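
A minimal sketch of that grid as two dense arrays, with illustrative sizes; KinectFusion keeps the same two scalars per voxel (a distance D and a weight W, used on slides 14 and 18) in GPU memory.

    import numpy as np

    GRID_RES  = 256                  # illustrative: 256^3 voxels ...
    GRID_SIZE = 3.0                  # ... spanning a 3 m cube
    VOXEL     = GRID_SIZE / GRID_RES
    TRUNC     = 0.03                 # truncation band around the surface (m)

    D = np.ones((GRID_RES,) * 3, dtype=np.float32)    # TSDF values in [-1, 1]
    W = np.zeros((GRID_RES,) * 3, dtype=np.float32)   # accumulated weights

    def voxel_center(i, j, k):
        """World-space center of voxel (i, j, k), grid cornered at the origin."""
        return (np.array([i, j, k]) + 0.5) * VOXEL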

  13. Mapping (figure-only slide)

  14. Mapping • For each voxel and each frame: d = [pixel depth] – [distance from sensor to voxel], so d is positive in front of the measured surface and negative behind it

  15.–17. Mapping (figure-only slides stepping through the TSDF update; a code sketch follows below)
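
A minimal sketch of slide 14's computation for a single voxel, continuing the grid code above. The projection through K and the visibility tests are illustrative assumptions; the core line is exactly d = [pixel depth] – [distance from sensor to voxel], truncated to the band around the surface.

    def tsdf_for_voxel(i, j, k, depth, T_inv, K):
        """Truncated signed distance of voxel (i, j, k) against one frame.
        depth: HxW depth map; T_inv: 4x4 world-to-camera pose; K: intrinsics.
        Returns None when the voxel is not visible in this frame."""
        p_cam = (T_inv @ np.append(voxel_center(i, j, k), 1.0))[:3]
        if p_cam[2] <= 0:
            return None                      # behind the camera
        u, v, _ = (K @ p_cam) / p_cam[2]     # project to pixel coordinates
        u, v = int(round(u)), int(round(v))
        rows, cols = depth.shape
        if not (0 <= v < rows and 0 <= u < cols) or depth[v, u] == 0:
            return None                      # off-image, or a hole in the map
        # Slide 14: d = [pixel depth] - [distance from sensor to voxel]
        d = depth[v, u] - np.linalg.norm(p_cam)
        if d < -TRUNC:
            return None                      # far behind the surface: unseen
        return min(1.0, d / TRUNC)           # truncate and normalize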

  18. Mapping • Each voxel also has a weight W, proportional to the measurement's grazing angle (figure: two sensors observing the same voxel) • A voxel's D is the weighted average of all its measurements (see the update sketch below)
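
A minimal sketch of that weighted average as a running update, continuing the code above. Tying w_new to the grazing angle is the slide's idea; the weight cap is an assumption, and it is what lets the map absorb the scene changes mentioned on slide 21.

    def integrate_voxel(i, j, k, d_new, w_new, w_max=64.0):
        """Fold one measurement (from tsdf_for_voxel) into the running average.
        w_new: measurement weight, e.g. cosine between viewing ray and normal
        (an assumption; the slide only says it depends on the grazing angle)."""
        D[i, j, k] = (W[i, j, k] * D[i, j, k] + w_new * d_new) / (W[i, j, k] + w_new)
        # Capping the accumulated weight keeps old frames from dominating
        # forever, which is what lets the model absorb changes in the scene.
        W[i, j, k] = min(W[i, j, k] + w_new, w_max)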

  19. Handling drift • Drift would accumulate if each frame were tracked against the previous frame • Instead, tracking is done against the built model

  20. Results & Applications

  21. Pros & Cons • Pros: • Really nice results! • Real-time performance (30 Hz) • Dense model • No drift, despite purely local optimization • Robust to scene changes • Elegant solution • Cons: • The 3D voxel grid can't be trivially scaled up

  22. Limitations • Doesn't work for large areas (fixed voxel grid) • Doesn't work far away from objects (active ranging) • Doesn't work outdoors (the IR pattern is washed out) • Requires a powerful graphics card • Uses lots of battery (active ranging) • Only one sensor at a time

  23. Future Ideas • Keep the voxel grid only for the area near the sensor • Compress the parts of the model beyond the sensor's reach into a 'smart' vertex map

  24. Thank you!
