
Low Infrastructure Navigation for Indoor Environments


Presentation Transcript


  1. Low Infrastructure Navigation for Indoor Environments. October 31, 2012. Arne Suppé, suppe@andrew.cmu.edu, CMU NavLab Group

  2. Overview • We have demonstrated camera-based navigation of a vehicle in a parking garage • We propose to: • Work with AUDI/VW to realistically demonstrate the algorithm and collect development data • Prove robustness in a wide variety of real-world environments using actual automotive sensors • Explore solutions to reduce the computational and data footprint to levels realistic for a vehicle in 5 to 10 years

  3. Why Use Cameras for Navigation? • Cameras and computation are cheap and projected to get cheaper • 3D LIDARs are large, expensive, and not likely to become cheap or rugged enough for the automotive environment • Equipping indoor environments with fiducials or beacons carries high infrastructure costs

  4. Cameras are Not Enough • Motion sensors: • Have higher update rates and better incremental precision • Handle cases where camera-based solutions fail • Constrain solutions to make camera-based navigation more tractable • Cameras bound the drift in the pose estimate • AUDI/VW's expertise can help us collect synchronized camera and real automotive motion-sensor data to develop and benchmark our algorithms

  5. The Benefit of Combining Camera and Motion Sensors (a simulation sketch follows below) [Figure: position drift over time for an automotive gyro vs. an expensive gyro; camera navigation alone suffers drop-outs and position drift, but camera + automotive gyro ≈ expensive IMU]
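A minimal sketch of the intuition behind this slide, with made-up noise and bias values rather than measured sensor specifications: integrating a cheap gyro alone drifts without bound, while occasional absolute camera fixes keep the fused estimate bounded.

```python
# Illustration only: noise/bias numbers are assumptions, not real sensor specs.
import numpy as np

rng = np.random.default_rng(0)
dt = 0.01                      # 100 Hz gyro
steps = 60_000                 # 10 minutes of driving
gyro_bias = 0.002              # rad/s bias of a cheap automotive gyro (assumed)
gyro_noise = 0.01              # rad/sqrt(s) angle random walk (assumed)
camera_period = 100            # one camera heading fix per second
camera_sigma = 0.02            # rad, error of an absolute camera fix (assumed)

true_heading = 0.0
est_gyro_only = 0.0            # dead reckoning: drifts with time
est_fused = 0.0                # gyro integration + periodic camera correction

for k in range(steps):
    rate = 0.1 * np.sin(0.01 * k * dt)                       # true turning motion
    true_heading += rate * dt
    meas = rate + gyro_bias + gyro_noise * rng.standard_normal() / np.sqrt(dt)
    est_gyro_only += meas * dt
    est_fused += meas * dt
    if k % camera_period == 0:
        cam = true_heading + camera_sigma * rng.standard_normal()
        est_fused += 0.2 * (cam - est_fused)                  # simple complementary blend

print("gyro-only heading error   [rad]:", abs(est_gyro_only - true_heading))
print("camera + gyro heading error[rad]:", abs(est_fused - true_heading))
```

The gyro-only estimate accumulates roughly bias × time, while the fused estimate stays near the camera's fix accuracy, which is the "camera + automotive gyro ≈ expensive IMU" claim of the figure.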

  6. Building on Our Existing Work • Tailor the system to take advantage of overhead environments • Known entrance and egress points to structures • Do not need to solve the lost-robot problem; only an incremental solution is required • We already do this once the EKF is locked in • Smaller search space than the outdoor problem, so we can employ stronger inference techniques

  7. Alternative Camera Locations • The ceiling is very invariant in overhead environments • Classic result in indoor robot navigation [Thrun 2000] • One camera instead of two • Reduces physical costs • Reduces computational costs • Less data to process • Potential to vastly simplify the solution for camera motion [Figure: current forward-facing camera locations vs. alternative ceiling-facing camera location]

  8. Alternative Algorithms • Explore alternative algorithms to measure camera motion to: • Reduce the computational cost of position refinement • Reduce V2V and V2I communications requirements to transmit map representations • Improve robustness (Images: www.photosynth.net, www.123dapp.com)

  9. The Virtual Valet. Presented by Arne Suppé, with work by Hernan Badino, Hideyuki Kume, Luis Navarro-Serment & Aaron Steinfeld. October 23, 2012

  10. Existing Vision-Based Path Tracking • Offline • Build a location-tagged image database while recording the reference trajectory • Locations need only be locally consistent • Online • Replay the trajectory • Solve the global localization problem • Refine the position estimate • Fuse with vehicle sensors (a schematic sketch of this pipeline follows below)
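A self-contained, schematic sketch of the offline/online split on this slide. The descriptor and matching here are toy stand-ins for the whole-image SURF, topometric filtering, and SIFT refinement described on the following slides; none of the names correspond to the NavLab code.

```python
# Schematic outline of the two-phase pipeline; descriptors are toy placeholders.
from dataclasses import dataclass
import numpy as np

@dataclass
class Frame:
    pose: np.ndarray        # (x, y, heading) along the reference trajectory
    descriptor: np.ndarray  # whole-image descriptor (stand-in for global SURF)

def describe(img):
    """Toy whole-image descriptor: just the flattened pixel values."""
    return np.asarray(img, dtype=float).ravel()

def build_database(images, poses):
    """Offline: tag each reference image with its locally consistent pose."""
    return [Frame(pose=p, descriptor=describe(img)) for img, p in zip(images, poses)]

def localize(query_img, database):
    """Online: find the most similar database frame; refinement and EKF fusion
    with vehicle sensors happen downstream."""
    q = describe(query_img)
    scores = [np.linalg.norm(q - f.descriptor) for f in database]
    return database[int(np.argmin(scores))].pose

# Toy usage with random "images" along a straight reference trajectory.
imgs = [np.random.rand(8, 8) for _ in range(5)]
poses = [np.array([i, 0.0, 0.0]) for i in range(5)]
db = build_database(imgs, poses)
print(localize(imgs[2], db))
```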

  11. Building the Database • Use structure from motion to reconstruct a smooth trajectory of the camera through the environment [Wu, 2011] [Figure: SfM reconstruction showing camera poses and feature points]

  12. Global Localization • Find relevant images in the database given a new image • Returns the location of the most similar database image • The whole-image SURF descriptor is a weak similarity metric on its own • Topometric mapping [Badino 2012] • We know which images should be near each other • We know how fast the vehicle is moving (a minimal filter sketch follows below) [Figure: log probability of vehicle location vs. distance traveled; filter state indexed by database images 1–9 in traversal sequence, with the current and matched database images shown]
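A minimal discrete Bayes filter over database image indices, in the spirit of the topometric idea above: the motion model encodes that consecutive camera frames should advance roughly one database image along the traversal, and the appearance scores stand in for the whole-image SURF similarity of [Badino 2012]. The parameters and scores are illustrative assumptions.

```python
# Minimal discrete Bayes filter over database image indices (illustration only).
import numpy as np

def topometric_update(belief, similarity, expected_step=1.0, motion_sigma=1.0):
    """belief[i]      : P(vehicle is at database image i)
       similarity[i]  : appearance score of image i against the current frame
                        (placeholder for the whole-image SURF metric).
       The prediction shifts belief forward by ~expected_step images, which is
       where knowledge of vehicle speed and image spacing would enter."""
    n = len(belief)
    idx = np.arange(n)
    predicted = np.zeros(n)
    for i, p in enumerate(belief):
        kernel = np.exp(-0.5 * ((idx - i - expected_step) / motion_sigma) ** 2)
        predicted += p * kernel / kernel.sum()
    posterior = predicted * similarity          # measurement update
    return posterior / posterior.sum()

# Toy usage: 9 database images, appearance suggests the vehicle is near image 5.
belief = np.full(9, 1.0 / 9)
similarity = np.array([0.1, 0.1, 0.2, 0.6, 0.9, 0.5, 0.2, 0.1, 0.1])
belief = topometric_update(belief, similarity)
print(belief.round(3))
```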

  13. Position Refinement • Recover the 6-DOF displacement between the database and query images • Database location + displacement = current global location • Online process: uses GPU-accelerated SIFT feature matching and RANSAC homography (an OpenCV sketch follows below) [Figure: matched SIFT features between database and query images]
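A sketch of the feature-matching step using OpenCV's CPU SIFT (requires an OpenCV build with SIFT), rather than the GPU implementation used on the vehicle. A synthetically shifted copy of a random texture stands in for the live query frame so the example runs on its own.

```python
# SIFT matching + RANSAC between a "database" frame and a synthetic "query" frame.
import cv2
import numpy as np

rng = np.random.default_rng(0)
db_img = cv2.GaussianBlur((rng.random((480, 640)) * 255).astype(np.uint8), (5, 5), 2)
shift = np.float32([[1, 0, 25], [0, 1, 10]])          # known 2D displacement
q_img = cv2.warpAffine(db_img, shift, (640, 480))     # stand-in query frame

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(db_img, None)
kp2, des2 = sift.detectAndCompute(q_img, None)

# Nearest-neighbor matching with Lowe's ratio test to discard ambiguous matches.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.75 * n.distance]

src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

# RANSAC rejects outlier correspondences while estimating the homography.
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
print("inliers:", int(inliers.sum()), "recovered shift:", H[0, 2], H[1, 2])
# With calibrated cameras, the same inlier correspondences feed an essential-matrix
# or PnP solve to recover the 6-DOF displacement added to the database pose.
```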

  14. Sensor Data Fusion • The image-matching solution may be noisy, wrong, or missing entirely • An EKF fuses camera data with cheap automotive sensors • Reduces noise, while vision bounds drift • The estimate is used for vehicle control (a minimal EKF sketch follows below) [Diagram: navigation cameras feed global localization and position refinement, which the EKF fuses with vehicle sensors and global position information; shows initialization, lock-on, loss of lock, lock reacquired, the vehicle position estimate and covariance, and the position-tagged database along the direction of travel]
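A minimal planar EKF sketch of this fusion step, assuming a simple velocity/yaw-rate motion model and a camera measurement of global (x, y) position; the noise values are assumptions for illustration, not the vehicle's tuning.

```python
# Minimal planar EKF: predict with odometry/gyro, correct with camera position
# fixes when a database match is available. Noise values are assumed.
import numpy as np

class PoseEKF:
    def __init__(self):
        self.x = np.zeros(3)                 # state: x, y, heading
        self.P = np.eye(3)                   # initial uncertainty

    def predict(self, v, omega, dt, q_v=0.05, q_w=0.01):
        x, y, th = self.x
        self.x = np.array([x + v * dt * np.cos(th),
                           y + v * dt * np.sin(th),
                           th + omega * dt])
        F = np.array([[1, 0, -v * dt * np.sin(th)],
                      [0, 1,  v * dt * np.cos(th)],
                      [0, 0, 1]])
        Q = np.diag([q_v * dt, q_v * dt, q_w * dt])
        self.P = F @ self.P @ F.T + Q

    def correct_with_camera(self, z_xy, r=0.25):
        """z_xy: global (x, y) from database pose + refined displacement."""
        H = np.array([[1.0, 0, 0], [0, 1.0, 0]])
        R = np.eye(2) * r
        innovation = z_xy - H @ self.x
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ innovation
        self.P = (np.eye(3) - K @ H) @ self.P

# Toy usage: drive straight at 2 m/s, with a camera fix once per second.
ekf = PoseEKF()
for k in range(300):
    ekf.predict(v=2.0, omega=0.0, dt=0.01)
    if k % 100 == 0:
        ekf.correct_with_camera(np.array([0.02 * k, 0.0]))
print(ekf.x.round(2))
```

When image matching fails (loss of lock), the update step is simply skipped and the filter coasts on the motion sensors until a match is reacquired, which is the behavior shown in the diagram above.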

  15. Sensor Data Fusion

  16. Vehicle Platform • NavLab 11: 2000 Jeep Wrangler • Throttle, brake, and steering actuators • Crossbow IMU, KVH fiber-optic yaw gyro, odometry • Computing • 5× Intel Core i7 M 620, 2 cores @ 2.67 GHz, 8 GB RAM • Command & control, vehicle state, obstacle detection, etc. • 1× Intel Core i7-2600K, 4 cores @ 3.4 GHz, 16 GB RAM • Nvidia GeForce GTX 580 Fermi • Structure-from-motion localization [Figure: vehicle photo with navigation camera, panorama camera, and collision-warning LIDAR labeled]

  17. References • Probabilistic Algorithms and the Interactive Museum Tour-Guide Robot MINERVA. S. Thrun, M. Beetz, M. Bennewitz, W. Burgard, A.B. Cremers, F. Dellaert, D. Fox, D. Haehnel, C. Rosenberg, N. Roy, J. Schulte, D. Schulz. International Journal of Robotics Research, 2000. • VisualSFM: A Visual Structure from Motion System. Changchang Wu. http://www.cs.washington.edu/homes/ccwu/vsfm/ • Real-Time Topometric Localization. Hernan Badino, Daniel Huber, Takeo Kanade. International Conference on Robotics and Automation, May 2012. • Semi-Autonomous Virtual Valet Parking. Arne Suppé, Luis Navarro-Serment, Aaron Steinfeld. AutomotiveUI 2010.
