
Welcome to the First Workshop on RGB-D: Advanced Reasoning with Depth Cameras!



Presentation Transcript


  1. Xiaofeng Ren: Intel Labs Seattle Dieter Fox: UW and Intel Labs Seattle Kurt Konolige: Willow Garage Jana Kosecka: George Mason University Welcome to the First Workshop on RGB-D: Advanced Reasoning with Depth Cameras!

  2. Workshop Schedule
  9:00am - 09:10am Welcome
  09:10am - 09:50am Overview and RGB-D Research at Intel Labs and UW. D. Fox and X. Ren; University of Washington, Intel Labs Seattle
  09:50am - 10:30am Invited Talk (Vision and Graphics). C. Theobalt; Max Planck Institute
  10:30am - 11:00am Semantic Parsing in Indoor and Outdoor Scenes. J. Kosecka; George Mason University
  11:00am - 11:30am Coffee Break
  11:30am - 11:50am 3D Pose Estimation, Tracking and Model Learning of Articulated Objects from Dense Depth Video using Projected Texture Stereo. J. Sturm, K. Konolige, C. Stachniss, W. Burgard; Univ. of Freiburg and Willow Garage
  11:50am - 12:10pm Learning Deformable Object Models for Mobile Robot Navigation using Depth Cameras and a Manipulation Robot. B. Frank, R. Schmedding, C. Stachniss, M. Teschner, W. Burgard; Univ. of Freiburg
  12:10pm - 12:30pm 3D Indoor Mapping for Micro-UAVs Using Hybrid Range Finders and Multi-Volume Occupancy Grids. W. Morris, I. Dryanovski, J. Xiao; City College of New York
  12:30pm - 01:40pm Lunch Break
  01:40pm - 02:20pm Invited Talk (Robotics and Vision). P. Newman; Oxford University
  02:20pm - 03:00pm 3D Modeling and Object Recognition at Willow Garage. R. Rusu, K. Konolige; Willow Garage
  03:00pm - 04:00pm Poster Session and Wrap-Up
  Fox / Ren: RGB-D at Intel Labs Seattle and Univ. of Washington

  3. RGB-D Overview and Work at UW and Intel Labs Seattle. Dieter Fox, Xiaofeng Ren; Intel Labs Seattle / University of Washington, Department of Computer Science & Engineering

  4. Outline • RGB-D: adding depth to color • Dense 3D mapping • Object recognition and modeling • Discussion

  5. 3D Scanning in Robotics. Panning 2D scanner, Velodyne, time-of-flight cameras, stereo. Still very expensive, substantial engineering effort, not dense.

  6. RGB-D: Recent Developments • Soon we’ll have cheap depth cameras with high resolution and accuracy (>640x480, 30 Hz) • Key industry drivers: gaming, entertainment • Two main techniques: • Structured light with stereo • Time of flight

  7. Hands Free Gaming. Microsoft Natal promo video

  8. RGB-D: Raw Data

  9. Outline • RGB-D: adding depth to color • Dense 3D mapping • Object recognition and modeling • Discussion

  10. RGB-D Mapping [Henry-Herbst-Krainin-Ren-F] • Visual odometry via frame-to-frame matching • Loop closure detection via 3D feature matching • Optimization via TORO, SBA
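The three stages on this slide (frame-to-frame odometry, loop closure, trajectory optimization) can be sketched in miniature. This is a hedged, illustrative toy in one dimension, not the talk's actual pipeline: the function names are my own, and the "optimizer" simply spreads the loop-closure drift evenly along the trajectory, a crude stand-in for a pose-graph optimizer such as TORO or SBA.

```python
def integrate_odometry(start, steps):
    """Chain relative frame-to-frame motions into absolute poses
    (the visual-odometry stage), here reduced to 1-D translations."""
    poses = [start]
    for d in steps:
        poses.append(poses[-1] + d)
    return poses

def close_loop(poses, target_end):
    """Naive loop closure: a recognized place tells us where the
    trajectory should end; distribute the accumulated drift evenly
    over all poses (illustrative stand-in for TORO/SBA)."""
    n = len(poses) - 1
    drift = poses[-1] - target_end
    return [p - drift * (i / n) for i, p in enumerate(poses)]

# Odometry drifts: it thinks we moved 4.0 units in total...
poses = integrate_odometry(0.0, [1.0, 1.0, 1.0, 1.0])
# ...but a loop-closure match says the true endpoint is 3.6.
corrected = close_loop(poses, target_end=3.6)
```

Real systems optimize 6-DoF poses over a graph of constraints; the even-drift spread above only conveys the idea that a single loop closure corrects the whole trajectory, not just the last pose.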

  11. Kitchen Sequence

  12. Visual Odometry: point-to-plane, point-to-point, point-to-edge • Standard point cloud ICP not robust enough • Limited FOV, lack of features for data association • Add sparse visual features (SIFT, Canny edges) • Improved data association, might fail in dark areas
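Two of the ICP error metrics named on this slide differ only in what residual they minimize per correspondence. A minimal sketch, with illustrative points and function names of my own (the slide gives no formulas):

```python
import math

def point_to_point(p, q):
    """Point-to-point residual: Euclidean distance between
    corresponding points p and q."""
    return math.dist(p, q)

def point_to_plane(p, q, n):
    """Point-to-plane residual: distance from p to the tangent plane
    at q with unit normal n. Allows sliding along flat surfaces, which
    typically makes ICP converge better on planar indoor scenes."""
    return abs(sum((pi - qi) * ni for pi, qi, ni in zip(p, q, n)))

# p is offset 1.0 along the plane and 0.5 off it; only the 0.5
# counts against the point-to-plane metric.
p, q, n = (1.0, 0.0, 0.5), (0.0, 0.0, 0.0), (0.0, 0.0, 1.0)
```

A full ICP iteration would sum these residuals (squared) over all correspondences and solve for the rigid transform minimizing them; the point-to-edge variant on the slide analogously measures distance to an edge feature.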

  13. RGB-D Mapping Algorithm

  14. 3D Mapping. Data processing: 4 frames / sec

  15. Intel Lab Flythrough

  16. Allen Center Flythrough

  17. Mapping Accuracy

  18. Surfel Representation. “Surface elements” – circular disks representing local surface patches. Introduced by the graphics community [Pfister ‘00], [Habbecke ‘07], [Weise ‘09]
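A surfel, as described on this slide, is just a small record per surface patch. The sketch below is illustrative only (field names, the weighted-average update, and the sample values are my own assumptions, not from the talk); it shows the common pattern of folding new depth measurements into a running average per surfel.

```python
from dataclasses import dataclass

@dataclass
class Surfel:
    position: tuple      # disk center in world coordinates
    normal: tuple        # unit surface normal of the disk
    radius: float        # disk radius (often grows with viewing distance)
    color: tuple         # running average of observed RGB
    weight: float = 1.0  # confidence: number of supporting measurements

    def merge(self, pos, col):
        """Fold a new measurement into the running position/color
        averages and bump the confidence weight."""
        w = self.weight
        self.position = tuple((w * a + b) / (w + 1)
                              for a, b in zip(self.position, pos))
        self.color = tuple((w * a + b) / (w + 1)
                           for a, b in zip(self.color, col))
        self.weight += 1.0

# One surfel, one new (slightly noisy) measurement of the same patch.
s = Surfel((0.0, 0.0, 1.0), (0.0, 0.0, 1.0), 0.01, (100, 100, 100))
s.merge((0.0, 0.0, 1.2), (120, 120, 120))
```

Compared to raw point clouds, a surfel map stays bounded in size: repeated observations of the same patch update one disk instead of accumulating points.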

  19. Surfel Representation

  20. Outline • RGB-D: adding depth to color • Dense 3D mapping • Object recognition and modeling • Discussion

  21. Toward a Robotic Object Database [Krainin-Henry-Lai-Ren-F] • Enable robots to autonomously learn new objects • Robot picks up objects and builds models of them • Models can be shared among robots • Models can contain metadata (where to find, how to grasp, material, what to do with it, …)

  22. Encoders for Object Modeling. Commonly used, but requires high accuracy, e.g. [Sato ‘97], [Kraft ’08]

  23. Articulated ICP for Arm Tracking

  24. Simultaneous Tracking and Object Modeling • Builds object model on-the-fly • Jointly tracks hand and object • ICP incorporates dense points, SIFT features, and color gradients
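Combining dense points with sparse SIFT/color-gradient matches, as this slide describes, usually means one objective with weighted terms per association type. A hedged sketch (the function, weights, and residual values are illustrative placeholders, not the talk's formulation):

```python
def joint_icp_cost(dense_residuals, sift_residuals,
                   w_dense=1.0, w_sift=2.0):
    """Weighted sum of squared residuals: dense geometric ICP terms
    plus sparse feature-match terms. Sparse matches are often weighted
    higher because they anchor data association where geometry alone
    is ambiguous (flat or featureless surfaces)."""
    return (w_dense * sum(r * r for r in dense_residuals)
            + w_sift * sum(r * r for r in sift_residuals))

# Two dense-point residuals and one SIFT-match residual (meters).
cost = joint_icp_cost([0.1, 0.2], [0.05])
```

An actual tracker would minimize this cost over the rigid (or articulated) transform at each frame, re-establishing correspondences between iterations.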

  25. Tracking Results

  26. Object Models

  27. Handling Multiple Grasps • Switching Kalman filter • Examining object • Moving to or from table • Grasping or releasing • Between grasps • Second grasp should be computed from partial model

  28. Object Recognition [Lai-Bo-Ren-F: RSS-09, IJRR-10]. 159 objects, 31 classes, 12,554 video frames. Shape-based segmentation.

  29. Early Results • Learn a local distance function for each object • Sparsification via regularization
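The two bullets on this slide pair naturally: a per-object distance function is a weighted combination of feature distances, and L1-style regularization drives uninformative weights to zero. A minimal sketch under those assumptions; the weight values, threshold, and function names are illustrative, not taken from the talk.

```python
def local_distance(x, y, w):
    """Weighted L1 distance between feature vectors x and y,
    with one learned weight per feature dimension."""
    return sum(wi * abs(a - b) for wi, a, b in zip(w, x, y))

def sparsify(w, lam):
    """Soft-thresholding, the proximal step of L1 regularization:
    weights below lam collapse to exactly zero, pruning features."""
    return [max(wi - lam, 0.0) for wi in w]

# The weakly-weighted middle feature is dropped entirely.
w = sparsify([0.9, 0.05, 0.4], lam=0.1)
```

The zeroed weights make recognition cheaper at test time, since pruned feature dimensions need not be computed at all.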

  30. Outline • RGB-D: adding depth to color • Dense 3D mapping • Object recognition and modeling • Discussion

  31. Conclusion • New breed of depth camera systems can have substantial impact on • mapping (3D, semantic, …) • navigation (collision avoidance, 3D path planning) • manipulation (grasping, object recognition) • human robot interaction (detect humans, gestures, …) • Currently mostly constrained to indoors, but outdoors possible too

  32. Some RGB-D Questions • Which problems become easy? • Gesture recognition? Grasping? Segmentation? 3D mapping? Object modeling? • Which problems become (more) tractable? • Dense 3D mapping? Object recognition? • What are the new research areas / opportunities generated by RGB-D? • Graphics, visualization, tele-presence • HRI, activity recognition

  33. More Questions • What’s the best way to combine shape and color? • depth just an additional dimension? • interest points, feature descriptors, segmentation • How to take advantage of geometric info? • on top of, next to, supports, … • Is depth always necessary? • vision often seems more efficient • can we use RGB-D to train fast RGB systems?

  34. Some RGB-D Questions • Hardware: What can we expect in the near future? • Real-time dense 3D reconstruction / mapping • Representation: planes, meshes, surfels, geometric primitives, texture, articulation • Registration: 3D points vs. visual features • Semantic mapping / object recognition • What does 3D add: interest points, feature descriptors, segmentation, spatial information • Humans • Detection, tracking, pose estimation • Gesture and activity recognition

  35. Brian Ferris, Peter Henry, Evan Herbst, Jonathan Ko, Michael Krainin, Kevin Lai, Cynthia Matuszek. Post-docs: Liefeng Bo, Marc Deisenroth. Intel research: Matthai Philipose, Xiaofeng Ren, Josh Smith. UW Robotics and State Estimation Lab / Intel Labs Seattle
