
Proximity Computations between Noisy Point Clouds using Robust Classification

1Jia Pan, 2Sachin Chitta, 1Dinesh Manocha. 1UNC Chapel Hill, 2Willow Garage. http://gamma.cs.unc.edu/POINTC/


Presentation Transcript


  1. Proximity Computations between Noisy Point Clouds using Robust Classification. 1Jia Pan, 2Sachin Chitta, 1Dinesh Manocha. 1UNC Chapel Hill, 2Willow Garage. http://gamma.cs.unc.edu/POINTC/

  2. Main Result • Proximity computation and collision detection algorithm between noisy point cloud data • Computes collision probability instead of simple yes-no result • Important for safety and feasibility of robotics applications

  3. Proximity and Collision Computations • Geometric reasoning with noisy point cloud data vs. mesh based representations • Integral part of motion planning and grasping algorithms • Contact computations for dynamic simulation

  4. Background

  5. Motion Planning: Assumptions • Motion planning has a long history (30+ years) • Assumptions: • Exact environment (mesh world) • Exact control • Exact localization • No joint limits or torque limits • No quality control • Output: • Collision-free path

  6. Environment Uncertainty • Uncertainty can be large due to sensor error, poor sampling, physical representations, etc. • Need to model uncertainty in order to improve the safety and robustness of motion planning • Relatively little prior work • Previous methods only consider 2D cases with specific assumptions on uncertainty

  7. Sensors • Robot uses sensors to compute a representation of the physical world • But sensors are not perfect or may not be very accurate….

  8. Robot Sensors: Data Collection Cameras: may have low resolution

  9. Robot Sensors: Data Collection • Laser Scanners: may have limited range

  10. Point Clouds Captured by Robot

  11. Recent Trend: Depth Cameras • PrimeSense • CamCube • SwissRanger 4000 • Kinect • PR2's sensors

  12. Kinect http://graphics.stanford.edu/~mdfisher/Kinect.html

  13. Kinect Reconstruction http://www.cs.washington.edu/ai/Mobile_Robotics/projects/rgbd-3d-mapping/ http://groups.csail.mit.edu/rrg/index.php?n=Main.VisualOdometryForGPS-DeniedFlight

  14. Uncertainty From RGB-D Sensors • Sensors may have low resolution → low resolution of point clouds • Kinect has relatively high resolution, but it may still not be enough for objects far away • Sensors may be influenced by noise, especially in outdoor environments → noise in point clouds • Sensors may have limited ranges (near range and far range) → unknown areas

  15. Handling Noisy Point Cloud • Planning, navigation and grasping • Scene reasoning • Noisy data • Real-time processing

  16. Related Work • Collision Detection for meshes • Fast and robust • Not designed for (noisy) point-clouds • Motion planning with environment uncertainty • 2D polygons • Vehicle planning

  17. Uncertainty Model of Point Cloud

  18. Errors in Point Clouds Discretization (sampling) error

  19. Errors in Point Clouds Position error
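Position error of this kind is often simulated as additive Gaussian noise on each point. A minimal sketch (an assumption for illustration: real depth sensors have anisotropic, depth-dependent error, so a single isotropic sigma is a simplification):

```python
import numpy as np

def add_position_noise(points, sigma, seed=None):
    """Perturb each point with isotropic Gaussian position error.

    Simplifying assumption: one isotropic sigma per sensor; real
    depth sensors have anisotropic, depth-dependent error.
    """
    rng = np.random.default_rng(seed)
    return points + rng.normal(scale=sigma, size=points.shape)

# Example: a 10x10 planar patch observed with ~5 mm position noise.
xs, ys = np.meshgrid(np.linspace(0.0, 1.0, 10), np.linspace(0.0, 1.0, 10))
true_points = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(100)])
noisy = add_position_noise(true_points, sigma=0.005, seed=0)
```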

  20. Point Cloud Collision Detection In-collision In-collision ?

  21. Point Cloud Collision Detection In-collision In-collision ?

  22. Point Cloud Collision Detection collision-free Collision-free ?

  23. Point Cloud Collision Detection Collision-free Collision-free?

  24. Handling Point Cloud Collision: Two Methods • Point Cloud → Mesh Reconstruction → Mesh Collision • Point Cloud → Point Cloud Collision (direct)

  25. Mesh Reconstruction => Collision • Point Cloud → Mesh Reconstruction → Mesh Collision • Reconstruction is more difficult than collision detection • Why solve an easier problem by conquering a more difficult one?

  26. Mesh Reconstruction => Collision • The reconstruction process is not robust, and is sensitive to noise and high-order features • Error in the reconstructed result can be amplified by subsequent collision checking • The reconstruction process is slow (a few seconds) • The final result is a YES/NO answer, which is sensitive to noise

  27. Use Ideas from Classification • Two methods to classify two sets (machine learning) • Generative model • First estimate the joint distribution (more difficult!) • Then compute the classifier • Discriminative function • Directly compute the classifier • If we only need to classify, a discriminative function is usually the better choice

  28. Our Solution • Return to the basic definition of collision-free • Two objects are collision-free if they are separable by a continuous surface, and in-collision when no such surface exists

  29. Classification-based Collision Detection • Find a separating surface that separates two points clouds as much as possible

  30. Collision Detection based on Robust Classification • We compute the optimal (i.e. minimize the separating error) separating surface using a SVM-like algorithm • Use supervised machine learning methods for geometric classification • Different from standard SVM: each training data point has noise – corresponds to robust classification in machine learning
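The slides do not spell out the SVM-like optimization, so the sketch below uses a deliberately simplified stand-in: a plane through the midpoint of the two cloud means (a nearest-mean classifier), with the separating error measured as the fraction of misclassified points.

```python
import numpy as np

def separating_plane(cloud_a, cloud_b):
    """Plane w @ x + b = 0 between two clouds: normal along the line
    between the cloud means, passing through their midpoint.
    (A nearest-mean stand-in for the paper's SVM-like optimization.)"""
    mu_a, mu_b = cloud_a.mean(axis=0), cloud_b.mean(axis=0)
    w = mu_b - mu_a
    w = w / np.linalg.norm(w)
    b = -w @ ((mu_a + mu_b) / 2.0)
    return w, b  # cloud A should fall on the negative side, B on the positive

def separating_error(cloud_a, cloud_b, w, b):
    """Fraction of points on the wrong side of the plane."""
    wrong = np.sum(cloud_a @ w + b > 0) + np.sum(cloud_b @ w + b < 0)
    return wrong / (len(cloud_a) + len(cloud_b))
```

When the clouds are well separated, the error is 0 and the plane certifies collision-free; when they interpenetrate, no plane achieves zero error, which is the signal the probabilistic test builds on.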

  31. Robust Classification • Robust classification is aware of per-point noise, unlike standard SVM (figure comparison)

  32. Compute the Separating Surface

  33. Per-point Collision Probability • Collision probability: the probability that one point is on the wrong side of separating surface. • Robust classification computes collision probability for each single point sample
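The per-point probability has a closed form under a common simplifying assumption (ours, for illustration): if a point's position noise is isotropic Gaussian with standard deviation sigma, its signed distance to a separating plane is one-dimensional Gaussian, so the chance of landing on the wrong side reduces to a normal CDF.

```python
import math
import numpy as np

def wrong_side_probability(mean_point, sigma, w, b, correct_sign):
    """Probability that a point with mean `mean_point` and isotropic
    Gaussian noise N(0, sigma^2 I) lies on the wrong side of the
    plane w @ x + b = 0 (with |w| = 1).

    The noisy point's signed distance to the plane is Gaussian with
    mean d and std sigma, so this is a 1-D normal CDF.
    """
    d = correct_sign * (w @ mean_point + b)  # > 0: mean on the correct side
    return 0.5 * (1.0 + math.erf(-d / (sigma * math.sqrt(2.0))))
```

This reproduces the three collision cases discussed later: a point deep on the wrong side gives a probability near 1, a point on the surface gives 0.5, and a distant point gives near 0.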

  34. Probabilistic Collision between Two Objects • For each object • Cluster the points and only keep one point in each cluster: compute collision probability for independent points • Overall object collision probability
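Under the independence assumption that the clustering step is meant to justify, the per-point probabilities combine into an object-level probability as 1 - ∏(1 - p_i). A sketch, with a simple one-point-per-voxel stand-in for the clustering:

```python
import numpy as np

def voxel_representatives(points, voxel):
    """Keep one point per voxel cell: a crude stand-in for the
    clustering step (real clustering would pick representatives
    more carefully)."""
    keys = np.floor(points / voxel).astype(int)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(idx)]

def object_collision_probability(per_point_probs):
    """Combine per-point collision probabilities assuming the cluster
    representatives are independent: P = 1 - prod(1 - p_i)."""
    p = np.asarray(per_point_probs, dtype=float)
    return float(1.0 - np.prod(1.0 - p))
```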

  35. Three Collision Cases • Deep collision: collision probability near 1 (easy) • In-contact: collision probability near 0.5 (difficult: small noise can flip the yes/no answer) • Large distance: collision probability near 0 (easy)

  36. Results: Small Noise • Configurations fall into deep collision, in-contact (difficult), and large-distance regions • Very few configurations are in the difficult region

  37. Results: Large Noise • Configurations fall into deep collision, in-contact (difficult), and large-distance regions • More configurations are in the difficult region!

  38. PR2 Robot Sensor Results • Configurations fall into deep collision, in-contact (difficult), and large-distance regions • Configurations at the same distance to an obstacle can show a wide spread of collision probabilities

  39. Range of Collision Probability • The collision probability's wide range makes it a more complete description of the collision state than a distance or yes/no answer • Important for grasping or planning in constrained environments

  40. Kinect Data Office data from Peter Henry, Dieter Fox @ RSE-lab UW

  41. Result on Kinect Data

  42. BVH Hierarchy Acceleration • Bounding volume hierarchies are widely used to accelerate mesh collision detection • Basic idea: decompose objects into many cells and cull collision tests between cells that are far away
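The culling test at the heart of a BVH is cheap: two groups of points can only collide if their bounding volumes overlap. A minimal axis-aligned version, where the inflation margin is an assumption standing in for a bound on the position noise (e.g. a few sigma):

```python
import numpy as np

def aabb(points):
    """Axis-aligned bounding box of a point set: (lower, upper) corners."""
    return points.min(axis=0), points.max(axis=0)

def aabbs_overlap(box_a, box_b, margin=0.0):
    """Overlap test between two boxes, with each comparison inflated
    by `margin` so culling stays conservative under position noise
    (the margin value is an assumption, e.g. a few sigma)."""
    (lo_a, hi_a), (lo_b, hi_b) = box_a, box_b
    return bool(np.all(lo_a - margin <= hi_b) and np.all(lo_b - margin <= hi_a))
```

Only cluster pairs whose inflated boxes overlap need the expensive probabilistic test; disjoint pairs are culled.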

  43. Application to Motion Planning • Use overall collision probability for the objects as a cost in motion planning algorithms to compute the trajectory with minimum collision probability
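One way to turn per-configuration collision probabilities into a trajectory cost (the slides do not fix a formula; the independence assumption across waypoints below is ours, for illustration):

```python
def trajectory_cost(collision_probs):
    """Probability that a trajectory collides somewhere, assuming the
    per-configuration collision events are independent (a simplifying
    assumption; the slides only ask for a minimum-probability path)."""
    survive = 1.0
    for p in collision_probs:
        survive *= 1.0 - p
    return 1.0 - survive

def best_trajectory(candidates):
    """Pick the candidate (a list of per-configuration collision
    probabilities) with the lowest overall collision probability."""
    return min(candidates, key=trajectory_cost)
```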

  44. Other Applications • The per-point collision probability is more useful • Provides finer control in terms of handling environment uncertainty • Can use work space information to guide the planning procedure in order to avoid collision

  45. Conclusions • A robust proximity computation and collision detection algorithm for noisy point cloud data • Problem reduced to robust classification • Initial results on point-cloud data from PR2 sensors

  46. Future Work • Currently we directly use point clouds, which can only model space with/without obstacles • Due to sensor range limits, part of the space is unknown • We will apply our algorithm to data structures that can encode 'unknown' space, such as the OctoMap in ROS • Also more useful for sensors with dense resolution, like the Kinect

  47. Future Work • Currently implementation is on static models • Extend to dynamic environments • New objects added or deleted • Handle deformable objects and update sensor uncertainty • SVM has incremental variations that can handle dynamic data

  48. Acknowledgments • National Science Foundation • Army Research Office • Willow Garage

  49. Thanks
