
System Integration and Experimental Results

Visual Perception and Robotic Manipulation (Springer Tracts in Advanced Robotics), Chapter 7: System Integration and Experimental Results. Geoffrey Taylor and Lindsay Kleeman, Intelligent Robotics Research Centre (IRRC), Department of Electrical and Computer Systems Engineering, Monash University, Australia.





Presentation Transcript


  1. Visual Perception and Robotic Manipulation (Springer Tracts in Advanced Robotics), Chapter 7: System Integration and Experimental Results. Intelligent Robotics Research Centre (IRRC), Department of Electrical and Computer Systems Engineering, Monash University, Australia. Geoffrey Taylor, Lindsay Kleeman.

  2. Overview • Stereoscopic light stripe scanning • Object modelling and classification • Multi-cue tracking (edges, texture, colour) • Visual servoing • Real-world experimental manipulation tasks with an upper-torso humanoid robot

  3. Motivation • To enable a humanoid robot to perform manipulation tasks in a domestic environment: • A domestic helper for the elderly and disabled • Key challenges: • Ad hoc tasks with unknown objects • Robustness to measurement noise/interference • Robustness to calibration errors • Interaction to resolve ambiguities • Real-time operation

  4. Architecture

  5. Light Stripe Scanning • Triangulation-based depth measurement. [Figure: stripe generator and camera separated by baseline B measure depth D to the scanned object.]
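The triangulation on this slide can be sketched as a ray–plane intersection. The geometry and symbol names below are illustrative assumptions, not the book's notation: the camera sits at the origin looking along +Z, the stripe generator sits a baseline away along X, and the light plane leaves it at a known angle.

```python
import math

def stripe_depth(x_img, f, baseline, theta):
    """Depth of a stripe point from single-camera triangulation (sketch).

    x_img:    image x coordinate of the stripe (pixels)
    f:        focal length (pixels)
    baseline: camera-to-stripe-generator separation
    theta:    light-plane angle measured from the baseline
    """
    cot = math.cos(theta) / math.sin(theta)
    # Camera ray:  X = (x_img / f) * Z
    # Light plane: X = baseline - Z * cot
    # Equating the two gives the depth Z.
    return baseline / (x_img / f + cot)
```

With the plane perpendicular to the baseline (theta = 90°) this reduces to the familiar `depth = baseline * f / x_img` form.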

  6. Stereo Stripe Scanner • Three independent measurements provide redundancy for validation. [Figure: scanned object point X projects to xL and xR on the left and right image planes; the laser diode at angle θ sits between left camera L and right camera R, separated by baseline 2b.]
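The redundancy idea can be sketched as a consistency check: depth reconstructed from stereo disparity must agree with the depth predicted by the known light-plane geometry, otherwise the measurement is rejected as a reflection or cross-talk. All names and the tolerance are assumptions for illustration.

```python
def fuse_stripe_measurement(xL, xR, f, baseline, z_plane, tol=0.02):
    """Validate a stripe point using redundant measurements (sketch).

    xL, xR:   stripe x coordinates in the left/right images (pixels)
    f:        focal length (pixels)
    baseline: stereo baseline between the two cameras
    z_plane:  depth predicted from the calibrated light-plane geometry
    Returns a fused depth, or None if the cues disagree.
    """
    disparity = xL - xR
    if disparity <= 0:
        return None                      # behind the cameras: invalid
    z_stereo = f * baseline / disparity  # depth from stereo triangulation
    if abs(z_stereo - z_plane) > tol:
        return None                      # likely reflection / cross-talk
    return 0.5 * (z_stereo + z_plane)    # naive average of the two depths
```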

  7. Reflections/Cross Talk

  8. Single Camera Result [Figures: output of a single camera scanner vs. the robust stereoscopic scanner.]

  9. 3D Object Modelling • Want to find objects with minimal prior knowledge. • Use geometric primitives to represent objects. • Segment 3D scan based on local surface shape. [Figure: surface type classification.]
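Classifying local surface shape is commonly done from the signs of the mean (H) and Gaussian (K) curvatures, the standard HK classification. The sketch below is a simplified version of that scheme (sign conventions vary between texts, and the book's own classifier may differ):

```python
def surface_type(H, K, eps=1e-6):
    """Classify local surface shape from mean (H) and Gaussian (K)
    curvature signs -- a simplified HK classification sketch."""
    h = 0 if abs(H) < eps else (1 if H > 0 else -1)
    k = 0 if abs(K) < eps else (1 if K > 0 else -1)
    if h == 0 and k == 0:
        return "plane"
    if k == 0:                       # one principal curvature is zero
        return "ridge" if h < 0 else "valley"
    if k > 0:                        # both curvatures share a sign
        return "peak" if h < 0 else "pit"
    return "saddle"                  # curvatures of opposite sign
```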

  10. Segmentation • Fit plane, sphere, cylinder and cone to segments. • Merge segments to improve fit of primitives. [Figures: raw scan, surface type classification, final segmentation, geometric models.]
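The simplest of the four primitive fits, a least-squares plane, can be sketched in a few lines. This solves the normal equations for z = ax + by + c by Cramer's rule; the book fits full geometric primitives (sphere, cylinder, cone) and merges segments by comparing fit residuals, which this sketch omits.

```python
def fit_plane(points):
    """Least-squares fit of z = a*x + b*y + c to (x, y, z) points.
    Illustrative sketch only; assumes the plane is not vertical."""
    n = len(points)
    sx = sum(p[0] for p in points); sy = sum(p[1] for p in points)
    sz = sum(p[2] for p in points)
    sxx = sum(p[0] * p[0] for p in points)
    syy = sum(p[1] * p[1] for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    sxz = sum(p[0] * p[2] for p in points)
    syz = sum(p[1] * p[2] for p in points)
    # Normal equations A [a b c]^T = rhs, solved by Cramer's rule.
    A = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    rhs = [sxz, syz, sz]

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    d = det3(A)
    coeffs = []
    for i in range(3):
        Ai = [row[:] for row in A]
        for r in range(3):
            Ai[r][i] = rhs[r]
        coeffs.append(det3(Ai) / d)
    return tuple(coeffs)  # (a, b, c)
```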

  11. Object Classification • Scene described by adjacency graph of primitives. • Objects described by known sub-graphs.
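The adjacency-graph idea can be sketched as labelled edge sets: the scene graph records which primitive types touch, and an object model is a required sub-graph. The cup-as-cylinder-adjacent-to-plane model below is a made-up example; a real matcher needs proper subgraph isomorphism, while this sketch only tests edge-set containment.

```python
def edge(a, b):
    """Undirected adjacency between two primitive type labels."""
    return tuple(sorted((a, b)))

def model_present(scene_adjacencies, model_adjacencies):
    """Naive containment test: every labelled adjacency the model
    requires must appear in the scene adjacency graph."""
    return model_adjacencies <= scene_adjacencies

# Hypothetical scene: a cylindrical cup and a ball on a table plane.
scene = {edge("cylinder", "plane"), edge("sphere", "plane")}
cup_model = {edge("cylinder", "plane")}   # assumed model sub-graph
```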

  12. Modelling Results • Box, ball and cup. [Figures: raw colour/range scan; textured polygonal models.]

  13. Multi-Cue Tracking • Individual cues are only robust under limited conditions: • Edges fail in low contrast, distracted by texture • Textures not always available, distracted by reflections • Colour gives only partial pose • Fusion of multiple cues provides robust tracking in unpredictable conditions.

  14. Tracking Framework • 3D model-based tracking: objects are modelled from light stripe range data. • Colour (selector), edges and texture (trackers) are measured simultaneously in every frame. • Measurements are fused in an Extended Kalman filter: • Cues interact with the state through measurement models • Individual cues need not recover the complete pose • Extensible to any cues/cameras for which a measurement model exists.
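The fusion step can be sketched with a scalar EKF measurement update applied sequentially per cue: each cue supplies its own predicted measurement and Jacobian, so no single cue has to observe the whole pose. This is a one-dimensional toy, not the book's full 6-DOF filter, and all values are illustrative.

```python
def kalman_update(x, P, z, h, Hj, R):
    """One scalar (E)KF measurement update.

    x, P: state estimate and its variance
    z:    measurement;  h: predicted measurement h(x)
    Hj:   measurement Jacobian dh/dx;  R: measurement noise variance
    """
    y = z - h                    # innovation
    S = Hj * P * Hj + R          # innovation covariance
    K = P * Hj / S               # Kalman gain
    return x + K * y, (1 - K * Hj) * P

# Fusing two cues sequentially in one frame (toy numbers):
x, P = 0.0, 1.0                          # prior state and variance
x, P = kalman_update(x, P, 1.0, x, 1.0, 0.5)   # e.g. an edge cue
x, P = kalman_update(x, P, 0.8, x, 1.0, 0.5)   # e.g. a texture cue
```

Note how the second update starts from the first cue's posterior, and the variance P shrinks as each cue is absorbed.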

  15. Colour Cues • Filter created from a colour histogram in the ROI: • Foreground colours promoted in the histogram • Background colours suppressed in the histogram [Figures: captured image used to generate the filter; output of the resulting filter.]
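Promoting foreground colours while suppressing background colours can be sketched as a ratio histogram: each bin's weight is its foreground count divided by its background count. The bin labels and threshold below are illustrative, not the book's parameters.

```python
def ratio_histogram(fg_counts, bg_counts, eps=1.0):
    """Per-bin filter weights: colours frequent in the foreground ROI
    are promoted; colours also frequent in the background are
    suppressed (ratio-histogram sketch)."""
    return {c: fg_counts.get(c, 0) / (bg_counts.get(c, 0) + eps)
            for c in fg_counts}

def filter_pixel(colour, weights, thresh=1.0):
    """Classify a pixel as foreground if its bin weight is high."""
    return weights.get(colour, 0.0) >= thresh

# Toy histograms over coarse colour bins:
weights = ratio_histogram({"yellow": 50, "grey": 5}, {"grey": 40})
```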

  16. Edge Cues [Pipeline: predicted projected edges → Sobel mask directional edges → combined with colour to get silhouette edges → fitted edges.]
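The Sobel stage of that pipeline can be sketched directly: the two 3×3 masks give horizontal and vertical gradient responses, from which edge strength and direction follow. This is the standard Sobel operator, not code from the book.

```python
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_at(img, y, x):
    """Horizontal and vertical Sobel responses at pixel (y, x) of a
    2-D grey-level image given as a list of rows (interior pixels)."""
    gx = sum(SOBEL_X[j][i] * img[y - 1 + j][x - 1 + i]
             for j in range(3) for i in range(3))
    gy = sum(SOBEL_Y[j][i] * img[y - 1 + j][x - 1 + i]
             for j in range(3) for i in range(3))
    return gx, gy
```

A vertical step edge yields a strong gx and zero gy, which is how the directional responses separate silhouette edges by orientation.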

  17. Texture Cues [Pipeline: rendered prediction → feature detector → matched templates → outlier rejection → final matched features.]
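Template matching and the outlier-rejection stage can be sketched with normalised cross-correlation (NCC) over flattened patches, keeping only matches above a score threshold. The threshold value is an assumption; the book's matcher may use a different score or rejection rule.

```python
import math

def ncc(a, b):
    """Normalised cross-correlation between two equal-length patches
    (flattened): +1 is a perfect match, -1 a contrast-inverted one."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (math.sqrt(sum((x - ma) ** 2 for x in a))
         * math.sqrt(sum((y - mb) ** 2 for y in b)))
    return num / den

def reject_outliers(scores, thresh=0.8):
    """Keep indices of matches whose correlation clears a threshold --
    a stand-in for the slide's outlier-rejection stage."""
    return [i for i, s in enumerate(scores) if s >= thresh]
```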

  18. Tracking Result

  19. Visual Servoing • Position-based 3D visual servoing (IROS 2004). • Fusion of visual and kinematic measurements.

  20. Visual Servoing • 6D pose of hand estimated using extended Kalman filter with visual and kinematic measurements. • State vector also includes hand-eye transformation and camera model parameters for calibration.
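At its core, position-based servoing drives the hand with a velocity proportional to the Cartesian pose error. The proportional law below is a deliberately minimal sketch; the book's controller is richer, with the pose estimated by the EKF described above and calibration parameters in the state.

```python
def servo_velocity(pose_est, pose_goal, gain=0.5):
    """Proportional position-based servo law (sketch): command a
    velocity proportional to the pose error.  pose_est/pose_goal are
    Cartesian coordinate lists; gain is an illustrative value."""
    return [gain * (g - p) for p, g in zip(pose_est, pose_goal)]
```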

  21. Grasping Task • Grasp a yellow box without prior knowledge of objects in the scene.

  22. Grasping Task

  23. Pouring Task • Pour the contents of a cup into a bowl.

  24. Pouring Task

  25. Smell Experiment • Fusion of vision, smell and airflow sensing to locate and grasp a cup containing ethanol.

  26. Summary • Integration of stereoscopic light stripe sensing, geometric object modelling, multi-cue tracking and visual servoing allows robot to perform ad hoc tasks with unknown objects. • Suggested directions for future research: • Integrate tactile and force sensing • Cooperative visual servoing of both arms • Interact with objects to learn and refine models • Verbal and gestural human-machine interaction
