
Manipulation Under Uncertainty (or: Executing Planned Grasps Robustly)



Presentation Transcript


  1. Manipulation Under Uncertainty (or: Executing Planned Grasps Robustly)
  Kaijen Hsiao, Tomás Lozano-Pérez, Leslie Kaelbling
  Computer Science and Artificial Intelligence Lab, MIT
  NEMS 2008

  2. Manipulation Planning
  If you know all shapes and positions exactly, you can generate a trajectory that will work.

  3. Even Small Uncertainty Can Kill

  4. Moderate Uncertainty (not groping blindly)
  • Initial conditions (ultimately from vision):
    • Object shape is roughly known (contacted vertices should be within ~1 cm of their actual positions)
    • Object is on the table and its pose (x, y, rotation) is roughly known (center-of-mass std ~5 cm, 30 deg)
  • Online sensing:
    • robot proprioception
    • tactile sensors on the fingers/hand
  • Planned/demonstrated trajectories (that would work under zero uncertainty) are given

  5. Model Uncertainty Explicitly
  • "Belief state": a probability distribution over positions of the object relative to the robot
  • Use online sensing to update the belief state throughout manipulation (state estimation, SE)
  • Select manipulation actions based on the belief state (policy π)
  [Diagram: a controller loop in which the policy maps the belief to an action, the environment returns sensing, and SE updates the belief.]
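
As a rough illustration of this sense-estimate-act loop, here is a minimal Python sketch; the StateEstimator, Policy, and Environment interfaces are hypothetical stand-ins invented for illustration, not the authors' implementation:

```python
# Minimal sketch of the belief-state control loop on slide 5.
# StateEstimator, Policy, and Environment are hypothetical interfaces,
# not the authors' actual code.

def control_loop(state_estimator, policy, environment, max_steps=100):
    """Repeatedly act on the belief (pi) and update it from sensing (SE)."""
    belief = state_estimator.initial_belief()      # prior over object pose
    for _ in range(max_steps):
        action = policy.select_action(belief)      # pi: belief -> action
        observation = environment.execute(action)  # proprioception + touch
        belief = state_estimator.update(belief, action, observation)  # SE
        if policy.is_done(belief):
            break
    return belief
```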

  6. State Estimation
  • Transition model: how robot actions affect the state
    • Do we move the object during grasp execution? (Currently, any contact spreads out the belief state somewhat.)
  • Observation model: P(sensor input | state)
    • How consistent are the various object positions with the current sensory input (robot pose and touch)?
  • Bayes' rule combines the two to update the belief
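
For concreteness, a minimal sketch of the Bayes' rule update over a discretized set of candidate object poses; the discretization and the transition/observation model callables are assumptions made for illustration:

```python
import numpy as np

def bayes_update(belief, action, observation, transition_prob, observation_prob, poses):
    """One Bayes-filter step over a discrete set of candidate object poses.

    belief:           array of probabilities, one per pose in `poses`
    transition_prob:  P(pose_new | pose_old, action); spreads belief on contact
    observation_prob: P(observation | pose); consistency with robot pose + touch
    """
    # Prediction: push the belief through the transition model.
    predicted = np.array([
        sum(transition_prob(p_new, p_old, action) * b
            for p_old, b in zip(poses, belief))
        for p_new in poses
    ])
    # Correction: weight each pose by how well it explains the observation.
    posterior = np.array([observation_prob(observation, p) for p in poses]) * predicted
    return posterior / posterior.sum()  # Bayes' rule normalization
```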

  7. Control: Three Approaches
  • Formulate as a POMDP and solve for the optimal policy
    • Continuous, multi-dimensional state, action, and observation spaces: wildly intractable
  • Find the most likely state, plan a trajectory, execute it
    • Bad if the rest of the execution is open loop
    • Maybe acceptable if replanning is continuous, but that is too slow at execution time
    • Will not select actions to gain information
  • Our approach: define new robust primitives, use the information state to select a plan, then execute

  8. Robust Motion Primitive
  Move-until(goal, condition): repeat until the belief-state condition is satisfied:
  • Assume the object is at its most likely location
  • Execute a guarded move to the object-relative goal
  • If contact is made:
    • Undo the last motion
    • Update the belief state
  Termination conditions:
  • Claims success: the robot believes, with high probability, that it is near the object-relative goal
  • Claims failure: some number of attempts have not achieved the belief condition
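
A sketch of the move-until primitive described above; the robot/estimator method names (guarded_move, undo_last_motion, most_likely_pose, update) are hypothetical placeholders, not actual API names:

```python
def move_until(goal, condition, belief, robot, estimator, max_attempts=10):
    """Retry a guarded move toward an object-relative goal until the
    belief-state condition is satisfied, or give up after max_attempts."""
    for _ in range(max_attempts):
        if condition(belief):
            return belief, True                    # claims success
        pose = estimator.most_likely_pose(belief)  # assume the most likely pose
        result = robot.guarded_move(goal.in_frame(pose))
        if result.contact:
            robot.undo_last_motion()               # back off from the contact
            belief = estimator.update(belief, result.motion, result.contact)
    return belief, False                           # claims failure
```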

  9. Robust primitive: the most likely robot-relative object position vs. where the object actually is

  10. Initial belief state (X, Y, theta)

  11. Summed over theta (easier to visualize)

  12. Tried to move down; finger hit corner

  13. Probability of observation | location

  14. Updated belief

  15. Re-centered around mean

  16. Trying again with the new belief: back up, then try again

  17. Executing a Trajectory
  • Given a sequence of waypoints in a trajectory
  • Attempt to execute each one robustly using move-until (see the sketch below)
  • So now we can try to close the gripper on the box:
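
A minimal sketch of waypoint execution in terms of the hypothetical move_until primitive above; near(waypoint) is an assumed helper that builds the "near this object-relative waypoint with high probability" condition:

```python
def execute_trajectory(waypoints, belief, robot, estimator, near):
    """Attempt each object-relative waypoint robustly with move_until."""
    for waypoint in waypoints:
        belief, ok = move_until(waypoint, near(waypoint), belief, robot, estimator)
        if not ok:
            return belief, False   # a waypoint could not be reached confidently
    return belief, True            # ready to close the gripper on the object
```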

  18. Final state and observation [figure panels: observation probabilities, grasp]

  19. Updated belief state: Success! Goal: variance < 1 cm x, 15 cm y, 6 deg theta

  20. What if Y coord of grasp matters?

  21. Need explicit information gathering

  22. Use the Variance of the Belief to Select a Trajectory
  If this is your start belief, just run the grasp trajectory.
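
A hand-written condition like the ones the system currently uses might simply threshold the belief's spread in (x, y, theta); this sketch and its tolerance values (echoing the goal on slide 19) are illustrative assumptions:

```python
import numpy as np

def can_grasp_directly(pose_samples, tol_x=0.01, tol_y=0.15, tol_theta=np.radians(6)):
    """Return True if the belief is tight enough to run the grasp trajectory
    directly; otherwise an information-gathering trajectory is needed.

    pose_samples: (N, 3) array of equally weighted (x, y, theta) hypotheses.
    Note: theta spread is computed as a plain standard deviation here,
    ignoring angle wrap-around, which is reasonable for small uncertainties.
    """
    std_x, std_y, std_theta = pose_samples.std(axis=0)
    return std_x < tol_x and std_y < tol_y and std_theta < tol_theta
```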

  23. The Approach
  [Block diagram components: trajectories (grasp, poke, …), current belief, strategy selector, most likely state, policy, relative motion command generator, robot commands, world, sensor observations, belief update.]

  24. Strategy Selector
  • Planner to automatically pick good strategies based on start uncertainties and goals
  • Simulate all particles forward using the selected robot movements, including tipping probabilities (tipping = failure)
  • Group into qualitatively similar outcomes
  • Use forward search to select trajectories and information-gathering actions (see the sketch below)
  • Currently: hand-written conditions on the belief state
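
A rough sketch of the particle-forward-simulation idea; simulate_trajectory and the outcome labels ("grasped", "tipped", ...) are assumptions standing in for the actual planner:

```python
from collections import defaultdict

def evaluate_strategy(strategy, particles, simulate_trajectory):
    """Simulate every particle (pose hypothesis) forward under one candidate
    trajectory and tally the qualitative outcomes (tipping counts as failure)."""
    outcomes = defaultdict(float)
    weight = 1.0 / len(particles)
    for pose in particles:
        outcomes[simulate_trajectory(strategy, pose)] += weight
    return dict(outcomes)

def select_strategy(strategies, particles, simulate_trajectory):
    """Pick the candidate trajectory with the highest success probability."""
    return max(strategies,
               key=lambda s: evaluate_strategy(s, particles,
                                               simulate_trajectory).get("grasped", 0.0))
```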

  25. Grasping a Brita Pitcher
  Target grasp: put one finger through the handle and grasp.

  26. Belief-Based Controller with Two Info-Grasps

  27. Brita Results (increasing uncertainty)

  28. Related Work
  • Grasp planning without regard to uncertainty, which can be used as input to this research (Lozano-Pérez et al. 1992; Saxena et al. 2008)
  • Finding a fixed trajectory that is likely to succeed under uncertainty (Alterovitz et al. 2007; Burns and Brock 2007; Melchior and Simmons 2007; Prentice and Roy 2007)
  • Visual servoing (a large body of work)
  • Using tactile sensors to precisely locate the object before grasping (Petrovskaya et al. 2006)
  • Regrasping to find stable grasp positions (Platt, Fagg, and Grupen 2002)
  • POMDPs for grasping (Hsiao et al. 2007)

  29. Current Work
  • Real-robot results (7-DOF Barrett Arm/Hand and Willow Garage PR2)
  • Automatic strategy selection

  30. Key Ideas
  Belief-based strategy:
  • Maintain a belief state (updated based on actions and observations)
  • Express your actions relative to the current best state estimate
  • Choose strategies based on higher-order properties of your belief state (variance, bimodality, etc.)

  31. Acknowledgements
  This material is based upon work supported by the National Science Foundation under Grant No. 0712012. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

  32. The End.

  33. Box Results
  Goal: within 1 cm in x, 1 cm in y, and 6 degrees in theta
  Object uncertainty: standard deviations of 5 cm in x, 5 cm in y, and 30 degrees in theta
  Mean-state controller with info-grasp: 120/122 trials met the goal (98.4%)

  34. Cup Results
  Goal: within 1 cm in x and 1 cm in y
  Uncertainty std      Met goal
  1 cm, 30 deg         150/152 (98.7%)
  3 cm, 30 deg         62/66 (93.9%)
  5 cm, 30 deg         36/40 (90.0%)
