
PEG-IN-HOLE USING DYNAMIC MOVEMENT PRIMITIVES


Presentation Transcript


  1. PEG-IN-HOLE USING DYNAMIC MOVEMENT PRIMITIVES Fares J. Abu-Dakka, Bojan Nemec, Aleš Ude Department of Automatics, Biocybernetics, and Robotics 13-Sep-12

  2. Contents • Keywords: Peg-In-Hole, nonlinear dynamic systems, robot learning • PiH background • Force control for PiH • PiH & DMP learning • Experimental results • Conclusion

  3. PiH background • PiH is a classical assembly problem. • PiH requires both position and force control. • Force control can be achieved by: • Passive approaches • Active approaches

  4. PiH background • Passive approaches • Are needed to ease the insertion process and to reduce contact forces with the surface. • A Remote Center of Compliance (RCC) is used. • The RCC introduces low lateral and rotational stiffness in the grasping mechanism. • Active approaches • Require force-torque sensing. • While passive approaches provide a fixed RCC, active approaches can locate it arbitrarily.

  5. Force control for PiH • The force error is computed from the measured and desired force signals. • The commanded end-effector velocity combines a resolved velocity with force feedback: ẋ_c = S_v ẋ_d + S_f K_f (F_m − F_d), where S_v and S_f are the velocity- and force-selection matrices, F_m − F_d is the measured-minus-desired force error, and the force gain matrix K_f is related to the stiffness matrix.
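To make the control law above concrete, here is a minimal sketch of one velocity-resolved hybrid control step in Python. The function name hybrid_velocity_command, the overall structure, and all numeric gains are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def hybrid_velocity_command(v_nominal, f_measured, f_desired, S_f, K_f):
    """One velocity-resolved hybrid control step (illustrative sketch).

    v_nominal  : resolved task-space velocity for the position-controlled axes
    f_measured : wrist force/torque sensor reading
    f_desired  : desired contact force
    S_f        : diagonal force-selection matrix (1 on force-controlled axes)
    K_f        : force gain matrix (related to the stiffness matrix)
    """
    S_v = np.eye(len(v_nominal)) - S_f            # velocity-selection matrix
    force_error = f_measured - f_desired          # error as labelled on the slide
    # Sign conventions for the force term depend on the sensor and tool frames.
    return S_v @ v_nominal + S_f @ (K_f @ force_error)

# Example: force control along z, velocity control in x and y (illustrative values).
S_f = np.diag([0.0, 0.0, 1.0])
K_f = 0.002 * np.eye(3)
v_cmd = hybrid_velocity_command(np.array([0.05, 0.0, 0.0]),
                                np.array([0.0, 0.0, -8.0]),
                                np.array([0.0, 0.0, -10.0]),
                                S_f, K_f)
```

With this split, the z axis regulates the contact force while x and y follow the nominal resolved velocity.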

  6. PiH & DMPs Learning: DMPs • In this paper, discrete DMPs are used. • They encode control policies for discrete point-to-point movements. • DMPs are based on systems of second-order differential equations (Ijspeert et al., 2002a, 2002b). • Advantages: the ability to deal with perturbations and to include feedback terms. • Feedback terms can be added to change the timing (Schaal et al., 2007) and/or to avoid certain areas of the workspace. • The training movement must come to a full stop at the end of the demonstration if the robot is to stay at the attractor point after t = T.
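As a concrete reference for the second-order system named on this slide, the following is a minimal sketch of a discrete DMP with a phase-driven forcing term, in the standard Ijspeert-style formulation. The class name, gain values, and basis-function placement are illustrative choices, not taken from the paper:

```python
import numpy as np

class DiscreteDMP:
    """Minimal single-DOF discrete DMP (illustrative sketch)."""

    def __init__(self, n_basis=20, alpha_z=25.0, alpha_x=4.0, tau=1.0):
        self.alpha_z, self.beta_z = alpha_z, alpha_z / 4.0   # critically damped
        self.alpha_x, self.tau = alpha_x, tau
        # Basis centres spaced evenly in time, mapped through the phase decay.
        self.c = np.exp(-alpha_x * np.linspace(0.0, 1.0, n_basis))
        d = np.diff(self.c)
        self.h = 1.0 / np.concatenate([d, d[-1:]]) ** 2      # basis widths
        self.w = np.zeros(n_basis)                           # learned weights

    def forcing(self, x, y0, g):
        """Nonlinear forcing term, scaled by phase x and amplitude (g - y0)."""
        psi = np.exp(-self.h * (x - self.c) ** 2)
        return (psi @ self.w) / (psi.sum() + 1e-10) * x * (g - y0)

    def step(self, y, z, x, y0, g, dt):
        """One Euler integration step of the transformation + canonical system."""
        dz = (self.alpha_z * (self.beta_z * (g - y) - z)
              + self.forcing(x, y0, g)) / self.tau
        dy = z / self.tau
        dx = -self.alpha_x * x / self.tau
        return y + dy * dt, z + dz * dt, x + dx * dt
```

Because the forcing term vanishes as the phase x decays, the system is guaranteed to converge to the attractor point g.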

  7. PiH & DMPs Learning: Approach Phase • The trajectory is encoded by imitation. • The demonstrated trajectory was measured on a KUKA LWR robot operating in gravity-compensation mode.
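Encoding by imitation amounts to fitting the forcing-term weights to the recorded trajectory. A common way to do this is locally weighted regression; the sketch below assumes the hypothetical DiscreteDMP class from the previous example and a uniformly sampled single-DOF demonstration:

```python
import numpy as np

def fit_from_demo(dmp, y_demo, dt):
    """Fit the forcing-term weights of one DOF to a demonstrated trajectory
    by locally weighted regression (illustrative sketch). As the previous
    slide notes, the demonstration should end at a full stop."""
    yd = np.gradient(y_demo, dt)                  # demonstrated velocity
    ydd = np.gradient(yd, dt)                     # demonstrated acceleration
    y0, g = y_demo[0], y_demo[-1]
    t = np.arange(len(y_demo)) * dt
    x = np.exp(-dmp.alpha_x * t / dmp.tau)        # phase along the demonstration
    # Invert the transformation system to obtain the target forcing term.
    f_target = (dmp.tau ** 2) * ydd - dmp.alpha_z * (
        dmp.beta_z * (g - y_demo) - dmp.tau * yd)
    s = x * (g - y0)                              # forcing-term scaling
    for i in range(len(dmp.w)):
        psi = np.exp(-dmp.h[i] * (x - dmp.c[i]) ** 2)
        dmp.w[i] = (s * psi) @ f_target / ((s * psi) @ s + 1e-10)
```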

  8. PiH & DMPs Learning: Detection of Contact • Goal configuration is used to initialize DMPs (translation and rotation) • Monitor the forces during DMP execution • Stop if contact established • If contact is not established at the end of trajectory execution • Generate downward motion using hybrid control
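A minimal sketch of this monitoring loop is shown below; dmp_step, read_force, and send_velocity are hypothetical robot-interface callbacks, and the 3 N threshold is an illustrative value, not one reported in the paper:

```python
import numpy as np

F_CONTACT = 3.0   # contact-detection threshold in newtons (illustrative value)

def approach_until_contact(dmp_step, read_force, send_velocity, dt, t_end):
    """Execute the approach DMP while monitoring the measured forces."""
    t = 0.0
    while t < t_end:
        if np.linalg.norm(read_force()) > F_CONTACT:
            return True                   # contact established: stop the DMP
        send_velocity(dmp_step(dt))       # otherwise keep following the DMP
        t += dt
    # No contact at the end of trajectory execution: the caller then
    # generates a downward motion using hybrid control, as the slide states.
    return False
```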

  9. PiH & DMPs Learning: Phase Stopping
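The original slide carried only a figure. For reference, a common phase-stopping formulation, consistent with the slowing-down feedback described on slide 10, divides the decay of the canonical system by a term that grows with the current tracking/force error; the gains below are illustrative, not values from the paper:

```python
import numpy as np

def phase_step(x, error, dt, alpha_x=4.0, alpha_err=10.0, tau=1.0):
    """One integration step of a phase-stopping canonical system (sketch).

    The phase x decays as usual, but the decay rate is divided by
    (1 + alpha_err * ||error||^2), so a large tracking or force error
    freezes the DMP clock and the commanded motion slows down or stops.
    """
    e2 = float(np.dot(error, error))
    dx = -alpha_x * x / (tau * (1.0 + alpha_err * e2))
    return x + dx * dt
```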

  10. PiH & DMPs Learning • Six contact cases can occur: (1) the peg is in contact with the surface outside the hole; (2) the peg is inside the hole with only one contact at the edge of the hole; (3) the peg is in contact at two points; (4) the peg is in contact with the hole edge; (5) the peg is inside the hole with two contacts; (6) the peg is inside the hole with only one contact. • During trajectory execution with the DMP, the actual trajectory is modified according to the resulting force error. • Slowing-down feedback automatically ensures that the DMP-commanded motion slows down or stops whenever the peg gets stuck in the hole (or whenever the forces exceed the permitted value). • When the robot exerts a downward force, each of the cases above is eventually transformed into case (3) or (5).

  11. PiH & DMPs Learning: Search Phase & Algorithms • Generates new goal positions on the surface. • Movement is generated by linear DMPs (without non-linear part). • Hybrid control (force in z direction). • Monitor changes in forces and height.
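A sketch of such a search loop is given below. The slides do not specify the search pattern, so the spiral generator is a hypothetical choice, and move_to and read_height stand in for the robot interface:

```python
import numpy as np

def spiral_goals(center, step=0.002, n=50):
    """Hypothetical goal generator: the slide only says that new goal
    positions are generated on the surface; a spiral is one common pattern."""
    for k in range(1, n + 1):
        ang = 0.5 * k                     # angle grows with each candidate
        r = step * ang                    # Archimedean spiral radius
        yield center + np.array([r * np.cos(ang), r * np.sin(ang), 0.0])

def search_for_hole(center, move_to, read_height, z_surface, drop_tol=0.001):
    """Visit candidate goals under hybrid control (downward force in z) and
    report the hole once the peg height drops below the surface level."""
    for goal in spiral_goals(center):
        move_to(goal)                     # linear DMP + hybrid force control
        if read_height() < z_surface - drop_tol:
            return goal                   # height dropped: hole found
    return None
```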

  12. PiH & DMPs Learning: Insertion Phase • The insertion itself is driven by force control. • It is possible to learn the resulting motion as a DMP through incremental learning.
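One common incremental scheme is a per-basis recursive least-squares update of the forcing-term weights from samples of the executed, force-adapted motion. The sketch below assumes the hypothetical DiscreteDMP class from earlier and is not claimed to be the paper's exact update rule:

```python
import numpy as np

def rls_update(dmp, x, f_target, s, P, lam=0.995):
    """One recursive least-squares step per basis function (illustrative).

    x        : current phase value
    f_target : forcing-term sample computed from the executed motion
    s        : forcing-term scaling x * (g - y0)
    P        : per-basis covariance array (initialise e.g. to 1e3 * ones)
    lam      : forgetting factor, weighting newer samples more heavily
    """
    psi = np.exp(-dmp.h * (x - dmp.c) ** 2) + 1e-10
    err = f_target - dmp.w * s                    # residual per basis
    P[:] = (P - (P ** 2 * s ** 2) / (lam / psi + P * s ** 2)) / lam
    dmp.w += psi * P * s * err
```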

  13. Experimental results • The PiH motion template was obtained from multiple human demonstrations using kinaesthetic guidance. • The Cranfield benchmark, a standardized assembly task in robotics, was used. • A circular peg was used for testing in this experiment.

  14. Experimental results [Figure: plots of change in z (m), force in z (N), and force error in z (N) over time.]

  15. Conclusion • In this paper, the PiH problem has been solved using DMPs. • DMPs with force feedback, in conjunction with hybrid trajectory and force control, have been tested as a candidate solution for the PiH task. • Experiments were carried out both in the RobWork simulator and on a real KUKA LWR robot.

  16. Thank you
