This research explores the intricate mechanisms of hand-eye coordination in humans and how they can inform the development of autonomous robotic systems capable of reach-to-grasp movements. The study integrates visual information for both feedforward and feedback control strategies, emphasizing the use of real-time visual data to enhance robotic motion planning and execution. By analyzing human trajectories and refining robotic models, we aim to improve the performance of robotic systems, narrowing the gap between human dexterity and robotic capabilities.
Vision-Based Reach-To-Grasp Movements: From the Human Example to an Autonomous Robotic System
Alexa Hauck
Context
• MODEL of hand-eye coordination
• ANALYSIS of human reaching movements
• SYNTHESIS of a robotic system
Special Research Program "Sensorimotor"
• C1: Human and Robotic Hand-Eye Coordination
• Neurological Clinic (Großhadern), LMU München
• Institute for Real-Time Computer Systems, TU München
The Question is ...
How to use which visual information for motion control?
(control strategy, representation, reaching, catching)
State-of-the-art Robotics
Look-then-move (visual feedforward control):
• easy integration with path planning
• only little visual information needed
• sensitive to model errors
Visual Servoing (visual feedback control):
• model errors can be compensated
• convergence not assured
• high-rate vision needed
Impressive results ... but nowhere near human performance!
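A minimal sketch may make the trade-off concrete. The function and parameter names (observe, plan, execute, observe_error, gain) are illustrative stand-ins, not interfaces from the original system:

```python
import numpy as np

def look_then_move(observe, plan, execute):
    """Look-then-move: sense once, plan the whole motion from the models,
    then execute open-loop. Accuracy stands or falls with the models."""
    target = observe()              # single visual measurement
    for q in plan(target):          # feedforward trajectory from the model
        execute(q)                  # no further visual correction

def visual_servo(observe_error, execute_step, gain=0.5, tol=1e-3, max_iter=500):
    """Visual servoing: close the loop on the image error at a high rate.
    Model errors are compensated, but convergence is not guaranteed."""
    for _ in range(max_iter):
        e = observe_error()         # feedback: current visual error
        if np.linalg.norm(e) < tol:
            break
        execute_step(-gain * e)     # small step against the error
```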
The Human Example
Separately controlled hand transport:
• almost straight path
• bell-shaped velocity profile
Experiments with target jump:
• smooth on-line correction of the trajectory
Experiments with prism glasses:
• on-line correction using visual feedback
• off-line recalibration of internal models
⇒ Use of visual information in spatial representation
⇒ Combination of visual feedforward and feedback ... but how?
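The straight path and bell-shaped velocity profile are commonly modeled with a minimum-jerk trajectory. The sketch below is that standard textbook model (per coordinate), not code from the thesis:

```python
import numpy as np

def minimum_jerk(x0, xf, T, n=100):
    """Minimum-jerk point-to-point trajectory: a straight path whose
    speed follows the bell-shaped profile seen in human hand transport.
    x0, xf are scalar start/end positions along one axis; T is duration."""
    t = np.linspace(0.0, T, n)
    tau = t / T
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5            # position blend 0..1
    ds = (30 * tau**2 - 60 * tau**3 + 30 * tau**4) / T    # bell-shaped speed
    return t, x0 + (xf - x0) * s, (xf - x0) * ds
```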
Hand-Eye System
[Block diagram: images → Image Processing → features → Image Interpretation → position of target & hand → Motion Planning → trajectory → Robot Control → commands → Robot; the modules draw on models of the hand-eye system and objects (object model, sensor model, arm model).]
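A dataflow skeleton of the diagram, with hypothetical stubs standing in for the real algorithms, may help fix the module boundaries; only the order of calls reflects the diagram:

```python
import numpy as np

def image_processing(images):          # images -> image features (stub)
    return [float(np.mean(img)) for img in images]

def image_interpretation(features):    # features -> 3D target & hand (stub)
    return np.array([0.5, 0.2, 0.3]), np.zeros(3)  # dummy positions

def motion_planning(target, hand):     # positions -> Cartesian trajectory (stub)
    return np.linspace(hand, target, num=10)

def robot_control(trajectory):         # trajectory -> motor commands (stub)
    return list(trajectory)

images = [np.zeros((480, 640)), np.zeros((480, 640))]   # stereo pair
commands = robot_control(motion_planning(*image_interpretation(image_processing(images))))
```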
The Robot: MinERVA
• CCD cameras
• pan-tilt head
• manipulator with 6 joints
Robot Vision
[Diagram: corresponding points of target and hand in the two camera images are reconstructed in 3D by binocular stereo.]
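Recovering 3D positions from corresponding points is standard binocular-stereo triangulation. A minimal linear (DLT) version, assuming known 3×4 projection matrices for the two calibrated cameras, could look like this (illustrative, not the thesis code):

```python
import numpy as np

def triangulate(P_left, P_right, x_left, x_right):
    """Linear (DLT) triangulation of one 3D point from a pair of
    corresponding image points (u, v) and two 3x4 projection matrices."""
    A = np.vstack([
        x_left[0]  * P_left[2]  - P_left[0],
        x_left[1]  * P_left[2]  - P_left[1],
        x_right[0] * P_right[2] - P_right[0],
        x_right[1] * P_right[2] - P_right[1],
    ])
    _, _, Vt = np.linalg.svd(A)    # null vector of A = homogeneous solution
    X = Vt[-1]
    return X[:3] / X[3]            # dehomogenize to 3D coordinates
```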
Model Parameters
• Arm: geometry, kinematics; 3 parameters (from manufacturer data)
• Arm-Head Relation: coordinate transformation; 3 parameters (measuring tape)
• Head-Camera Relations: coordinate transformations; 4 parameters (HALCON calibration)
• Cameras: pinhole camera model; 4 parameters (+ rad. distortion) (HALCON calibration)
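For reference, the pinhole model named on the slide projects a 3D point as below; this sketch assumes a single focal length plus principal point as the intrinsics and omits the radial-distortion term:

```python
import numpy as np

def project_pinhole(X_world, R, t, f, cx, cy):
    """Pinhole projection: rigid transform into the camera frame,
    perspective division, then the intrinsic parameters.
    Radial distortion (mentioned on the slide) is omitted here."""
    Xc = R @ np.asarray(X_world) + t   # world -> camera coordinates
    u = f * Xc[0] / Xc[2] + cx         # perspective projection
    v = f * Xc[1] / Xc[2] + cy
    return np.array([u, v])
```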
Use of Visual Feedback
correction rate | mean error | max. error
0 (no feedback) | 8.9 cm     | 20 cm
1 Hz            | 0.4 cm     | 1 cm
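The low 1 Hz rate suggests re-planning the remaining path at each visual correction rather than servoing continuously. A toy version of such a periodically corrected reach, with all names hypothetical, might look like:

```python
import numpy as np

def reach_with_feedback(hand, target_est, measure_target, steps=50, corr_every=25):
    """Reach along a planned straight path, re-planning the remainder
    whenever a (low-rate) visual measurement of the target arrives."""
    hand = np.asarray(hand, dtype=float)
    path = [hand.copy()]
    for k in range(steps):
        if k % corr_every == 0:
            target_est = np.asarray(measure_target(), dtype=float)  # visual update
        hand += (target_est - hand) / (steps - k)   # next step on remaining path
        path.append(hand.copy())
    return np.array(path)

# e.g. a 2 s reach sampled in 50 steps with 1 Hz corrections: corr_every=25
```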
Summary
• New control strategy for hand-eye coordination
• Extension of a biological model
• Unification of look-then-move & visual servoing
• Flexible, economic use of visual information
• Validation in simulation
• Implementation on a real hand-eye system