Perception and Control in Humanoid Robotics using Vision
Using position-based visual servoing, Metalman has the ability to perform simple manipulation tasks; the sequence below shows Metalman autonomously locating and stacking three randomly placed blocks. Future work will include servoing two arms cooperatively to perform even more complex tasks!
Supervisors: A/Prof Lindsay Kleeman
A/Prof R Andrew Russell
Imagine you had a domestic humanoid robot servant, then consider what you would like it to do … It quickly becomes clear that a practical domestic robot must possess a basic ability to find and grasp objects in a dynamic, cluttered environment (i.e. your house!). To address this issue, we have developed a self-calibrating, position-based visual servoing framework. Metalman, the Monash upper-torso humanoid robot, provides a platform for this and other exciting humanoid robot experiments.
This is the actual stereo view seen by Metalman while tracking its hand.
It’s a visual thing …
Visual servoing is a feedback control technique that uses visual measurements to robustly regulate the motion of a robot. Metalman uses stereo cameras to estimate the 3D pose (position and orientation) of its hands by observing bright LEDs attached in a known pattern and feeding the data into a Kalman tracking filter. Other objects are similarly localized via attached coloured markers. Depending on the desired action (e.g. grasping an object), Metalman uses this pose information to generate actuating signals that drive the arm to the required pose. Because Metalman continuously estimates the pose of its hands, the system is completely self-calibrating.
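As a rough illustration, the sketch below shows the core idea of a position-based servo loop: the vision-based hand pose is compared with the target pose and a small corrective motion is commanded. The proportional law and the stub functions estimate_hand_pose and command_arm are assumptions for illustration, not Metalman's actual controller.

```python
# Minimal position-based visual servoing sketch (illustrative only).
# The real system feeds stereo LED observations through a Kalman
# tracking filter and uses the arm's inverse kinematics.
import numpy as np

def servo_step(hand_pose, target_pose, gain=0.5):
    """One control update: command a fraction of the remaining pose error.

    hand_pose, target_pose: (x, y, z, roll, pitch, yaw) numpy arrays.
    Returns a Cartesian velocity command for the arm.
    """
    error = target_pose - hand_pose   # pose error measured by vision
    return gain * error               # proportional correction toward the target

# Hypothetical usage loop (estimate_hand_pose and command_arm are stubs):
# while np.linalg.norm(target_pose - hand_pose) > tolerance:
#     hand_pose = estimate_hand_pose()            # stereo cameras + Kalman filter
#     command_arm(servo_step(hand_pose, target_pose))
```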
LED markers on the hand facilitate 3D hand pose estimation; visual feedback drives the hand in the desired direction, and the final hand pose depends on the relative position of the target.
Progress time indicated at top-right of each frame
Even robots get lonely! Metalman must interact with humans to be truly useful. The experiment below demonstrates simple interaction using motion cues: the user taps on a random block, and Metalman places a finger above the selected object.
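The poster does not spell out how the tap is detected; one plausible sketch, assuming plain frame differencing between consecutive camera images, is shown below. The threshold value and the matching of the tap location to a block marker are assumptions for illustration.

```python
# Rough sketch of a motion-cue detector based on frame differencing
# (the cue-detection method actually used on Metalman may differ).
import numpy as np

def detect_tap(prev_frame, curr_frame, threshold=30):
    """Return the (row, col) centre of the changed region, or None."""
    diff = np.abs(curr_frame.astype(int) - prev_frame.astype(int))
    mask = diff > threshold                    # pixels that changed noticeably
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return int(rows.mean()), int(cols.mean())  # centroid of the moving region

# The tap location can then be matched to the nearest coloured block marker,
# and the arm servoed to hover above that block.
```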
Where has all the data gone?
In a complex system such as Metalman, the interaction of various components can generate unwanted dynamics such as dead-time delays. For instance, the graph below plots the position of the head during a sinusoidal motion: the red line indicates joint encoder data, and the blue line shows data from the cameras. The apparent 30 ms delay between these devices can degrade performance. In this work, we develop simple matching and prediction techniques that allow Metalman to detect and reduce these effects.
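As a minimal sketch of the prediction idea, the delayed vision sample could be extrapolated forward by the estimated dead time so it lines up with the more recent encoder data. The constant-velocity model below is an assumption for illustration, not necessarily the technique used on Metalman.

```python
# Sketch of latency compensation: extrapolate the latest vision sample
# forward by the estimated dead time (about 30 ms in the experiment above)
# using a constant-velocity model, so it can be matched against encoder data.
import numpy as np

def predict_forward(positions, timestamps, delay=0.030):
    """Extrapolate the newest sample 'delay' seconds into the future."""
    p0, p1 = positions[-2], positions[-1]
    t0, t1 = timestamps[-2], timestamps[-1]
    velocity = (p1 - p0) / (t1 - t0)   # finite-difference velocity estimate
    return p1 + velocity * delay       # constant-velocity prediction
```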
For more information, check the IRRC web page at www.ecse.monash.edu.au/centres/IRRC
Electrical and Computer Systems Engineering
Postgraduate Student Research Forum 2001