
Unsupervised Body Scheme Learning through Self-Perception

Jürgen Sturm, Christian Plagemann, and Wolfram Burgard




Abstract: In this paper, we present an approach that allows a robot to learn a generative model of its own physical body from scratch, using self-perception with a single monocular camera. Our approach yields a compact Bayesian network for the robot's kinematic structure, including the forward and inverse models that relate action commands and body pose. We propose to simultaneously learn local action models for all pairs of perceivable body parts from data generated through random "motor babbling". From this repertoire of local models, we construct a Bayesian network for the full system, using the pose prediction accuracy on a separate cross-validation data set as the criterion for model selection. The resulting model can be used to predict the body pose when no perception is available and allows for gradient-based posture control. In experiments with real and simulated manipulator arms, we show that our system is able to quickly learn compact and accurate models and to robustly deal with noisy observations.

Experimental setup: The robot issues random action commands ("motor babbling") to its joints and perceives the resulting movements of its body parts using a monocular camera. From this self-perception, it learns a compact Bayesian network that it can then use both for prediction and for control.

Motivation
• Kinematic models are subject to change:
  • Wear and tear (wheel diameter, air pressure)
  • Re-configurable robots
  • Tool use

Solutions
• Classical: engineering and calibration
• Our approach: sensorimotor learning

Related Work
• Self-calibration [Roy and Thrun, 1999]
• Cross-modal maps [Yoshikawa et al., 2004]
• Structure learning [Dearden and Demiris, 2005]
• Neurophysiology:
  • Adaptive body schemata [Maravita and Iriki, 2004]
  • Mirror neurons [Holmes and Spence, 2004]

Approach

1. Learning the kinematic structure
• Decompose the full body model into local models for pairs of perceivable body parts
• Model selection problem, with an upper bound on the number of candidate structures
• Heuristic search over structures
• Maximize prediction accuracy on a separate cross-validation data set
• Before learning: fully connected network; after learning: a kinematic chain of predictive local models (a minimal selection sketch follows below)

2. Learning the forward and inverse models with noisy perception (2-DOF real robot)
• Learn the local models of the Bayesian network
• Gaussian processes (GPs) serve as the regression functions
• Experiment 1, prediction: local models learn faster than the full model and reach high accuracy
• Experiment 2, control: compute gradients of the forward model for posture control
• Forward model used for prediction; inverse model used for control (GP and gradient-control sketches follow below)

3. Dealing with partial observability (7-DOF simulated robot)
• Experiment 3: one body part is hidden from the camera
• A higher-order local model bridges the hidden body part (a composition sketch follows below)
• Accuracy is evaluated after the first training example and after 10 training examples

Future work
• Body-part tracking using natural visual features [Yan and Pollefeys, 2006]
• Identifying the physical/geometrical structure of the robot, e.g., for trajectory planning and obstacle avoidance
• Dynamic adaptation of the Bayesian network during tool use
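The structure-learning step can be pictured as follows: every pairwise local model is scored by its pose-prediction error on held-out babbling data, and the best-scoring models that connect all body parts without forming cycles are kept. This is a minimal sketch under assumed interfaces; `LocalModel.predict`, the pose layout, and the Kruskal-style greedy search are illustrative stand-ins, not necessarily the paper's exact heuristic.

```python
# Hedged sketch of structure selection: score local models on a
# cross-validation set, then greedily keep low-error models that
# connect all body parts acyclically (Kruskal-style greedy search).
import numpy as np

def cv_error(model, actions_val, poses_val, i, j):
    """Mean pose-prediction error of the local model for part pair (i, j)."""
    pred = model.predict(actions_val)            # predicted pose of j relative to i
    true = poses_val[:, j] - poses_val[:, i]     # observed relative pose (simplified)
    return float(np.mean(np.linalg.norm(pred - true, axis=1)))

def select_structure(local_models, actions_val, poses_val, n_parts):
    """Greedily keep the lowest-error local models that span all parts."""
    scored = sorted(
        ((cv_error(m, actions_val, poses_val, i, j), i, j, m)
         for (i, j), m in local_models.items()),
        key=lambda t: t[0],
    )
    parent = list(range(n_parts))                # union-find to reject cycles
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    structure = []
    for err, i, j, m in scored:
        ri, rj = find(i), find(j)
        if ri != rj:                             # edge joins two components: keep it
            parent[ri] = rj
            structure.append((i, j, m, err))
    return structure                             # e.g. a kinematic chain
```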
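For the local models, the poster names Gaussian processes as the regression functions. The sketch below shows one way to set this up with scikit-learn, assuming a 1-D action command and a toy 3-D relative pose; the kernel choice and the synthetic babbling data are assumptions for illustration, not the authors' setup.

```python
# A minimal GP local forward model: random babbling commands in,
# noisy relative pose of an adjacent body part out.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
actions = rng.uniform(-1.0, 1.0, size=(200, 1))     # random motor-babbling commands
rel_pose = np.column_stack([np.cos(actions[:, 0]),  # toy relative pose of the
                            np.sin(actions[:, 0]),  # adjacent body part
                            actions[:, 0]])
rel_pose += rng.normal(scale=0.05, size=rel_pose.shape)  # noisy perception

kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=0.05)
gp = GaussianProcessRegressor(kernel=kernel).fit(actions, rel_pose)

mean, std = gp.predict(np.array([[0.3]]), return_std=True)
print(mean, std)  # predicted relative pose and its uncertainty
```

A GP is a natural fit here because its predictive variance tells the robot how reliable each local model is on unseen commands, which is exactly what the cross-validation-based model selection needs.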
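Gradient-based posture control can then be run directly on a learned forward model: descend the squared distance between the predicted pose and a target pose. The sketch below uses a finite-difference gradient for simplicity; the function name, step size, and tolerance are illustrative assumptions, and the paper's own gradient computation on the Bayesian network may differ.

```python
# Hedged sketch of gradient-based posture control on a learned
# forward model f: q (joint commands) -> predicted pose.
import numpy as np

def posture_control(f, q0, target, lr=0.1, steps=100, eps=1e-4, tol=1e-3):
    q = np.array(q0, dtype=float)
    for _ in range(steps):
        if np.linalg.norm(f(q) - target) < tol:  # close enough to the target pose
            break
        grad = np.zeros_like(q)
        for k in range(q.size):                  # central finite differences
            dq = np.zeros_like(q)
            dq[k] = eps
            grad[k] = (np.sum((f(q + dq) - target) ** 2)
                       - np.sum((f(q - dq) - target) ** 2)) / (2.0 * eps)
        q -= lr * grad                           # gradient step on the joint commands
    return q

# Usage with the GP forward model above (illustrative):
# q_star = posture_control(lambda q: gp.predict(q[None, :])[0],
#                          q0=np.zeros(1), target=np.array([1.0, 0.3, 0.3]))
```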
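For the partial-observability experiment, one reading of the higher-order local model is the composition of two learned local transforms, so that a visible part is predicted from another visible part while skipping the unobserved one. The sketch below composes 2-D rigid poses (x, y, theta); the pose parameterization and the `predict` interface are illustrative assumptions.

```python
# Sketch of a higher-order local model for the hidden-body-part case:
# compose the transforms A->B and B->C to predict C from A directly.
import numpy as np

def compose(t_ab, t_bc):
    """Compose two relative 2-D poses (x, y, theta)."""
    xa, ya, tha = t_ab
    xb, yb, thb = t_bc
    return (xa + np.cos(tha) * xb - np.sin(tha) * yb,
            ya + np.sin(tha) * xb + np.cos(tha) * yb,
            tha + thb)

def predict_skipping_hidden(model_ab, model_bc, action):
    """Predict the pose of part C relative to part A via the hidden part B."""
    return compose(model_ab.predict(action), model_bc.predict(action))
```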
