
ADAPT IST-2001-37173


Presentation Transcript


  1. ADAPT IST-2001-37173 Artificial Development Approach to Presence Technologies 2nd Review Meeting Munich, June 7-9th, 2004

  2. Consortium • Total cost: 1.335.141 € - Community funding: 469.000 € • Project start date: October 1st, 2002 • Project duration: 36 months

  3. Goal We wish to understand the process of building a coherent representation of visual, auditory, haptic, and kinesthetic sensations: process → development, process → dynamic representation. Perhaps, once we “know” how it works, we can “ask” a machine to use this knowledge to elicit the sense of presence.

  4. So, we are asking: how do we represent our world and, in particular, how do we represent the objects we interact with? Our primary mode of interaction with objects is through manipulation, that is, by grasping them!

  5. Two-pronged approach • Study how infants do it • Implement a “similar” process in an artificial system. Learning by doing: modeling → abstract principles → build new devices

  6. Scientific prospect
  • From the theoretical point of view: studying the nature of “representation”
  • From development: the developmental path; interacting with objects (multi-sensory representation, object affordances); interpreting others’ interactions with objects (imitation)
  • From embodiment and morphology: why do we need a body? How does morphology influence/support computation?
  • Computational architecture: how can an artificial system learn representations that support similar behaviors?

  7. Vision and touch. Streri & Gentaz (2003, 2004): is cross-modal transfer between hand and eyes reversible in newborn infants? Transfer of shape is not reversible: it occurs from the hand to the eyes but not from the eyes to the hand.

  8. 6-month-olds detect a violation of intermodality between face and voice. A teleprompter device allows the voice or the image to be delayed independently.

  9. Grasping: morphological computation. Robot hand with elastic tendons and soft fingertips (developed by Hiroshi Yokoi, AI Lab, Univ. of Zurich and Univ. of Tokyo). Result: control of grasping reduces to a simple “close” command; the details are taken care of by the morphology and materials.

  10. Video

  11. …how can the robot grasp an unknown object? • Use a simple motor synergy to flex the fingers and close the hand • Exploit the intrinsic elasticity of the hand; the fingers bend and adapt to the shape of the object
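
A minimal sketch of this idea, not the project's actual controller: a single scalar "close" command drives all finger joints, and the elastic tendons are approximated by letting each joint stop where it meets the object. Joint count, ranges, and contact angles below are hypothetical placeholders.

```python
# Sketch of a one-parameter "close" motor synergy for a compliant hand.
import numpy as np

N_JOINTS = 15  # hypothetical: 5 fingers x 3 joints
JOINT_RANGE = np.full(N_JOINTS, 1.4)  # hypothetical flexion range (radians)

def synergy_targets(s: float) -> np.ndarray:
    """Map the scalar 'close' command s in [0, 1] to joint target angles."""
    return np.clip(s, 0.0, 1.0) * JOINT_RANGE

def settle_on_object(targets: np.ndarray, contact_angle: np.ndarray) -> np.ndarray:
    """Crude stand-in for tendon elasticity: each joint flexes toward its
    target but stops at the angle where it contacts the object."""
    return np.minimum(targets, contact_angle)

# Example: close the hand around an object that blocks some joints early.
contact = np.array([0.6] * 5 + [0.9] * 5 + [1.4] * 5)  # hypothetical contact angles
posture = settle_on_object(synergy_targets(1.0), contact)
print(posture)  # the final posture already encodes something about the object's shape
```

The point of the sketch is that the controller issues only "close"; the resulting posture is shaped by the object, which is what the clustering on the next slide exploits.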

  12. Result of clustering • 2D Self-Organizing Map (100 neurons) • Input: proprioception (hand posture; touch sensors were not used) • The SOM forms 7 classes (6 for the objects plus 1 for the no-object condition)
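
The sketch below illustrates the kind of clustering described here, with assumed details: the data are synthetic stand-ins for proprioceptive hand postures, and the grid size and learning schedule are illustrative, not the project's values.

```python
# Minimal self-organizing map over (synthetic) final hand postures.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic proprioception: 6 object "shapes" plus a no-object posture,
# each producing noisy 15-D final hand postures.
prototypes = rng.uniform(0.2, 1.4, size=(7, 15))
data = np.vstack([p + 0.03 * rng.standard_normal((50, 15)) for p in prototypes])

# 10 x 10 SOM (100 neurons), as on the slide.
grid = np.array([(i, j) for i in range(10) for j in range(10)], dtype=float)
weights = rng.uniform(0.2, 1.4, size=(100, 15))

def train(data, weights, epochs=20, lr0=0.5, sigma0=3.0):
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)
        sigma = sigma0 * (1 - t / epochs) + 0.5
        for x in rng.permutation(data):
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))   # best-matching unit
            d2 = ((grid - grid[bmu]) ** 2).sum(axis=1)          # grid distance to BMU
            h = np.exp(-d2 / (2 * sigma ** 2))[:, None]         # neighborhood kernel
            weights += lr * h * (x - weights)                   # pull neighbors toward x
    return weights

weights = train(data, weights)
# Each posture maps to its best-matching unit; groups of nearby units
# correspond to object classes (plus the no-object condition).
labels = np.array([np.argmin(((weights - x) ** 2).sum(axis=1)) for x in data])
print(len(np.unique(labels)), "active units")
```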

  13. Example: learning visual features • Only one modality is used: non-overlapping areas of the visual field guide each other’s feature extraction • Invariant features are learned from spatial context (it is well known that temporal context can be used to learn such features)
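
As a toy illustration of the spatial-context idea (my own sketch, not the project's algorithm): features are extracted from two non-overlapping neighboring patches and required to agree, here using canonical correlation analysis as the agreement criterion; the data and dimensions are synthetic placeholders.

```python
# Toy "spatial context" feature learning: neighboring patches supervise each other.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)

# Synthetic data: a latent signal shared by adjacent regions of the visual
# field, plus independent pixel noise in each patch.
n, patch = 2000, 16
latent = rng.standard_normal((n, 4))
mix_a = rng.standard_normal((4, patch))
mix_b = rng.standard_normal((4, patch))
left_patches = latent @ mix_a + 0.5 * rng.standard_normal((n, patch))
right_patches = latent @ mix_b + 0.5 * rng.standard_normal((n, patch))

# Features on one patch are chosen so they can be predicted from the
# spatially adjacent patch: what both patches share is, by construction,
# the structure that is invariant to local noise.
cca = CCA(n_components=4)
cca.fit(left_patches, right_patches)
f_left, f_right = cca.transform(left_patches, right_patches)

corr = [np.corrcoef(f_left[:, k], f_right[:, k])[0, 1] for k in range(4)]
print("agreement across space:", np.round(corr, 2))
```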

  14. Future work
  • Continue and complete ongoing experiments
  • Experiments on affordant vs. non-affordant use of objects (CNRS, UGDIST)
  • Investigation of cross-modal transfer in newborn infants (CNRS)
  • Experiments on the robot (UGDIST, UNIZH)
  • Learning affordances
  • Learning visuo-motor features by unsupervised learning
  • Feature extraction on videos showing mother-infant interaction

  15. Epirob04, Genoa, August 25-27, 2004 - http://www.epigenetic-robotics.org
  Invited speakers:
  • Luciano Fadiga, Dept. of Biomedical Sciences, University of Ferrara, Italy
  • Claes von Hofsten, Dept. of Psychology, University of Uppsala, Sweden
  • Jürgen Konczak, Human Sensorimotor Control Lab, University of Minnesota, USA
  • Jacqueline Nadel, CNRS, University Pierre & Marie Curie, Paris, France
