
Presentation Transcript


  1. Facial Expression for Human-Robot Interaction – A prototype. 19 February 2008. http://robotics.ece.auckland.ac.nz. Matthias Wimmer, Technische Universität München; Bruce MacDonald, Dinuka Jayamuni, and Arpit Yadav, Department of Electrical and Computer Engineering, The University of Auckland.

  2. Outline • Motivation • Background • Facial expression recognition method • Results on a data set • Results with a robot (the paper's contribution) • Conclusions

  3. Motivation: Goal • Our Robotics group's goals: to create mobile robotic assistants for humans, to make robots easier for end users to customize and program, and to enhance interactions between robots and humans • Applications: healthcare, e.g. aged care • Applications: agriculture (e.g. Ian's previous presentation) • (Lab visit this afternoon) • Robot face

  4. Motivation: robots in human spaces Increasingly, robots live in human spaces and interact closely InTouch remote doctor

  5. Motivation: close interactions RI-MAN http://www.bmc.riken.jp/~RI-MAN/index_us.html

  6. Motivation: different types of robot • Robots have many forms; how do people react? • Pyxis HelpMate SP Robotic Courier System, Delta Regional Medical Centre, Greenville, Mississippi

  7. Motivation: different robot behaviour • AIBO (Sony) • Paro, the therapeutic baby seal robot companion • http://www.aist.go.jp/aist_e/latest_research/2004/20041208_2/20041208_2.html

  8. Motivation: supporting the emotion dimension • Robots must give support along psychological dimensions: home and hospital help, therapy, companionship • We must understand/design the psychology of the exchange • Emotions play a significant role • Robots must respond to and display emotions • Emotions support cognition • Robots must have emotional intelligence, e.g. during robot-assisted learning, e.g. for security screening robots • Humans' anxiety can be reduced if a robot responds well [Rani et al., 2006]

  9. Motivation: functionality of emotion response Not just to be “nice”; the emotion dimension is essential to effective robot functionality [Breazeal]

  10. Motivation: robots must distinguish human emotional state • However, recognition of human emotions is not straightforward • Outward expression versus internal mood states • People smile when happy AND when they are interacting with humans • Olympic medalists don't smile until the presenter appears (e.g. the 1948 football team) • Ten-pin bowlers smile when they turn back to their friends

  11. Motivation: deciphering human emotions • Self-reports are more accurate than observer ratings • Current research attempts to decipher human emotions • facial expressions • speech expression • heart rate, skin temperature, skin conductivity www.cortechsolutions.com

  12. Motivation: Our focus is on facial expressions • Despite the limitations, we focus on facial expression interpretation from visual information • Portable, contactless • Needs no special or additional sensors • Similar to humans' interpretation of emotions (which is by vision and speech) • No interference with normal HRI • Asimo www.euron.org

  13. Background • Six universal facial expressions (Ekman et al.): laughing, surprised, afraid, disgusted, sad, angry • Cohn-Kanade Facial Expression database (488 sequences, 97 people): performed, exaggerated expressions • Expressions are determined by shape and muscle motion

  14. Background: Why are they difficult to estimate? • Different faces look different • Hair, beard, skin-color, … • Different facial poses • Only slight muscle activity

  15. Background • Typical FER process [Pantic & Rothkrantz, 2000]
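The typical process runs in three stages: face detection, facial feature extraction, and expression classification. A minimal sketch of that pipeline, with placeholder callables rather than any particular implementation:

```python
# Illustrative outline of the typical FER pipeline (Pantic & Rothkrantz):
# 1. detect the face, 2. extract facial features, 3. classify the expression.
# detector / extractor / classifier are placeholder callables, not the
# authors' implementation.

def recognize_expression(frame, detector, extractor, classifier):
    face_region = detector(frame)        # 1. locate the face in the frame
    if face_region is None:
        return None                      # no face found, no expression
    features = extractor(face_region)    # 2. structural/temporal features
    return classifier(features)          # 3. map features to an expression label
```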

  16. Background: Challenges 1. Face detection and 2. feature extraction challenges: • Varying shape, colour, texture, feature location, hair • Spectacles, hats • Lighting conditions including shadows 3. Facial expression classification challenges: • Machine learning

  17. Background: related work • Cohen et al.: 3D wireframe with 16 surface patches • Bezier volume parameters for patches • Bayesian network classifiers • HMMs model muscle activity over time • Bartlett et al.: Gabor filters using AdaBoost and Support Vector Machines • 93% accuracy on the Cohn-Kanade DB • but is tuned to the DB

  18. Background: challenges for robots • Less constrained face pose and distance from camera • The human may not be facing the robot • The human may be moving • More difficulty in controlling lighting • Robots move away! • A real-time result is needed (since the robot moves)

  19. Facial expression recognition (FER) method: Matt's model-based approach

  20. FER method • Cootes et al. statistics-based deformable model (134 points) • Translation, scaling, rotation • Vector b of 17 face configuration parameters • Rotate head (b1), open mouth (b3), change gaze direction (b10)
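A rough sketch of how a Cootes-style point distribution model turns the configuration vector b plus a pose (translation, scaling, rotation) into 134 image points. The mean shape and deformation modes come from training; the names and array layout here are illustrative assumptions:

```python
import numpy as np

def synthesize_shape(mean_shape, modes, b, scale, angle, tx, ty):
    """mean_shape: (134, 2) points; modes: (17, 134, 2); b: (17,) weights."""
    # deform the mean shape by the weighted sum of learned modes (vector b)
    shape = mean_shape + np.tensordot(b, modes, axes=1)
    # place the deformed shape in the image: rotate, scale, translate
    c, s = np.cos(angle), np.sin(angle)
    rotation = np.array([[c, -s], [s, c]])
    return scale * shape @ rotation.T + np.array([tx, ty])
```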

  21. FER method: Model-based image interpretation • The model: contains a parameter vector that represents the model's configuration • The objective function: calculates a value that indicates how accurately a parameterized model matches an image • The fitting algorithm: searches for the model parameters that describe the image best, i.e. it minimizes the objective function
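To make the objective function concrete, here is a deliberately simple stand-in: it samples the image at the model's points and returns a value where lower means a better fit. The objective function in this work is actually learned from annotated images (slide 23); the pixel-sampling criterion below is only an illustrative assumption:

```python
import numpy as np

def objective(image, model_points):
    """image: 2-D grayscale array; model_points: (N, 2) array of (x, y) points."""
    h, w = image.shape
    xs = np.clip(model_points[:, 0].astype(int), 0, w - 1)
    ys = np.clip(model_points[:, 1].astype(int), 0, h - 1)
    # toy criterion: assume facial contours lie on dark pixels, so the mean
    # intensity sampled under the model is low when the model fits well
    return float(image[ys, xs].mean())
```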

  22. FER method • Two step process for skin colour: see [Wimmer et al, 2006] • Viola & Jones technique detects a rectangle around the face • Derive affine transformation parameters of the face model • Estimate b parameters • Viola & Jones repeated • Features are learned to localize face features • Objective function compares an image to a model • Fitting algorithm searches for a good model
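A minimal sketch of the detection step, assuming OpenCV's Haar-cascade implementation of the Viola & Jones detector. Deriving rough translation and scale for the face model from the detected rectangle is an illustrative assumption, not the paper's code:

```python
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_face_pose(frame_bgr):
    """Return rough translation/scale of the face model, or None if no face."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]                       # rectangle around the face
    return {"tx": x + w / 2.0,                  # translation: rectangle centre
            "ty": y + h / 2.0,
            "scale": w / 100.0}                 # scale relative to a nominal width
```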

  23. FER method: learned objective function • Reduce manual processing requirements by learning the objective function [Wimmer et al, 2007a & 2007b] • Fitting method: hill-climbing
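A minimal hill-climbing fitter, assuming the (learned) objective has already been bound to the current image so that it takes only the parameter vector; the step size and iteration limit are illustrative:

```python
import numpy as np

def hill_climb(objective, params, step=0.05, iterations=50):
    """Greedily perturb one parameter at a time, keeping changes that
    lower the objective, until no single-parameter step improves it."""
    params = np.asarray(params, dtype=float).copy()
    best = objective(params)
    for _ in range(iterations):
        improved = False
        for i in range(len(params)):
            for delta in (step, -step):
                trial = params.copy()
                trial[i] += delta
                value = objective(trial)
                if value < best:
                    params, best, improved = trial, value, True
        if not improved:
            break
    return params
```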

  24. FER method • Facial feature extraction: structural (configuration b) and temporal features (2 secs) • Expression classification: a binary decision tree classifier is trained on 2/3 of the data set
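A sketch of the classification stage with scikit-learn's (binary-split) decision tree and a 2/3–1/3 train/test split as on the slide. The feature matrix below is random placeholder data standing in for the structural and temporal features, not the Cohn-Kanade features:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(488, 27))       # placeholder feature vectors (one per sequence)
y = rng.integers(0, 6, size=488)     # placeholder labels for the 6 expressions

X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=2 / 3, random_state=0)

tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print("held-out accuracy:", tree.score(X_test, y_test))
```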

  25. Results on a data set • Happiness and fear have similar muscle activity around the mouth, hence the confusion between them.

  26. Results on a robot • B21r robot • Some controlled lighting • Human about 1m away • 120 readings of three facial expressions • 12 frames a second possible • Tests at 1 frame per second

  27. Conclusions • Robots must respond to human emotional states • Model-based FER technique (Wimmer) • 70% accuracy on the Cohn-Kanade data set (6 expressions) • 67% accuracy on a B21r robot (3 expressions) • Future work: better FER is needed • Improved techniques • Better integration with robot software • Improve accuracy by fusing vital signs measurements
