
PROGRESS ON EMOTION RECOGNITION


Presentation Transcript


  1. PROGRESS ON EMOTION RECOGNITION J G Taylor & N Fragopanagos King’s College London

  2. KCL WORK IN ERMIS
  • Analysis of emotion vs cognition in the human brain (→ simulations of emotion/attention paradigms)
  • → emotion recognition architecture ANNA
  • ANNA hidden layer = emotion state, plus feedback control for attention (= IMC)
  • Learning laws for ANNA developed
  • ANNA fuses all modalities or only one
  • HUMAINE: WP3 + WP4

  3. BASIC BRAIN EMOTION CIRCUIT
  [diagram: Parietal, ACG, SFG, Thal, NBM, A, SC]
  • Valence in amygdala & OBFC
  • Attention in parietal & PFC
  • Interaction in ACG

  4. SIMPLIFIED ARCHITECTURE OF EMOTIONAL/COGNITIVE PROCESSING IN THE BRAIN
  [diagram not transcribed]

  5. DETAILED ARCHITECTURE FOR FACES
  [diagram labels: CLASSIFICATION, gender]

  6. BASIC ERMIS EMOTION RECOGNITION ARCHITECTURE
  [diagram: feature-vector inputs → emotion state as hidden layer → output as recognised emotional state, with an attention control system modulating the hidden layer]

  7. ANNA
  • Assume a linear output: [equation not transcribed]
  • Hidden layer response: [equation not transcribed]
  • IMC node response: [equation not transcribed]
  • Then solve the self-consistent equations for (y, z) for each training input by relaxation (one possible reading is sketched below)
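  The equations on this slide were images and did not survive transcription. A minimal LaTeX sketch of one plausible reading, assuming a sigmoidal hidden (emotion-state) layer y, a single IMC attention node z acting as a multiplicative gain on the hidden-layer input, and a linear output; the functional forms and weight names (W, V, u) are assumptions, not taken from the slide:

      \begin{aligned}
      \mathrm{OUT} &= W\,\mathbf{y}
        && \text{linear output read from the emotion-state layer}\\
      y_i &= \sigma\!\Big((1+z)\sum_j V_{ij}\,x_j\Big)
        && \text{hidden (emotion-state) response, gated by the IMC node}\\
      z &= \sigma\!\Big(\sum_i u_i\,y_i\Big)
        && \text{IMC (attention) node response}
      \end{aligned}

  For each training input x, the coupled pair (y, z) is iterated to a fixed point, i.e. relaxed, before the output is read off.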

  8. NATURE OF ANNA
  • Handles both unimodal and multi-modal data (input vector x of arbitrary, though not too large, dimension)
  • Needs consistent input and output data {x(t), OUT(t)}, with t specified for both x and OUT = (activation, evaluation)
  • Uses the SALAS database (450 tunes) from QUB (Roddie/Ellie/Cate)
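  Putting slides 7 and 8 together, a minimal Python sketch of ANNA's relaxed forward pass on one input vector x, under the assumed equations above; the shapes, weight names (V, W, u) and iteration count are all hypothetical:

      import numpy as np

      def sigmoid(a):
          return 1.0 / (1.0 + np.exp(-a))

      def anna_forward(x, V, W, u, n_iter=50):
          """Relax the coupled hidden/IMC state (y, z) to a fixed point
          for one input vector x, then read off the linear output."""
          y = np.zeros(V.shape[0])   # hidden (emotion-state) layer
          z = 0.0                    # single IMC (attention) node
          for _ in range(n_iter):
              y = sigmoid((1.0 + z) * (V @ x))   # IMC gates the hidden-layer input
              z = sigmoid(u @ y)                 # IMC is driven by the hidden layer
          return W @ y                           # e.g. a 2-D (activation, evaluation) point

      # Example: a 10-dim feature vector mapped to a 2-D FEELTRACE-style output
      rng = np.random.default_rng(0)
      x = rng.normal(size=10)
      V, W, u = rng.normal(size=(5, 10)), rng.normal(size=(2, 5)), rng.normal(size=5)
      print(anna_forward(x, V, W, u))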

  9. UNIMODAL RESULTS
  • Can use numerous representations of emotion: extreme, continuous in n dimensions, …
  • ANNA → FEELTRACE output (continuous 2-D)
  • Trained unimodally on prosody
  • First look at word content

  10. Text Post-Processing Module
  • Prof. Whissell compiled the ‘Dictionary of Affect in Language’ (DAL)
  • Maps ~9000 words → (activation, evaluation), based on students’ assessments
  • Take the words from meaningful segments obtained by pause detection → a point in (activation, evaluation) space
  • But humans use context to assign emotional content to words
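  A minimal Python sketch of the lookup just described: each word of a pause-detected segment is looked up in a DAL-style table and the hits are averaged into a single point in (evaluation, activation) space. The tiny inline table and its score scale are invented for illustration; the real DAL has ~9000 entries:

      # Toy DAL-style table: word -> (evaluation, activation); values invented.
      DAL = {
          "happy": (2.5, 2.0), "sad": (-2.0, -1.5),
          "angry": (-2.2, 2.4), "calm": (1.5, -2.0),
      }

      def segment_to_point(words):
          """Average the (evaluation, activation) of all DAL hits in a segment;
          return None if no word is in the table. Context is ignored entirely,
          which is exactly the limitation noted on the slide."""
          hits = [DAL[w] for w in words if w in DAL]
          if not hits:
              return None
          ev = sum(h[0] for h in hits) / len(hits)
          ac = sum(h[1] for h in hits) / len(hits)
          return ev, ac

      print(segment_to_point("i feel happy but a bit sad".split()))  # (0.25, 0.25)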

  11. Text Post-Processing Module (SALAS data)

  Table 1. Quadrant match for normal text (full DAL).
  Participant         P1    P2    P3    P4    P9    P12   All
  Quadrant match (%)  21.4  12.5  21.4  30.4  25.0  19.6  16.1

  Table 2. Quadrant match for scrambled text (full DAL).
  Participant         P5    P6    P7    P8    P10   P11   All
  Quadrant match (%)   7.1  23.2  25.0  32.1  23.2  21.4  21.4

  Table 3. Standard deviation of participants’ assessments for normal and scrambled text (averaged over all passages assessed).
              Normal  Scrambled
  Evaluation  1.24    1.45
  Activation  1.55    1.73

  Table 4. Quadrant match (%) averaged over participant groups for normal and scrambled text as the threshold on DAL range* is varied.
  Threshold       0.0   0.25  0.5   0.75
  Normal text     16.1  16.0  12.5  16.4
  Scrambled text  21.4  21.4  19.6  21.8
  *The higher the threshold, the more strongly emotionally rated a word must be to be spotted.

  Conclusion: further context/semantics are needed.
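  The quadrant-match figure in these tables is sign agreement between the DAL-derived point and a participant's assessment in the 2-D space. A minimal sketch, assuming the space is centred on the origin (the half-plane variant of slide 14, which checks one axis only, is included for comparison):

      def quadrant(ev, ac):
          """Quadrant of a point in (evaluation, activation) space."""
          return (ev >= 0, ac >= 0)

      def quadrant_match(pred, target):
          """Percentage of items whose predicted point falls in the same
          quadrant as the participant's assessment."""
          same = sum(quadrant(*p) == quadrant(*t) for p, t in zip(pred, target))
          return 100.0 * same / len(pred)

      def half_plane_match(pred_ac, target_ac):
          """Activation-only variant: sign agreement on a single axis."""
          same = sum((p >= 0) == (t >= 0) for p, t in zip(pred_ac, target_ac))
          return 100.0 * same / len(pred_ac)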

  12. Correlational analysis of ASSESS features
  • Correlational analysis between ~450 ASSESS features and FeelTrace ⇒
  • ASSESS features correlate more highly with activation
  • Similar top-ranking features for 3 out of 4 FeelTracers (but still differences)
  • Different top-ranking features for different SALAS subjects → Is there a male/female trend? Difficult to say: insufficient data
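  A minimal Python sketch of this ranking, assuming the ASSESS features and one FeelTrace dimension have been aligned into per-segment arrays; the function and argument names are hypothetical:

      import numpy as np

      def top_features(features, trace, k=10):
          """Rank feature columns by |Pearson r| against one FeelTrace
          dimension (activation or evaluation) and return the top-k indices.
          features: (n_segments, n_features); trace: (n_segments,)."""
          f = (features - features.mean(0)) / features.std(0)
          t = (trace - trace.mean()) / trace.std()
          r = f.T @ t / len(t)              # Pearson r, one value per feature
          return np.argsort(-np.abs(r))[:k]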

  13. ANNA on top-correlated ASSESS features
  • Quadrant match using the top 10 activation features + top 10 evaluation features, with an (activation, evaluation) output space:
  [results not transcribed]

  14. ANNA on top-correlated ASSESS features
  • Half-plane match using the top 10 activation features, with an activation-only output space:
  [results not transcribed]

  15. PRESENT SITUATION OF ANNA: MULTIMODAL
  • Time-stamped data now becoming available for the lexical (ILSP) and face (NTUA) streams
  • Expect recognition results for fused modalities (faces/prosody/words) in about one month

  16. CONCLUSIONS
  • UNIMODAL: ANNA on prosody OK (especially for activation)
  • MULTIMODAL: soon to be done
  • On semi-realistic data (SALAS, QUB)
  • Future work: 1) analysis of detailed results; 2) insert temporality into ANNA

  17. QUESTIONS
  • How to handle variations across experiencers and across FeelTracers?
  • How to incorporate expert knowledge?
  • How to combine recognition across models?
  • Coding of emotions: as dimensional representations or as dissociated states (sad: AMYG vs angry: OBFC)?
  • Nature of emotions as goal/reward assessment (frustration → anger; impossible goal → sadness, etc.; brain-based)?
