
Multicomponent analysis of emotional experience


Presentation Transcript


  1. Multicomponent analysis of emotional experience. M. Mortillaro, University of Milan - Bicocca. Genova, September 2006 – Humaine Summer School – Workshop on synchronization

  2. emotions as multicomponent processes. Reactions to goal-relevant changes in the environment, carried by different organismic subsystems whose functions are reflected in five main components: • cognitive appraisal component • subjective feeling component • physiological component • motor expression component • motivational component (Scherer, 1984, 1987, 2000)

  3. Most traditional studies considered only one modality or one component, and authors have shown how difficult it is to link emotional states to any single modality. To overcome these difficulties, emotions should be addressed multimodally, in the sense that signs may appear at the same time in different channels.

  4. Multimodal • Multicomponent. Ideally, research should include all the components at the same time (through physiological measures, voice, gestures, facial expressions, brain activity, self-reports...)

  5. The main difficulty is building an empirical procedure that obtains all these measures in a reliable way (a database, e.g. GEMEP). Goal: define one procedure that yields multimodal data ready for analysis.

  6. objectives • Investigate emotions multimodally within a single research procedure • Perform cross-component investigation to support the conceptualization of emotion as a whole made up of different components • Suggest multicomponential investigation as an effective way to improve automatic emotion recognition

  7. how to obtain multimodal emotional data • which components can be detected • how to synchronize measures • how to integrate them

  8. how to obtain multimodal data. Combine the Velten procedure with a standard scenario paradigm: ten narrations (scenarios), each characterized by a single, unambiguous emotional episode with a first-person speech part, written to describe a situation that can be appraised as joy, anger, etc. Validation: culturally grounded emotion labels (scripts)

  9. Participants were asked to read aloud while trying to identify with the main character (contextualized acting): in effect, a Velten procedure with a contextual dimension added through narration. Similar to the work under way in Geneva.

  10. Controlled, in-laboratory situation: more reliable values for the physiological parameters. Naive participants: non-professional emotional expression.

  11. components detected (also covering which features to extract). We acquired, simultaneously and from the same sequence: • Physiological: heart rate (HR); skin conductance (SC); respiration rate (RR); respiration amplitude (RA); finger blood amplitude (BA); electromyography of the extensor muscle of the forearm (EMG); finger temperature (Temp). PROCOMP (Thought Technology Ltd.) • Expressive: facial expressions, FACS and Theme (Noldus)*; vocal acoustic parameters: time (total and partial duration, pause, speech and articulation rate), fundamental frequency (F0) and intensity (mean, SD, range, min and max). CSL (Kay Elemetrics) *Facial expressions are not considered within the results reported here
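
As an illustration of the expressive features listed above, here is a minimal numpy sketch of how a fundamental frequency (F0) estimate and an intensity value could be computed for one analysis frame. This is not the CSL software used in the study; the autocorrelation method, frame length, and lag bounds are assumptions made for the example.

```python
# Illustrative sketch (not the CSL software used in the study): estimating
# F0 by autocorrelation and intensity (RMS energy in dB) for one frame.
import numpy as np

def frame_f0_autocorr(frame, sr, fmin=75.0, fmax=400.0):
    """Crude F0 estimate for a voiced frame via the autocorrelation peak."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)   # plausible pitch-lag range
    lag = lo + np.argmax(ac[lo:hi])
    return sr / lag

def frame_intensity_db(frame, ref=1.0):
    """Intensity as RMS energy in dB relative to `ref`."""
    rms = np.sqrt(np.mean(frame ** 2))
    return 20.0 * np.log10(rms / ref + 1e-12)

# Example: a synthetic 150 Hz vowel-like frame at 16 kHz
sr = 16000
t = np.arange(0, 0.04, 1 / sr)
frame = np.sin(2 * np.pi * 150 * t)
print(frame_f0_autocorr(frame, sr))   # ~150 Hz
print(frame_intensity_db(frame))      # ~-3 dB for a unit-amplitude sine
```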

  12. setting

  13. how to synchronize those measures • The texts include a standard utterance; the vocal, facial, and physiological measures considered are all extracted while it is being spoken • timelines can be overlapped • the standard utterance lasts between 1 and 1.5 seconds, depending on the speaker and on the emotion. This short duration makes mean values appropriate, but signal contours can also be considered
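
A minimal sketch of the synchronization idea, assuming all signals are recorded against one shared clock: every measure is reduced to its mean over the standard-utterance window. The sampling rates, timestamps, and the window_mean helper are illustrative, not the study's actual pipeline.

```python
# Sketch: reduce each signal to its mean over the standard-utterance window,
# using a single shared timeline (all values below are toy assumptions).
import numpy as np

def window_mean(signal, sr, t_start, t_end):
    """Mean of a signal between t_start and t_end (seconds, shared clock)."""
    i0, i1 = int(t_start * sr), int(t_end * sr)
    return float(np.mean(signal[i0:i1]))

# Suppose the standard utterance was spoken from t=12.3 s to t=13.6 s.
t0, t1 = 12.3, 13.6
signals = {
    "SC": (np.random.rand(32 * 60), 32),    # skin conductance at 32 Hz (toy)
    "HR": (np.random.rand(4 * 60), 4),      # heart rate at 4 Hz (toy)
    "F0": (np.random.rand(100 * 60), 100),  # F0 contour at 100 Hz (toy)
}
features = {name: window_mean(x, sr, t0, t1) for name, (x, sr) in signals.items()}
print(features)
```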

  14. NARRATION = scenario description + first-person speech (containing the standard utterance)

  15. how to integrate the information. Measures belonging to both components should be analyzed jointly and statistically: correlation, patterning, regression. For emotion recognition: • discriminant analysis • advanced classification algorithms (decision tree, k-nearest neighbour, Bayesian networks) within the WEKA environment
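
One way to picture the integration step is a single table with one row per trial and the physiological and vocal features side by side, so that correlation, regression, or classification can operate across components. The pandas layout, column names, and values below are assumptions made for illustration; only the feature names echo the slides.

```python
# Hedged sketch of the integration step: one row per participant x emotion
# trial, physiological and vocal features side by side (toy values).
import pandas as pd

rows = [
    {"pid": 1, "emotion": "joy",     "SC": 4.2, "HR": 78.0, "RR": 16.1,
     "F0_mean": 212.0, "int_mean": 64.5, "artic_rate": 5.1},
    {"pid": 1, "emotion": "anger",   "SC": 5.0, "HR": 84.0, "RR": 18.3,
     "F0_mean": 248.0, "int_mean": 70.2, "artic_rate": 5.6},
    {"pid": 1, "emotion": "sadness", "SC": 3.8, "HR": 72.0, "RR": 14.0,
     "F0_mean": 190.0, "int_mean": 60.1, "artic_rate": 4.4},
]
df = pd.DataFrame(rows)

# Joint analyses then operate on the full table, e.g. cross-component correlation:
print(df[["SC", "HR", "RR", "F0_mean", "int_mean", "artic_rate"]].corr())
```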

  16. procedure • 34 participants × 10 emotions × 50 measures • participants were introduced to the laboratory setting, briefed about the sensors, and gave consent before being wired up • a baseline for the physiological parameters was measured (Berntson, Uchino, Cacioppo, 1994) • narrations were presented in randomized order • each narration was first read silently, imagining the situation described, then read aloud in a natural and spontaneous way while trying to identify with the character
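
Two procedural details lend themselves to a short sketch: per-participant randomization of the narration order, and baseline correction of physiological values. The emotion labels and the baseline_correct helper are hypothetical; the slides do not name the ten emotions.

```python
# Sketch, under stated assumptions: reproducible per-participant randomization
# of narration order, and baseline subtraction for a physiological feature.
import random

EMOTIONS = ["joy", "anger", "fear", "sadness", "disgust",
            "surprise", "pride", "shame", "boredom", "relief"]  # illustrative list

def narration_order(participant_id):
    rng = random.Random(participant_id)   # seeded, so the order is reproducible
    order = EMOTIONS[:]
    rng.shuffle(order)
    return order

def baseline_correct(value, baseline):
    """Express a physiological feature as a change from the resting baseline."""
    return value - baseline

print(narration_order(7))
print(baseline_correct(value=4.8, baseline=4.1))  # e.g. SC change from rest
```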

  17. preliminary analysis • Physiological measures: ANOVAs showed a significant main effect of emotion on the mean values of several measures (SC, respiration, BA) • Vocal features: ANOVAs showed a significant main effect of emotion on every measure considered (except pause) • Post hoc analyses showed results largely consistent with the scientific literature, especially for the so-called primary emotions included; the physiological measures were more problematic
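
The per-measure test reported here is a one-way ANOVA of a measure across emotion conditions; a minimal scipy version might look like the following, with fabricated toy values standing in for the study's data.

```python
# Minimal sketch of the per-measure test: one-way ANOVA of mean skin
# conductance across emotion conditions (all values are fabricated).
from scipy.stats import f_oneway

sc_joy   = [4.1, 4.4, 3.9, 4.6]
sc_anger = [5.2, 5.0, 5.5, 4.9]
sc_fear  = [5.8, 5.4, 6.0, 5.6]

F, p = f_oneway(sc_joy, sc_anger, sc_fear)
print(f"F = {F:.2f}, p = {p:.4f}")   # main effect of emotion on SC
```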

  18. cross-component correlations. We found small but significant cross-component correlations among measures acquired at the same time. This suggests that the different modalities work jointly to form the emotional experience, showing corresponding variations in their indices while each retains a specific contribution. In particular, among the correlations between physiological measures and vocal features, articulation rate and variations of F0 and intensity are clearly reflected in the respiration measures. Furthermore, F0 and intensity correlate with skin conductance; these results can be read as reflecting the level of physiological arousal.
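
The cross-component correlations could be computed as plain Pearson correlations between features extracted over the same utterances, as in this sketch. The numbers are toy values; only the pairing of F0 with skin conductance follows the slide.

```python
# Sketch of one cross-component correlation: vocal F0 mean vs. skin
# conductance mean over the same trials (toy per-trial values).
from scipy.stats import pearsonr

f0_mean = [212.0, 248.0, 231.0, 205.0, 240.0, 226.0]   # Hz, per trial
sc_mean = [4.2, 5.0, 4.7, 4.0, 4.9, 4.5]               # microsiemens, per trial

r, p = pearsonr(f0_mean, sc_mean)
print(f"r = {r:.2f}, p = {p:.3f}")   # positive r would fit an arousal reading
```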

  19. discriminant analysis • Running a discriminant analysis for the ten emotions on the physiological measures alone gives an overall 28.4% of correctly reclassified cases • Using only the vocal measures gives an overall 30.1% • When all measures are used at the same time, the overall correct classification rises to 38.8% (10% expected by chance). Furthermore, considering 8 of the 10 emotions, the overall correct recognition percentage increases to 47.0%
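
A hedged sketch of the modality comparison: linear discriminant analysis trained on physiological features alone, vocal features alone, and both combined, scored by cross-validated accuracy. scikit-learn stands in for whatever software the authors used, and random data stand in for the corpus, so the printed accuracies hover around chance rather than reproducing the 28.4/30.1/38.8% figures.

```python
# Sketch of the modality comparison with LDA and cross-validation.
# Feature counts and random data are assumptions, not the corpus.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n, n_physio, n_vocal = 340, 7, 12            # 34 participants x 10 emotions
X_physio = rng.normal(size=(n, n_physio))
X_vocal  = rng.normal(size=(n, n_vocal))
y = np.repeat(np.arange(10), n // 10)        # ten emotion labels

for name, X in [("physiological", X_physio),
                ("vocal", X_vocal),
                ("combined", np.hstack([X_physio, X_vocal]))]:
    acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()
    print(f"{name:13s} accuracy = {acc:.3f}  (chance = 0.10)")
```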

  20. Advanced classification algorithms are currently being trained on the database: decision tree, Bayesian networks, k-nearest neighbour.
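
The slides name WEKA; as an illustrative stand-in in Python, the same trio of classifiers can be run in scikit-learn (with naive Bayes as the simplest Bayesian-network case). Data shapes and parameters are toy assumptions.

```python
# Sketch: the three classifier families from the slide, evaluated with
# cross-validation on toy data (scikit-learn stands in for WEKA here).
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(340, 19))               # 340 trials x 19 features (toy)
y = np.repeat(np.arange(10), 34)             # ten emotion labels

for clf in [DecisionTreeClassifier(), GaussianNB(),
            KNeighborsClassifier(n_neighbors=5)]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(type(clf).__name__, f"{acc:.3f}")
```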

  21. conclusion • Preliminary data supported a multicomponent perspective: most of the measures appear clearly influenced by the emotional states elicited with the contextualized acting method • The correlations suggest that considering several components at the same time can provide a clearer definition of emotional experience. Further analyses are needed.

  22. limits • Contextualized acting should be strengthened in order to obtain broader physiological effects (longer narrations, more detailed characters, assessment of participants' transportation tendency) • Synchronization is still done by hand • Facial expressions are influenced by the reading task • A wider sample of participants is needed for the classification algorithms

  23. Future work • testing of learning algorithms • integration of facial expression analysis • contour analysis

  24. Multicomponent approach • Questions (procedure, database, features, synchronization, analysis…) • An attempt

  25. Thank you marcello.mortillaro@unimib.it
