
On-line use of physiological measures and facial expressions; towards a companion


Presentation Transcript


  1. On-line use of physiological measures and facial expressions; towards a companion. Ben Mulder¹ & Dick de Waard¹,². ¹University of Groningen, ²Delft University of Technology

  2. COMPANION Co-operative Observing MMI for Personalised Assistance and Narration as Induced by Operator Needs Senter IOP project The eye as entrance to the world

  3. The Operator Status Model (OSM) uses physiological and performance data to estimate operator status and the focus of attention. Outputs of the OSM are used to adapt the interface or to prioritize a message.

  4. Operator Status Model
  • Information sources: task performance, physiological state, attention at the right place
  • State vs. status: short term vs. long term
  • Stability: reliability vs. sensitivity to changes
  • Individual patterns
  • Criteria

  5. Four example lines of study
  • AWAKE: eyelid closure: driver fatigue
  • Eye-derived measures: task-related patterns
  • Cardiovascular indices: state changes
  • FaceReader: emotional expressions

  6. AWAKE: on-line detection of driver fatigue
  • Absolute criteria for impaired driving (e.g. amount of swerving in a lane)
  • Relative changes in performance (e.g. an increase in swerving)
  • More than one parameter (a single measure does not tell the full story)
  • Individual tuning (individual differences are large)
  • … which implied training of neural networks
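The combination of an absolute and a relative criterion with individual tuning can be sketched as follows. This is a hypothetical illustration only: the function name, the lateral-position measure and all thresholds are assumptions, not values from the AWAKE project.

```python
# Hypothetical sketch: combine an absolute criterion for impaired driving
# with a relative, individually tuned one, and only warn when more than
# one indicator agrees. All thresholds are illustrative assumptions.

def fatigue_flags(swerving, baseline_swerving,
                  abs_limit=0.35, rel_increase=0.5):
    """Evaluate one measurement of swerving (SD of lateral position, m).

    baseline_swerving is the driver's own alert-driving baseline, which
    provides the individual tuning mentioned on the slide.
    """
    absolute = swerving > abs_limit                         # impaired per se
    relative = swerving > baseline_swerving * (1 + rel_increase)
    return absolute, relative

# One measure does not tell the full story: require both flags to agree.
flags = fatigue_flags(0.40, 0.22)
warn = all(flags)   # warn only when absolute and relative criteria agree
```

In the real project this decision logic was replaced by trained neural networks, as the last bullet notes.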

  7. Goals of the AWAKE project
  • To develop an unobtrusive, reliable system that monitors the driver and detects hypovigilance in real time, based on multiple measured parameters
  • Operating in all motorway scenarios

  8. Driver behaviour evaluation
  • Input channels:
  • Eyelid sensors (PERCLOS: Knipling)
  • Vehicle sensors such as steering-wheel position, lateral position (camera), and speed
  • A “traffic risk estimator” (front radar, GPS, cameras, etc.) to assess discrete risks and the environment

  9. Eyelid sensor (PERCLOS, Knipling)
  • Detection rate: >95% for drivers without glasses and >65% for drivers with glasses
  • False alarm rate: 1% for drivers with and without glasses
  • Miss rate: <5% for drivers without glasses and <34% for drivers with glasses
  [Figure: Siemens Automotive eyelid sensor, showing the camera/ECU set-up with video and digital outputs]
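PERCLOS, the measure behind the eyelid sensor, is the proportion of time within a window that the eyes are more than a criterion fraction (commonly 80%) closed. A minimal sketch, assuming evenly sampled closure values; the drowsiness cut-off mentioned in the comment is an illustrative value, not one from the slides:

```python
def perclos(closure, threshold=0.8):
    """PERCLOS: fraction of samples in which the eyelid is more than
    `threshold` (80 %) closed, computed over a fixed window.

    closure -- sequence of eyelid-closure values in [0, 1], sampled at a
               constant rate over the window (e.g. one minute).
    """
    closed = sum(1 for c in closure if c > threshold)
    return closed / len(closure)

# Drowsiness is often flagged when PERCLOS exceeds a criterion such as
# 0.15 (illustrative assumption).
samples = [0.1, 0.9, 0.95, 0.2, 0.85, 0.1, 0.1, 0.1, 0.1, 0.1]
score = perclos(samples)   # 3 closed samples out of 10 -> 0.3
```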

  10. Classification of driver state
  • A three-level hypovigilance diagnosis every minute:
  • The driver is vigilant: his/her driving behaviour is normal and he/she is in an awake physiological state
  • The driver is slightly hypovigilant: there is a degradation of his/her driving behaviour, or he/she shows first signs of drowsiness
  • The driver is hypovigilant: he/she is driving in an unsafe way, or is drowsy/sleepy
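The per-minute, three-level diagnosis above can be sketched as a simple fusion rule. The boolean inputs stand in for whatever detectors feed the real system; their names are assumptions made for illustration:

```python
# Minimal sketch of the slide's three-level, once-per-minute diagnosis.
# The boolean input names are illustrative assumptions.

VIGILANT, SLIGHTLY_HYPOVIGILANT, HYPOVIGILANT = 0, 1, 2

def diagnose_minute(driving_degraded, first_drowsiness_signs,
                    unsafe_driving, drowsy):
    """Return the hypovigilance level for one minute of data."""
    if unsafe_driving or drowsy:
        return HYPOVIGILANT
    if driving_degraded or first_drowsiness_signs:
        return SLIGHTLY_HYPOVIGILANT
    return VIGILANT
```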

  11. First experiment: TU Delft, TNO & CRF
  • First experiment with a compact car (Fiat Stilo), together with TNO & CRF
  • 10 participants; 2 test rides on motorways:
  -- 1. “learning phase” (vigilant, 1.5 hours)
  -- 2. experimental (fatigued, 1.5 hours); ride 2 took place after a night shift (participants were required to remain awake all night)

  12. First results
  • The eyelid sensor worked in 6 of 10 rides
  • High Karolinska Sleepiness Scale scores (self-reported drowsiness) during the experimental ride
  • Drowsiness warnings (participants’ judgement): correct in 63% of cases, exactly on time in 50%
  • On the basis of these results, technical performance was improved, and the system functioned better in later pilots (luxury car and lorry)

  13. Eye-derived measures
  • NLR: tracking task: blinks and pupil diameter
  • RuG: ambulance dispatcher simulation: fixations, dwell times & entropy
  • CML: small flight simulator: fixations, dwell times & heart rate patterns

  14. Ambulance dispatcher simulation
  • Simulation of patient transport:
  • A1: urgent transport, e.g. accidents
  • A3: non-urgent, ordered transport to and from hospitals
  • Communication is restricted to screen messages; the decision structure is simulated quite well

  15. Fixations, dwell time & entropy
  • Dwell time: summed fixation times within an area of interest (AOI) without leaving that area
  • Entropy: how random the eye-scan pattern is: the inverse of information content
  • Areas of interest (AOIs)
  • Transition matrix: transition probabilities between AOIs
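The entropy measure above can be estimated directly from a sequence of fixated AOIs: count the AOI-to-AOI transitions and take the Shannon entropy of their relative frequencies. A minimal sketch (the single-letter AOI labels are illustrative):

```python
import math
from collections import Counter

def transition_entropy(aoi_sequence):
    """Shannon entropy (bits) of the AOI-to-AOI transition probabilities,
    estimated from a sequence of fixated areas of interest."""
    transitions = Counter(zip(aoi_sequence, aoi_sequence[1:]))
    total = sum(transitions.values())
    return -sum((n / total) * math.log2(n / total)
                for n in transitions.values())

# A strictly alternating scan path uses only two transitions, equally
# often, so its entropy is exactly 1 bit; random scanning over many AOIs
# gives higher values, i.e. less information in the pattern.
h = transition_entropy(list("ABABA"))   # 1.0 bit
```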

  16. Fixation times per AOI

  17. Dwell times per AOI

  18. Entropy per AOI

  19. Flight simulation: fixation duration, heart rate & HRV
  • Small-simulator flight
  • Scenario with increasing complexity, 6 phases
  • Total duration: 40 minutes

  20. Cardiovascular state changes
  • Measures:
  • Blood pressure (BP): systolic and diastolic
  • Heart rate (or IBI: the interbeat interval, its inverse)
  • Baroreflex sensitivity (BRS): reflecting the sensitivity of short-term blood pressure regulation
  • Heart rate variability (HRV) and blood pressure variability (BPV)
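Two of these measures can be computed from a series of interbeat intervals alone. A minimal sketch, with HRV represented by the standard deviation of the IBIs (SDNN, one common time-domain index); BRS and the blood-pressure measures need a continuous blood-pressure signal and are omitted:

```python
import statistics

def cardiovascular_summary(ibi_ms):
    """Summarise a series of interbeat intervals (IBIs, in ms)."""
    mean_ibi = statistics.fmean(ibi_ms)
    return {
        "mean_ibi_ms": mean_ibi,
        "heart_rate_bpm": 60_000 / mean_ibi,  # heart rate is the inverse of IBI
        "sdnn_ms": statistics.stdev(ibi_ms),  # simple time-domain HRV index
    }

summary = cardiovascular_summary([800, 810, 790, 805, 795])
# a mean IBI of 800 ms corresponds to 75 beats per minute
```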

  21. Ambulance dispatcher task
  • Long-lasting task performance: 2 hours
  • 2 sessions (the differences between them are not relevant here)
  • Periods of 15 minutes of light and heavy workload
  • Can these periods be recognized in cardiovascular patterns, between trends?
  • Which variables are most characteristic?
  • Can state changes be recognized at an individual level?

  22. [Figure: cardiovascular time course across alternating implicit and explicit task periods]

  23. [Figure: cardiovascular time course across alternating implicit and explicit task periods, continued]

  24. Summary of results
  • Time on task: effects on SBP, DBP, IBI, BRS and HRV
  • Heavy vs. low workload periods: effect on HRV
  • Time-on-task effects are predominant
  • Heavy vs. low workload effects are only visible in the variability measures (HRV)

  25. Classification of task periods
  • Is it possible to distinguish between high and low levels of workload within individuals?
  • Classification of workload levels with a multiple regression model using all available cardiovascular measures
  • 5-minute, partly overlapping segments
  • Half of the data in the training set, half in the test set
  • Use of the ‘voting neighbours’ principle
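The procedure above can be sketched on synthetic data: a linear (multiple-regression) model predicts the workload level of each 5-minute segment from cardiovascular features, and neighbouring segments then vote so that isolated errors are smoothed out. The feature values, segment counts and the 0.5 cut-off are illustrative assumptions, not the study's actual data or parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
n_segments = 40
labels = np.repeat([0, 1], n_segments // 2)            # low / high load
# synthetic cardiovascular features, shifted by workload level
features = rng.normal(size=(n_segments, 3)) + labels[:, None]

# half of the data in the training set, half in the test set
train, test = slice(0, n_segments, 2), slice(1, n_segments, 2)

X = np.column_stack([np.ones(n_segments), features])   # add an intercept
beta, *_ = np.linalg.lstsq(X[train], labels[train], rcond=None)
raw = (X[test] @ beta > 0.5).astype(int)               # per-segment decision

# 'voting neighbours': majority vote over each segment and its neighbours
voted = np.array([round(np.mean(raw[max(0, i - 1):i + 2]))
                  for i in range(len(raw))])
accuracy = float(np.mean(voted == labels[test]))
```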

  26. Classification results, on an individual basis

  27. Conclusions on state changes
  • Classification at an individual level is reasonable, despite the limited differences in workload between the conditions
  • Is it possible to use the regression model on-line?
  • Would it be helpful to use a baroreflex control model?

  28. Emotion and sports

  29. FaceReader
  • Is it possible for a computer program to recognize human emotions?
  • FaceReader: a development of VicarVision, Amsterdam
  • The six basic emotions, according to Ekman
  • Basic principle: detection of position changes in a large number of facial points, compared against a ‘trained database’
  • Webcam, on-line, or off-line analysis: picture by picture
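The basic principle on the slide can be illustrated with a toy version: observed displacements of facial points are compared against a 'trained database' of expression prototypes, and the nearest one wins. The two-point prototypes and their values below are invented for the example; FaceReader itself tracks many more points and uses a trained model rather than raw nearest-prototype matching:

```python
import math

# Invented prototype displacements for two facial points per emotion
PROTOTYPES = {
    "happy":     [(0.0, -0.3), (0.2, 0.1)],    # e.g. mouth-corner shifts
    "sad":       [(0.0, 0.3), (-0.1, -0.1)],
    "surprised": [(0.0, -0.6), (0.0, 0.5)],
}

def classify_expression(displacements):
    """Return the prototype whose facial-point displacements lie closest
    (summed Euclidean distance) to the observed displacements."""
    def distance(name):
        return sum(math.dist(p, q)
                   for p, q in zip(PROTOTYPES[name], displacements))
    return min(PROTOTYPES, key=distance)

emotion = classify_expression([(0.0, -0.28), (0.18, 0.12)])   # "happy"
```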

  30. Conclusions
  • We mainly studied state changes; for ‘probing experience’, short-term changes are probably more relevant
  • Main problems:
  • Stability of individual patterns
  • Finding criteria on an individual basis
  • Possible solutions:
  • Using multiple sources and measures
  • A model-based approach, e.g. a baroreflex model
  • OSM: a general approach; finding an adequate decision model
  • FaceReader could be interesting in this field
