
Affect responsive photo frame


Presentation Transcript


  1. Affect Responsive Photo Frame
  Hamdi Dibeklioğlu, Ilkka Kosunen, Marcos Ortega, Albert Ali Salah, Petr Zuzánek
  eNTERFACE ’10, Amsterdam, July–August 2010

  2. Goal of the Project
  • Responsive photograph frame
  • User interaction leads to different responses
  • Modules of the project
    • Video segmentation module
    • Dictionary of responses
    • Behaviour understanding
      • Offline: labelling the dictionary
      • Online: clustering user actions
    • System logic
      • Linking user actions to responses

  3. System Design

  4. Module 1: Offline Segmentation
  • 5 video recordings (~1.5–2 min. each)
    • Same individual
    • Different actions and expressions
  • Manual annotation of videos
    • ANVIL tool
    • Annotated by different individuals
  • Automatic segmentation
    • Segmentation based on actions
    • Optical flow: amount of activity over time

  5. Optical Flow Calculation
  • Activity calculation based on feature tracking over the sequence
  • Feature detection
    • Shi-Tomasi corner detection algorithm
  • Feature tracking
    • Lucas-Kanade feature tracking algorithm
    • Pyramidal implementation (Bouguet)
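A minimal Python/OpenCV sketch of the activity measure this slide describes: Shi-Tomasi corners (cv2.goodFeaturesToTrack) tracked into the next frame with pyramidal Lucas-Kanade (cv2.calcOpticalFlowPyrLK). The parameter values are illustrative assumptions, not the project's actual settings.

```python
import cv2
import numpy as np

def frame_activity(prev_gray, gray):
    """Activity between two consecutive grayscale frames:
    mean displacement of tracked corner features."""
    # Shi-Tomasi corner detection on the previous frame
    corners = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                      qualityLevel=0.01, minDistance=7)
    if corners is None:
        return 0.0
    # Pyramidal Lucas-Kanade tracking into the current frame (Bouguet)
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, gray, corners, None, winSize=(15, 15), maxLevel=3)
    good = status.ravel() == 1
    if not good.any():
        return 0.0
    displacement = nxt[good] - corners[good]
    return float(np.linalg.norm(displacement, axis=-1).mean())
```

Running this over every consecutive frame pair yields the per-frame activity signal that the segmentation slides below operate on.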

  6. Optical Flow Computation • Movement analysis

  7. Optical-Flow-Based Segmentation
  • To find a calm segment, search for a long run of frames whose computed optical flow stays below a threshold (we used 40% of the average optical flow over all frames)
  • To find an active segment, search for frames with a lot of optical flow, then search forward and backward for the surrounding calm segments
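A sketch of the calm-segment search. The 40%-of-average threshold comes from the slide; the minimum segment length is a hypothetical parameter the slide does not specify.

```python
def find_calm_segments(activity, min_len=30, ratio=0.4):
    """Return (start, end) frame ranges where activity stays below
    `ratio` times the average activity for at least `min_len` frames."""
    if not activity:
        return []
    threshold = ratio * (sum(activity) / len(activity))
    segments, start = [], None
    for i, a in enumerate(activity):
        if a < threshold:
            if start is None:
                start = i  # a calm run begins
        else:
            if start is not None and i - start >= min_len:
                segments.append((start, i))
            start = None
    if start is not None and len(activity) - start >= min_len:
        segments.append((start, len(activity)))
    return segments
```

Active segments can then be taken as the spans between consecutive calm segments, matching the forward/backward search the slide describes.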

  8. Smoothing Optical Flow Data
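The slide does not say how the optical flow signal is smoothed; a simple moving average is one plausible choice, sketched here so single noisy frames do not split or falsely trigger segments. The window size is an assumption.

```python
import numpy as np

def smooth(activity, window=15):
    """Moving-average smoothing of the per-frame activity signal."""
    kernel = np.ones(window) / window
    return np.convolve(activity, kernel, mode="same")
```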

  9. Manual vs. Automatic Segmentation

  10. Calm Segment Example

  11. Active Segment Example

  12. Module 2: Real-Time Feature Analysis
  • Face detection activates the system
    • Viola-Jones face detector
  • User’s behaviour can be monitored via
    • Face detection
    • Eye detection
      • Valenti et al., isophote-curves-based eye detection
    • Optical flow energy
      • OpenCV Lucas-Kanade algorithm
    • Colour features
    • Facial feature analysis
      • The eMotion system
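A minimal sketch of the face-detection gate that activates the system, using the Viola-Jones Haar cascade bundled with current OpenCV builds; the cascade file and detector parameters are assumptions, not the project's configuration.

```python
import cv2

# Haar cascade shipped with OpenCV (Viola-Jones detector)
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_present(frame):
    """Activate the system only when a face is detected in the frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces) > 0
```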

  13. User Tracking • Face and Eye detection: EyeAPI

  14. Facial Feature Tracking
  • Face model: 16 surface patches embedded in Bézier volumes
  • The Piecewise Bézier Volume Deformation (PBVD) tracker is used to trace the motion of the facial features
  * R. Valenti, N. Sebe, and T. Gevers. Facial expression recognition: A fully integrated approach. In ICIAPW, pages 125–130, 2007.

  15. Expression Classification
  • 12 motion units
  • Naive Bayes (NB) classifier for categorizing expressions
  • NB advantage: the posterior probabilities allow a soft output of the system
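A sketch of the soft-output idea with a Gaussian Naive Bayes classifier from scikit-learn; the slide does not specify the NB variant, and the training data here is a random placeholder standing in for 12-dimensional motion-unit vectors.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

EXPRESSIONS = ["happiness", "surprise", "anger", "disgust", "fear", "sadness"]

# Placeholder shapes: one 12-dimensional motion-unit vector per frame,
# with an expression label index for each.
X_train = np.random.rand(600, 12)
y_train = np.random.randint(0, len(EXPRESSIONS), 600)

clf = GaussianNB().fit(X_train, y_train)

# Soft output: a posterior probability per expression, as the slide notes,
# rather than a single hard label.
posteriors = clf.predict_proba(np.random.rand(1, 12))[0]
for name, p in zip(EXPRESSIONS, posteriors):
    print(f"{name}: {p:.2f}")
```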

  16. Average Motion Units
  [Chart: average motion-unit values per expression — Happiness, Surprise, Anger, Disgust, Fear, Sadness]

  17. Real-time Expression Analysis

  18. Module 3: System Response
  • Linking user actions and system responses
  • An action queue is maintained
  • Different user inputs (transitions) lead to different responses (states)
  • The responses (segments) are ‘unlocked’ one by one
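A minimal sketch of the response logic as described: user actions are queued, recognized actions act as transitions between states, and each newly reached state unlocks its response segment. All names and the shape of the transition table are hypothetical.

```python
from collections import deque

class ResponseLogic:
    """Queue user actions; map (state, action) transitions to new states,
    unlocking the corresponding response segments one by one."""

    def __init__(self, transitions, initial="idle"):
        self.transitions = transitions  # {(state, action): next_state}
        self.state = initial
        self.queue = deque()
        self.unlocked = set()

    def push_action(self, action):
        """Enqueue a recognized user action."""
        self.queue.append(action)

    def step(self):
        """Consume one queued action; return the (possibly new) state."""
        if not self.queue:
            return self.state
        action = self.queue.popleft()
        nxt = self.transitions.get((self.state, action))
        if nxt is not None:
            self.state = nxt
            self.unlocked.add(nxt)  # responses are unlocked one by one
        return self.state
```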

  19. Module 3: System Response Before learning After learning

  20. Module 4: Interface
  • Currently two external programs are employed:
    • SplitCam
    • eMotion
  • Glyphs are used to provide feedback to the user
    • Glyph brightness is related to distance to activation
    • Once a glyph is activated, the same user activity will elicit the same response
    • Each user can have different behaviours activating glyphs
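One plausible reading of “glyph brightness is related to distance to activation” is a mapping like the sketch below; the linear form and the normalisation constant are assumptions, not the project's actual feedback function.

```python
def glyph_brightness(distance, max_distance=1.0):
    """Brightness grows as the user's behaviour approaches a glyph's
    activation point: distance 0 -> full brightness, far away -> dark."""
    d = min(max(distance / max_distance, 0.0), 1.0)
    return 1.0 - d
```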

  21. Demo of the system

  22. Future Work
  • Work on the learning module
  • Testing the segmentation parameters
  • The dual-frame mode
  • Speeding up the system
  • Wizard of Oz study
  • Usability studies
  • SEMAINE integration?
