
Sensor-based Situated, Individualized, and Personalized Interaction in Smart Environments

Simone Hämmerle, Matthias Wimmer, Bernd Radig, Michael Beetz, Technische Universität München – Informatik IX


Presentation Transcript


  1. Sensor-based Situated, Individualized, and Personalized Interaction in Smart Environments. Simone Hämmerle, Matthias Wimmer, Bernd Radig, Michael Beetz, Technische Universität München – Informatik IX

  2. SIP via sensors
  • Situation detection: information about persons (name, location, focus of attention, posture, motion, …)
  • Individualized settings: desktop, avatar, input settings (gestures, voice commands, …)
  • Personalized settings: user’s role, rights management, …
  • SIP detection using sensors yields more comprehensive SIP information and more intuitive HCI
  (Diagram labels: who, when, where, what)
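The slides contain no code, but the SIP context listed above maps naturally onto a small data model. The Java sketch below shows one way the situation, individualized, and personalized parts could be bundled before being passed between agents; all class and field names are assumptions made for illustration, not the authors' implementation.

```java
// Illustrative sketch only: the slides show no code, so every name here
// (SipContext, Situation, ...) is an assumption, not the TUM implementation.
import java.time.Instant;
import java.util.List;

/** Situation part of the SIP context: who is where, doing what, and when. */
record Situation(String personName, String location, String focusOfAttention,
                 String posture, String motion, Instant when) {}

/** Individualized settings: how this user prefers to interact. */
record IndividualizedSettings(String desktopProfile, String avatar,
                              List<String> gestureCommands, List<String> voiceCommands) {}

/** Personalized settings: the user's role and access rights. */
record PersonalizedSettings(String role, List<String> rights) {}

/** Aggregated SIP context as it could be exchanged between agents. */
record SipContext(Situation situation,
                  IndividualizedSettings individualized,
                  PersonalizedSettings personalized) {}
```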

  3. Our Test Bed
  • Sensors: cameras, microphones, laser-range sensors
  • Actuators: monitor, speaker, video wall
  • Scenarios: person localization, automatic login, meeting reminder, individualized gesture interaction

  4. Video

  5. Techniques (Computer Vision)
  • person detection: OpenCV (Haar face detector)
  • person recognition: OpenCV (hidden Markov models)
  • person tracking: developed at TUM (laser-scanner-based multiple hypothesis tracking, …)
  • gesture recognition: developed at TUM (motion templates, multiple classifiers, …)
  • mimic (facial expression) recognition: developed at TUM (point distribution model, optical flow, …)
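As a concrete illustration of the person-detection step, the sketch below runs OpenCV's pre-trained Haar frontal-face cascade on a single camera frame. It uses OpenCV's current Java bindings, which postdate the original system, and the file names are placeholders.

```java
// Sketch of the person-detection step with OpenCV's Haar cascade face detector.
// Modern-API approximation only; cascade and image paths are placeholders.
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.core.Rect;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.objdetect.CascadeClassifier;

public class FaceDetectionSketch {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);          // load native OpenCV

        // Pre-trained frontal-face Haar cascade shipped with OpenCV.
        CascadeClassifier faceDetector =
                new CascadeClassifier("haarcascade_frontalface_default.xml");

        Mat frame = Imgcodecs.imread("camera_frame.jpg");       // one camera frame
        MatOfRect faces = new MatOfRect();
        faceDetector.detectMultiScale(frame, faces);            // run the detector

        for (Rect face : faces.toArray()) {
            System.out.printf("face at (%d, %d), size %dx%d%n",
                    face.x, face.y, face.width, face.height);
        }
    }
}
```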

  6. Techniques (others)
  • natural language input
    • Java Sphinx 4 (originally from CMU, now open source)
    • phonemes are already trained
    • we defined the words (= concatenations of phonemes)
    • we defined the grammar (= allowed sentences)
  • natural language output
    • provides the user with audio information
    • user can be mobile
    • FreeTTS 1.2 (SourceForge)
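To make the speech pipeline more concrete, the hedged sketch below combines the two named libraries: Sphinx-4 recognizing sentences allowed by a JSGF grammar, and FreeTTS 1.2 producing spoken output. The grammar content, configuration file name, and example commands are invented; only the public library APIs are real.

```java
// Sketch of the speech front end: JSGF-constrained recognition with Sphinx-4
// and spoken feedback via FreeTTS 1.2. Grammar, config name, and commands are
// invented for illustration; the library APIs themselves are real.
//
// Example JSGF grammar (the "allowed sentences"), e.g. smartroom.gram:
//   #JSGF V1.0;
//   grammar smartroom;
//   public <command> = (login | logout | remind me of the meeting);
import edu.cmu.sphinx.frontend.util.Microphone;
import edu.cmu.sphinx.recognizer.Recognizer;
import edu.cmu.sphinx.result.Result;
import edu.cmu.sphinx.util.props.ConfigurationManager;
import com.sun.speech.freetts.Voice;
import com.sun.speech.freetts.VoiceManager;

public class SpeechSketch {
    public static void main(String[] args) {
        // Sphinx-4 wires acoustic model, dictionary, and JSGF grammar together
        // in an XML configuration file; the file name here is a placeholder.
        ConfigurationManager cm = new ConfigurationManager(
                SpeechSketch.class.getResource("smartroom.config.xml"));
        Recognizer recognizer = (Recognizer) cm.lookup("recognizer");
        recognizer.allocate();
        Microphone microphone = (Microphone) cm.lookup("microphone");
        microphone.startRecording();

        Result result = recognizer.recognize();                 // blocks until an utterance
        String heard = result.getBestFinalResultNoFiller();

        // FreeTTS: speak a confirmation so the (possibly mobile) user hears it.
        Voice voice = VoiceManager.getInstance().getVoice("kevin16");
        voice.allocate();
        voice.speak("You said: " + heard);
        voice.deallocate();
    }
}
```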

  7. Software architecture
  • multi-agent framework with a Dispatcher (architecture diagram)
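The slide shows only the architecture labels ("multi-agent framework", "Dispatcher"). As a rough sketch of what such a dispatcher could look like, the publish/subscribe stub below lets sensor agents publish context knowledge that scenario and actuator agents subscribe to; all interface and method names are assumptions, not the TUM implementation.

```java
// Minimal publish/subscribe sketch of a central Dispatcher for the
// multi-agent framework. Names are assumptions for illustration only.
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

interface Agent {
    void onMessage(String topic, String payload);   // e.g. topic "person.located"
}

class Dispatcher {
    private final Map<String, List<Agent>> subscribers = new ConcurrentHashMap<>();

    /** An agent (sensor wrapper, scenario logic, actuator driver) registers for a topic. */
    void subscribe(String topic, Agent agent) {
        subscribers.computeIfAbsent(topic, t -> new CopyOnWriteArrayList<>()).add(agent);
    }

    /** A sensor agent publishes context knowledge; the dispatcher fans it out. */
    void publish(String topic, String payload) {
        subscribers.getOrDefault(topic, List.of())
                   .forEach(a -> a.onMessage(topic, payload));
    }
}

class MeetingReminderAgent implements Agent {
    public void onMessage(String topic, String payload) {
        System.out.println("Reminder agent received " + topic + ": " + payload);
    }
}
```

A scenario agent such as the meeting reminder would subscribe to a topic like "person.located" and in turn publish to an actuator topic; the conclusion slide's point about semantic agent communication suggests the real messages carry ontology-based content rather than plain strings.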

  8. Conclusion
  • Advantages of using sensors: additional and more exact context knowledge; unobtrusive system
  • Multi-agent framework: distributed and scalable system; easily extensible to further scenarios
  • Overall semantics: semantic agent communication; central aggregation of semantic context knowledge
  • Leads to: more comprehensive SIP information; seamless integration of SIP information; intuitive HCI

  9. Thank you!

  10. Setup & Benefit
  • sensors for detection of the SIP context: cameras, microphones, laser-range sensors, pressure sensors, …
  • sensors provide knowledge about the SIP context: situation-dependent services, intuitive HCI (human-computer interaction)
  • application scenarios: support in meetings and presentations, intelligent house, external robot control

  11. Our Test Bed
  • Sensors: cameras, microphones, laser-range sensors
  • Actuators: monitor, speaker, video wall
  • Scenarios: automatic login, meeting reminder, individualized gesture interaction, intuitive robot control, person localization

  12. Sensors (images: person recognition, gesture recognition)

  13. Knowledge base • Web Ontology Language (OWL, a W3C standard)
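The slide names only the Web Ontology Language; it does not say how the knowledge base is populated or queried. As a loose sketch, the fragment below uses Apache Jena (an assumed library choice, not named in the slides) to build a tiny OWL model with a person, a room, and a sensed locatedIn fact; the namespace and identifiers are placeholders.

```java
// Sketch of storing SIP context knowledge in an OWL ontology. The use of
// Apache Jena and all class/property names are assumptions for illustration.
import org.apache.jena.ontology.Individual;
import org.apache.jena.ontology.ObjectProperty;
import org.apache.jena.ontology.OntClass;
import org.apache.jena.ontology.OntModel;
import org.apache.jena.rdf.model.ModelFactory;

public class KnowledgeBaseSketch {
    static final String NS = "http://example.org/smartroom#";   // placeholder namespace

    public static void main(String[] args) {
        OntModel model = ModelFactory.createOntologyModel();

        // Tiny ontology fragment: persons and rooms, and who is located where.
        OntClass person = model.createClass(NS + "Person");
        OntClass room = model.createClass(NS + "Room");
        ObjectProperty locatedIn = model.createObjectProperty(NS + "locatedIn");

        Individual user = person.createIndividual(NS + "user42");
        Individual meetingRoom = room.createIndividual(NS + "meetingRoom");
        user.addProperty(locatedIn, meetingRoom);                // sensed context fact

        model.write(System.out, "TURTLE");                       // serialize the KB
    }
}
```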
