
EU funded FP7: Oct 11 – Sep 14



Presentation Transcript


  1. Co-evolution of Future AR Mobile Platforms EU funded FP7: Oct 11 – Sep 14 Paul Chippendale, Fondazione Bruno Kessler (FBK), Italy

  2. Move away from the Augmented Keyhole

  3. User centric, not device centric • HMDs lock displays to the viewer • But what about handheld displays?

  4. Device-World registration • What is the device’s real-world location? • Which direction is it pointing?

  5. Device-World registration • What is the device’s real-world location? GPS, Cell/WiFi tower triangulation (~10m)

  6. Device-World registration • Which direction is it pointing? Magnetometer, gyroscopes, accelerometers (~5–20°) • MEMS production variability • Sensors age • Soft/hard-iron influences vary across devices, environments and camera pose
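As a concrete illustration of why per-device calibration matters, here is a minimal Python sketch (not part of the original slides) of a first-order hard-iron calibration and a flat-device heading computation; the function names and axis conventions are my own assumptions:

```python
import math

def hard_iron_offset(samples):
    """Estimate a per-axis hard-iron bias as the midpoint of the
    min/max field seen while the device is rotated in place.
    (First-order fix only; soft-iron distortion needs a full
    ellipsoid fit, and the bias differs per device and environment.)"""
    xs, ys = zip(*samples)
    return ((min(xs) + max(xs)) / 2.0, (min(ys) + max(ys)) / 2.0)

def heading_deg(mx, my, offset=(0.0, 0.0)):
    """Magnetic heading in degrees (0 = +x axis, counter-clockwise)
    for a device held flat, after removing the hard-iron bias."""
    return math.degrees(math.atan2(my - offset[1], mx - offset[0])) % 360.0
```

Even after such a correction, residual soft-iron and ageing effects leave errors of the several-degree order quoted above.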

  7. Is ±10 m and ±20° sufficient for nailed-down AR?

  8. But what about hand-held AR? • The device becomes an augmented window

  9. User-Device-World registration • What is the device’s real-world location? • Which direction is it pointing? • Where is the user with respect to the screen?
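To make the "augmented window" idea concrete: once the user's eye position relative to the screen is known (e.g. from front-camera face tracking), the on-screen position of an augmentation follows by intersecting the eye-to-point ray with the screen plane. A minimal sketch, assuming a screen plane at z = 0, the eye at z > 0, and world points behind the screen at z < 0 (units and function name are illustrative):

```python
def window_projection(eye, point):
    """Intersect the ray from the user's eye to a 3D point with the
    screen plane z = 0, returning the on-screen (x, y) position.
    Moving the eye shifts the projection, so the display behaves as
    a window rather than a fixed-frustum 'keyhole'."""
    ex, ey, ez = eye
    px, py, pz = point
    t = ez / (ez - pz)  # ray parameter where the line crosses z = 0
    return (ex + t * (px - ex), ey + t * (py - ey))
```

Note that a head movement of 10 cm visibly shifts the projection, which is why user-device pose matters as much as device-world pose for hand-held AR.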

  10. Just wait for better AR devices! Surely if we wait, sensor errors will disappear? Unlikely! • Sensor errors are tolerable for non-AR applications; handset manufacturers focus on price, power and form-factor Can’t we just model the error in software? Not really! • Platform diversity and swift evolution make error modelling expensive and quickly obsolete

  11. So what can we do? • The AR community should work with handset manufacturers and make recommendations • Use computer vision to work with sensors

  12. VENTURI project... • Match AR requirements to the platform • Efficiently exploit CPUs & GPUs • Improve sensor-camera fusion by creating a common clock (traditionally only audio/video are considered) • Apply smart power management policies • Optimize the AR chain by exploiting both on-board and cloud processing/storage

  13. Seeing the world • Improve device-world pose by: • Matching visual features to 3D models of the world • Matching the camera feed to the visual appearance of the world • Fusing camera and sensors for ambiguity reasoning and tracking • Use the front-facing camera to estimate user-device pose via face tracking

  14. Urban 3D Model matching • Use high-resolution building models (e.g. laser-scanned), globally registered to a geo-referenced coordinate system • Use marker-less 3D tracking to correlate distinctive features to the 3D building models; subsequent tracking uses inertial sensors and visual optical flow

  15. Terrain 3D Model matching • A synthetic model of the world is rendered from Digital Elevation Models; salient features from the camera feed (depth discontinuities) are matched to similar synthetic features.
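A sketch of the synthetic-skyline idea: along one viewing azimuth, sample (distance, terrain height) pairs from the DEM and take the maximum elevation angle; repeating this over many azimuths yields a synthetic skyline profile to match against camera-detected depth discontinuities. (Python illustration; the sampling scheme and names are my own.)

```python
import math

def skyline_angle(profile, observer_height=0.0):
    """Elevation angle (radians) of the visible skyline along one
    azimuth, given (distance, terrain_height) samples taken from a
    Digital Elevation Model. The highest apparent point wins."""
    return max(math.atan2(h - observer_height, d) for d, h in profile)
```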

  16. Appearance matching • Use approximate location to gather nearby images from the cloud • Exploit sensor data to provide a clue for orientation alignment • Computer vision algorithms match feature descriptors from the camera feed to similar features in the cloud images
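The matching step might look like the following sketch: brute-force matching of binary descriptors (here plain ints compared by Hamming distance) with Lowe's ratio test to discard ambiguous matches. A real system would use descriptors such as ORB or SIFT via a vision library; this pure-Python version only illustrates the logic:

```python
def hamming(a, b):
    """Hamming distance between two binary descriptors stored as ints."""
    return bin(a ^ b).count("1")

def match_descriptors(query, reference, ratio=0.8):
    """Match camera-frame descriptors against cloud-image descriptors,
    keeping a match only when the best distance is clearly better than
    the second best (Lowe's ratio test). Returns (query_i, ref_i) pairs."""
    matches = []
    for qi, q in enumerate(query):
        dists = sorted((hamming(q, r), ri) for ri, r in enumerate(reference))
        if len(dists) > 1 and dists[0][0] < ratio * dists[1][0]:
            matches.append((qi, dists[0][1]))
    return matches
```

The sensor-derived orientation clue mentioned above serves to prune the reference set before this comparison, keeping the search tractable on a mobile device.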

  17. SLAM + Matching • Simultaneous Localization And Mapping - build a map of an unknown environment while simultaneously navigating that environment using the map. • The mapped environment has neither real-world scale nor absolute geo-coordinates; exploit the prior approaches to complete registration.
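One way to complete that registration: if at least two SLAM map points acquire known geo-referenced positions (from any of the matching approaches above), a 2D similarity transform recovers the missing scale, rotation and translation. A minimal sketch using complex numbers for the planar case (the two-point formulation and names are illustrative; a robust system would fit many correspondences in a least-squares sense):

```python
def fit_similarity(slam_pts, geo_pts):
    """Fit the scale/rotation/translation mapping two SLAM map points
    (x, y) onto their known geo-referenced positions, and return a
    function that transforms any further SLAM point into geo space."""
    a, b = (complex(*p) for p in slam_pts)
    A, B = (complex(*p) for p in geo_pts)
    s = (B - A) / (b - a)   # scale and rotation combined
    t = A - s * a           # translation
    def to_geo(p):
        q = s * complex(*p) + t
        return (q.real, q.imag)
    return to_geo
```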

  18. Mobile context understanding • User/environment context estimation: • PDR (pedestrian dead reckoning) enriched with vision • User activity modelling • Sensing geo-objects • Harvest/create geo-social content
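The PDR component can be sketched as a simple step-and-heading integrator: each detected step advances the position estimate along the current heading, and vision periodically re-anchors the drifting result. (Illustrative Python; the step length and axis convention are assumptions.)

```python
import math

def pdr_step(pos, heading_deg, step_length=0.7):
    """One pedestrian-dead-reckoning update: advance (x, y) by one
    detected step along the heading (0 deg = north/+y, 90 deg = east/+x).
    Errors accumulate step by step, which is why an external fix
    (e.g. from vision) is needed now and then."""
    h = math.radians(heading_deg)
    return (pos[0] + step_length * math.sin(h),
            pos[1] + step_length * math.cos(h))
```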

  19. Context sensitive AR delivery • Inject AR data in a natural manner according to: • environment • occlusions • lighting and shadows • user activity • Exploit user and environment ‘context’ to select best delivery modality (text, graphics, audio, etc.), i.e. scalable/simplify-able audio-visual content
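A toy sketch of such a modality selector (the context keys and rules are entirely hypothetical, just to show the shape of the decision):

```python
def select_modality(context):
    """Pick a delivery modality from user/environment context.
    Illustrative rules only: audio when the eyes are busy,
    plain text in low light, rich graphics otherwise."""
    if context.get("activity") in ("walking", "driving"):
        return "audio"
    if context.get("lighting") == "low":
        return "text"
    return "graphics"
```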

  20. User Interactions • Explore evolving AR delivery and interaction • In-air interfaces: device, hand and face tracking • 3D audio • Pico-projection for multi-user, social AR • HMDs

  21. Prototypes One consolidated prototype at the end of each year, to be evaluated through use-cases • Gaming - VeDi 1.0 • Blind assistant - VeDi 2.0 • Tourism - VeDi 3.0 Constraints relaxed

  22. VeDi 1.0 Objective: Stimulate software and hardware cross-partner integration and showcase state-of-the-art indoor AR registration Scenario: Multi-player, table-top AR game resembling a miniature city. Players must accomplish a set of AR missions in the city that adhere to physical constraints. Software: Sensor-aided marker-less 3D feature tracking; the city is geometrically reconstructed offline for correct occlusion handling and model registration. Hardware: Demo runs on an experimental ST-Ericsson prototype mobile platform.

  23. “creating a pervasive Augmented Reality paradigm, where information is presented in a ‘user’ rather than a ‘device’ centric way” https://venturi.fbk.eu FP7-ICT-2011-1.5 Networked Media and Search Systems End-to-end Immersive and Interactive Media Technologies Co-ordinated by Paul Chippendale, Fondazione Bruno Kessler
