DK2 and Latency Mitigation


Presentation Transcript


  1. DK2 and Latency Mitigation Cass Everitt Oculus VR

  2. Being There • Conventional 3D graphics is cinematic • Shows you something • On a display, in your environment • VR graphics is immersive • Takes you somewhere • Controls everything you see, defines your environment • Very different constraints and challenges

  3. Realism and Presence • Being there is largely about sensor fusion • Your brain’s sensor fusion • Trained by reality • Can’t violate too many hard-wired expectations • Realism may be a non-goal • Not required for presence • Expensive • Uncanny valley

  4. Oculus Rift DK2 • 90°-110° FOV • 1080p OLED screen • 960x1080 per eye • 75 Hz refresh • Low persistence • 1 kHz IMU • Positional tracking

  5. Low Persistence • Stable image as you turn - no motion blur • Rolling shutter • Right-to-left • 3ms band of light • Eyes offset temporally

  6. Positional Tracking • External camera, pointed at user • 80° x 64° FOV • ~2.5m range • ~0.05mm @ 1.5m • ~19ms latency • Only 2ms of that is vision processing

  7. Position Tracking • (figure) + (figure) = technology magic • The good news: You don’t need to know.

  8. Image Synthesis • Conventional planar projection • GPUs like this because • Straight edges remain straight • Planes remain planar after projection • Synthesis takes “a while” • So we predict the position / orientation • A long range prediction: ~10-30ms out
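
A minimal sketch of that long-range prediction, assuming orientation is kept as a quaternion and the IMU supplies an angular-velocity estimate; the types and the constant-rate model here are illustrative, not the Oculus SDK API. dt would be the ~10-30ms look-ahead mentioned above, and position can be extrapolated similarly from linear velocity.

```cpp
#include <cmath>

struct Quat { float w, x, y, z; };
struct Vec3 { float x, y, z; };

// Advance orientation q by angular velocity omega (rad/s) over dt seconds,
// assuming the rate stays roughly constant over the prediction window.
Quat PredictOrientation(const Quat& q, const Vec3& omega, float dt)
{
    float speed = std::sqrt(omega.x*omega.x + omega.y*omega.y + omega.z*omega.z);
    float angle = speed * dt;                      // total rotation over dt
    if (angle < 1e-6f)
        return q;                                  // negligible rotation
    float half = 0.5f * angle;
    float k = std::sin(half) / speed;              // scales omega into the delta's vector part
    Quat d { std::cos(half), omega.x * k, omega.y * k, omega.z * k };
    // predicted = d * q: apply the incremental rotation to the current pose.
    return Quat {
        d.w*q.w - d.x*q.x - d.y*q.y - d.z*q.z,
        d.w*q.x + d.x*q.w + d.y*q.z - d.z*q.y,
        d.w*q.y - d.x*q.z + d.y*q.w + d.z*q.x,
        d.w*q.z + d.x*q.y - d.y*q.x + d.z*q.w
    };
}
```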

  9. Note on Sample Distribution • Conventional planar projection, not great for very wide FOV • Big angle between samples at center of view
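
A rough numeric illustration of that nonuniformity, assuming a 100° horizontal FOV mapped onto 960 pixels (consistent with the DK2 numbers above, but chosen here only for illustration): a pixel at the center of a planar projection covers roughly 2.4× the angle of a pixel at the edge.

```cpp
#include <cmath>
#include <cstdio>

int main()
{
    const double kPi     = 3.14159265358979323846;
    const double fovDeg  = 100.0;                             // assumed horizontal FOV
    const double pixels  = 960.0;                             // horizontal pixels per eye
    const double halfFov = fovDeg * 0.5 * kPi / 180.0;
    const double pixSize = 2.0 * std::tan(halfFov) / pixels;  // pixel width on the projection plane

    // For x = tan(theta), d(theta)/dx = cos^2(theta): the angle a pixel
    // subtends shrinks toward the edge of a planar projection.
    auto degPerPixel = [&](double theta) {
        return pixSize * std::cos(theta) * std::cos(theta) * 180.0 / kPi;
    };

    std::printf("center: %.3f deg/pixel\n", degPerPixel(0.0));      // ~0.142
    std::printf("edge:   %.3f deg/pixel\n", degPerPixel(halfFov));  // ~0.059
}
```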

  10. Alternative Sample Distributions • Direct render to a cube map may be appealing • Tiled renderers could do piecewise linear • Brute force will do in the interim • But not much FOV room left at 100°

  11. Optical Distortion

  12. Distortion Correction

  13. Optical Distortion • HMD optics cause a different sample distribution – and chromatic aberration • Requires a resampling pass • Synthesis distribution -> delivery distribution • Barrel distortion to counteract the lens’s pincushion distortion • Could be built into a “smarter” display engine • Handled in software today • Requires either CPU, separate GPU, or shared GPU
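
A sketch of the UV warp at the heart of that resampling pass, assuming a simple radial polynomial model; the function name and the k1/k2 coefficients are placeholders, since real HMD correction uses per-device calibration data and a slightly different warp per color channel for chromatic aberration.

```cpp
struct Vec2 { float x, y; };

// Map a display-space coordinate to the location in the synthesized eye
// buffer that should be sampled, using a radial polynomial distortion model.
Vec2 DistortionWarp(Vec2 uv, Vec2 lensCenter, float k1, float k2)
{
    // Work in lens-centered coordinates.
    float dx = uv.x - lensCenter.x;
    float dy = uv.y - lensCenter.y;
    float r2 = dx * dx + dy * dy;

    // Radial scale chosen so the displayed image is barrel-distorted and
    // the lens's own pincushion distortion cancels it out.
    float scale = 1.0f + k1 * r2 + k2 * r2 * r2;

    return Vec2 { lensCenter.x + dx * scale, lensCenter.y + dy * scale };
}
```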

  14. Display Engine (detour) • In modern GPUs, the 3D synthesis engine builds buffers to be displayed • A separate engine drives the HDMI / DP / DVI output signal using that buffer • This engine just reads rows of the image • More on this later…

  15. Time Warp • Optical resampling provides an opportunity • Synthesized samples have known location • Global shutter, so constant time • Actual eye orientation will differ • Long range prediction had error • Better prediction just before resampling • Both predictions are for the same target time • So resample for optics and prediction error simultaneously! • Note: This just corrects the view of an “old” snapshot of the world
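
A minimal sketch of the combined resample, assuming the distortion warp above has already produced a tangent-plane coordinate for the fresh prediction, and that deltaR rotates directions from the fresh-prediction eye frame into the frame the image was rendered with (inverse(renderOrientation) * freshOrientation under a column-vector convention); all names are illustrative, not the SDK's.

```cpp
struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };
struct Mat3 { float m[3][3]; };

Vec3 Mul(const Mat3& R, const Vec3& v)
{
    return Vec3 {
        R.m[0][0]*v.x + R.m[0][1]*v.y + R.m[0][2]*v.z,
        R.m[1][0]*v.x + R.m[1][1]*v.y + R.m[1][2]*v.z,
        R.m[2][0]*v.x + R.m[2][1]*v.y + R.m[2][2]*v.z };
}

// distortedUV: lens-corrected tangent-plane coordinate for the display
// sample, expressed relative to the *fresh* orientation prediction.
// Returns where to fetch in the eye buffer rendered with the old prediction.
Vec2 TimeWarpSample(Vec2 distortedUV, const Mat3& deltaR)
{
    Vec3 ray { distortedUV.x, distortedUV.y, 1.0f };  // view ray, fresh frame
    Vec3 src = Mul(deltaR, ray);                      // same ray, rendered frame
    return Vec2 { src.x / src.z, src.y / src.z };     // reproject into the old image
}
```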

  16. Time Warp + Rolling Shutter • Rolling shutter adds time variability • But we know time derivative of orientation • Can correct for that as well • Tends to compress sampling when turning right • And stretch out sampling when turning left
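
A small sketch of that per-column correction, assuming the panel lights columns over a fixed scan-out window and that the angular rate stays roughly constant during it; PredictOrientation is the prediction sketch above, and the other names are illustrative.

```cpp
struct Quat { float w, x, y, z; };
struct Vec3 { float x, y, z; };

// From the prediction sketch above.
Quat PredictOrientation(const Quat& q, const Vec3& omega, float dt);

// columnFraction: 0 = first column lit, 1 = last column lit during scan-out.
// The warp builds a per-column delta rotation from this instead of a single
// per-frame one, which compresses the effective sampling when turning one
// way and stretches it when turning the other.
Quat OrientationForColumn(const Quat& atScanStart, const Vec3& omega,
                          float columnFraction, float scanoutSeconds)
{
    return PredictOrientation(atScanStart, omega, columnFraction * scanoutSeconds);
}
```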

  17. Asynchronous Time Warp • So far, we have been talking about 1 synthesized image per eye per display period • @75 Hz, that’s 150 Hz for image synthesis • Many apps cannot achieve these rates • Especially with wide-FOV rendering • Display needs to be asynchronous to synthesis • Just like in the conventional pipeline • Needs to be isochronous – racing the beam • Direct hardware support for this would be straightforward
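
A conceptual sketch of such an asynchronous warp loop, assuming a display-rate thread scheduled just ahead of each scan-out; every type and function here is an illustrative placeholder rather than a real SDK or driver interface.

```cpp
#include <atomic>

struct EyeBuffers;                                   // latest completed synthesis output
struct Pose { float qw, qx, qy, qz, px, py, pz; };   // predicted head pose (illustrative)

extern std::atomic<EyeBuffers*> g_latestFrame;       // written by the synthesis thread

Pose PredictPoseForScanout();                        // fresh, short-range prediction
void WarpAndPresent(EyeBuffers* frame, const Pose& pose);  // distortion + time warp pass
void WaitForWarpPoint();                             // block until just before scan-out

// Runs at display rate (75 Hz here), independent of how fast synthesis finishes.
void AsyncTimeWarpThread(std::atomic<bool>& running)
{
    while (running.load()) {
        WaitForWarpPoint();                          // "racing the beam"
        EyeBuffers* frame = g_latestFrame.load();    // may be the same frame as last refresh
        if (frame)
            WarpAndPresent(frame, PredictPoseForScanout());
    }
}
```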

  18. Asynchronous Time Warp • Slower synthesis requires wider FOV • Will resample the same image multiple times • Stuttering can be a concern • When display and synthesis frequencies “beat” • Ultra-high display frequency may help this • Tolerable synthesis rate still TBD • The end effect is that your eyes see the best information we have • Regardless of synthesis rate

  19. Questions? • cass.everitt@oculusvr.com • For vision questions: • dov.katz@oculusvr.com
