
A Photon Accurate Model Of The Human Eye Michael F. Deering



Presentation Transcript


  1. A Photon Accurate Model Of The Human Eye Michael F. Deering

  2. Use Graphics Theory To Simulate Vision

  3. Motivation • Understanding the interactions between rendering algorithms, cameras, and display devices with the human eye and visual perception. • Use this to improve (or not improve) rendering algorithms, cameras, and displays.

  4. Graphics/Vision System (pipeline): Image Generation → Post Production → Display → Photons → Eye → Neural Processing

  5. Concrete Software Deliverable A computer program to: • Simulate, photon by photon, several frames of video display into an anatomically accurate human eye retinal sampling array.

  6. Overview • Photon counts • Display device pixel model • Eye optical model • Rotation of eye due to “drifts” • Retinal synthesizer • Diffraction computation • Results: rendering video photons into eye

  7. Photons In This Room: 4K lumens ≈ 10¹⁹ photons/sec → ~600 photons per 1/60th second per pixel per cone (room dimensions shown: 17′, 14′, 75′)
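The slide's order-of-magnitude claim can be sanity-checked with a short calculation. This sketch assumes, for simplicity, monochromatic light at 555 nm, where the photopic luminous efficacy peaks at 683 lm/W; the real display spectrum would lower the efficacy somewhat.

```python
# Rough check of the slide's "4K lumens ~= 1e19 photons/sec" figure,
# assuming monochromatic 555 nm light at peak efficacy (683 lm/W).
H = 6.626e-34        # Planck constant, J*s
C = 3.0e8            # speed of light, m/s
LUMENS = 4000.0
WAVELENGTH = 555e-9  # meters

watts = LUMENS / 683.0              # radiant power of the display
photon_energy = H * C / WAVELENGTH  # joules per photon
photons_per_sec = watts / photon_energy
print(f"{photons_per_sec:.2e}")     # ~1.6e19, matching the slide's order of magnitude
```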

  8. Display Pixel Model Each pixel color sub-component has: • Spatial envelope (shape, including fill factor) • Spectral envelope (color) • Temporal envelope (time)
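The three per-subpixel envelopes the slide lists can be sketched as a separable model. The class and field names below are my own invention, not from the talk, and the example LCD-like subpixel is purely illustrative.

```python
# Hypothetical sketch of the display pixel model: each color sub-component
# carries a spatial, spectral, and temporal envelope, combined multiplicatively.
from dataclasses import dataclass
from typing import Callable

@dataclass
class SubPixelModel:
    spatial: Callable[[float, float], float]  # emission weight at (x, y) in the pixel cell
    spectral: Callable[[float], float]        # relative power at a wavelength (nm)
    temporal: Callable[[float], float]        # relative power at a time within the frame (s)

    def emission(self, x, y, wavelength_nm, t):
        # Assumes the envelopes are separable (independent of each other).
        return self.spatial(x, y) * self.spectral(wavelength_nm) * self.temporal(t)

# Illustrative LCD-like red subpixel: 30% horizontal fill factor,
# a boxcar spectral band, and sample-and-hold temporal behavior.
red = SubPixelModel(
    spatial=lambda x, y: 1.0 if (0.0 <= x < 0.3 and 0.0 <= y < 1.0) else 0.0,
    spectral=lambda nm: 1.0 if 600 <= nm <= 640 else 0.0,
    temporal=lambda t: 1.0,  # held at full output for the whole frame
)
print(red.emission(0.1, 0.5, 620.0, 0.004))  # 1.0 (inside all three envelopes)
```

A CRT subpixel would differ mainly in the temporal envelope (a fast decay after the beam passes) and a DLP subpixel in its binary pulse-width-modulated temporal pattern.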

  9. Trinitron™ CRT Pixel

  10. Direct View LCD Pixel

  11. 3 Chip DLP™ Pixel

  12. 1 Chip DLP™ Pixel

  13. 1 Chip DLP™ In This Room

  14. Optical Model Of The Eye: Schematic Eyes • Historically comprised of 6 quadric surfaces. • Real human eyes are quite a bit more complex. • My model is based on “Off-axis aberrations of a wide-angle schematic eye model,” Escudero-Sanz & Navarro, 1999.

  15. Eye Model

  16. Rotation Of The Eye Due To “Drift” • When seeing, the eye is almost always slowly drifting, at 6 to 30 minutes of arc per second, relative to the point of fixation. • The induced motion blur is important for perception, but rarely modeled. • (The eye also has tremor, micro-saccades, saccades, pursuit motions, etc.)
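The drift described above can be sketched as a constant-velocity rotation away from fixation, sampled at each photon's sub-frame time. The speed and direction values below are illustrative placeholders within the slide's 6-30 arcmin/sec range; the talk's model presumably uses a richer motion signal.

```python
# Minimal sketch of slow ocular drift: constant-velocity motion away from
# the fixation point, evaluated at an arbitrary time t. Parameters are
# illustrative assumptions, not values from the talk.
import math

def drift_offset(t, speed_arcmin_per_sec=18.0, direction_rad=0.7):
    """Eye rotation (in arcmin of visual angle) after t seconds of drift."""
    r = speed_arcmin_per_sec * t
    return (r * math.cos(direction_rad), r * math.sin(direction_rad))

dx, dy = drift_offset(1.0 / 60.0)  # drift accumulated over one 60 Hz frame
print(dx, dy)  # magnitude ~0.3 arcmin of motion blur within a single frame
```

Even this simple model shows why drift matters: 0.3 arcmin per frame is comparable to the spacing of foveal cones, so the retinal image smears across receptors within a single video frame.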

  17. Why The Eye Sampling Pattern Matters

  18. Roorda And Williams Image

  19. Synthetic Retina Generation • Some existing efforts take real retinal images as representative patches, then flip and repeat them. Others just perturb a triangular lattice. • I want all 5 million cones. • New computer model to generate retinas to order (not synthesizing rods yet).
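For context, the simpler baseline the slide contrasts against (a perturbed triangular lattice of cone positions) can be sketched in a few lines. This is only the alternative approach the slide mentions, not Deering's growth-based synthesizer; all parameter values are illustrative.

```python
# Sketch of the "perturb a triangular lattice" baseline for cone mosaics:
# cones on a hexagonal grid with independent random positional jitter.
import math
import random

def jittered_hex_lattice(rows, cols, spacing=1.0, jitter=0.1, seed=0):
    """Cone centers on a jittered triangular lattice (units arbitrary)."""
    rng = random.Random(seed)
    row_height = spacing * math.sqrt(3) / 2  # vertical pitch of a triangular lattice
    points = []
    for r in range(rows):
        for c in range(cols):
            x = c * spacing + (spacing / 2 if r % 2 else 0.0)  # offset odd rows
            y = r * row_height
            points.append((x + rng.uniform(-jitter, jitter),
                           y + rng.uniform(-jitter, jitter)))
    return points

cones = jittered_hex_lattice(4, 4)
print(len(cones))  # 16
```

A real retina's cone density and packing regularity vary strongly with eccentricity, which is part of what the talk's retina-growth model captures and this baseline does not.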

  20. Retina Generation Algorithm For more details, attend the implementation sketch “A Human Eye Cone Retinal Synthesizer,” Wednesday 8/3, 10:30 am session, Room 515B (~11:25 am).

  21. Growth Sequence Movie

  22. Growth Movie Zoom

  23. Retinal Zoom Out Movie

  24. 3D Fly By Movie

  25. Roorda Blood Vessel

  26. Roorda Versus Synthetic

  27. The Human Eye Versus Simple Optics Theory All eye optical axes are unaligned: • Fovea is 5 degrees off axis • Pupil is offset ~0.5 mm • Lens is tilted (no agreement on amount) • Rotational center: 13 mm back, 0.5 mm nasal. The eye's image surface is spherical.

  28. Blur And Diffraction Just blur Blur and diffraction

  29. Generating a Diffracted Point Spread Function (DPSF) • Trace the wavefront of a point source as 16 million rays. • Repeat for 45 spectral wavelengths. • Repeat for every retinal patch swept out by one degree of arc in both directions.
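A quick way to see why the DPSF computation above is worth 16 million rays per wavelength: the diffraction-limited blur of the eye is comparable in size to a foveal cone. This sketch uses the standard Airy-disk formula for a circular aperture; the 3 mm pupil diameter is an illustrative assumption, not a value from the talk.

```python
# Angular radius of the Airy disk (first diffraction zero) for a
# circular pupil: theta = 1.22 * lambda / D.
import math

def airy_radius_arcmin(wavelength_m, pupil_diameter_m):
    theta_rad = 1.22 * wavelength_m / pupil_diameter_m
    return math.degrees(theta_rad) * 60.0

r = airy_radius_arcmin(555e-9, 3e-3)  # green light, assumed 3 mm pupil
print(f"{r:.2f} arcmin")  # ~0.78 arcmin, comparable to foveal cone spacing
```

Because this blur is on the same scale as individual cones, a geometric-optics-only model ("just blur", as the previous slide shows) visibly misrepresents the retinal image.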

  30. Diffracted Point Spread Functions Movie

  31. Putting It All Together • Generate synthetic retina. • Compute diffracted point spread functions by tracing wavefronts through optical model. • Simulate, photon by photon, a video sequence into eye cones. • Display cone photon counts as colored images.

  32. Simulating Display And Eye • For each frame of the video sequence: • For each pixel in each frame: • For each color primary in each pixel: • From the color primary intensity, compute the number of photons that enter the eye • For each simulated photon, generate a sub-pixel position, sub-frame time, and wavelength

  33. Simulating Display And Eye • From the sub-frame time of the photon, interpolate the eye rotation due to “drift”. • From the position and wavelength of the photon, interpolate the diffracted point spread function. • Interpolate and compute the effect of pre-receptoral filters: culls ~80% of photons.

  34. Simulating Display And Eye • Materialize the photon at a point within the DPSF parameterized by a random value. • Compute cone hit, cull photons that miss. • Apply Stiles-Crawford Effect (I), cull photons. • Compute cone photopigment absorptance; cull photons not absorbed. • Increment cone photon count by 1.
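The per-photon pipeline of slides 32-34 is essentially a chain of Monte Carlo culling stages. The sketch below condenses it; every probability here is a stand-in constant (only the ~80% pre-receptoral cull rate comes from slide 33), whereas the real model interpolates measured filters, the DPSF, and the Stiles-Crawford falloff per photon.

```python
# Condensed sketch of the per-photon culling chain from slides 32-34.
# A photon survives only if it passes every stage in order.
import random

def trace_photon(rng,
                 p_prereceptoral=0.20,   # slide 33: filters cull ~80% of photons
                 p_hits_cone=0.6,        # placeholder: geometric cone-hit rate
                 p_stiles_crawford=0.8,  # placeholder: angular acceptance
                 p_absorbed=0.5):        # placeholder: photopigment absorptance
    """Return True if this simulated photon increments some cone's count."""
    for p in (p_prereceptoral, p_hits_cone, p_stiles_crawford, p_absorbed):
        if rng.random() >= p:
            return False  # culled at this stage
    return True

rng = random.Random(42)
N = 100_000
counted = sum(trace_photon(rng) for _ in range(N))
print(counted / N)  # ~0.048, the product of the stage probabilities
```

Structuring the stages as sequential independent culls keeps the simulation embarrassingly parallel per photon, which matters when each frame delivers hundreds of photons per cone.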

  35. 30x30 Pixel Face Input

  36. Retinal Image Results

  37. Lumen Ramp Movie

  38. 30x30 Pixel Movie

  39. Result Movie

  40. How To Test Model?

  41. How To Test Model? • Test it the same way we test real eyes.

  42. 20/27 20/20 20/15 20/12 20/9 20/9 20/12 20/15 20/20 20/27

  43. Sine Frequency Ramp: 20 cycles/ – 40 cycles/ – 80 cycles/
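A sine frequency ramp of this kind (a chirp grating whose spatial frequency rises across the image) is straightforward to generate. This sketch is my own reconstruction of such a test pattern, not the talk's code; the 20-80 cycle range follows the slide, and frequencies here are in cycles per image width since the slide's units are truncated.

```python
# One scanline of a linear-chirp sine grating: spatial frequency rises
# smoothly from f_start to f_end across the image width. Phase is
# integrated sample by sample so the chirp has no discontinuities.
import math

def frequency_ramp(width, f_start=20.0, f_end=80.0):
    """Return a list of `width` intensity values in [0, 1]."""
    row, phase = [], 0.0
    for x in range(width):
        u = x / width
        f = f_start + (f_end - f_start) * u  # instantaneous cycles per image width
        phase += 2.0 * math.pi * f / width   # integrate instantaneous frequency
        row.append(0.5 + 0.5 * math.sin(phase))
    return row

row = frequency_ramp(512)
print(len(row), min(row) >= 0.0 and max(row) <= 1.0)  # 512 True
```

Fed through the simulator, such a ramp shows the spatial frequency at which the modeled optics and cone sampling stop transmitting contrast, analogous to a contrast-sensitivity test on a real eye.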

  44. Maximum Drift Movie

  45. Maximum Track Movie

  46. Next Steps • Continue validation of model and adding features. • Simulate deeper into the visual system: • Retinal receptor fields • Lateral geniculate nucleus • Simple and complex cells of the visual cortex.

  47. Acknowledgements • Michael Wahrman for the RenderMan™ rendering of the cone data. • Julian Gómez and the anonymous SIGGRAPH reviewers for their comments on the paper.
