
Lytro



  1. Lytro: The first light field camera for the consumer market. Todor Georgiev and Andrew Lumsdaine

  2. Origins • Ren Ng's dissertation, 2006: "Refocusing" • Levoy-Hanrahan "Light Field" (1996), Gortler et al. "Lumigraph" (1996), Adelson's "Plenoptic Function" (1991), etc. • Lippmann's work on capturing radiation with an array of lenslets: "Integral Photography" (1908). Nobel Prize (color photography) • Physical quantity: Radiance = energy density in 4D ray space.

  3. Lytro Plenoptic Camera • Existing commercial platform for characterizing the problem space and for new algorithm development and exploration • Company founded in 2006 (as "Refocus Imaging") to commercialize Ng's PhD thesis work at Stanford: the handheld plenoptic camera • First camera available for sale October 2011 • 11 Mpx CCD captures 11 "megarays" • Postprocessing accomplished on the host • Originally Mac only • Now Mac + PC • Creates a focal stack of images • Illusion of real-time refocusing • New effects released Dec 4, 2012 • Perspective • Instagram-like effects

  4. Basic Lytro design: Microlens array • Implements the microlens array approach to plenoptic imaging • Lytro sensor: 1.4 micron pixels with 14 micron microlenses in a hex array • 8X optical zoom [Figure labels: sensor, microlens array, optical focus]

  5. Analysis of traditional camera imaging The outside 3D world is mapped into the inside 3D world: a projective transform mapping points to points, lines to lines, and planes to planes. Infinity is treated projectively; points, lines, and the plane at infinity are handled seamlessly.
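
A tiny numeric illustration of that last point (not from the slides; the 4x4 matrix below is a made-up example of a 3D-to-3D projective map in homogeneous coordinates): a point at infinity is transformed exactly like any finite point and lands at a finite image point.

```python
import numpy as np

# Hypothetical 3D -> 3D projective transform in homogeneous coordinates,
# of the kind a lens induces between object space and image space.
# The last row makes the map genuinely projective (not affine).
P = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0, 1.0]])   # w' = z + w

finite_point = np.array([0.0, 0.0, 2.0, 1.0])   # the point (0, 0, 2)
point_at_inf = np.array([0.0, 0.0, 1.0, 0.0])   # direction along the axis, w = 0

for p in (finite_point, point_at_inf):
    q = P @ p
    print(q[:3] / q[3])   # both land at finite image points; infinity needs no special case
```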

  6. Analysis of traditional camera imaging The image plane (sensor) captures sharply all points that happen to be mapped to its location; everything else has a certain amount of blur. This is based on the mapping of rays to rays, with points defined as the vertices of pencils of rays.

  7. Conventional camera image In a conventional camera only the area around the image plane is in focus (DOF); the rest is blurry. [Figure labels: out of focus / in focus / out of focus]

  8. Analysis of plenoptic camera imaging If the pixels are replaced by microlenses positioned at a distance f from the sensor, ray intensities are recorded directly. Thus the plenoptic camera captures the 4D image of ray intensities (the radiance), not a 2D image: a full record of the radiance of the scene.

  9. Analysis of plenoptic camera imaging This approach (pixels replaced by microlenses) to recording ray direction produces 1 pixel per microlens, which would be about 0.1 megapixels for Lytro. Our MTF measurements show 0.3 megapixels, and in certain cases even higher resolution. How is that possible? (see next)
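
As a rough sketch of where the 0.1 megapixel figure comes from, the snippet below treats the sensor data as a 4D radiance and renders one pixel per microlens. It assumes an idealized, axis-aligned square grid with exactly 10 x 10 pixels per microimage (14 um pitch / 1.4 um pixels); the real Lytro array is hexagonal and slightly rotated, so real processing would resample onto such a grid first. The 3280 x 3280 array size is only a stand-in for an ~11 Mpx sensor.

```python
import numpy as np

MICROIMAGE = 10  # assumed pixels per microimage side (14 um pitch / 1.4 um pixels)

def raw_to_4d(raw):
    """Reshape a demosaiced sensor image into a 4D radiance L[j, i, v, u]:
    (j, i) index the microlens (position), (v, u) the pixel under it (direction)."""
    d = MICROIMAGE
    ny, nx = raw.shape[0] // d, raw.shape[1] // d
    return raw[:ny * d, :nx * d].reshape(ny, d, nx, d).transpose(0, 2, 1, 3)

def render_one_pixel_per_microlens(raw):
    """Plenoptic 1.0 rendering: integrate all ray directions under each microlens.
    With ~100 pixels per microimage, 11 megarays become roughly 0.1 megapixels."""
    return raw_to_4d(raw).mean(axis=(2, 3))

# Example on synthetic data roughly the size of an 11 Mpx sensor:
raw = np.random.rand(3280, 3280)
print(render_one_pixel_per_microlens(raw).shape)  # (328, 328), about 0.1 megapixels
```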

  10. Analysis of plenoptic camera imaging The plenoptic camera as a relay system. The shaded area represents the region of good focusing of the microlenses (at Nyquist). In that area we can render at full sensor resolution from each microimage; mixing such microimages produces the high final resolution that we observe. We call this "full resolution rendering." The unshaded area can render only 1 pixel per microimage and lies inside the hyperfocal distance f² / p of the microlenses, where p is the pixel size (approximately 0.5 mm in Lytro). Georgiev, T., Lumsdaine, A., Depth of Field in Plenoptic Cameras, Eurographics 2009.
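
A quick numeric check of the f² / p figure (a sketch; the ~25 um microlens focal length is an assumed value consistent with the ~14 um pitch and roughly f/2 microlenses, not a number given in the slides):

```python
# Hyperfocal distance of the microlenses: f^2 / p
f = 25e-6    # assumed microlens focal length (m)
p = 1.4e-6   # Lytro pixel size (m)
print(f**2 / p)   # ~4.5e-4 m, i.e. about 0.5 mm, matching the value quoted above
```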

  11. Analysis of plenoptic camera imaging In a plenoptic camera the DOF is extended, but the central part can never be recovered in focus from the individual microimages. [Figure labels: in focus / out of focus / in focus] This result is based on our camera, which is similar to Lytro: Georgiev, T., Lumsdaine, A., Depth of Field in Plenoptic Cameras, Eurographics 2009.

  12. Plenoptic 2.0 camera The plenoptic 2.0 camera as a relay system. The shaded area represents good focusing of the microlenses, satisfying the lens equation. In that area we can have full resolution rendering, and superresolution that can be 4X better; the unshaded area should be excluded. This approach is good for image capture close to the microlenses, but it has lower DOF. Used by Raytrix. Lumsdaine, A., Georgiev, T., The Focused Plenoptic Camera, ICCP 2009
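
Below is a minimal sketch of the patch-based "full resolution" rendering idea: take an M x M patch from the center of each microimage and tile the patches, so the output has M² pixels per microlens instead of one. This is an illustration, not Lytro's or Raytrix's actual pipeline; the patch size M is a placeholder (in practice it depends on the magnification, i.e. depth, of the microimages), and patches may need flipping and blending at the seams.

```python
import numpy as np

def full_resolution_render(L, M=6):
    """Tile an M x M central patch from each microimage.

    L is a 4D radiance L[j, i, v, u] (microlens position, pixel under microlens),
    e.g. as produced by the reshape sketch earlier. The output has M^2 pixels
    per microlens rather than 1."""
    ny, nx, d, _ = L.shape
    lo = (d - M) // 2
    patches = L[:, :, lo:lo + M, lo:lo + M]      # central M x M of each microimage
    return patches.transpose(0, 2, 1, 3).reshape(ny * M, nx * M)

# Example: 328 x 328 microlenses, 10 x 10 pixels each -> 1968 x 1968 rendered image
L = np.random.rand(328, 328, 10, 10)
print(full_resolution_render(L).shape)   # (1968, 1968)
```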

  13. Lytro: The Captured Image

  14. Lytro: The Rendered Image

  15. Lytro: The Rendered Image

  16. Lytro: More technical detail

  17. Lightfield Data for Algorithm Development • The Lytro application stores three main sets of data (organized in an SQLite database): camera calibration data / modulation images; raw lightfield files; and processed lightfield files (focal stacks), which are computed locally and stored • Raw lightfield files • Not demosaiced • Some meta information about the shot • A JSON header plus raw 12-bit data
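
A small sketch of unpacking the 12-bit raw samples into 16-bit integers. The packing order (two samples per three bytes, most significant bits first) is an assumption made for illustration and should be verified against the actual file format.

```python
import numpy as np

def unpack_12bit(buf):
    """Unpack bytes holding 12-bit samples (assumed: 2 samples per 3 bytes,
    most significant bits first) into a uint16 array."""
    b = np.frombuffer(buf, dtype=np.uint8).astype(np.uint16)
    b = b[: len(b) - len(b) % 3].reshape(-1, 3)
    s0 = (b[:, 0] << 4) | (b[:, 1] >> 4)          # first 12-bit sample
    s1 = ((b[:, 1] & 0x0F) << 8) | b[:, 2]        # second 12-bit sample
    return np.stack([s0, s1], axis=1).ravel()

# Example: three bytes 0xAB 0xCD 0xEF -> samples 0xABC and 0xDEF
print([hex(v) for v in unpack_12bit(bytes([0xAB, 0xCD, 0xEF]))])  # ['0xabc', '0xdef']
```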

  18. Factory Calibration • The raw microimages show vignetting, noise, random shifts of the microlenses, etc. To correct this, a calibration step is required, because the imperfections are camera specific. • Modulation images are included with each Lytro camera (12-bit images with a time stamp). Calibration images summary: • 60 modulation images are captured for each camera at manufacture time (about 30 minutes, based on the file time stamps), at different lens settings such as focus, zoom, and exposure. • Two dark images at different exposures. • Our use of the modulation images for Lytro rendering: • Divide the captured image by the corresponding modulation image (anti-vignetting) at similar parameters; clean up pattern noise and dark noise. • Compute the true microimage centers and use the new centers for rendering. This is the most important calibration in our experience. • Possibly Lytro uses a lens model to compute the centers.
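
A minimal sketch of the two calibration steps just described: dividing out the modulation image (anti-vignetting), and estimating microimage centers from the modulation image as local brightness maxima. The function names and neighborhood size are illustrative only; production code would further fit a regular hexagonal grid model to the detected peaks.

```python
import numpy as np
from scipy.ndimage import maximum_filter, gaussian_filter

def anti_vignetting(raw, modulation, eps=1e-6):
    """Divide the captured image by the matching modulation image (flat-field)."""
    return raw / np.maximum(modulation, eps)

def microimage_centers(modulation, min_spacing=8):
    """Estimate microimage centers as local maxima of the (smoothed) modulation image."""
    smooth = gaussian_filter(modulation.astype(float), sigma=1.5)
    peaks = (smooth == maximum_filter(smooth, size=min_spacing)) & (smooth > smooth.mean())
    return np.argwhere(peaks)          # (row, col) coordinates of detected centers

# Usage sketch (raw and modulation are 2D arrays taken at matching lens settings):
# corrected = anti_vignetting(raw, modulation)
# centers = microimage_centers(modulation)
```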

  19. Lytro: Modulation images

  20. Lytro: Modulation images

  21. Lytro metadata examples
  "clock": { "zuluTime": "2012-03-27T05:24:30.000Z" },
  "pixelPitch": 0.000001399999976158141876680929 },
  "lens": {
      "infinityLambda": 7.0,
      "focalLength": 0.05131999969482421875,
      "zoomStep": 100,
      "focusStep": 832,
      "fNumber": 2.21000003814697265625,
      "temperature": 38.569305419921875,
      "temperatureAdc": 2504,
      "zoomStepperOffset": 2,
      "focusStepperOffset": -36,
  "mla": {
      "tiling": "hexUniformRowMajor",
      "lensPitch": 0.00001389861488342285067432158,
      "rotation": -0.002579216146841645240783691406,
      "defectArray": [],
      "scaleFactor": { "x": 1.0, "y": 1.00024712085723876953125 },
  So, for example, the microlens pitch is 13.9 um, and the microlens array is estimated to have a rotation angle of -0.00258 relative to the sensor. Zoom step and focus step change for each picture. Our calibration is done by matching the image parameters with the calibration images having the closest metadata.
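
A sketch of that matching step: among the factory modulation images, pick the one whose recorded (zoomStep, focusStep) is closest to the shot's metadata. The field names follow the metadata excerpt above; the equal weighting of the two steps is an arbitrary choice made for illustration.

```python
def closest_modulation_image(shot_meta, calib_metas):
    """Return the index of the calibration image whose lens metadata best
    matches the shot: nearest (zoomStep, focusStep), equally weighted."""
    zs, fs = shot_meta["lens"]["zoomStep"], shot_meta["lens"]["focusStep"]
    def distance(meta):
        return (abs(meta["lens"]["zoomStep"] - zs)
                + abs(meta["lens"]["focusStep"] - fs))
    return min(range(len(calib_metas)), key=lambda i: distance(calib_metas[i]))

# Usage sketch:
# idx = closest_modulation_image(shot_meta, calibration_metadata_list)
# modulation = load_modulation_image(idx)   # hypothetical loader
```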

  22. Demo of rendering Lytro

  23. Lytro MTF: Target at 15 and 20 cm from the camera [Figure labels: 15 cm, 20 cm]

  24. Compare with halftone printing by dithering This effect is characteristic of the (plenoptic 1.0) camera at the depth corresponding to the microlenses. It is the price we pay for extended depth of field / good refocusability.

  25. Real world Lytro example

  26. Real world Lytro example

  27. Zoomed in refocusing

  28. Zoomed in refocusing – note the dot artifacts

  29. Microimages of constant color

  30. Conclusion: Lytro and resolution of plenoptic cameras Lytro is the first light field camera for the consumer market. It is likely that Lytro renders images based on a version of the full resolution method, generating much more than 1 pixel per microlens. The Lytro camera and application appear to reproduce the sensor resolution captured by each microimage. However, due to the mixing of multiple views, the final image resolution (under 1 megapixel) is far below the sensor resolution (11 megarays). Typical numbers for full resolution rendering from plenoptic camera data are 10X to 20X less than the sensor resolution; that holds for Lytro and for any other rendering. This situation can be greatly improved with superresolution: results with resolution only 5X lower than that of the sensor have been demonstrated. Lytro, too, has been able to generate much higher resolution than their current rendering in certain cases.
