
Camera Culture

Computational Photography: Advanced Topics. Camera Culture. Ramesh Raskar. Paul Debevec. Jack Tumblin. Speaker: Jack Tumblin, Associate Professor of Computer Science at Northwestern Univ.



Presentation Transcript


  1. Computational Photography: Advanced Topics Camera Culture Ramesh Raskar Paul Debevec Jack Tumblin

  2. Speaker: Jack Tumblin Associate Professor of Computer Science at Northwestern Univ. His “Look Lab” group pursues research on new methods to capture and manipulate the appearance of objects and surroundings, in the hope that hybrid optical/computer methods may give us new ways to see, explore, and interact with objects and people anywhere in the world. During his doctoral studies at Georgia Tech and post-doc at Cornell, he investigated tone-mapping methods to depict high-contrast scenes. His MS in Electrical Engineering (December 1990) and BSEE (1978), also from Georgia Tech, bracketed his work as co-founder of IVEX Corp. (>45 people as of 1990), where his flight simulator design work was granted 5 US Patents. He was an Associate Editor of ACM Transactions on Graphics (2000-2006), a member of the SIGGRAPH Papers Committee (2003, 2004), and in 2001 was a Guest Editor of IEEE Computer Graphics and Applications. http://www.cs.northwestern.edu/~jet

  3. Speaker: Paul Debevec Research Associate Professor, University of Southern California, and Associate Director of Graphics Research, USC's Institute for Creative Technologies. Debevec's Ph.D. thesis (UC Berkeley, 1996) presented Façade, an image-based modeling and rendering system for creating photoreal architectural models from photographs. Pioneer in high dynamic range photography, he demonstrated new image-based lighting techniques in his films Rendering with Natural Light (1998), Fiat Lux (1999), and The Parthenon (2004); he also led the design of HDR Shop, the first high dynamic range image editing program. At USC ICT, Debevec has led the development of a series of Light Stage devices used in Spider Man 2 and Superman Returns. He is the recipient of ACM SIGGRAPH's first Significant New Researcher Award and a co-author of the 2005 book High Dynamic Range Imaging from Morgan Kaufmann. http://www.debevec.org

  4. Speaker: Ramesh Raskar Associate Professor, MIT Media Lab. Previously at MERL as a Senior Research Scientist. His research interests include projector-based graphics, computational photography and non-photorealistic rendering. He has published several articles on imaging and photography including multi-flash photography for depth edge detection, image fusion, gradient-domain imaging and projector-camera systems. His papers have appeared in SIGGRAPH, EuroGraphics, IEEE Visualization, CVPR and many other graphics and vision conferences. He was a course organizer at Siggraph 2002 through 2005. He was the panel organizer at the Symposium on Computational Photography and Video in Cambridge, MA in May 2005 and taught a graduate level class on Computational Photography at Northeastern University, Fall 2005. He is a member of the ACM and IEEE. http://raskar.info http://www.media.mit.edu/~raskar

  5. Overview • Unlocking Photography • Not about the equipment but about the goal • Capturing ‘machine readable’ visual experience • Goes beyond what you can see through the viewfinder • Push the envelope with seemingly peripheral techniques and advances • Think beyond post-capture image processing • ‘Computation’ well before image processing and editing • Learn how to build your own camera-toys • Emphasis • Most recent work in graphics/vision (2006 and later) • Research in other fields: Applied optics, novel sensors, materials • Review of 50+ recent papers and projects • What we will not cover • Minimum discussion of graphics/vision papers before 2006 • Epsilon photography (improving camera performance by bracketing) • Film Cameras, Novel view rendering (IBR), Color issues, Traditional image processing/editing

  6. Traditional Photography Detector Lens Pixels Image Courtesy: Shree Nayar

  7. Traditional Photography Detector Lens Pixels Mimics Human Eye for a Single Snapshot: Single View, Single Instant, Fixed Dynamic range and Depth of field for given Illumination in a Static world Image

  8. Picture Computational Photography: Optics, Sensors and Computations Generalized Sensor Generalized Optics Computations Ray Reconstruction 4D Ray Bender Up to 4D Ray Sampler Merged bracketed photos, Coded sensing

  9. Computational Photography Novel Cameras Generalized Sensor Generalized Optics Processing

  10. Computational Photography Novel Illumination Light Sources Novel Cameras Generalized Sensor Generalized Optics Processing

  11. Computational Photography Novel Illumination Light Sources Novel Cameras Generalized Sensor Generalized Optics Processing Scene: 8D Ray Modulator

  12. Computational Photography Novel Illumination Light Sources Novel Cameras Generalized Sensor Generalized Optics Processing Display Scene: 8D Ray Modulator Recreate 4D Light Field

  13. Computational Photography Novel Illumination Light Sources Modulators Novel Cameras Generalized Optics Generalized Sensor Generalized Optics Processing 4D Incident Lighting Ray Reconstruction 4D Ray Bender Up to 4D Ray Sampler 4D Light Field Display Scene: 8D Ray Modulator Recreate 4D Light Field

  14. What is Computational Photography? • Create a photo that could not have been taken by a traditional camera (?) Goal: Record a richer, multi-layered visual experience • Overcome limitations of today’s cameras • Support better post-capture processing • Relightable photos, Focus/Depth of field, Fg/Bg, Shape boundaries • Enable new classes of recording the visual signal • Moment [Cohen05], Time-lapse, Unwrap mosaics, Cut-views • Synthesize “impossible” photos • Wrap-around views [Rademacher and Bishop 1998], fusion of time-lapsed events [Raskar et al 2004], motion magnification [Liu et al 2005], video textures and panoramas [Agarwala et al 2005] • Exploit previously exotic forms of scientific imaging • Coded aperture [Veeraraghavan 2007, Levin 2007], confocal imaging [Levoy 2004], tomography [Trifonov 2006]

  15. Computational Photography • Epsilon Photography • Low-level vision: Pixels • Multi-photos by perturbing camera parameters • HDR, panorama, … • ‘Ultimate camera’ • Coded Photography • Single/few snapshot • Reversible encoding of data • Additional sensors/optics/illum • ‘Scene analysis’ : (Consumer software?) • Essence Photography • Beyond single view/illum • Not mimic human eye • ‘New art form’

  16. Epsilon Photography • Dynamic range • Exposure bracketing [Mann-Picard, Debevec] • Wider FoV • Stitching a panorama • Depth of field • Fusion of photos with limited DoF [Agrawala04] • Noise • Flash/no-flash image pairs • Frame rate • Triggering multiple cameras [Wilburn04]

  17. Dynamic Range [Figure: Short Exposure and Long Exposure photos] Goal: High Dynamic Range
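
The bracketed-exposure idea behind this slide can be sketched in a few lines: merge differently exposed, linearized images into one radiance map, weighting each pixel toward its well-exposed measurements with a "hat" weight, in the spirit of the Mann-Picard and Debevec-Malik merging the slides cite. Function and variable names below are illustrative, not from the course.

```python
import numpy as np

def merge_hdr(images, exposure_times):
    """Merge bracketed exposures into a single radiance map.

    images: list of float arrays with linearized values in [0, 1].
    exposure_times: matching list of exposure durations in seconds.
    Each pixel's radiance estimate is a weighted average of
    (value / exposure_time); the hat weight trusts mid-range pixels
    and ignores clipped (very dark or saturated) ones.
    """
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)  # 1 at mid-gray, 0 at the clipped ends
        num += w * (img / t)
        den += w
    return num / np.maximum(den, 1e-8)
```

For a pixel of true radiance 0.3, a short exposure records a dark (noisy) value, a long one saturates, and the merge recovers 0.3 from the usable measurements.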

  18. Epsilon Photography • Dynamic range • Exposure bracketing [Mann-Picard, Debevec] • Wider FoV • Stitching a panorama • Depth of field • Fusion of photos with limited DoF [Agrawala04] • Noise • Flash/no-flash image pairs [Petschnigg04, Eisemann04] • Frame rate • Triggering multiple cameras [Wilburn05, Shechtman02]

  19. Computational Photography • Epsilon Photography • Low-level Vision: Pixels • Multi-photos by perturbing camera parameters • HDR, panorama • ‘Ultimate camera’ • Coded Photography • Mid-Level Cues: • Regions, Edges, Motion, Direct/global • Single/few snapshot • Reversible encoding of data • Additional sensors/optics/illum • ‘Scene analysis’ • Essence Photography • Not mimic human eye • Beyond single view/illum • ‘New art form’

  20. 3D • Stereo of multiple cameras • Higher dimensional LF • Light Field Capture • lenslet array [Adelson92, Ng05], ‘3D lens’ [Georgiev05], heterodyne masks [Veeraraghavan07] • Boundaries and Regions • Multi-flash camera with shadows [Raskar08] • Fg/bg matting [Chuang01, Sun06] • Deblurring • Engineered PSF • Motion: Flutter shutter [Raskar06], Camera Motion [Levin08] • Defocus: Coded aperture [Veeraraghavan07, Levin07], Wavefront coding [Cathey95] • Global vs direct illumination • High frequency illumination [Nayar06] • Glare decomposition [Talvala07, Raskar08] • Coded Sensor • Gradient camera [Tumblin05]

  21. Digital Refocusing using Light Field Camera 125 μm square-sided microlenses [Ng et al 2005]
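
The geometry behind Ng et al.'s digital refocusing can be sketched as shift-and-add over the camera's sub-aperture views: shift each view in proportion to its offset from the lens center, then average. This toy version (hypothetical names, integer-pixel shifts only) illustrates the idea, not the paper's actual 4-D resampling.

```python
import numpy as np

def refocus(light_field, alpha):
    """Synthetic refocusing of a 4-D light field by shift-and-add.

    light_field: array of shape (U, V, Y, X) -- a grid of sub-aperture
    views, as captured by a microlens camera. alpha: refocus parameter;
    each view (u, v) is translated in proportion to its offset from the
    central view before averaging. alpha = 0 reproduces the captured
    focal plane (plain average of all views).
    """
    U, V, Y, X = light_field.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((Y, X))
    for u in range(U):
        for v in range(V):
            dy = int(round(alpha * (u - cu)))
            dx = int(round(alpha * (v - cv)))
            out += np.roll(light_field[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)
```

Varying alpha sweeps the synthetic focal plane through the scene after capture, which is the point of the slide.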

  22. 3D • Stereo of multiple cameras • Higher dimensional LF • Light Field Capture • lenslet array [Adelson92, Ng05], ‘3D lens’ [Georgiev05], heterodyne masks [Veeraraghavan07] • Boundaries and Regions • Multi-flash camera with shadows [Raskar08] • Fg/bg matting [Chuang01, Sun06] • Deblurring • Engineered PSF • Motion: Flutter shutter [Raskar06], Camera Motion [Levin08] • Defocus: Coded aperture [Veeraraghavan07, Levin07], Wavefront coding [Cathey95] • Global vs direct illumination • High frequency illumination [Nayar06] • Glare decomposition [Talvala07, Raskar08] • Coded Sensor • Gradient camera [Tumblin05]

  23. [Figure: multi-flash images lit from Left, Top, Right, Bottom; detected Depth Edges compared with Canny Edges]
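
A minimal sketch of the multi-flash idea behind this figure: dividing each flash image by the per-pixel maximum over all flashes cancels surface texture, leaving only the cast shadows, which abut depth edges on the side away from each flash. The thresholded-drop detector below is a simplified stand-in for Raskar et al.'s edge traversal; the names and the 0.5 threshold are illustrative assumptions.

```python
import numpy as np

def depth_edges(flash_images):
    """Simplified multi-flash depth-edge sketch.

    flash_images: dict mapping flash direction ('left', 'right',
    'top', 'bottom') to an image lit from that side. Ratio images
    (image / per-pixel max) suppress texture; a sharp drop in a
    ratio image along the flash direction marks the transition
    into a cast shadow, i.e. a candidate depth edge.
    """
    maximg = np.maximum.reduce(list(flash_images.values()))
    edges = np.zeros_like(maximg, dtype=bool)
    axis_step = {'left': (1, 1), 'right': (1, -1),
                 'top': (0, 1), 'bottom': (0, -1)}
    for side, img in flash_images.items():
        ratio = img / np.maximum(maximg, 1e-8)
        axis, step = axis_step[side]
        diff = ratio - np.roll(ratio, step, axis=axis)
        edges |= diff < -0.5  # sharp drop into shadow
    return edges
```

Unlike Canny on intensity, this responds only to shadow transitions, which is why the slide contrasts depth edges with Canny edges.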

  24. 3D • Stereo of multiple cameras • Higher dimensional LF • Light Field Capture • lenslet array [Adelson92, Ng05], ‘3D lens’ [Georgiev05], heterodyne masks [Veeraraghavan07] • Boundaries and Regions • Multi-flash camera with shadows [Raskar08] • Fg/bg matting [Chuang01, Sun06] • Deblurring • Engineered PSF • Motion: Flutter shutter [Raskar06], Camera Motion [Levin08] • Defocus: Coded aperture [Veeraraghavan07, Levin07], Wavefront coding [Cathey95] • Global vs direct illumination • High frequency illumination [Nayar06] • Glare decomposition [Talvala07, Raskar08] • Coded Sensor • Gradient camera [Tumblin05]

  25. Flutter Shutter Camera Raskar, Agrawal, Tumblin [Siggraph 2006] LCD opacity switched in coded sequence

  26. Coded Exposure [Figure: traditional vs. coded-exposure deblurred images, with an image of the static object for reference]
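
The deblurring math behind the flutter shutter can be sketched in 1-D: motion blur is the convolution of the sharp signal with the shutter's open/close code, so (under a circular model) the sharp signal is recovered by dividing spectra. The key insight of Raskar et al.'s paper is choosing a code whose spectrum has no nulls, unlike an ordinary box blur; the particular code and names below are illustrative only.

```python
import numpy as np

def coded_deblur_1d(blurred, code):
    """Invert a 1-D coded-exposure motion blur (circular model).

    blurred: observed signal, the circular convolution of the sharp
    signal with the shutter code. code: binary open/close sequence.
    Deconvolution divides DFTs; this is well conditioned only when
    the code's spectrum has no zeros, which is what coded-exposure
    code design is about.
    """
    n = len(blurred)
    psf = np.zeros(n)
    psf[:len(code)] = np.asarray(code, dtype=float)
    psf /= psf.sum()                       # normalize total exposure
    B = np.fft.fft(blurred)
    P = np.fft.fft(psf)
    return np.real(np.fft.ifft(B / P))    # safe only if P has no zeros
```

With a box blur (all-ones code) P has exact nulls and the division blows up; a fluttered code keeps every frequency recoverable.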

  27. 3D • Stereo of multiple cameras • Higher dimensional LF • Light Field Capture • lenslet array [Adelson92, Ng05], ‘3D lens’ [Georgiev05], heterodyne masks [Veeraraghavan07] • Boundaries and Regions • Multi-flash camera with shadows [Raskar08] • Fg/bg matting [Chuang01, Sun06] • Deblurring • Engineered PSF • Motion: Flutter shutter [Raskar06], Camera Motion [Levin08] • Defocus: Coded aperture [Veeraraghavan07, Levin07], Wavefront coding [Cathey95] • Decomposition Problems • High frequency illumination, Global/direct illumination [Nayar06] • Glare decomposition [Talvala07, Raskar08] • Coded Sensor • Gradient camera [Tumblin05]

  28. "Fast Separation of Direct and Global Components of a Scene using High Frequency Illumination," S.K. Nayar, G. Krishnan, M. D. Grossberg, R. Raskar, ACM Trans. on Graphics (also Proc. of ACM SIGGRAPH), Jul, 2006.
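
The separation in Nayar et al.'s paper rests on a simple per-pixel observation: under a high-frequency pattern that lights about half the scene, a directly lit pixel measures the direct term plus roughly half the global term, while an unlit pixel measures only half the global term. So L_max ≈ L_d + L_g/2 and L_min ≈ L_g/2 across shifted patterns. A minimal sketch (illustrative names):

```python
import numpy as np

def separate_direct_global(frames):
    """Direct/global separation from shifted high-frequency patterns.

    frames: stack of images of the same scene under shifted
    high-frequency illumination (e.g. checkerboards) with ~50% of
    pixels lit in each frame. Per pixel:
        L_max ~= L_d + L_g / 2   (frame where the pixel is lit)
        L_min ~= L_g / 2         (frame where it is unlit)
    giving L_d = L_max - L_min and L_g = 2 * L_min.
    """
    lmax = np.max(frames, axis=0)
    lmin = np.min(frames, axis=0)
    direct = lmax - lmin
    global_ = 2.0 * lmin
    return direct, global_
```

In the limit, two well-chosen frames per pixel suffice, which is why the paper's title says "fast".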

  29. Separating Reflectance Components with Polarization-Difference Imaging [Figure: normal image; cross-polarized (subsurface component); polarization difference (primarily specular component)]
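
The slide's decomposition can be sketched directly: specular reflection preserves the source's polarization while subsurface scattering depolarizes it, so the cross-polarized image captures (half of) the subsurface term and the parallel-minus-cross difference is primarily specular. A toy version, assuming linear, registered images and illustrative names:

```python
import numpy as np

def polarization_separate(parallel, cross):
    """Reflectance separation from a polarizer pair.

    parallel: image taken with the camera polarizer aligned with the
    polarized light source; cross: taken with it rotated 90 degrees.
    The cross image sees only (half of) the depolarized subsurface
    light; subtracting it from the parallel image leaves primarily
    the specular component.
    """
    specular = np.clip(parallel - cross, 0.0, None)  # clamp sensor noise
    subsurface = 2.0 * cross
    return specular, subsurface
```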

  30. Computational Photography • Epsilon Photography • Multi-photos by varying camera parameters • HDR, panorama • ‘Ultimate camera’: (Photo-editor) • Coded Photography • Single/few snapshot • Reversible encoding of data • Additional sensors/optics/illum • ‘Scene analysis’: (Next software?) • Essence Photography • High-level understanding • Not mimic human eye • Beyond single view/illum • ‘New art form’

  31. Blind Camera Sascha Pohflepp, University of the Arts, Berlin, 2006

  32. Capturing the Essence of Visual Experience • Exploiting online collections • Photo-tourism [Snavely2006] • Scene Completion [Hays2007] • Multi-perspective Images • Multi-linear Perspective [Jingyi Yu, McMillan 2004] • Unwrap Mosaics [Rav-Acha et al 2008] • Video texture panoramas [Agarwala et al 2005] • Non-photorealistic synthesis • Motion magnification [Liu05] • Image Priors • Learned features and natural statistics • Face Swapping [Bitouk et al 2008] • Data-driven enhancement of facial attractiveness [Leyvand et al 2008] • Deblurring [Fergus et al 2006, 2007-2008 papers]

  33. Scene Completion Using Millions of Photographs • Hays and Efros, Siggraph 2007

  34. Capturing the Essence of Visual Experience • Exploiting online collections • Photo-tourism [Snavely2006] • Scene Completion [Hays2007] • Multi-perspective Images • Multi-linear Perspective [Jingyi Yu, McMillan 2004] • Unwrap Mosaics [Rav-Acha et al 2008] • Video texture panoramas [Agarwala et al 2005] • Non-photorealistic synthesis • Motion magnification [Liu05] • Image Priors • Learned features and natural statistics • Face Swapping [Bitouk et al 2008] • Data-driven enhancement of facial attractiveness [Leyvand et al 2008] • Deblurring [Fergus et al 2006, 2007-2008 papers]

  35. Andrew Davidhazy

  36. Unwrap Mosaics + Video Editing Rav-Acha et al Siggraph 2008

  37. Capturing the Essence of Visual Experience • Exploiting online collections • Photo-tourism [Snavely2006] • Scene Completion [Hays2007] • Multi-perspective Images • Multi-linear Perspective [Jingyi Yu, McMillan 2004] • Unwrap Mosaics [Rav-Acha et al 2008] • Video texture panoramas [Agarwala et al 2005] • Non-photorealistic synthesis • Motion magnification [Liu05] • Image Priors • Learned features and natural statistics • Face Swapping [Bitouk et al 2008] • Data-driven enhancement of facial attractiveness [Leyvand et al 2008] • Deblurring [Fergus et al 2006, 2007-2008 papers]

  38. Motion Magnification Liu, Torralba, Freeman, Durand, Adelson Siggraph 2005

  39. Motion Magnification Liu, Torralba, Freeman, Durand, Adelson Siggraph 2005

  40. Motion Magnification Liu, Torralba, Freeman, Durand, Adelson Siggraph 2005

  41. Capturing the Essence of Visual Experience • Exploiting online collections • Photo-tourism [Snavely2006] • Scene Completion [Hays2007] • Multi-perspective Images • Multi-linear Perspective [Jingyi Yu, McMillan 2004] • Unwrap Mosaics [Rav-Acha et al 2008] • Video texture panoramas [Agarwala et al 2005] • Non-photorealistic synthesis • Motion magnification [Liu05] • Image Priors • Learned features and natural statistics • Face Swapping [Bitouk et al 2008] • Data-driven enhancement of facial attractiveness [Leyvand et al 2008] • Deblurring [Fergus et al 2006, 2007-2008 papers]

  42. Face Swapping • Find Candidate face in DB and align • Tune pose, lighting, color and blend • Keep result with optimized matching cost [Bitouk et al 2008]

  43. Computational Photography • Epsilon Photography • Low-level vision: Pixels • Multi-photos by perturbing camera parameters • HDR, panorama, … • ‘Ultimate camera’ • Coded Photography • Mid-Level Cues: • Regions, Edges, Motion, Direct/global • Single/few snapshot • Reversible encoding of data • Additional sensors/optics/illum • ‘Scene analysis’ • Essence Photography • High-level understanding • Not mimic human eye • Beyond single view/illum • ‘New art form’

  44. Submit your questions .. • Today: What makes photography hard? What moments are you not able to capture? • Future: What do you expect in a camera or photo-software you ‘buy’ in 2020? Please submit by break at 3:30pm. Panel Discussion at 5:10pm

  45. Siggraph 2006: 16 Computational Photography Papers • Coded Exposure Photography: Motion Deblurring • Raskar et al (MERL) • Photo Tourism: Exploring Photo Collections in 3D • Snavely et al (Washington) • AutoCollage • Rother et al (Microsoft Research Cambridge) • Photographing Long Scenes With Multi-Viewpoint Panoramas • Agarwala et al (University of Washington) • Projection Defocus Analysis for Scene Capture and Image Display • Zhang et al (Columbia University) • Multiview Radial Catadioptric Imaging for Scene Capture • Kuthirummal et al (Columbia University) • Light Field Microscopy • Levoy et al (Stanford University) • Fast Separation of Direct and Global Components of a Scene Using High Frequency Illumination • Nayar et al (Columbia University) • Hybrid Images • Oliva et al (MIT) • Drag-and-Drop Pasting • Jia et al (MSRA) • Two-scale Tone Management for Photographic Look • Bae et al (MIT) • Interactive Local Adjustment of Tonal Values • Lischinski et al (Tel Aviv) • Image-Based Material Editing • Khan et al (Florida) • Flash Matting • Sun et al (Microsoft Research Asia) • Natural Video Matting using Camera Arrays • Joshi et al (UCSD / MERL) • Removing Camera Shake From a Single Photograph • Fergus et al (MIT)

  46. Siggraph 2007: 19 Computational Photography Papers • Image Analysis & Enhancement • Image Deblurring with Blurred/Noisy Image Pairs • Photo Clip Art • Scene Completion Using Millions of Photographs • Image Slicing & Stretching • Soft Scissors: An Interactive Tool for Realtime High Quality Matting • Seam Carving for Content-Aware Image Resizing • Image Vectorization Using Optimized Gradient Meshes • Detail-Preserving Shape Deformation in Image Editing • Light Field & High-Dynamic-Range Imaging • Veiling Glare in High-Dynamic-Range Imaging • Ldr2Hdr: On-the-Fly Reverse Tone Mapping of Legacy Video and Photographs • Appearance Capture & Editing • Multiscale Shape and Detail Enhancement from Multi-light Image Collections • Computational Cameras • Active Refocusing of Images and Videos • Multi-Aperture Photography • Dappled Photography: Mask-Enhanced Cameras for Heterodyned Light Fields and Coded Aperture Refocusing • Image and Depth from a Conventional Camera with a Coded Aperture • Big Images • Capturing and Viewing Gigapixel Images • Efficient Gradient-Domain Compositing Using Quadtrees • Video Processing • Factored Time-Lapse Video • Computational Time-Lapse Video • Real-Time Edge-Aware Image Processing With the Bilateral Grid

  47. Siggraph 2008: 19 Computational Photography Papers • Computational Photography & Display • Programmable Aperture Photography: Multiplexed Light Field Acquisition • Glare Aware Photography: 4D Ray Sampling for Reducing Glare Effects of Camera Lenses • Light-Field Transfer: Global Illumination Between Real and Synthetic Objects • Deblurring & Dehazing • Motion Invariant Photography • Single Image Dehazing • High-Quality Motion Deblurring From a Single Image • Progressive Inter-scale and Intra-scale Non-blind Image Deconvolution • Faces & Reflectance • Data-driven Enhancement of Facial Attractiveness • Face Swapping: Automatic Face Replacement in Photographs • AppProp: All-Pairs Appearance-Space Edit Propagation • Image Collections & Video • Factoring Repeated Content Within and Among Images • Finding Paths through the World's Photos • Improved Seam Carving for Video Retargeting • Unwrap Mosaics: A New Representation for Video Editing • Perception & Hallucination • A Perceptually Validated Model for Surface Depth Hallucination • A Perception-based Color Space for Illumination-invariant Image Processing • Self-Animating Images: Illusory Motion Using Repeated Asymmetric Patterns • Tone & Color • Edge-preserving Decompositions for Multi-scale Tone and Detail Manipulation • Light Mixture Estimation for Spatially Varying White Balance

  48. Ramesh Raskar and Jack Tumblin • Book Publishers: A K Peters • Siggraph 2008 booth: 20% off • Booth #821

  49. More .. • Articles • IEEE Computer, August 2006 Special Issue • Bimber, Nayar, Levoy, Debevec, Cohen/Szeliski • IEEE CG&A, March 2007 Special Issue • Durand and Szeliski • Science News cover story, April 2007 • Featuring: Levoy, Nayar, Georgiev, Debevec • American Scientist, February 2008 • Siggraph 2008 • 19 papers • HDRI, Mon/Tue 8:30am • Principles of Appearance Acquisition and Representation • Bilateral Filter course, Fri 8:30am • Other courses .. (Citizen Journalism, Wed 1:45pm) • First International Conf on Comp Photo, April 2009 • Athale, Durand, Nayar (Papers due Oct 3rd)
