

  1. CS395/495: Spring 2004 IBMR: Image Based Modeling and Rendering Introduction Jack Tumblin jet@cs.northwestern.edu

  2. Admin: How this course works • Refer to class website: (soon) http://www.cs.northwestern.edu/~jet/Teach/2004_3spr_IBMR/IBMRsyllabus2004.htm • Tasks: • Reading, lectures, class participation, projects • Evaluation: • Progressive Programming Project • Take-Home Midterm • In-class project demo; no final exam

  3. GOAL: First-Class Primitive • Want images as ‘first-class’ primitives • Useful as BOTH input and output • Convert to/from traditional scene descriptions • Want to mix real & synthetic scenes freely • Want to extend photography • Easily capture scene: shape, movement, surface/BRDF, lighting … • Modify & Render the captured scene data • --BUT-- • images hold only PARTIAL scene information. “You can’t always get what you want” –(Mick Jagger, 1968)

  4. Back To Basics: Scene & Image Light + 3D Scene: Illumination, shape, movement, surface BRDF,… 2D Image: Collection of rays through a point Image Plane I(x,y) Position(x,y) Angle(θ,φ)

  5. Trad. Computer Graphics Light + 3D Scene: Illumination, shape, movement, surface BRDF,… 2D Image: Collection of rays through a point Reduced, Incomplete Information Image Plane I(x,y) Position(x,y) Angle(θ,φ)

  6. Trad. Computer Vision Light + 3D Scene: Illumination, shape, movement, surface BRDF,… 2D Image: Collection of rays through a point !TOUGH! ‘ILL-POSED’ Many Simplifications, External knowledge… Image Plane I(x,y) Position(x,y) Angle(θ,φ)

  7. IBMR Goal: Bidirectional Rendering • Both forward and ‘inverse’ rendering! • (diagram) ‘3D Scene’ Description (scene illumination, object shape & position, surface reflectance, transparency, …) plus ‘Optical’ Description (camera pose, camera view geometry) → Traditional Computer Graphics → 2D Display Image(s); IBMR runs the other way: 2D Display Image(s) → scene description (?! New Research?)

  8. OLDEST IBR: Shadow Maps (1984) Fast Shadows from Z-buffer hardware: 1) Make the “Shadow Map”: • Render image seen from light source, BUT • Keep ONLY the Z-buffer values (depth) 2) Render Scene from Eyepoint: • Pixel + Z depth gives 3D position of surface; • Project 3D position into Shadow map image • If Shadow Map depth < 3D depth, SHADOW!
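
The two-pass test above can be sketched in a few lines. This is a minimal NumPy illustration of the idea, not the Z-buffer hardware algorithm from the slide; the function name, the `light_view_proj` matrix, and the depth bias are assumptions for illustration.

```python
# Sketch of the two-pass shadow-map test (assumed names, not lecture code).
import numpy as np

def in_shadow(world_pos, shadow_map, light_view_proj, bias=1e-3):
    """Return True if the 3D point `world_pos` is shadowed.

    shadow_map      : HxW array of depths rendered from the light (pass 1).
    light_view_proj : 4x4 matrix taking world space -> light clip space.
    Depths are assumed to be stored in the light's NDC convention.
    """
    p = light_view_proj @ np.append(world_pos, 1.0)   # project into light space
    p = p[:3] / p[3]                                  # perspective divide -> NDC
    u = int((p[0] * 0.5 + 0.5) * (shadow_map.shape[1] - 1))  # NDC -> texel coords
    v = int((p[1] * 0.5 + 0.5) * (shadow_map.shape[0] - 1))
    if not (0 <= u < shadow_map.shape[1] and 0 <= v < shadow_map.shape[0]):
        return False                                  # outside the map: treat as lit
    # Stored depth closer to the light than this point => an occluder => SHADOW.
    return shadow_map[v, u] + bias < p[2]
```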

  9. Early IBR: QuickTime VR (Chen, Williams ’93) 1) Four Planar Images → 1 Cylindrical Panorama: Re-sampling Required! Planar pixels: equal distance on the x,y image plane (tan⁻¹) Cylinder pixels: horiz: equal angle on the cylinder (θ); vert: equal distance in y (tan⁻¹)
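
A rough sketch of that resampling, assuming a pinhole source image with focal length `f` in pixels and nearest-neighbour lookup (these names and details are mine, not the QuickTime VR implementation): each cylindrical pixel is mapped back to the planar source.

```python
# Warp a planar (pinhole) image onto a cylinder of radius f (nearest-neighbour).
import numpy as np

def planar_to_cylindrical(img, f):
    H, W = img.shape[:2]
    cx, cy = W / 2.0, H / 2.0
    out = np.zeros_like(img)
    for v in range(H):
        for u in range(W):
            theta = (u - cx) / f            # cylinder: equal angle per pixel
            h = (v - cy) / f                # cylinder: equal height per pixel
            x = np.tan(theta) * f + cx      # plane: x = f * tan(theta)
            y = h * f / np.cos(theta) + cy  # plane: y = f * h / cos(theta)
            xi, yi = int(round(x)), int(round(y))
            if 0 <= xi < W and 0 <= yi < H:
                out[v, u] = img[yi, xi]
    return out
```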

  10. Early IBR: QuickTime VR (Chen, Williams ’93) 1) Four Planar Images → 1 Cylindrical Panorama (figure: input planar images, output cylindrical panorama)

  11. Early IBR: QuickTime VR (Chen, Williams ’93) 2) Windowing, Horizontal-only Reprojection (figure: input panorama, output reprojected view)

  12. Early IBR: QuickTime VR (Chen, Williams ’93) 2) Windowing, Horizontal-only Reprojection (figure: another input/output reprojection example)

  13. View Interpolation: How? • But what if no depth is available? • Traditional Stereo Disparity Map: pixel-by-pixel search for correspondence
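
A brute-force version of that correspondence search is block matching with a sum-of-squared-differences cost on a rectified stereo pair; the window size and disparity range below are illustrative assumptions, not values from the lecture.

```python
# Pixel-by-pixel disparity search (SSD block matching) on rectified images.
import numpy as np

def disparity_map(left, right, max_disp=32, win=5):
    """Per-pixel disparity estimate for a rectified grayscale stereo pair."""
    H, W = left.shape
    half = win // 2
    disp = np.zeros((H, W), dtype=np.int32)
    for y in range(half, H - half):
        for x in range(half + max_disp, W - half):
            patch = left[y-half:y+half+1, x-half:x+half+1].astype(np.float32)
            costs = [np.sum((patch - right[y-half:y+half+1,
                                           x-d-half:x-d+half+1].astype(np.float32))**2)
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))  # best-matching horizontal shift
    return disp
```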

  14. View Interpolation: How? • Store Depth at each pixel: reproject • Coarse or Simple 3D model:
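
A forward-warping sketch of "store depth at each pixel, then reproject": the intrinsics `K`, `K_new` and the new pose `R`, `t` are assumed inputs, and a real renderer would also fill the holes this leaves.

```python
# Reproject an image with per-pixel depth into a new camera (forward warp + z-buffer).
import numpy as np

def reproject(img, depth, K, K_new, R, t):
    H, W = depth.shape
    out = np.zeros_like(img)
    zbuf = np.full((H, W), np.inf)
    Kinv = np.linalg.inv(K)
    for v in range(H):
        for u in range(W):
            X = depth[v, u] * (Kinv @ np.array([u, v, 1.0]))  # back-project to 3D
            p = K_new @ (R @ X + t)                           # into the new camera
            if p[2] <= 0:
                continue                                      # behind the new camera
            u2, v2 = int(round(p[0] / p[2])), int(round(p[1] / p[2]))
            if 0 <= u2 < W and 0 <= v2 < H and p[2] < zbuf[v2, u2]:
                zbuf[v2, u2] = p[2]                           # keep nearest surface
                out[v2, u2] = img[v, u]
    return out
```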

  15. Plenoptic Array: ‘The Matrix Effect’ • Brute force! Simple arc, line, or ring array of cameras • Synchronized shutter http://www.ruffy.com/firingline.html • Warp/blend between images to change viewpoint on ‘time-frozen’ scene:

  16. Plenoptic Function (Adelson, Bergen ’91) • For a given scene, describe ALL rays through • ALL pixels, of • ALL cameras, at • ALL wavelengths, • ALL time: F(x, y, z, θ, φ, λ, t) “Eyeballs Everywhere” function (5-D x 2-D!)
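
To see why "ALL rays" is daunting: even a coarse discretization of F(x, y, z, θ, φ, λ, t) is enormous. The sample counts below are made-up illustration values, not figures from the lecture.

```python
# Toy count of samples in a crudely discretized 7-D plenoptic function.
samples = dict(x=100, y=100, z=100, theta=360, phi=180, wavelength=3, t=30)

cells = 1
for n in samples.values():
    cells *= n
print(f"{cells:.3e} samples")   # ~5.8e+12 even at this crude resolution
```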

  17. Seitz: ‘View Morphing’ SIGG`96 • http://www.cs.washington.edu/homes/seitz/vmorph/vmorph.htm 1) Manually set some correspondence points (eye corners, etc.) 2) Pre-warp and post-warp to match points in 3D 3) Reproject for virtual cameras
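
The key step in that pipeline, once the two views have been pre-warped to parallel image planes, is a plain linear interpolation of the matched points. A tiny sketch, with the point arrays and blend parameter as assumed inputs:

```python
# Interpolate matched points between two pre-warped (parallel-plane) views.
import numpy as np

def interpolate_points(pts0, pts1, s):
    """Matched Nx2 point sets from the pre-warped views; s in [0, 1] selects the in-between view."""
    return (1.0 - s) * pts0 + s * pts1
```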

  18. Seitz: ‘View Morphing’ SIGG`96 • http://www.cs.washington.edu/homes/seitz/vmorph/vmorph.htm

  19. Seitz: ‘View Morphing’ SIGG`96 • http://www.cs.washington.edu/homes/seitz/vmorph/vmorph.htm

  20. Seitz: ‘View Morphing’ SIGG`96 • http://www.cs.washington.edu/homes/seitz/vmorph/vmorph.htm

  21. Seitz: ‘View Morphing’ SIGG`96 • http://www.cs.washington.edu/homes/seitz/vmorph/vmorph.htm

  22. ‘Scene’ causes Light Field Light field: holds all outgoing light rays. Shape, Position, Movement; Emitted, Reflected, Scattered Light; BRDF, Texture, Scattering. Cameras capture only a subset of these rays.

  23. ? Can we recover Shape ? Can you find ray intersections? Or ray depth? Ray colors might not match for non-diffuse materials (BRDF)

  24. ? Can we recover Surface Material ? Can you find ray intersections? Or ray depth? Ray colors might not match for non-diffuse materials (BRDF)

  25. Hey, wait… • Light field describes light LEAVING the enclosing surface…. • ? Isn’t there a complementary ‘light field’ for the light ENTERING the surface? • *YES* ‘Image-Based Lighting’ too!

  26. ‘Full 8-D Light Field’ (10-D, actually: time, λ) • Cleaner Formulation: • Orthographic camera, positioned on sphere around object/scene • Orthographic projector, positioned on sphere around object/scene F(xc, yc, θc, φc, xl, yl, θl, φl, λ, t) (diagram: camera)

  27. ‘Full 8-D Light Field’ (10-D, actually: time, λ) • Cleaner Formulation: • Orthographic camera, positioned on sphere around object/scene • Orthographic projector, positioned on sphere around object/scene F(xc, yc, θc, φc, xl, yl, θl, φl, λ, t) (diagram: camera, θc, φc)

  28. ‘Full 8-D Light Field’ (10-D, actually: time, λ) • Cleaner Formulation: • Orthographic camera, positioned on sphere around object/scene • Orthographic projector, positioned on sphere around object/scene F(xc, yc, θc, φc, xl, yl, θl, φl, λ, t) (diagram: camera; projector ‘laser brick’)

  29. ‘Full 8-D Light Field’ (10-D, actually: time, λ) • Cleaner Formulation: • Orthographic camera, positioned on sphere around object/scene • Orthographic projector, positioned on sphere around object/scene • (and wavelength and time) F(xc, yc, θc, φc, xl, yl, θl, φl, λ, t) (diagram: camera, projector)

  30. ‘Full 8-D Light Field’ (10-D, actually: time, λ) • Cleaner Formulation: • Orthographic camera, positioned on sphere around object/scene • Orthographic projector, positioned on sphere around object/scene • (and wavelength and time) F(xc, yc, θc, φc, xl, yl, θl, φl, λ, t) (diagram: camera, projector)
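
One way to picture the 8-D table (ignoring λ and t): for every projector ray, switch that ray on and record what every camera ray sees. This is only a toy sketch under assumed names; the `capture` callback and the resolution `n` are illustration placeholders.

```python
# Tabulate an 8-D light field F(xc, yc, theta_c, phi_c, xl, yl, theta_l, phi_l).
import itertools
import numpy as np

def measure_light_field(capture, n=4):
    """capture(cam_ray, light_ray) -> scalar radiance; n samples per axis."""
    F = np.zeros((n,) * 8, dtype=np.float32)        # 8-D table (no lambda, t)
    axes = [range(n)] * 4
    for light_ray in itertools.product(*axes):      # each projector ray, one at a time
        for cam_ray in itertools.product(*axes):    # each camera ray's response
            F[cam_ray + light_ray] = capture(cam_ray, light_ray)
    return F
```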

  31. ‘Full 8-D Light Field’ (10-D, actually: time, λ) • ! Complete ! • Geometry, Lighting, BRDF,… ! NOT REQUIRED ! • Preposterously Huge: (?!?! 8-D function sampled at image resolution !?!?), but • Hugely Redundant: • Wavelength → RGB triplets, ignore Time, and Restrict eyepoint movement: maybe ~3 or 4D? • Very Similar images: use Warping rules? (SIGG2002) • Exploit Movie Storage/Compression Methods?
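
A back-of-envelope check of the reductions proposed above (λ → RGB, drop time, restrict the eyepoint to a 4-D two-plane parameterization): 256 samples per axis is an assumed example resolution, not a number from the lecture.

```python
# Uncompressed storage for a 4-D (u, v, s, t) light field with RGB samples.
samples_per_axis = 256
bytes_per_sample = 3                     # RGB triplet, one byte per channel
cells = samples_per_axis ** 4            # 4-D parameterization after the reductions
print(f"{cells * bytes_per_sample / 2**30:.1f} GiB uncompressed")   # ~12.0 GiB
```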

  32. IBR-Motivating Opinions “Computer Graphics: Hard” • Complex! geometry, texture, lighting, shadows, compositing, BRDF, interreflections, etc., etc., … • Irregular! Visibility, Topology, Render Eqn., … • Isolated! Tough to use real objects in CGI • Slow! Compute-bound, off-line only, … “Digital Imaging: Easy” • Simple! More quality? Just pump more pixels! • Regular! Vectorized, compressible, pipelined… • Accessible! Use real OR synthetic (CGI) images! • Fast! Scalable, image reuse, a path to interactivity…

  33. Practical IBMR What useful partial solutions are possible? • Texture Maps++: Panoramas, Env. Maps • Image(s)+Depth: (3D shell) • Estimating Depth & Recovering Silhouettes • ‘Light Probe’ measures real-world light • Structured Light to Recover Surfaces… • Hybrids: BTF, stitching, …

  34. Conclusion • Heavy overlap with computer vision: be careful not to re-invent & re-name! • Elegant geometry is at the heart of it all, even surface reflectance, illumination, etc. • THUS: we’ll dive into geometry -- all the rest is built on it!
