
3D Photography (Image-based Model Acquisition)


Presentation Transcript


  1. 3D Photography (Image-based Model Acquisition) [title image placeholder] Deepak Bandyopadhyay / UNC Chapel Hill

  2. “Analog” 3D photography! • “3D stereoscopic imaging” has been around as long as cameras have • Use a camera with 2 or more lenses (or a stereo attachment) • Use a stereo viewer to create the impression of 3D Deepak Bandyopadhyay / 258 / 3D Photography

  3. Motivation • Digitizing real-world objects • Getting realistic models of places, objects and humans Deepak Bandyopadhyay / 258 / 3D Photography

  4. 3D Photography : Definition • Sometimes called “3D Scanning” • Use cameras and light to capture the shape & appearance of real objects • Shape == geometry (point sampling + surface reconstruction + fairing) • Appearance == surface attributes (color/texture, material properties, reflectance) • Final result = richly detailed model Deepak Bandyopadhyay / 258 / 3D Photography

  5. Applications in Industry • Human body / head / face scans • Avatar creation for virtual worlds • 3D conferencing • Medical applications • Product design • Platforms: • Cyberware RD3030 • Others (Geomagic, Metacreations, Cyrax, Geometrix…) Deepak Bandyopadhyay / 258 / 3D Photography

  6. More applications • Historical preservation, dissemination of museum artifacts (Digital Michelangelo, Monticello, …) • CAD/CAM (e.g. legacy motorcycle parts scanned by Geomagic for Harley-Davidson) • Marketing (models of products on the web) • 3D games & simulation • Reverse engineering Deepak Bandyopadhyay / 258 / 3D Photography

  7. Technology Overview • The Imaging Pipeline • Real World • Optics • Recorder • Digitizer • Vision & Graphics Deepak Bandyopadhyay / 258 / 3D Photography

  8. Quick Notes on Optics • Model lenses with all their properties - aberration, distortion, flare, vignetting etc. • We correct for some of these effects (e.g. distortion) during calibration and ignore the others (see the calibration sketch below) • CCDs (charge-coupled devices) are the most popular recording media Deepak Bandyopadhyay / 258 / 3D Photography
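
As a minimal sketch of the calibration-and-undistortion step mentioned in this slide: one common approach today uses OpenCV with a checkerboard target. The file names, checkerboard size and the idea of using OpenCV at all are assumptions for illustration, not part of the original talk.

```python
# Hedged sketch: estimate camera intrinsics and distortion from checkerboard images,
# then undistort a scene image so later geometry can assume an ideal pinhole camera.
import cv2
import numpy as np

pattern_size = (9, 6)  # assumed checkerboard interior-corner count
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for fname in ["calib_00.png", "calib_01.png", "calib_02.png"]:  # hypothetical images
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Intrinsic matrix K plus radial/tangential distortion coefficients.
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)

# Correct lens distortion in a new image of the scene.
img = cv2.imread("scene.png")
undistorted = cv2.undistort(img, K, dist)
```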

  9. Theory : Passive Methods • Stereo pair matching • Structure from motion • Shape from shading • Photometric stereo Deepak Bandyopadhyay / 258 / 3D Photography

  10. Stereo Matching • Stereo Matching Basics • Needs two images, like stereoscopy • Given correspondences between points in the 2 views, we can find depth by triangulation • But correspondence is a hard problem! • A lot of literature on solving it… • Stereo Matching output • 3D point cloud • Remove outliers and pass it through a surface reconstructor (sketch below) Deepak Bandyopadhyay / 258 / 3D Photography
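
A minimal sketch of the pipeline this slide describes, assuming a rectified image pair and known calibration (the file names and the focal length / baseline values below are made up). Block matching stands in for the "hard correspondence problem"; triangulation for a rectified pair reduces to Z = f·B/d.

```python
# Hedged sketch: disparity by semi-global block matching, then depth by triangulation.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Approximate solution to the correspondence problem.
matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # SGBM is fixed-point

f, B = 700.0, 0.12                       # assumed focal length (px) and baseline (m)
valid = disparity > 0
Z = np.where(valid, f * B / np.maximum(disparity, 1e-6), 0.0)

# One 3D point per valid pixel; outliers would then be removed and the cloud passed
# to a surface reconstructor, as the slide suggests.
```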

  11. Structure from Motion • Camera moving, objects static • Compute camera motion and object geometry from the motion of image points • Assumption - orthographic projection (use a telephoto lens) • If: world origin = 3D centroid, camera origin = 2D centroid, then the camera translation drops out Deepak Bandyopadhyay / 258 / 3D Photography

  12. Structure from Motion • Camera moving, objects static • Compute camera motion and object geometry from motion of image points Deepak Bandyopadhyay / 258 / 3D Photography

  13. Structure from Motion • Factorization [Tomasi & Kanade, 92] • Stack the tracked image points into a measurement matrix W, which factors as W = M S (camera motion × 3D shape) • Find M, S using the Singular Value Decomposition of W; the SVD gives estimates of M and S only modulo a 3×3 linear transform A • Solve for A using the constraints on M (sketch below) Deepak Bandyopadhyay / 258 / 3D Photography
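
A rough numpy sketch of the rank-3 factorization step, assuming W is the 2F×P matrix of centered image coordinates (F frames, P tracked points). The metric upgrade via the transform A is only indicated in a comment, not implemented.

```python
# Hedged sketch of the Tomasi-Kanade factorization: W ~= M @ S via a rank-3 SVD.
import numpy as np

def factorize(W):
    """Split W into affine motion (2F x 3) and shape (3 x P), up to a 3x3 transform A."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    U3, s3, Vt3 = U[:, :3], s[:3], Vt[:3, :]         # keep the rank-3 part
    M_hat = U3 * np.sqrt(s3)                         # affine motion estimate
    S_hat = np.sqrt(s3)[:, None] * Vt3               # affine shape estimate
    # Metric constraints: each frame's two rows of M = M_hat @ A must be unit length
    # and mutually orthogonal; solving those equations for A @ A.T recovers A
    # (up to a global rotation), which is the "solve for A" step on the slide.
    return M_hat, S_hat
```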

  14. More methods • Shape from shading [Horn] • Invert Lambert’s law (L = I k cos θ), knowing the intensity at an image point, to solve for the surface normal • Photometric stereo [Woodham] • An extension of the above • Two or more images under different illumination conditions • Each image provides one constraint on the normal • Three images provide a unique solution for a pixel (sketch below) Deepak Bandyopadhyay / 258 / 3D Photography
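
A minimal photometric-stereo sketch, assuming a Lambertian surface, three (or more) aligned grayscale images, and known unit light directions. Each image contributes one linear constraint per pixel, I_k = albedo · (n · l_k), so the albedo-scaled normal is a least-squares solve.

```python
# Hedged sketch: recover per-pixel normals and albedo from images under known lights.
import numpy as np

def photometric_stereo(images, light_dirs):
    """images: list of HxW arrays; light_dirs: Kx3 unit vectors. Returns normals, albedo."""
    L = np.asarray(light_dirs, dtype=np.float64)            # K x 3
    I = np.stack([im.reshape(-1) for im in images], 0)      # K x N (N pixels)
    # Solve L @ g = I in the least-squares sense; g = albedo * normal at each pixel.
    g, *_ = np.linalg.lstsq(L, I, rcond=None)               # 3 x N
    albedo = np.linalg.norm(g, axis=0)
    normals = g / np.maximum(albedo, 1e-8)
    H, W = images[0].shape
    return normals.T.reshape(H, W, 3), albedo.reshape(H, W)
```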

  15. Active Sensing • Passive methods (e.g. stereo matching) suffer from ambiguities - many similar regions in one image could match a given point in the other • Project a known / regular pattern (“structured light”) into the scene to disambiguate • Get a precise reconstruction by combining views • Laser rangefinders • Projectors and imperceptible structured light Deepak Bandyopadhyay / 258 / 3D Photography

  16. Desktop 3D Photography Jean-Yves Bouguet, Pietro Perona • An active sensing technique using “weak structured lighting” • Needed: camera, lamp, chessboard, pencil, stick • Idea: • Light the object with the lamp & aim the camera at it • Move the stick around & capture the shadow sequence • Use the image of the deformed shadow to calculate the 3D shape Deepak Bandyopadhyay / 258 / 3D Photography

  17. Desktop 3D Photography Jean-Yves Bouguet, Pietro Perona • Computation of 3D position from the plane defined by the light source, the stick and its shadow (sketch below) Deepak Bandyopadhyay / 258 / 3D Photography
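
The core geometric operation here is intersecting each shadowed pixel's viewing ray with the shadow plane spanned by the lamp and the shadow edge on the desk. The sketch below shows only that ray-plane intersection; the intrinsics, pixel coordinates and plane parameters are illustrative values, not data from the paper.

```python
# Hedged sketch: back-project a pixel through assumed intrinsics K and intersect the
# resulting camera ray with the (already estimated) shadow plane.
import numpy as np

def intersect_ray_plane(ray_dir, plane_point, plane_normal, ray_origin=np.zeros(3)):
    """Return the 3D point where the ray origin + t * ray_dir meets the plane."""
    t = np.dot(plane_normal, plane_point - ray_origin) / np.dot(plane_normal, ray_dir)
    return ray_origin + t * ray_dir

K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])   # assumed intrinsics
u, v = 400, 260                                               # a shadowed pixel
ray = np.linalg.inv(K) @ np.array([u, v, 1.0])                # viewing ray direction
P = intersect_ray_plane(ray,
                        plane_point=np.array([0.0, 0.0, 1.0]),
                        plane_normal=np.array([0.1, -0.8, 0.6]))
```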

  18. Volumetric Methods Chevette Project, Debevec, 1991 Deepak Bandyopadhyay / 258 / 3D Photography

  19. Voxel Models from Images • When there are only 2 colors in the image (object silhouette vs. background) - use volume intersection [Szeliski 1993] • Back-project the silhouettes from the camera views & intersect them (sketch below) Deepak Bandyopadhyay / 258 / 3D Photography
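
A minimal voxel version of the silhouette-intersection idea: keep a voxel only if it projects inside the silhouette in every view. The projection matrices and silhouette masks are assumed to come from calibration and segmentation; this is a sketch of the principle, not Szeliski's octree algorithm.

```python
# Hedged sketch: carve a candidate voxel set against binary silhouette masks.
import numpy as np

def carve_from_silhouettes(voxels, projections, silhouettes):
    """voxels: N x 3 points; projections: list of 3x4 matrices; silhouettes: HxW bool masks."""
    keep = np.ones(len(voxels), dtype=bool)
    hom = np.hstack([voxels, np.ones((len(voxels), 1))])      # homogeneous coords, N x 4
    for P, mask in zip(projections, silhouettes):
        x = hom @ P.T                                         # N x 3 projected points
        u = np.round(x[:, 0] / x[:, 2]).astype(int)
        v = np.round(x[:, 1] / x[:, 2]).astype(int)
        H, W = mask.shape
        inside = (u >= 0) & (u < W) & (v >= 0) & (v < H)
        hit = np.zeros(len(voxels), dtype=bool)
        hit[inside] = mask[v[inside], u[inside]]
        keep &= hit                                           # must lie in every silhouette
    return voxels[keep]
```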

  20. Voxel Models from Images • With more colors but constrained viewpoints, use voxel coloring [Seitz & Dyer, 1997] • Choose a voxel & project it into all the views • Color it if enough views match • Problem - determining the visibility of a point from a view • Solution - depth-ordered traversal using a “view-independent depth order” (distance from a separating plane) Deepak Bandyopadhyay / 258 / 3D Photography

  21. Voxel Models from Images • A view-independent depth order may not exist (for some configurations of viewpoints / scene geometry) • Use Space Carving [Kutulakos & Seitz, 1998] • Computes a 3D (voxel) shape from multiple color photos • Computes the “maximally photo-consistent shape” - a superset of every 3D shape that could produce the given photos Deepak Bandyopadhyay / 258 / 3D Photography

  22. Space Carving • Algorithm: a) Initialize V to a volume containing the true scene b) For each voxel, • check if it is photo-consistent • if not, remove (“carve”) it • Can be shown to converge to the maximal photo-consistent scene (the union of all photo-consistent scenes) (sketch below) Deepak Bandyopadhyay / 258 / 3D Photography
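
A schematic version of the loop above, assuming a candidate voxel set and a photo-consistency predicate supplied by the caller. The real algorithm sweeps the volume in plane order to handle occlusion; here that detail is reduced to a comment, so treat this as an outline rather than an implementation of Kutulakos & Seitz.

```python
# Hedged sketch: repeatedly remove photo-inconsistent voxels until nothing changes.
import numpy as np

def space_carve(voxels, views, is_photo_consistent, max_sweeps=10):
    """voxels: N x 3 candidates initialized to a volume containing the true scene."""
    alive = np.ones(len(voxels), dtype=bool)
    for _ in range(max_sweeps):
        carved_any = False
        for i in np.nonzero(alive)[0]:
            # is_photo_consistent should project the voxel into the views that can
            # actually see it (given the surviving voxels) and compare the colors.
            if not is_photo_consistent(voxels[i], views, voxels[alive]):
                alive[i] = False
                carved_any = True
        if not carved_any:      # a full sweep removed nothing -> converged
            break
    return voxels[alive]
```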

  23. Space Carving : Results • House walkthrough - 24 rendered input views • Results look best when seen from one of the original views Deepak Bandyopadhyay / 258 / 3D Photography

  24. Modeling from a single view (Criminisi et al, 1999) • Compute 3D affine measurements of the scene from a single perspective image • Use minimal geometric information: • the vanishing line of a pencil of planes parallel to the reference plane • the vanishing point of parallel lines along a direction outside the reference plane Deepak Bandyopadhyay / 258 / 3D Photography

  25. Modeling from a single view (Criminisi et al, 1999) • Compute “ratios of parallel distances” • Creating a 3D model from a photograph: • horizontal lines are used to compute the vanishing line • parallel vertical lines are used to compute the vanishing point (sketch below) • Can generate a geometrically correct model from a Renaissance painting (one with correct perspective) Deepak Bandyopadhyay / 258 / 3D Photography
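
The projective bookkeeping behind the bullets above is compact in homogeneous coordinates: the line through two image points is their cross product, and the vanishing point of two imaged parallel lines is the cross product of those lines. The point coordinates below are invented purely for illustration.

```python
# Hedged sketch: vanishing point from two imaged parallel edges (homogeneous coords).
import numpy as np

def line_through(p, q):
    """Homogeneous line through two inhomogeneous image points."""
    return np.cross(np.append(p, 1.0), np.append(q, 1.0))

def intersection(l1, l2):
    """Intersection of two homogeneous lines, as an inhomogeneous image point."""
    x = np.cross(l1, l2)
    return x[:2] / x[2]

# Two images of parallel vertical building edges (made-up coordinates):
edge1 = line_through(np.array([100.0, 400.0]), np.array([110.0, 100.0]))
edge2 = line_through(np.array([500.0, 420.0]), np.array([480.0, 120.0]))
v = intersection(edge1, edge2)      # vertical vanishing point
# The vanishing line of the reference plane is found the same way, by joining two
# vanishing points of directions parallel to that plane.
```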

  26. Extracting color, reflectance • Photographs have lighting/shading effects that we estimate (the reflectance function) and compensate for (specular highlight removal) or change (relighting) • Work of Paul Debevec & others at Berkeley (acquiring the reflectance field) • Wood et al at U. Washington (surface light fields for 3D photography) Deepak Bandyopadhyay / 258 / 3D Photography

  27. Surface Light Field [Wood et al, 2000] • A 4D function on the surface - at surface parameter (u,v), for every direction (θ,φ), it stores the color • Fixed illumination conditions • Photographs taken from many different directions sample the surface light field • A continuous function (piecewise linear over (θ,φ)) is estimated by pointwise fairing Deepak Bandyopadhyay / 258 / 3D Photography

  28. Reflectance from Photographs (Yu, Debevec et al, 1999) • Estimating reflectance for entire scenes • Too general a problem, so parameterize it: • Assume the surface can be divided into patches • The diffuse reflectance (albedo) varies across a patch • The specular reflectance is taken as constant across a region • Assume lighting, calibration and geometry are known • Approach - Inverse Global Illumination • Estimate the BRDF for direct illumination - f(u, v, θ, φ) Deepak Bandyopadhyay / 258 / 3D Photography

  29. Reflectance from Photographs (Yu, Debevec et al, 1999) • Inverse Global Illumination • Known: Li (measured) and Ii (calculated from the known light sources) at every pixel • Estimate the BRDF for direct illumination - f(u, v, θi, φi, θr, φr) • Write the BRDF as a constant diffuse term plus a specular term that is a function of the incoming & outgoing directions and the roughness • Solve for the constants (ρd, ρs and the roughness) (sketch below) • For indirect illumination - estimate the parameters (and the indirect-illumination coefficients with other patches) iteratively Deepak Bandyopadhyay / 258 / 3D Photography
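
A toy illustration, not the paper's algorithm, of why the direct-illumination fit above is tractable: with the roughness held fixed, the predicted radiance is linear in the unknown ρd and ρs, so they fall out of a least-squares solve. K_d and K_s below are hypothetical names for the precomputed diffuse and specular lighting terms at each sample.

```python
# Hedged sketch: fit diffuse and specular reflectance constants by linear least squares.
import numpy as np

def fit_diffuse_specular(L_measured, K_d, K_s):
    """L_measured, K_d, K_s: 1-D arrays of per-sample radiance and lighting terms."""
    A = np.stack([K_d, K_s], axis=1)                        # N x 2 design matrix
    (rho_d, rho_s), *_ = np.linalg.lstsq(A, L_measured, rcond=None)
    return rho_d, rho_s
```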

  30. Case study - Façade Debevec, Taylor & Malik, 1996 • Modeling architectural scenes from photographs • Not fully automatic (the user inputs a blocky 3D model) • Using blocks leads to fewer parameters for architectural models • The user marks corresponding features on the photographs • The computer solves for block sizes, scale and camera rotations by minimizing the error of the corresponding features • Reprojects textures from the photographs onto the reconstructed model Deepak Bandyopadhyay / 258 / 3D Photography

  31. Arches and Surfaces of Revolution Taj Mahal modeled from one photograph Deepak Bandyopadhyay / 258 / 3D Photography

  32. Case study - Digital Michelangelo Project • 3D scanning of large statues (SIGGRAPH 00) • Separate geometry and color scans • Custom rig: laser scanner & camera mounted together • Range scan post-processing • Combine range scans from different positions • Use volumetric modeling methods (Curless, Levoy 1996) • Fill holes using space carving Deepak Bandyopadhyay / 258 / 3D Photography

  33. Case study - Digital Michelangelo Project • Color scan processing • Compensate for ambient lighting • subtract the images taken with & without the spotlight • Subtract out shadows & specularities • find the surface orientation (an inverse lighting computation) • convert color to RGB reflectance (acquire the light field) • Using an estimated BRDF of marble • modeling subsurface scattering Deepak Bandyopadhyay / 258 / 3D Photography

  34. Digital Michelangelo: Scanning a large object • Calibrated motions: pitch (yellow), pan (blue), horizontal translation (orange) • Uncalibrated motions: vertical translation, remounting the scan head, moving the entire gantry Deepak Bandyopadhyay / 258 / 3D Photography

  35. References • [Bouguet98] Bouguet, J.-Y. and P. Perona. 3D Photography on Your Desk. In Proc. ICCV 1998 • [Bouguet00] Bouguet, J.-Y. Presentation on Desktop 3D Photography, in SIGGRAPH course notes on 3D Photography, 2000 • [Criminisi99] Criminisi, A., I. Reid and A. Zisserman. Single View Metrology. In Proc. ICCV, pp 434-442, September 1999 • [Curless96] Curless, B. and M. Levoy. A Volumetric Method for Building Complex Models from Range Images. In Proc. SIGGRAPH 1996 • [Debevec96] Debevec, P., C. Taylor and J. Malik. Modeling and Rendering Architecture from Photographs (Façade). In Proc. SIGGRAPH 1996 • [Debevec00a] Debevec, P. Presentation on Façade, in SIGGRAPH course notes on 3D Photography, 1999, 2000 • [Debevec00b] Debevec, P., T. Hawkins, C. Tchou, H.-P. Duiker, W. Sarokin and M. Sagar. Acquiring the Reflectance Field of a Human Face. In Proc. SIGGRAPH 2000 Deepak Bandyopadhyay / 258 / 3D Photography

  36. More References • [Horn70] Horn, B.K.P. Shape from Shading: A Method for Obtaining the Shape of a Smooth Opaque Object from One View. Ph.D. Thesis, Dept of EE, MIT, 1970 • [Kutulakos98] Kutulakos, K. N. and S. Seitz. A Theory of Shape by Space Carving. URCS TR#692, May 1998; appeared in Proc. ICCV 1999 • [Levoy96] Levoy, M. and P. Hanrahan. Light Field Rendering. In Proc. SIGGRAPH 1996 • [Levoy00a] Levoy, M., K. Pulli, B. Curless et al. The Digital Michelangelo Project: 3D Scanning of Large Statues. In Proc. SIGGRAPH 2000 • [Levoy00b] Levoy, M. Presentation on the Digital Michelangelo Project, in SIGGRAPH course notes on 3D Photography, 2000 • [Seitz97] Seitz, S. and C. Dyer. Photorealistic Scene Reconstruction by Voxel Coloring. In Proc. CVPR 1997, pp 1067-1073 Deepak Bandyopadhyay / 258 / 3D Photography

  37. Still More References • [Seitz00] Seitz, S. SIGGRAPH course notes on 3D Photography, 1999, 2000 • [Szeliski93] Szeliski, R. Rapid Octree Construction from Image Sequences. CVGIP: Image Understanding, vol. 58, no. 1, pp 23-32, 1993 • [Wood00] Wood, D., D. I. Azuma, K. Aldinger, B. Curless, T. Duchamp, D.H. Salesin and W. Stuetzle. Surface Light Fields for 3D Photography. In Proc. SIGGRAPH 2000 • [Woodham80] Woodham, R. Photometric Stereo for Determining Surface Orientation from Multiple Images. Optical Engineering, vol. 19, no. 1, pp 138-144, 1980 • [Yu99] Yu, Y., P. Debevec, J. Malik and T. Hawkins. Inverse Global Illumination: Recovering Reflectance Models of Real Scenes from Photographs. In Proc. SIGGRAPH 1999 Deepak Bandyopadhyay / 258 / 3D Photography
