
Direct Methods for Visual Scene Reconstruction


Presentation Transcript


  1. Direct Methods for Visual Scene Reconstruction Presented by Kristin Branson November 7, 2002 Paper by Richard Szeliski & Sing Bing Kang

  2. Problem Statement • How can one extract information from a sequence of images without camera calibration? [Figure: a sequence of images is turned into a world model]

  3. Panoramic Mosaicing [Figure: input image sequence]

  4. Projective Depth Recovery [Figure: image sequence with scene points a, b and their projective depths da1, db1]

  5. Ambiguity What does the 3-D structure look like?

  6. Ambiguity What does the 3-D structure look like? [Figure: candidate depths d1–d4]

  7. Ambiguity What does the 3-D structure look like? The projective depth is defined only up to a projective transform. [Figure: candidate depths d1–d4]

  8. Ambiguity What does the 3-D structure look like?

  9. Outline • Image transformations. • Direct methods for image registration. • Mosaic construction. • Projective depth recovery.

  10. 2D Transformations How does the square look when we move the camera? [Figure: a square on a planar scene projected onto the image plane]

  11. Types of Transformations • 2D examples: rigid, rigid + scaling, affine, projective.

  12. Mathematically • The 2-D planar transformation of a point p to p' is $p' \sim M_{2D}\, p$, where $M_{2D}$ is a $3 \times 3$ matrix acting on homogeneous image coordinates.
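
As a concrete illustration of the planar transform above, here is a minimal numpy sketch (mine, not from the paper): it applies a 3x3 matrix M_2D to points in homogeneous coordinates and de-homogenizes the result. The similarity-transform values in the example are arbitrary.

    # Apply a 3x3 planar (projective) transform to 2-D points:
    # p' ~ M_2D p in homogeneous coordinates, then divide by the third component.
    import numpy as np

    def apply_planar_transform(M2d, points_xy):
        """Apply a 3x3 planar transform to an (N, 2) array of points."""
        pts_h = np.hstack([points_xy, np.ones((len(points_xy), 1))])  # to homogeneous
        mapped = pts_h @ M2d.T                                        # p' ~ M_2D p
        return mapped[:, :2] / mapped[:, 2:3]                         # dehomogenize

    # Example: a similarity transform (rotation + scale + translation) as M_2D.
    theta, s, tx, ty = np.deg2rad(30.0), 1.2, 5.0, -2.0
    M2d = np.array([[s * np.cos(theta), -s * np.sin(theta), tx],
                    [s * np.sin(theta),  s * np.cos(theta), ty],
                    [0.0,                0.0,               1.0]])
    square = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
    print(apply_planar_transform(M2d, square))

An affine or projective M_2D is applied exactly the same way; only the last row of the matrix changes.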

  13. 3-D Rigid Transformations How do image coordinates u and u' relate when the camera undergoes a rotation + translation? [Figure: a 3-D scene point viewed from two camera positions]

  14. Mathematically • Calculate the world coordinate p from an image coordinate u (with its projective depth) by inverting the viewing matrix M: $p \sim M^{-1} u$. • Calculate the image coordinate u' from p with the second camera's viewing matrix: $u' \sim M' p$, so $u' \sim M' M^{-1} u$.

  15. Panoramas • What if the optical center of the camera does not move? • The images are related by a homography.
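
Why a homography relates the images: a short sketch of the standard argument in my own notation (V, V' for the camera intrinsics, R, R' for the rotations; the slide states only the result). With a fixed optical center, every image coordinate depends only on the ray direction x toward the scene point:

    \[
    \mathbf{u} \sim V R\,\mathbf{x}, \qquad \mathbf{u}' \sim V' R'\,\mathbf{x}
    \quad\Longrightarrow\quad
    \mathbf{u}' \sim \bigl(V' R' R^{-1} V^{-1}\bigr)\,\mathbf{u} = H\,\mathbf{u},
    \]

so a single 3x3 homography H maps one image to the other; no depth appears because the translation is zero.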

  16. Direct Image Registration • For a pair of images, I and I', minimize the intensity discrepancy $E = \sum_i [\,I'(u'_i) - I(u_i)\,]^2$ over the motion parameters, where $u'_i$ is pixel $u_i$ mapped into I'.

  17. Nonlinear Iterative Minimization • At the minimum of E, the gradient vanishes: $\partial E/\partial m = 2 \sum_i e_i J_i = 0$, where $e_i = I'(u'_i) - I(u_i)$ is the residual and $J_i = \partial e_i/\partial m$ is its Jacobian with respect to the motion parameters m.

  18. Nonlinear Iterative Minimization • At the minimum of E, $\partial E/\partial m = 0$. • Given some estimate of the motion parameters m, find an increment $\Delta m$ s.t. $\partial E/\partial m$ vanishes at $m + \Delta m$. • Update: $m \leftarrow m + \Delta m$, where $\Delta m = -A^{-1} g$, g is the gradient $\partial E/\partial m$, and A is the Hessian $\partial^2 E/\partial m^2$.

  19. Levenberg-Marquardt • The Hessian is hard to calculate. • Approximate it as $A \approx \sum_i J_i^T J_i$, with a damping term $\lambda$ added to the diagonal. • Levenberg-Marquardt finds locally optimal solutions.
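
A minimal Levenberg-Marquardt sketch of the registration loop above, assuming a translation-only motion model for brevity (the paper estimates full homographies); the function names and the synthetic test pattern are mine.

    # Estimate a 2-D translation t = (tx, ty) between I and Ip by minimizing
    # E = sum_i [Ip(u_i + t) - I(u_i)]^2, with the Gauss-Newton Hessian
    # approximation A ~ J^T J and a damping term lam on the diagonal.
    import numpy as np

    def bilinear(img, x, y):
        """Sample img at float coordinates (x, y) with bilinear interpolation."""
        x0 = np.clip(np.floor(x).astype(int), 0, img.shape[1] - 2)
        y0 = np.clip(np.floor(y).astype(int), 0, img.shape[0] - 2)
        ax, ay = x - x0, y - y0
        return ((1 - ay) * ((1 - ax) * img[y0, x0]     + ax * img[y0, x0 + 1]) +
                ay       * ((1 - ax) * img[y0 + 1, x0] + ax * img[y0 + 1, x0 + 1]))

    def register_translation(I, Ip, iters=30, lam=1e-3):
        H, W = I.shape
        ys, xs = np.mgrid[5:H - 5, 5:W - 5]          # interior pixels only
        t = np.zeros(2)                              # motion parameters m
        for _ in range(iters):
            e = (bilinear(Ip, xs + t[0], ys + t[1]) - I[5:-5, 5:-5]).ravel()
            gx = 0.5 * (bilinear(Ip, xs + t[0] + 1, ys + t[1]) -
                        bilinear(Ip, xs + t[0] - 1, ys + t[1]))
            gy = 0.5 * (bilinear(Ip, xs + t[0], ys + t[1] + 1) -
                        bilinear(Ip, xs + t[0], ys + t[1] - 1))
            J = np.stack([gx.ravel(), gy.ravel()], axis=1)   # Jacobian de_i/dm
            A = J.T @ J + lam * np.eye(2)            # approximate Hessian
            g = J.T @ e                              # gradient (up to a factor 2)
            t -= np.linalg.solve(A, g)               # update m <- m + dm
        return t

    # Synthetic test: Ip is the same smooth pattern with its crop window moved
    # 3 px left and 2 px down, so the recovered translation should be ~ (3, -2).
    yy, xx = np.mgrid[0:90, 0:90].astype(float)
    base = np.sin(xx / 7.0) + np.cos(yy / 9.0) + 0.5 * np.sin((xx + yy) / 11.0)
    I, Ip = base[15:75, 15:75], base[17:77, 12:72]
    print(register_translation(I, Ip))

For a full homography the same loop is used; the Jacobian simply gains the derivatives of the warp with respect to all eight parameters.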

  20. Hierarchical Matching [Figure: image I and image I' are repeatedly subsampled into pyramids; the motion is estimated at the coarsest level (Mc), then refined level by level (Mb, Ma) up to the full-resolution estimate M]
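
A coarse-to-fine sketch of the hierarchical matching idea, under my own simplifications: pyramids are built by 2x2 averaging, `refine` is a hypothetical placeholder for one registration pass (e.g. a few Levenberg-Marquardt iterations), and doubling the estimate between levels is specific to the translation-only case.

    import numpy as np

    def downsample(img):
        """Halve the resolution by averaging 2x2 blocks."""
        h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
        return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

    def pyramid(img, levels):
        pyr = [img]
        for _ in range(levels - 1):
            pyr.append(downsample(pyr[-1]))
        return pyr                          # pyr[0] is full resolution

    def hierarchical_register(I, Ip, levels=4, refine=lambda I, Ip, t: t):
        pyr_I, pyr_Ip = pyramid(I, levels), pyramid(Ip, levels)
        t = np.zeros(2)                     # translation estimate
        for level in reversed(range(levels)):
            t = refine(pyr_I[level], pyr_Ip[level], t)
            if level > 0:
                t = 2.0 * t                 # one coarse pixel = two fine pixels
        return t

Plugging the Levenberg-Marquardt pass from the previous sketch in as `refine` gives the usual coarse-to-fine direct registration loop.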

  21. Mosaic Construction • How do we stitch together the registered images? • One approach: • Choose one frame’s coordinate system. • Warp all frames into that coordinate system. • Blend together overlapping sections by averaging.
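
A compositing sketch along the lines of the bullets above (my code, with nearest-neighbour sampling for brevity): each homography is assumed to map mosaic coordinates into its frame's coordinates, every frame is sampled onto the mosaic grid, and overlapping contributions are averaged.

    import numpy as np

    def composite(frames, homographies, mosaic_shape):
        """frames: list of (h, w) images; homographies[j] maps homogeneous mosaic
        coordinates (x, y, 1) into frame-j coordinates; mosaic_shape: (H, W)."""
        H, W = mosaic_shape
        acc, cnt = np.zeros((H, W)), np.zeros((H, W))
        ys, xs = np.mgrid[0:H, 0:W]
        pts = np.stack([xs.ravel(), ys.ravel(), np.ones(H * W)])     # 3 x (H*W)
        for img, Hj in zip(frames, homographies):
            q = Hj @ pts
            u, v = q[0] / q[2], q[1] / q[2]
            xi, yi = np.round(u).astype(int), np.round(v).astype(int)  # nearest pixel
            ok = (xi >= 0) & (xi < img.shape[1]) & (yi >= 0) & (yi < img.shape[0])
            vals = np.zeros(H * W)
            vals[ok] = img[yi[ok], xi[ok]]
            acc += vals.reshape(H, W)
            cnt += ok.reshape(H, W)
        return acc / np.maximum(cnt, 1)                              # average overlaps

    # Two copies of one frame, the second placed 10 px to the right in the mosaic.
    frame = np.arange(400.0).reshape(20, 20)
    right = np.array([[1.0, 0.0, -10.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
    mosaic = composite([frame, frame], [np.eye(3), right], (20, 30))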

  22. Mosaic Construction Example

  23. Mosaic Construction Example

  24. Environment Maps • Color each face a different color. • Unroll into a 2D texture map

  25. Environment Maps • Expand each face.

  26. Environment Maps • For each face, determine the mapping MF to world coordinates s.t. ui = MF pi. [Figure: face corner world coordinates pi and their image coordinates ui]

  27. Environment Maps • Warp each image to the coordinate system of each face. • For each face, form a blended image. • Paint the blended image faces into the 2D texture map. • This method can be performed for arbitrary surfaces, including a tessellated sphere.
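
One standard way to obtain a 3x3 mapping with ui = MF pi from the four face corners is the direct linear transform (DLT); this is my own sketch, not the authors' code, and the example corner coordinates are arbitrary.

    # Solve u_i ~ M_F p_i (up to scale) from >= 4 point correspondences by
    # stacking the cross-product constraints and taking the SVD null vector.
    import numpy as np

    def dlt_homography(p, u):
        """p, u: (N, 2) arrays of corresponding points, N >= 4."""
        rows = []
        for (px, py), (ux, uy) in zip(p, u):
            rows.append([px, py, 1, 0, 0, 0, -ux * px, -ux * py, -ux])
            rows.append([0, 0, 0, px, py, 1, -uy * px, -uy * py, -uy])
        A = np.asarray(rows, float)
        _, _, Vt = np.linalg.svd(A)
        return Vt[-1].reshape(3, 3)       # null vector of A, reshaped to M_F

    # Map a unit face to a 2-D texture tile at x in [100, 227], y in [0, 127].
    face_corners = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
    tex_corners  = np.array([[100., 0.], [227., 0.], [227., 127.], [100., 127.]])
    MF = dlt_homography(face_corners, tex_corners)
    MF /= MF[2, 2]
    corner = MF @ np.array([1.0, 1.0, 1.0])
    print(corner[:2] / corner[2])         # ~ [227., 127.]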

  28. Cubical Environment Map

  29. Cubical Environment Map

  30. Tessellated Sphere Results

  31. Blending • How do we choose pixel values in the mosaic where the images overlap? • Superimposing method: [Figure: simply superimposing the frames leaves inconsistencies at the frame boundaries]

  32. Blending • How do we choose pixel values in the mosaic where the images overlap? • Weighted averaging:

  33. Blending • How do we choose pixel values in the mosaic where the images overlap? • Weighted averaging: • Overlapping images are averaged, weighted by distance from the center.
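
A feathering sketch of the weighted-averaging idea (mine, not the paper's code): each warped frame's weight is its distance to the frame border, computed as a distance transform of its validity mask, so contributions fade out toward the seams.

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def feather_blend(warped_frames, masks):
        """warped_frames: (H, W) images already warped into the mosaic frame;
        masks: matching boolean arrays, assumed False outside each frame's
        warped footprint."""
        acc = np.zeros(warped_frames[0].shape)
        wsum = np.zeros_like(acc)
        for img, mask in zip(warped_frames, masks):
            w = distance_transform_edt(mask)     # distance to the frame's border
            acc += w * img
            wsum += w
        return acc / np.maximum(wsum, 1e-9)      # weighted average of overlaps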

  34. Blending • How do we choose pixel values in the mosaic where the images overlap? • Multi-resolution blending: • Overlapping images are averaged, weighted by proximity to desired zoom.

  35. Projective Depth Recovery • Earlier, we saw that if two cameras are related by a 3-D rigid transformation, then $u' \sim H u + d\, M' t$.

  36. Projective Depth Recovery • Earlier, we saw that if two cameras are related by a 3-D rigid transformation, then $u' \sim H u + d\, M' t$, where u is the image coordinate in I, u' the image coordinate in I', H the homography, d the projective depth, M' the viewing matrix for I', and t the translation from O to O'.

  37. Projective Depth Recovery • Earlier, we saw that if two cameras are related by a 3-D rigid transformation, then $u' \sim H u + d\, e$, where u is the image coordinate in I, u' the image coordinate in I', H the homography, d the projective depth, and $e \sim M' t$ is the parallax motion.

  38. Projective Depth Recovery • Earlier, we saw that if two cameras are related by a 3-D rigid transformation, then $u' \sim H u + d\, e$, where u is the image coordinate in I, u' the image coordinate in I', H the homography, d the projective depth, and e the parallax motion.

  39. Algorithmic Idea • Choose a base frame I0 to recover projective depth in. • Find Mj, the parallax motion, and di to minimize the intensity discrepancy between each warped frame and the base frame I0, using nonlinear iterative minimization.
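
A sketch of the warp inside that discrepancy, in my own notation and code: writing Mj for frame j's homography, ej for its parallax motion, and d for the projective depth map, each base-frame pixel u maps into frame j as u^j ~ Mj u + d ej; the `sampler` argument is a placeholder for any interpolating lookup into frame j.

    import numpy as np

    def warp_coordinates(Mj, ej, depth):
        """Mj: 3x3 homography; ej: length-3 parallax vector; depth: (H, W) map.
        Returns the (x, y) coordinates of every base-frame pixel in frame j."""
        H, W = depth.shape
        ys, xs = np.mgrid[0:H, 0:W]
        u = np.stack([xs.ravel(), ys.ravel(), np.ones(H * W)])     # 3 x (H*W)
        q = Mj @ u + np.outer(ej, depth.ravel())                   # M u + d e
        return (q[0] / q[2]).reshape(H, W), (q[1] / q[2]).reshape(H, W)

    def ssd_cost(I0, sampler, Mj, ej, depth):
        """Sum of squared differences between the base frame I0 and frame j
        warped back onto it; sampler(x, y) interpolates frame j."""
        x, y = warp_coordinates(Mj, ej, depth)
        return np.sum((sampler(x, y) - I0) ** 2)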

  40. Ambiguity

  41. Ambiguity

  42. Number of Parameters • How many parameters must we estimate? • (8 + 3) n + p, where n is the number of images and p is the number of pixels (8 parameters for each frame's homography plus 3 for its parallax motion). • p is large, so the depth map is represented using a tensor-product spline.

  43. Splines • Represent the depth map with a spline: let the depths at the control vertices vary, and find the depths at all pixels by interpolation. [Figure: depth-map spline; pixel (i, j) takes its depth from the surrounding spline control vertices]
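
A sketch of the spline representation using the simplest tensor-product basis, bilinear interpolation (the paper's spline may be of higher order): depths are stored only at a coarse grid of control vertices and interpolated to every pixel.

    import numpy as np

    def spline_depth(control, out_shape):
        """control: (m, n) depths at the control vertices; out_shape: (H, W)."""
        H, W = out_shape
        m, n = control.shape
        # control-grid coordinates of every output pixel
        gy = np.linspace(0, m - 1, H)[:, None]
        gx = np.linspace(0, n - 1, W)[None, :]
        y0 = np.clip(np.floor(gy).astype(int), 0, m - 2)
        x0 = np.clip(np.floor(gx).astype(int), 0, n - 2)
        ay, ax = gy - y0, gx - x0
        return ((1 - ay) * (1 - ax) * control[y0, x0]     +
                (1 - ay) * ax       * control[y0, x0 + 1] +
                ay       * (1 - ax) * control[y0 + 1, x0] +
                ay       * ax       * control[y0 + 1, x0 + 1])

    # 4x4 control vertices expanded to a dense 64x64 depth map.
    dense = spline_depth(np.random.default_rng(1).random((4, 4)), (64, 64))

During the minimization only the control-vertex depths are free parameters, which is what brings the count down from p unknowns to a small grid.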

  44. Local Minima • The high dimensionality of the search space increases the chance of finding a nonglobal optimum. • One solution is to initialize the dense algorithm with the results of a feature-based algorithm.

  45. Feature-Based Algorithm • Detect features, for example corners, in each frame. • Find between-frame feature correspondences.

  46. Feature-Based Algorithm • Now we have the locations of each feature i in each frame j, vij. • Find the transformation (Mj, the parallax motion, and di) that minimizes the reprojection error through nonlinear iterative minimization.

  47. Feature-Based Algorithm • Now we have the locations of each feature i in each frame j, vij. • Find the transformation (Mj, the parallax motion, and di) that minimizes the inverse-variance-weighted distance between the observed locations vij and the locations predicted from each feature's position in the base frame, through nonlinear iterative minimization.

  48. Algorithm Initialization • Simple approach: initialize all of the unknowns at once and minimize jointly. • Faster approach: • Fix the depths di and the parallax motion, then solve for Mj. • Then estimate the parallax motion and the depths di.

  49. View Interpolation [Figure: two input images and a virtual camera synthesizing a novel image between them]

  50. View Interpolation • We can approximate the Euclidean depth map from the projective depth map. • From the Euclidean depth map, we can synthesize novel views of a scene.
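
A forward-warping sketch of view synthesis from a Euclidean depth map (my illustration, not the paper's renderer): the intrinsics K, the virtual camera pose (R, t), and the nearest-pixel z-buffered splatting are my assumptions, and pixels where no source point lands are left as holes.

    import numpy as np

    def render_novel_view(image, depth, K, R, t, out_shape):
        """image, depth: (H, W) grayscale and depth for the input view;
        K: 3x3 intrinsics; R, t: pose of the virtual camera; out_shape: (Hn, Wn)."""
        H, W = depth.shape
        ys, xs = np.mgrid[0:H, 0:W]
        pix = np.stack([xs.ravel(), ys.ravel(), np.ones(H * W)])    # homogeneous pixels
        rays = np.linalg.inv(K) @ pix                               # back-projected rays
        X = rays * depth.ravel()                                    # 3-D points, input frame
        Xc = R @ X + t[:, None]                                     # virtual-camera frame
        proj = K @ Xc
        u = np.round(proj[0] / proj[2]).astype(int)
        v = np.round(proj[1] / proj[2]).astype(int)
        z = proj[2]
        Hn, Wn = out_shape
        novel = np.zeros((Hn, Wn))
        zbuf = np.full((Hn, Wn), np.inf)
        ok = (u >= 0) & (u < Wn) & (v >= 0) & (v < Hn) & (z > 0)
        for ui, vi, zi, ci in zip(u[ok], v[ok], z[ok], image.ravel()[ok]):
            if zi < zbuf[vi, ui]:                                   # keep the nearest point
                zbuf[vi, ui] = zi
                novel[vi, ui] = ci
        return novel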
