
Various camera configurations

This article discusses epipolar geometry and its application to reconstructing three-dimensional objects. It covers projective transformations, homogeneous coordinates, perspective projection, and the essential matrix constraint. The Longuet-Higgins 8-point algorithm is presented as a method for recovering 3D structure from two views when the relative orientation of the cameras is unknown. Bundle adjustment and minimization of the re-projection error are also covered, along with engineering considerations and recommended reading.


Presentation Transcript


  1. Various camera configurations • Single point of fixation where optical axes intersect • Optical axes parallel (fixation at infinity) • General case

  2.–9. Epipolar geometry for cameras in general position (a sequence of eight figure-only slides; the figures themselves are not transcribed)

  10. Applications • Reconstructing a three-dimensional object given two or more photos when the relative orientation of the cameras is unknown, for example when using images downloaded from the internet, taken by different people at different times. • Some examples from Univ. Washington/Microsoft Research: • phototour.cs.washington.edu • photosynth.net • grail.cs.washington.edu/rome/

  11. The approach

  12. But first, let us review projective transformations

  13. Projective Transformations • Under perspective projection, parallel lines can map to lines that intersect. Therefore, this cannot be modeled by an affine transform! • Projective transformations are a more general family which includes affine transforms and perspective projections. • Projective transformations are linear transformations using homogeneous coordinates
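
Concretely (standard notation; the slide's formulas are not transcribed): a projective transformation of the plane is an invertible 3×3 matrix H acting on homogeneous coordinates,

```latex
\mathbf{x}' \sim H\,\mathbf{x}, \qquad H \in \mathbb{R}^{3 \times 3},\ \det H \neq 0
```

where ∼ denotes equality up to a nonzero scale factor. Affine transforms are the special case whose last row is (0, 0, 1).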

  14. Homogeneous coordinates • Instead of using n coordinates for n-dimensional space, we use n+1 coordinates.
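
For example, under the usual convention (assumed here), a point (x, y) of the 2D plane receives the three homogeneous coordinates

```latex
(x, y) \;\mapsto\; (x, y, 1)
```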

  15. Key rule
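
The rule itself is not in the transcript; presumably it is the scale-equivalence rule: homogeneous coordinate vectors name the same point whenever they differ by a nonzero scalar multiple,

```latex
(x_1, \dots, x_{n+1}) \;\sim\; (\lambda x_1, \dots, \lambda x_{n+1}), \qquad \lambda \neq 0
```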

  16. Picking a canonical representative
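
Presumably (the formula is not transcribed): when the last coordinate is nonzero, divide through by it to get the unique representative ending in 1; vectors whose last coordinate is 0 represent points at infinity.

```latex
(x_1, x_2, x_3) \;\sim\; \left( \tfrac{x_1}{x_3}, \tfrac{x_2}{x_3}, 1 \right), \qquad x_3 \neq 0
```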

  17. The projective line • Any finite point x can be represented as (x, 1), up to scale • The single point at infinity can be expressed as (1, 0)

  18. The projective plane • Any finite point (x, y) can be represented as (x, y, 1) • Any infinite point can be represented as (x, y, 0), encoding a direction in the plane

  19. Lines in homogeneous coordinates
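
The standard representation, assumed here since the slide body is not transcribed: the line ax + by + c = 0 is encoded by its coefficient vector, which, like a point, is defined only up to scale:

```latex
\ell = (a, b, c)^\top, \qquad \ell \sim \lambda \ell,\ \lambda \neq 0
```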

  20.–22. Incidence of points on lines (developed over three figure slides; the figures are not transcribed)
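
In the notation above (standard usage, assumed from the slide titles), a point x = (x1, x2, x3) is incident on a line ℓ = (a, b, c) exactly when their dot product vanishes:

```latex
\ell^\top \mathbf{x} = a x_1 + b x_2 + c x_3 = 0
```

Note that the condition is symmetric in ℓ and x, which is the source of point–line duality in the projective plane.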

  23. Line incident on two points
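
Presumably the slide gives the standard cross-product formula: the line through two points x and x' is their cross product, and dually, the intersection point of two lines ℓ and ℓ' is the cross product of the lines:

```latex
\ell = \mathbf{x} \times \mathbf{x}', \qquad \mathbf{x} = \ell \times \ell'
```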

  24. Representing affine transformations
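
In homogeneous coordinates an affine map x ↦ Ax + t becomes a single matrix multiplication (standard form, not transcribed from the slide):

```latex
\begin{pmatrix} \mathbf{x}' \\ 1 \end{pmatrix} =
\begin{pmatrix} A & \mathbf{t} \\ \mathbf{0}^\top & 1 \end{pmatrix}
\begin{pmatrix} \mathbf{x} \\ 1 \end{pmatrix}
```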

  25. Perspective Projection
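
A sketch of the usual pinhole model (assumed, since the slide body is not transcribed): a 3D point (X, Y, Z) projects to (fX/Z, fY/Z) on an image plane at focal length f, which is linear in homogeneous coordinates:

```latex
\begin{pmatrix} x \\ y \\ w \end{pmatrix} \sim
\begin{pmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix}
\begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix}
```

The division by Z (recovering x = fX/Z, y = fY/Z from w = Z) is exactly what makes this projective rather than affine.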

  26. Projective transformations

  27. The Essential Matrix Constraint
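
In standard form (the slide's equation itself is not transcribed): for calibrated cameras related by rotation R and translation t, corresponding viewing directions x and x' satisfy

```latex
\mathbf{x}'^\top E\,\mathbf{x} = 0, \qquad E = [\mathbf{t}]_\times R
```

where [t]× denotes the 3×3 skew-symmetric matrix with [t]× v = t × v. Geometrically, the constraint says x' must lie on the epipolar line E x.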

  28. Proof is based on the co-planarity of three rays

  29. Coplanarity of 3-vectors implies triple product is zero
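
Filling in the step these two slides sketch: the ray x' in the second camera, the rotated ray Rx from the first camera, and the baseline t all lie in the epipolar plane, so their scalar triple product is zero. Writing the cross product with t as the matrix [t]× turns this into the essential matrix constraint:

```latex
\mathbf{x}' \cdot (\mathbf{t} \times R\mathbf{x})
= \mathbf{x}'^\top [\mathbf{t}]_\times R\,\mathbf{x}
= \mathbf{x}'^\top E\,\mathbf{x} = 0
```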

  30. Longuet-Higgins 8 point algorithm
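
A minimal numpy sketch of the algorithm (function and variable names are mine, not from the slides): each correspondence contributes one linear equation in the nine entries of E, so eight or more correspondences give a homogeneous least-squares problem solvable by SVD.

```python
import numpy as np

def eight_point(x1, x2):
    """Estimate the essential matrix E from >= 8 correspondences.

    x1, x2: (N, 2) arrays of matched, calibrated (normalized) image
    coordinates in the two views. Sketch only: the coordinate
    normalization used in practice is omitted.
    """
    n = x1.shape[0]
    assert n >= 8, "need at least 8 correspondences"
    # Each correspondence gives one linear equation x2^T E x1 = 0
    # in the 9 unknown entries of E (flattened row-major).
    A = np.zeros((n, 9))
    for i in range(n):
        u1, v1 = x1[i]
        u2, v2 = x2[i]
        A[i] = [u2*u1, u2*v1, u2, v2*u1, v2*v1, v2, u1, v1, 1]
    # Least-squares solution: the right singular vector of A
    # belonging to the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)
    # Enforce the essential-matrix structure: singular values (s, s, 0).
    U, S, Vt = np.linalg.svd(E)
    s = (S[0] + S[1]) / 2.0
    return U @ np.diag([s, s, 0.0]) @ Vt
```

On noisy data one would precondition the coordinates and wrap the linear core in a robust estimator such as RANSAC; this sketch shows only the linear core of the algorithm.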

  31. Summary • The basic module of recovering 3D structure from two views with unknown relative orientation (R, t) of the cameras can be implemented using the Longuet-Higgins 8-point algorithm. • The outer loop combines information from all the cameras in a global coordinate system using bundle adjustment. The error that is minimized is the re-projection error. The big idea is that given the guessed 3D position of a point, one can predict its 2D image-plane position in any camera where it is visible. We wish to minimize the squared error between this predicted position and the actual position, summed over all cameras and over all points. • Lots of engineering has gone into making these approaches work. Read Szeliski’s book, Chapter 7, for more.
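
To make the re-projection error concrete, here is a hedged sketch of the objective that bundle adjustment minimizes (the data structures are illustrative assumptions, not from the slides):

```python
import numpy as np

def reprojection_error(points3d, cameras, observations):
    """Sum of squared re-projection errors over all cameras and points.

    points3d:     dict point_id -> (3,) current guess of world coordinates
    cameras:      dict cam_id -> (3, 4) projection matrix P = K [R | t]
    observations: list of (cam_id, point_id, (2,) measured pixel position)
    """
    err = 0.0
    for cam_id, pt_id, uv in observations:
        P = cameras[cam_id]
        X = np.append(points3d[pt_id], 1.0)  # homogeneous 3D point
        x = P @ X                            # project into the image
        pred = x[:2] / x[2]                  # predicted 2D pixel position
        err += np.sum((pred - np.asarray(uv, dtype=float)) ** 2)
    return err
```

A bundle adjuster minimizes this sum jointly over all camera parameters and all 3D points, typically with a nonlinear least-squares method such as Levenberg-Marquardt.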
