
Final Presentation



Presentation Transcript


  1. Final Presentation Lu Zhang Advisor: Dr. Besma Abidi Imaging, Robotics, & Intelligent Systems Laboratory The University of Tennessee April 23, 2005

  2. Outline • Self-calibration and 3D Reconstruction from uncalibrated image sequences * Stratified self-calibration * Flexible self-calibration * 3D reconstruction • Conclusion and future work

  3. Stratified self-calibration • What is self-calibration? Self-calibration is the process of computing all the intrinsic parameters of a camera using only the information available in the images taken by that camera. • Are there problems with traditional self-calibration methods? Yes, two major ones: 1. They start from a projective calibration and immediately try to solve for all the intrinsic parameters at once from nonlinear equations. 2. Most self-calibration algorithms assume unknown but constant intrinsic camera parameters; how can we cope when the focal length changes?

  4. Stratified self-calibration • Why use stratified self-calibration? Stratified self-calibration starts from a projective reconstruction, upgrades it to an affine reconstruction, and finally reaches a metric reconstruction by solving linear equations. • Stratification of geometry: Projective → Affine → Metric

  5. Stratified self-calibration • Modulus constraint: we reduce the ambiguity on the reconstruction by imposing the modulus constraint on the intrinsic camera parameters. • The infinity homography H∞ from view i to view j is conjugate to a rotation matrix (for constant intrinsics), so the 3 eigenvalues of H∞ must have the same moduli. If the characteristic polynomial of H∞ is λ³ + a₂λ² + a₁λ + a₀ = 0, the condition on its coefficients is a₁³ = a₂³a₀. Therefore between every pair of views there is one modulus constraint.
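The equal-moduli condition can be checked numerically. Below is a minimal numpy sketch (the intrinsics K and the rotation angle are arbitrary assumptions made for illustration) that builds H∞ = K R K⁻¹ for constant intrinsics and verifies that its three eigenvalues share the same modulus:

```python
import numpy as np

# Hypothetical constant intrinsic matrix and a rotation between views.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])

# With constant intrinsics the infinity homography is H_inf = K R K^{-1},
# i.e. conjugate to a rotation matrix.
H_inf = K @ R @ np.linalg.inv(K)

# Modulus constraint: the three eigenvalues must share the same modulus.
moduli = np.abs(np.linalg.eigvals(H_inf))
print(np.allclose(moduli, moduli[0]))  # True
```

Conjugation preserves eigenvalues, so H∞ inherits the rotation's eigenvalues 1 and e^{±iθ}, whose moduli are all 1.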

  6. Stratified self-calibration • Find the absolute conic: once the plane at infinity is obtained, the infinity homographies H∞ can be computed. For constant intrinsics the dual image of the absolute conic ω* = KKᵀ satisfies ω* = H∞ ω* H∞ᵀ (up to scale); these linear equations are solved to obtain ω*, and K follows from its factorization.

  7. Stratified self-calibration • Stratified self-calibration algorithm Step 1: affine calibration 1. formulate the modulus constraint for all pairs of views (at least 3 views are needed) 2. (for n > 3) solve the set of equations 3. compute the affine projection matrices Step 2: metric calibration 1. compute the dual image of the absolute conic 2. find the intrinsic parameters K 3. compute the metric projection matrices
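Step 2.2 above — finding the intrinsic parameters K from the dual image of the absolute conic ω* = KKᵀ — can be sketched with a Cholesky-style factorization; the intrinsic values below are made up for illustration:

```python
import numpy as np

# Hypothetical ground-truth intrinsics (upper triangular, K[2,2] = 1).
K_true = np.array([[1000.0, 2.0, 320.0],
                   [0.0, 995.0, 240.0],
                   [0.0, 0.0, 1.0]])

# Dual image of the absolute conic: omega* = K K^T.
omega = K_true @ K_true.T

# Factor omega* = K K^T with K *upper* triangular. numpy's Cholesky
# returns a lower factor, so flip rows/columns before and after.
P = np.eye(3)[::-1]                    # exchange (flip) matrix
L = np.linalg.cholesky(P @ omega @ P)  # lower-triangular factor
K = P @ L @ P                          # upper-triangular intrinsics
K /= K[2, 2]                           # fix the overall scale

print(np.allclose(K, K_true))  # True
```

The flip trick works because P ω* P = (P K P)(P K P)ᵀ and flipping an upper-triangular matrix's rows and columns makes it lower triangular.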

  8. Flexible self-calibration • Why flexible self-calibration? 1. This method can deal with varying intrinsic camera parameters. 2. This is important since it allows the use of the zoom and auto-focus available on most cameras. • The constraint: ω* = KKᵀ is given in terms of the intrinsic camera parameters and is the projection of the absolute dual quadric Ω∞*: ω* ∝ P Ω∞* Pᵀ.
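The constraint can be verified numerically: with a metric projection matrix P = K[R | t] and the absolute dual quadric Ω∞* = diag(1, 1, 1, 0), the projection P Ω∞* Pᵀ equals KKᵀ exactly. A small numpy sketch (K, R and t below are arbitrary assumptions):

```python
import numpy as np

K = np.array([[900.0, 0.0, 320.0],
              [0.0, 900.0, 240.0],
              [0.0, 0.0, 1.0]])
theta = 0.2
R = np.array([[1.0, 0.0, 0.0],
              [0.0, np.cos(theta), -np.sin(theta)],
              [0.0, np.sin(theta),  np.cos(theta)]])
t = np.array([[0.5], [0.0], [2.0]])
P_cam = K @ np.hstack([R, t])          # metric projection matrix K[R|t]

Q_inf = np.diag([1.0, 1.0, 1.0, 0.0])  # absolute dual quadric (metric frame)

# Projecting the absolute dual quadric gives the DIAC omega* = K K^T,
# since P Q P^T = K (R R^T) K^T and R is orthonormal.
omega = P_cam @ Q_inf @ P_cam.T
print(np.allclose(omega, K @ K.T))  # True
```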

  9. Flexible self-calibration • Linear self-calibration In the linear method, the camera intrinsic parameters appear squared in ω* = KKᵀ. The first step consists of normalizing the projection matrices; a normalization based on the image size is proposed, where w and h are the width and height of the image.
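A sketch of the normalization step. The entries of the normalization matrix below follow my recollection of Pollefeys' proposal (a focal-like scale of w + h and the principal point at the image center) and should be treated as an assumption; each projection matrix is premultiplied by the inverse of this matrix so that pixel coordinates become well conditioned:

```python
import numpy as np

w, h = 640, 480  # image width and height in pixels

# Assumed normalization matrix (entries are my recollection of the
# proposed form, not taken from the slides).
K_N = np.array([[w + h, 0.0, w / 2.0],
                [0.0, w + h, h / 2.0],
                [0.0, 0.0, 1.0]])

P_cam = np.hstack([np.eye(3), np.zeros((3, 1))])  # example projection matrix
P_norm = np.linalg.inv(K_N) @ P_cam               # normalized projection
print(P_norm.shape)  # (3, 4)
```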

  10. Flexible self-calibration The unknown scale factors are eliminated so that the right-hand side of each constraint equation equals the left-hand side; solving the resulting system yields ω* and hence the calibration.

  11. 3D reconstruction • Overview of the system This system implements 3D reconstruction from a sequence of images taken with consumer cameras. The camera moves freely around the object, and neither the camera motion nor the camera settings have to be known. The obtained 3D model is a scaled version of the original object.

  12. 3D-Reconstruction • The pipeline of the system This system uses full perspective cameras and requires neither prior models nor calibration. • It has four main steps: 1. projective reconstruction 2. self-calibration 3. dense depth estimation 4. modeling

  13. 3D-Reconstruction • Projective reconstruction: 1. Feature point extraction In this system the Harris corner detector is used. I(x,y) is the grey-level intensity. If at a certain point the two eigenvalues of the structure matrix (built from the grey-level gradients) are large, then a small motion in any direction causes a significant change of grey level. This indicates that the point is a corner.
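The structure-matrix test can be sketched on a synthetic image. The test image and window size below are illustrative assumptions; the corner function R = det(M) − k·trace(M)² with k = 0.04 is the standard Harris response quoted on the next slide:

```python
import numpy as np

# Synthetic 20x20 image with a bright square: the square's corner should
# make both eigenvalues of the structure matrix large, while a point on
# an edge leaves one eigenvalue near zero.
img = np.zeros((20, 20))
img[10:, 10:] = 1.0

Iy, Ix = np.gradient(img)  # grey-level gradients (rows, then columns)

def structure_matrix(x, y, win=2):
    """Sum gradient outer products over a (2*win+1)^2 neighborhood."""
    sl = (slice(y - win, y + win + 1), slice(x - win, x + win + 1))
    return np.array([[np.sum(Ix[sl] ** 2), np.sum(Ix[sl] * Iy[sl])],
                     [np.sum(Ix[sl] * Iy[sl]), np.sum(Iy[sl] ** 2)]])

def harris_response(M, k=0.04):
    """Corner function R = det(M) - k * trace(M)^2."""
    return np.linalg.det(M) - k * np.trace(M) ** 2

print(harris_response(structure_matrix(10, 10)) > 0)  # True: a corner
print(harris_response(structure_matrix(15, 10)) > 0)  # False: an edge
```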

  14. 3D-Reconstruction 2. Corner response Corner function: R = det M − k (trace M)², with k = 0.04; we take as corners the points whose value R is above a certain threshold. 3. Feature matching By comparing local neighborhoods of corners through intensity cross-correlation, the system achieves feature matching. The similarity measure compares the intensity values I and I′ at a certain point against Ī and Ī′, the mean intensity values of the considered neighborhoods; N indicates the window size.
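The similarity measure can be sketched as zero-mean normalized cross-correlation, which is invariant to local gain and offset changes between the two images; the 7×7 window below is an arbitrary choice:

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Zero-mean normalized cross-correlation of two equal-size windows."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return (a * b).sum() / denom

rng = np.random.default_rng(0)
patch = rng.random((7, 7))  # a synthetic 7x7 neighborhood

print(round(ncc(patch, patch), 6))              # 1.0 — identical patches
print(round(ncc(patch, 2.0 * patch + 3.0), 6))  # 1.0 — gain/offset invariant
```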

  15. Epipolar geometry [figure: two camera centers C1 and C2 viewing a 3D point M, with image points m1 and m2, epipoles e1 and e2, and epipolar lines l1 and l2] Underlying structure in a set of matches for rigid scenes, captured by the fundamental matrix (3×3, rank-2 matrix) • Computable from corresponding points • Simplifies matching • Allows detection of wrong matches • Related to calibration
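The epipolar constraint m2ᵀ F m1 = 0 can be demonstrated on a synthetic camera pair (the intrinsics and motion below are assumptions): for calibrated cameras P1 = K[I | 0] and P2 = K[R | t], the fundamental matrix is F = K⁻ᵀ [t]× R K⁻¹:

```python
import numpy as np

def skew(v):
    """Cross-product matrix [v]_x."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([1.0, 0.0, 0.0])  # pure sideways translation

# Fundamental matrix for P1 = K[I|0], P2 = K[R|t].
F = np.linalg.inv(K).T @ skew(t) @ R @ np.linalg.inv(K)

# Project a 3D point into both views and check the epipolar constraint.
X = np.array([0.3, -0.2, 4.0, 1.0])
m1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))]) @ X
m2 = K @ np.hstack([R, t[:, None]]) @ X
m1, m2 = m1 / m1[2], m2 / m2[2]

print(abs(m2 @ F @ m1) < 1e-9)  # True: m2^T F m1 = 0
```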

  16. Feature points Select strongest features (e.g. 1000/image)

  17. Feature matching example [figure: corner tracks 1–5 matched across views] Gives satisfying results for small image motions

  18. 3D-Reconstruction • Initial reconstruction How do we calculate the fundamental matrix F? Select 7 pairs of corresponding points to compute F with a linear algorithm. • Adding views A minimal sample of 6 matches is needed to compute the projection matrix P of a new view. Applying this procedure to all the images, we obtain a set of projection matrices P.
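The slide refers to the minimal 7-point solver; as a simpler sketch of the same idea, the linear (8-point style) method below estimates F from n ≥ 8 synthetic matches and then enforces the rank-2 constraint — a deliberate simplification, not the minimal solver itself:

```python
import numpy as np

def estimate_F(pts1, pts2):
    """Linear estimate of F from n >= 8 matches: each row of A encodes
    m2^T F m1 = 0; F is the null vector of A, forced to rank 2."""
    A = np.array([[x2 * x1, x2 * y1, x2,
                   y2 * x1, y2 * y1, y2,
                   x1, y1, 1.0]
                  for (x1, y1), (x2, y2) in zip(pts1, pts2)])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    U, s, Vt = np.linalg.svd(F)
    s[2] = 0.0                      # enforce rank 2
    return U @ np.diag(s) @ Vt

# Synthetic test: two views of 12 random 3D points.
rng = np.random.default_rng(1)
X = np.hstack([rng.uniform(-1, 1, (12, 2)),
               rng.uniform(3, 6, (12, 1)), np.ones((12, 1))])
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[0.5], [0.1], [0.0]])])
m1 = (P1 @ X.T).T; m1 = m1[:, :2] / m1[:, 2:]
m2 = (P2 @ X.T).T; m2 = m2[:, :2] / m2[:, 2:]

F = estimate_F(m1, m2)
res = [abs(np.append(q, 1) @ F @ np.append(p, 1)) for p, q in zip(m1, m2)]
print(max(res) < 1e-6)  # True: all epipolar residuals vanish
```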

  19. 3D-Reconstruction • Dense correspondence matching The matcher operates on image pairs where the epipolar lines coincide with the image scan lines. At each pixel in the first image it searches for the maximum normalized cross-correlation in the second image by shifting a small measurement window (5×5 or 7×7) along the corresponding scan line.

  20. 3D-Reconstruction • Stereo matching A similarity measure is computed for the candidate matches, and the optimal path through the resulting cost table is found with dynamic programming. The search step size determines the search resolution. [figure: cost table where light means high cross-correlation, with the optimal path overlaid]

  21. 3D-Reconstruction • Dense depth estimation Stereo matching: the point (x′, y′) corresponding to the point (x, y) is (x′, y′) = (x + D(x, y), y), where D is the disparity map.
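The disparity relation can be illustrated with a toy scan-line matcher. SSD is used here instead of the system's normalized cross-correlation purely for brevity, and the window size and search range are arbitrary assumptions:

```python
import numpy as np

# One rectified scan-line pair with a known constant disparity: the
# point (x, y) in image 1 appears at (x + D, y) in image 2.
rng = np.random.default_rng(2)
left = rng.random(60)
D_true = 4
right = np.roll(left, D_true)  # right[x + D_true] == left[x]

def disparity(x, win=2, search=8):
    """Shift a small window along the scan line, keep the best match."""
    ref = left[x - win:x + win + 1]
    costs = [np.sum((right[x + s - win:x + s + win + 1] - ref) ** 2)
             for s in range(search + 1)]
    return int(np.argmin(costs))

x = 20
D = disparity(x)
print((x + D, D == D_true))  # (24, True): correspondence at x' = x + D
```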

  22. 3D-Reconstruction • Modeling The 3D surface is approximated by a triangular mesh. The approach in this system is to overlay a 2D triangular mesh on top of the image and then build a corresponding 3D mesh by placing the vertices of the triangles in 3D space according to the values found in the depth map
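The mesh construction above can be sketched as follows: each grid cell of a small depth map is split into two triangles, and every vertex is lifted to 3D using the value found in the depth map. The toy depth map and the use of raw pixel coordinates (instead of calibrated viewing rays) are simplifying assumptions:

```python
import numpy as np

# A small hypothetical depth map: a plane tilted along x.
h, w = 4, 5
depth = 2.0 + 0.1 * np.arange(w) * np.ones((h, 1))

# Overlay a 2D triangular mesh on the pixel grid and lift each vertex
# to 3D according to the depth map.
verts = np.array([[x, y, depth[y, x]] for y in range(h) for x in range(w)])
tris = []
for y in range(h - 1):
    for x in range(w - 1):
        i = y * w + x
        tris.append([i, i + 1, i + w])          # upper-left triangle
        tris.append([i + 1, i + w + 1, i + w])  # lower-right triangle

print(len(verts), len(tris))  # 20 24
```

Each of the (h−1)·(w−1) grid cells contributes two triangles, giving 24 faces over 20 vertices here.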

  23. 3D reconstruction • Project all camera centers into the virtual image and perform a 2D triangulation. • The neighboring cameras of a pixel are determined by the corners of the triangle to which the pixel belongs. • For each camera we look up the values in the original image and multiply them with weight 1 at the corresponding vertex and weight 0 at the two other vertices. • The total image is built up as a mosaic of these triangles.

  24. 3D reconstruction • Mapping via local planes Consider local depth maps: the image values are calculated not for the whole scene but only for the part of the image covered by the current triangle. • A depth function is defined over the triangle, its value determined by the depths at the triangle's vertices. • Then the 3D coordinates of the scene points that share the same 2D image coordinates in the virtual view are calculated.

  25. 3D reconstruction • The 3D point which corresponds to the real camera k can be calculated as a point along the viewing ray, where c is the projection center and n scales the given 3D vector.
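The back-projection can be sketched as X = c + n · RᵀK⁻¹m, with n the depth taken from the local depth map; the camera parameters and pixel below are assumptions for illustration:

```python
import numpy as np

# Assumed camera k: intrinsics K, rotation R, projection center c.
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
c = np.array([0.2, 0.0, 0.0])

# Back-project pixel m along its viewing ray, scaled by the depth n.
m = np.array([380.0, 300.0, 1.0])  # homogeneous pixel coordinates
n = 3.0                            # hypothetical depth-map value
X = c + n * (R.T @ np.linalg.inv(K) @ m)

# Sanity check: re-projecting X must land back on pixel m.
p = K @ R @ (X - c)
print(np.allclose(p / p[2], m))  # True
```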

  26. Conclusion and future work • Conclusion This self-calibration and 3D reconstruction method is versatile: it can deal with varying types of constraints, in particular cases where the focal length changes. • Future work Work will continue on implementing this 3D reconstruction system and trying to apply it to our project.

  27. References • [1] Olivier Faugeras, "Three-Dimensional Computer Vision" • [2] Marc Pollefeys, "Self-Calibration and Metric 3D Reconstruction from Uncalibrated Image Sequences" • [3] Stan Birchfield, "An Introduction to Projective Geometry (for Computer Vision)" • [4] O. D. Faugeras, Q.-T. Luong and S. J. Maybank, "Camera Self-Calibration: Theory and Experiments" • [5] Richard Hartley and Andrew Zisserman, "Multiple View Geometry in Computer Vision" • [6] M. Pollefeys, L. Van Gool and M. Proesmans, "Euclidean 3D Reconstruction from Image Sequences with Variable Focal Lengths", Computer Vision - ECCV'96, Lecture Notes in Computer Science, Vol. 1064, Springer-Verlag, pp. 31-42, 1996.

  28. References cont. • [7] M. Pollefeys, L. Van Gool and A. Oosterlinck, "The Modulus Constraint: A New Constraint for Self-Calibration", Proc. 13th International Conference on Pattern Recognition, IEEE Computer Soc. Press, pp. 349-353, 1996. • [8] M. Pollefeys and L. Van Gool, "A Stratified Approach to Self-Calibration", Proc. 1997 Conference on Computer Vision and Pattern Recognition, IEEE Computer Soc. Press, pp. 407-412, 1997. • [9] M. Pollefeys and L. Van Gool, "Self-Calibration from the Absolute Conic on the Plane at Infinity", Proc. Computer Analysis of Images and Patterns, Lecture Notes in Computer Science, Vol. 1296, Springer-Verlag, pp. 175-182, 1997. • [10] M. Pollefeys, L. Van Gool and T. Moons, "Euclidean 3D Reconstruction from Stereo Sequences with Variable Focal Lengths", Proc. Asian Conference on Computer Vision, Vol. 2, pp. 6-10, Singapore, 1995.

  29. References cont. • [11] O. Faugeras, “What can be seen in three dimensions with an uncalibrated stereo rig”, Computer Vision - ECCV’92, Lecture Notes in Computer Science, Vol. 588, Springer-Verlag, pp. 563-578, 1992.

  30. Thanks a lot to Dr. Besma for giving me many valuable instructions, and thanks to every professor who came to my presentation. Questions?
