
Computing F Class 8




Presentation Transcript


  1. Computing F (Class 8). Read notes Section 4.2.

  2. Epipolar geometry: the underlying structure in a set of matches for rigid scenes, captured by the fundamental matrix (a 3x3, rank-2 matrix). (The slide figure shows camera centres C1, C2, points m1, m2, epipolar lines l1, l2, epipoles e1, e2, and the canonical representation.) • Computable from corresponding points • Simplifies matching • Allows detection of wrong matches • Related to calibration

  3. Other entities besides points? Lines give no constraint for two-view geometry (but will for three and more views). Curves and surfaces yield some constraints related to tangency (e.g. Sinha et al., CVPR'04).

  4. The projective reconstruction theorem: if a set of point correspondences in two views determines the fundamental matrix uniquely, then the scene and cameras may be reconstructed from these correspondences alone, and any two such reconstructions are projectively equivalent. This allows reconstruction from a pair of uncalibrated images!

  5. Today’s menu Computation of F • Linear (8-point) • Minimal (7-point) • Robust (RANSAC) • Non-linear refinement (MLE, …) • Practical approach

  6. Epipolar geometry: basic equation. x'ᵀ F x = 0. Writing x = (x, y, 1)ᵀ and x' = (x', y', 1)ᵀ and expanding gives x'x f11 + x'y f12 + x' f13 + y'x f21 + y'y f22 + y' f23 + x f31 + y f32 + f33 = 0. Separating the known (data) from the unknown (the entries of F) yields one linear equation per correspondence, stacked as A f = 0.
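
The stacking above can be sketched in a few lines of numpy (the function names `build_data_matrix` and `linear_f` are illustrative, not from the slides); each row of A is the Kronecker product of (x', y', 1) and (x, y, 1), and the least-squares f is the right singular vector with the smallest singular value:

```python
import numpy as np

def build_data_matrix(pts1, pts2):
    """One row per correspondence for the linear system A f = 0.

    pts1, pts2: (n, 2) arrays of matching image points.
    Row for (x, y) <-> (x', y'): [x'x, x'y, x', y'x, y'y, y', x, y, 1].
    """
    ones = np.ones((len(pts1), 1))
    h1 = np.hstack([pts1, ones])   # (x, y, 1)
    h2 = np.hstack([pts2, ones])   # (x', y', 1)
    # Row-wise outer product gives the 9 coefficients of F (row-major).
    return np.einsum('ni,nj->nij', h2, h1).reshape(-1, 9)

def linear_f(pts1, pts2):
    """Least-squares f: right singular vector of the smallest singular value."""
    A = build_data_matrix(pts1, pts2)
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 3)
```

The returned F has unit Frobenius norm (scale is arbitrary) and is in general not yet rank 2; the singularity constraint of the next slide still has to be enforced.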

  7. The singularity constraint. The linearly computed F is in general rank 3. Take its SVD, F = U diag(σ1, σ2, σ3) Vᵀ, and compute the closest rank-2 approximation by setting the smallest singular value to zero: F' = U diag(σ1, σ2, 0) Vᵀ.
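
The rank-2 projection is a one-liner with numpy's SVD (a sketch; the function name is hypothetical):

```python
import numpy as np

def enforce_rank2(F):
    """Closest rank-2 matrix in Frobenius norm: zero the smallest singular value."""
    U, s, Vt = np.linalg.svd(F)
    s[2] = 0.0
    return U @ np.diag(s) @ Vt
```

By the Eckart-Young theorem the Frobenius distance between F and its rank-2 projection is exactly σ3, the singular value that was dropped.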

  8. The minimum case: 7 point correspondences. The 7x9 system A f = 0 has a two-dimensional null space, giving a one-parameter family of solutions F1 + λF2, but F1 + λF2 is not automatically rank 2.

  9. The minimum case: impose rank 2. det(F1 + λF2) = 0 is a cubic equation in λ, so one obtains 1 or 3 real solutions; equivalently, −λ can be computed as the eigenvalues of F2⁻¹F1 when F2 is invertible (only real solutions are potential solutions).
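
One simple way to solve the cubic numerically, sketched here (function name hypothetical): since det(F1 + λF2) is a degree-3 polynomial in λ, sampling it at four values and fitting a cubic recovers its coefficients exactly, and the real roots are the candidate λ:

```python
import numpy as np

def seven_point_lambdas(F1, F2):
    """Real roots of the cubic det(F1 + lam * F2) = 0."""
    xs = np.array([0.0, 1.0, 2.0, 3.0])
    ys = np.array([np.linalg.det(F1 + x * F2) for x in xs])
    # A cubic is determined by 4 samples, so this fit is exact.
    coeffs = np.polyfit(xs, ys, 3)
    roots = np.roots(coeffs)
    return [r.real for r in roots if abs(r.imag) < 1e-9]
```

Each real root gives a candidate fundamental matrix F1 + λF2, which is rank 2 by construction.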

  10. The NOT normalized 8-point algorithm. The columns of the data matrix differ by orders of magnitude: product terms like x'x are ~10000, linear terms like x and y are ~100, and the last column is 1, so least-squares yields poor results!

  11. The normalized 8-point algorithm. Transform each image so that its points map to roughly [-1,1]x[-1,1], e.g. for a 700x500 image, (0,0) → (-1,-1) and (700,500) → (1,1); normalized least squares then yields good results (Hartley, PAMI'97).
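
A sketch of Hartley's normalizing similarity (function name hypothetical): translate the centroid to the origin and scale so the mean distance from the origin is √2:

```python
import numpy as np

def normalize_points(pts):
    """Similarity T mapping (n, 2) points to centroid 0, mean distance sqrt(2)."""
    c = pts.mean(axis=0)
    d = np.sqrt(((pts - c) ** 2).sum(axis=1)).mean()
    s = np.sqrt(2.0) / d
    T = np.array([[s, 0.0, -s * c[0]],
                  [0.0, s, -s * c[1]],
                  [0.0, 0.0, 1.0]])
    return (pts - c) * s, T
```

If F̂ is estimated from points normalized by T1 and T2 in the two images, the fundamental matrix in the original pixel coordinates is recovered as F = T2ᵀ F̂ T1.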

  12. Robust estimation. What if the set of matches contains gross outliers? (To keep things simple, let's consider line fitting first.)

  13. RANSAC • Objective: robust fit of a model to a data set S which contains outliers. • Algorithm: • Randomly select a sample of s data points from S and instantiate the model from this subset. • Determine the set of data points Si which are within a distance threshold t of the model. The set Si is the consensus set of the sample and defines the inliers of S. • If the size of Si is greater than some threshold T, re-estimate the model using all the points in Si and terminate. • If the size of Si is less than T, select a new subset and repeat the above. • After N trials the largest consensus set Si is selected, and the model is re-estimated using all the points in Si.
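
Since the slides introduce RANSAC on line fitting, the algorithm above can be sketched for a 2D line (a minimal sketch; function and parameter names are hypothetical): sample minimal subsets of 2 points, score each hypothesis by consensus-set size, then refit on the best inlier set:

```python
import numpy as np

def ransac_line(pts, t=1.0, n_iter=100, seed=0):
    """RANSAC fit of a line ax + by + c = 0 (a^2 + b^2 = 1) to (n, 2) points."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(pts), dtype=bool)
    for _ in range(n_iter):
        i, j = rng.choice(len(pts), 2, replace=False)
        p, q = pts[i], pts[j]
        n = np.array([q[1] - p[1], p[0] - q[0]])   # normal to segment p-q
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue                               # degenerate sample
        n = n / norm
        c = -n @ p
        d = np.abs(pts @ n + c)                    # point-line distances
        inliers = d < t
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Re-estimate with total least squares on the consensus set:
    # the normal is the direction of smallest variance of the inliers.
    P = pts[best_inliers]
    cen = P.mean(axis=0)
    _, _, Vt = np.linalg.svd(P - cen)
    n = Vt[-1]
    return n[0], n[1], -n @ cen, best_inliers
```

For F the structure is identical, with a 7-point sample instead of 2 points and the point-to-epipolar-line distance in place of the point-to-line distance.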

  14. Distance threshold. Choose t so that the probability that an inlier falls within t is α (e.g. 0.95); often this is set empirically. For zero-mean Gaussian noise with standard deviation σ, the squared distance d² follows a σ²·χ²_m distribution, with m the codimension of the model (dimension + codimension = dimension of the space), so t² = χ²_m(α)·σ².
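
A sketch of the threshold computation; the 95% chi-square quantiles for small m are standard tabulated constants (3.84, 5.99, 7.81 for m = 1, 2, 3), hardcoded here to keep the sketch stdlib-only:

```python
import math

# 95% quantiles of the chi-square distribution for m = 1, 2, 3.
# m is the codimension of the model: m = 1 for the distance to an
# epipolar line, m = 2 for a point-to-point distance.
CHI2_95 = {1: 3.84, 2: 5.99, 3: 7.81}

def inlier_threshold(sigma, m):
    """Threshold t with P(d <= t | inlier) = 0.95 for Gaussian noise sigma:
    d^2 ~ sigma^2 * chi2_m, so t = sqrt(chi2_m(0.95)) * sigma."""
    return math.sqrt(CHI2_95[m] * sigma ** 2)
```

For F (m = 1) this gives the familiar t ≈ 1.96σ.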

  15. How many samples? Choose N so that, with probability p, at least one random sample is free from outliers: with outlier ratio e and sample size s, (1 − (1−e)^s)^N = 1 − p, so N = log(1−p) / log(1 − (1−e)^s); e.g. p = 0.99. Note: this assumes that inliers allow identifying other inliers.
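
The formula in one function (a sketch; the name is hypothetical):

```python
import math

def num_samples(p, e, s):
    """Number of samples N so that with probability p at least one sample of
    size s is outlier-free, given outlier ratio e:
    (1 - (1-e)^s)^N = 1 - p  =>  N = log(1-p) / log(1 - (1-e)^s)."""
    w = (1.0 - e) ** s          # probability that one sample is all inliers
    if w >= 1.0:
        return 1                # no outliers: a single sample always works
    return math.ceil(math.log(1.0 - p) / math.log(1.0 - w))
```

For the 7-point minimal sample with 50% outliers and p = 0.99 this gives 588 samples, while the 2-point line-fitting sample needs only 17; N grows rapidly with the sample size s, which is why minimal samples are preferred.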

  16. Acceptable consensus set? • Typically, terminate when inlier ratio reaches expected ratio of inliers

  17. Adaptively determining the number of samples. The outlier ratio e is often unknown a priori, so start from the worst case (no inliers, N = ∞) and adapt as more inliers are found; e.g. 80% inliers would yield e = 0.2. • N = ∞, sample_count = 0 • While N > sample_count repeat: • Choose a sample and count the number of inliers • Set e = 1 − (number of inliers)/(total number of points) • Recompute N from e • Increment sample_count by 1 • Terminate

  18. Other robust algorithms • RANSAC maximizes the number of inliers • LMedS minimizes the median error • Not recommended: case deletion, iterative least-squares, etc. (The slide plot shows residual in pixels against inlier percentile, from 50% to 100%.)

  19. Non-linear refinement

  20. Algebraic minimization. It is possible to iteratively minimize the algebraic distance subject to det F = 0 (see H&Z if interested).

  21. Geometric distance. Gold standard (reprojection error): minimize Σi d(xi, x̂i)² + d(x'i, x̂'i)² subject to x̂'iᵀ F x̂i = 0. Symmetric epipolar distance: Σi d(x'i, F xi)² + d(xi, Fᵀ x'i)².
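
The symmetric epipolar distance for one correspondence can be sketched directly from its definition (function name hypothetical); note that the numerator of both terms is the same algebraic residual x'ᵀ F x, only the line normalizations differ:

```python
import numpy as np

def symmetric_epipolar_distance(F, x1, x2):
    """d(x2, F x1)^2 + d(x1, F^T x2)^2 for homogeneous points (x, y, 1).

    d(point, line)^2 = (l . x)^2 / (l1^2 + l2^2) for line l = (l1, l2, l3).
    """
    l2 = F @ x1           # epipolar line of x1 in image 2
    l1 = F.T @ x2         # epipolar line of x2 in image 1
    e = float(x2 @ l2)    # common algebraic residual x2^T F x1
    return e ** 2 / (l2[0] ** 2 + l2[1] ** 2) + e ** 2 / (l1[0] ** 2 + l1[1] ** 2)
```

Summed over all correspondences this is a cheap approximation to the gold-standard reprojection error and is commonly used as the RANSAC scoring distance.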

  22. Gold standard: Maximum Likelihood Estimation (= least-squares for Gaussian noise). Initialize: normalized 8-point, (P, P') from F, reconstruct the Xi. Parameterize F via the camera pair (overparametrized). Minimize the cost using Levenberg-Marquardt (preferably sparse LM, e.g. see H&Z).

  23. Gold standard: alternative, minimal parametrization (with a = 1) (note: (x, y, 1) and (x', y', 1) are the epipoles). Problems: • a = 0 → pick the largest of a, b, c, d to fix to 1 • epipole at infinity → pick the largest of x, y, w and of x', y', w'. 4x3x3 = 36 parametrizations! Reparametrize at every iteration, to be sure.

  24. Symmetric epipolar error

  25. Some experiments:

  26. Some experiments:

  27. Some experiments:

  28. Some experiments: Residual error: (for all points!)

  29. Recommendations: • Do not use unnormalized algorithms • Quick and easy to implement: 8-point normalized • Better: enforce rank-2 constraint during minimization • Best: Maximum Likelihood Estimation (minimal parameterization, sparse implementation)

  30. Special cases. Enforce the constraints for optimal results: pure translation (2 dof), planar motion (6 dof), calibrated case (5 dof).

  31. Automatic computation of F. Step 1: Extract features. Step 2: Compute a set of potential matches. Step 3: do (generate hypothesis) Step 3.1 select a minimal sample (i.e. 7 matches), Step 3.2 compute the solution(s) for F, (verify hypothesis) Step 3.3 determine inliers; until the confidence, computed from (#inliers, #samples), that an outlier-free sample has been drawn exceeds 95%. Step 4: Compute F based on all inliers. Step 5: Look for additional matches. Step 6: Refine F based on all correct matches.

  32. Abort verification early. Assume an inlier percentage p: what is the probability P of observing n or more wrong matches among the first m verified points? Stop if P < 0.05 (the sample most probably contains an outlier); P is given by the cumulative binomial distribution function of (m−n, m, p). To avoid problems this requires verifying in random order, but we already have a random sampler anyway. (Inspired by Chum and Matas, BMVC 2002. The slide illustrates inlier/outlier sequences such as O O O I O I I ... observed during verification.)
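
The abort test can be sketched with the binomial tail probability (stdlib only, via math.comb; the function name is hypothetical):

```python
import math

def prob_n_or_more_outliers(m, n, p_inlier):
    """P(X >= n) for X ~ Binomial(m, 1 - p_inlier).

    This is the probability that a hypothesis consistent with the true model
    would still show n or more mismatches among the first m verified points;
    if it drops below 0.05, the current hypothesis most probably contains an
    outlier and verification can be aborted early.
    """
    q = 1.0 - p_inlier
    return sum(math.comb(m, k) * q ** k * (1.0 - q) ** (m - k)
               for k in range(n, m + 1))
```

For example, with an assumed 80% inlier ratio, seeing 15 mismatches in the first 20 checked points is far below the 0.05 threshold, so verification of that hypothesis can stop immediately.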

  33. Finding more matches. Restrict the search range to a neighborhood of the epipolar line (e.g. 1.5 pixels) and relax the disparity restriction (along the epipolar line).

  34. Degenerate cases • Planar scene • Pure rotation → no unique solution; the remaining DOF are filled by noise; use a simpler model (e.g. a homography). • Solution 1: model selection (Torr et al., ICCV'98; Kanatani; Akaike): compare H and F according to expected residual error, compensating for model complexity. • Solution 2: RANSAC: compare H and F according to inlier count (see next slide).

  35. RANSAC for quasi-degenerate cases • Full model (8 pts, 1D solution space): accept inliers to the solution F • Planar model (6 pts, 3D solution space): take the closest rank-6 approximation of the n×9 data matrix A for all plane inliers; accept inliers to the solutions F1, F2 & F3 • Plane+parallax model (plane + 2 pts): sample the out-of-plane points among the outliers • Accept if a large number of inliers remain

  36. More problems • Absence of sufficient features (no texture) • Repeated-structure ambiguity: the robust matcher also finds support for a wrong hypothesis; solution: detect the repetition (Schaffalitzky and Zisserman, BMVC'98)

  37. RANSAC for ambiguous matching • Include multiple candidate matches in set of potential matches • Select according to matching probability (~ matching score) • Helps for repeated structures or scenes with similar features as it avoids an early commitment (Tordoff and Murray ECCV02)

  38. Two-view geometry: the geometric relations between two views are fully described by the recovered 3x3 matrix F.

  39. Next class: rectification and stereo. Given images I(x, y) and I'(x', y'), compute the disparity map D(x, y) with (x', y') = (x + D(x, y), y).
