
776 Computer Vision

Presentation Transcript


  1. 776 Computer Vision Jared Heinly Spring 2014 (slides borrowed from Jan-Michael Frahm, Svetlana Lazebnik, and others)

  2. Image alignment Image from http://graphics.cs.cmu.edu/courses/15-463/2010_fall/

  3. A look into the past http://blog.flickr.net/en/2010/01/27/a-look-into-the-past/

  4. A look into the past http://komen-dant.livejournal.com/345684.html Leningrad during the blockade

  5. Bing streetside images http://www.bing.com/community/blogs/maps/archive/2010/01/12/new-bing-maps-application-streetside-photos.aspx

  6. Image alignment: Applications • Panorama stitching • Recognition of object instances

  7. Image alignment: Challenges • Small degree of overlap • Intensity changes • Occlusion, clutter

  8. Image alignment • Two families of approaches: • Direct (pixel-based) alignment • Search for alignment where most pixels agree • Feature-based alignment • Search for alignment where extracted features agree • Can be verified using pixel-based alignment

  9. 2D transformation models • Similarity (translation, scale, rotation) • Affine • Projective (homography)
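
To make the degrees of freedom concrete, here is a minimal numpy sketch of these models as 3×3 matrices acting on homogeneous points (the helper names and parameterizations are mine, not from the slides):

```python
import numpy as np

def similarity_matrix(s, theta, tx, ty):
    # Similarity: uniform scale s, rotation theta, translation (tx, ty) -- 4 DOF
    c, sn = np.cos(theta), np.sin(theta)
    return np.array([[s * c, -s * sn, tx],
                     [s * sn,  s * c,  ty],
                     [0.0,     0.0,    1.0]])

def affine_matrix(a, b, c, d, tx, ty):
    # Affine: arbitrary 2x2 linear part plus translation -- 6 DOF
    return np.array([[a,   b,   tx],
                     [c,   d,   ty],
                     [0.0, 0.0, 1.0]])

# A projective transformation (homography) is any 3x3 matrix defined
# only up to scale -- 8 DOF; its bottom row need not be (0, 0, 1).
```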

  10. Let’s start with affine transformations • Simple fitting procedure (linear least squares) • Approximates viewpoint changes for roughly planar objects and roughly orthographic cameras • Can be used to initialize fitting for more complex models

  11. Fitting an affine transformation • Assuming we know the correspondences, how do we get the transformation?

  12. Fitting an affine transformation • Linear system with six unknowns: x′ = a·x + b·y + tx, y′ = c·x + d·y + ty • Each match gives us two linearly independent equations: need at least three to solve for the transformation parameters
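
A minimal least-squares sketch of this fit, assuming numpy (the function name and layout are mine):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine fit from point correspondences.

    src, dst: (n, 2) arrays of matched points, n >= 3.
    Solves for p = (a, b, tx, c, d, ty) in
        x' = a*x + b*y + tx
        y' = c*x + d*y + ty
    """
    n = src.shape[0]
    A = np.zeros((2 * n, 6))
    A[0::2, 0:2] = src      # a, b multiply (x, y) in the x' equation
    A[0::2, 2] = 1.0        # tx
    A[1::2, 3:5] = src      # c, d multiply (x, y) in the y' equation
    A[1::2, 5] = 1.0        # ty
    b = dst.reshape(-1)     # interleaved (x'1, y'1, x'2, y'2, ...)
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.array([[p[0], p[1], p[2]],
                     [p[3], p[4], p[5]]])   # 2x3 affine matrix
```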

  13. Fitting a plane projective transformation • Homography: plane projective transformation (transformation taking a quad to another arbitrary quad)

  14. Homography • The transformation between two views of a planar surface • The transformation between images from two cameras that share the same center

  15. Application: Panorama stitching Source: Hartley & Zisserman

  16. Fitting a homography • Recall: homogeneous coordinates • Converting to homogeneous image coordinates: (x, y) → (x, y, 1) • Converting from homogeneous image coordinates: (x, y, w) → (x/w, y/w)

  17. Fitting a homography • Recall: homogeneous coordinates • Converting to homogeneous image coordinates: (x, y) → (x, y, 1); converting from: (x, y, w) → (x/w, y/w) • Equation for homography: λ[x′ y′ 1]ᵀ = H[x y 1]ᵀ, with equality up to an unknown scale λ
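
These conversions are a few lines in numpy; a sketch (helper names mine):

```python
import numpy as np

def to_homogeneous(pts):
    # (x, y) -> (x, y, 1)
    return np.hstack([pts, np.ones((pts.shape[0], 1))])

def from_homogeneous(pts_h):
    # (x, y, w) -> (x/w, y/w); w must be nonzero
    return pts_h[:, :2] / pts_h[:, 2:3]

def apply_homography(H, pts):
    # x' ~ H x, equal only up to scale, so divide by the third coordinate
    return from_homogeneous(to_homogeneous(pts) @ H.T)
```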

  18. Fitting a homography • Equation for homography: λx′ᵢ = Hxᵢ, i.e., x′ᵢ and Hxᵢ are scaled versions of the same vector (the scale λ is unknown) • Eliminate the scale with the cross product: x′ᵢ × Hxᵢ = 0, where each row of H is a 3-vector • Stacking the rows of H into a 9×1 column vector h makes this linear in h: 3 equations per match, only 2 linearly independent

  19. Direct linear transform • H has 8 degrees of freedom (9 parameters, but scale is arbitrary) • One match gives us two linearly independent equations • Four matches needed for a minimal solution (null space of 8x9 matrix) • More than four: homogeneous least squares
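
A compact DLT sketch, assuming numpy (the function name is mine; the two rows per match are the two independent equations above):

```python
import numpy as np

def fit_homography_dlt(src, dst):
    """Direct linear transform: fit H from n >= 4 correspondences.

    Each match (x, y) -> (u, v) contributes two rows of the constraint
    matrix A; h is the right singular vector of A with the smallest
    singular value (the null space in the minimal 4-match case).
    """
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = np.array(rows, dtype=float)
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]   # fix the arbitrary scale (assumes H[2,2] != 0)
```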

  20. Robust feature-based alignment • So far, we’ve assumed that we are given a set of “ground-truth” correspondences between the two images we want to align • What if we don’t know the correspondences?

  21. Robust feature-based alignment • So far, we’ve assumed that we are given a set of “ground-truth” correspondences between the two images we want to align • What if we don’t know the correspondences?

  22. Robust feature-based alignment

  23. Robust feature-based alignment • Extract features

  24. Robust feature-based alignment • Extract features • Compute putative matches

  25. Alignment as fitting • Least Squares • Total Least Squares

  26. Least Squares Line Fitting • Data: (x1, y1), …, (xn, yn) • Line equation: yi = m·xi + b • Find (m, b) to minimize E = Σi (yi – m·xi – b)²
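
In numpy this is a single lstsq call; a tiny sketch with made-up data:

```python
import numpy as np

# "Vertical" least squares: minimize sum_i (y_i - m*x_i - b)^2
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.1, 1.9, 4.1, 5.9])        # roughly y = 2x
A = np.column_stack([x, np.ones_like(x)])  # rows (x_i, 1)
(m, b), *_ = np.linalg.lstsq(A, y, rcond=None)
print(m, b)                                # close to m = 2, b = 0
```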

  27. Problem with “Vertical” Least Squares • Not rotation-invariant • Fails completely for vertical lines

  28. Total Least Squares • Distance between point (xi, yi) and line ax + by = d (with unit normal N = (a, b), a² + b² = 1): |a·xi + b·yi – d| • Find (a, b, d) to minimize the sum of squared perpendicular distances E = Σi (a·xi + b·yi – d)²
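
A sketch of the closed-form TLS solution, assuming numpy (function name mine): the optimal line passes through the centroid, and its unit normal is the direction of least variance of the centered data.

```python
import numpy as np

def fit_line_tls(pts):
    """Total least squares line fit: ax + by = d with a^2 + b^2 = 1."""
    centroid = pts.mean(axis=0)
    # Last right singular vector = direction of least variance = normal
    _, _, Vt = np.linalg.svd(pts - centroid)
    a, b = Vt[-1]
    d = a * centroid[0] + b * centroid[1]  # line passes through centroid
    return a, b, d
```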

  29. Least Squares: Robustness to Noise • Least squares fit to the red points:

  30. Least Squares: Robustness to Noise • Least squares fit with an outlier: Problem: squared error heavily penalizes outliers

  31. Robust Estimators • General approach: find model parameters θ that minimize Σi ρ(ri(xi, θ); σ) • ri(xi, θ) – residual of the ith point w.r.t. model parameters θ • ρ – robust function with scale parameter σ • The robust function ρ behaves like squared distance for small values of the residual u but saturates for larger values of u
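
One common choice of ρ (a Geman–McClure-style function; this is my pick, the slides do not name one):

```python
import numpy as np

def rho(u, sigma):
    # ~ (u/sigma)^2 for |u| << sigma; saturates toward 1 for |u| >> sigma,
    # so a single gross outlier cannot dominate the total error
    return u**2 / (u**2 + sigma**2)

u = np.array([0.1, 0.5, 1.0, 10.0, 100.0])
print(rho(u, sigma=1.0))   # large residuals all contribute ~1, not u^2
```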

  32. Choosing the scale: Just right The effect of the outlier is minimized

  33. Choosing the scale: Too small The error value is almost the same for every point and the fit is very poor

  34. Choosing the scale: Too large Behaves much the same as least squares

  35. RANSAC • Robust fitting can deal with a few outliers – what if we have very many? • Random sample consensus (RANSAC): Very general framework for model fitting in the presence of outliers • Outline • Choose a small subset of points uniformly at random • Fit a model to that subset • Find all remaining points that are “close” to the model and reject the rest as outliers • Do this many times and choose the best model M. A. Fischler, R. C. Bolles. Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography. Comm. of the ACM, Vol 24, pp 381-395, 1981.

  36. RANSAC for line fitting example Source: R. Raguram

  37. RANSAC for line fitting example Least-squares fit Source: R. Raguram

  38. RANSAC for line fitting example • Randomly select minimal subset of points Source: R. Raguram

  39. RANSAC for line fitting example • Randomly select minimal subset of points • Hypothesize a model Source: R. Raguram

  40. RANSAC for line fitting example • Randomly select minimal subset of points • Hypothesize a model • Compute error function Source: R. Raguram

  41. RANSAC for line fitting example • Randomly select minimal subset of points • Hypothesize a model • Compute error function • Select points consistent with model Source: R. Raguram

  42. RANSAC for line fitting example • Randomly select minimal subset of points • Hypothesize a model • Compute error function • Select points consistent with model • Repeat hypothesize-and-verify loop Source: R. Raguram

  43. RANSAC for line fitting example • Randomly select minimal subset of points • Hypothesize a model • Compute error function • Select points consistent with model • Repeat hypothesize-and-verify loop Source: R. Raguram

  44. RANSAC for line fitting example Uncontaminated sample • Randomly select minimal subset of points • Hypothesize a model • Compute error function • Select points consistent with model • Repeat hypothesize-and-verify loop Source: R. Raguram

  45. RANSAC for line fitting example • Randomly select minimal subset of points • Hypothesize a model • Compute error function • Select points consistent with model • Repeat hypothesize-and-verify loop Source: R. Raguram

  46. RANSAC for line fitting • Repeat N times: • Draw s points uniformly at random • Fit line to these s points • Find inliers to this line among the remaining points (i.e., points whose distance from the line is less than t) • If there are d or more inliers, accept the line and refit using all inliers
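
A runnable sketch of this loop for lines, assuming numpy (names and defaults are mine; the refit reuses the fit_line_tls sketch from above):

```python
import numpy as np

def ransac_line(pts, n_iters=100, t=0.05, d=10, rng=None):
    """RANSAC line fit following the outline above.

    pts: (n, 2) points; t: inlier distance threshold; d: minimum
    consensus size required before accepting and refitting a line.
    Returns (a, b, d) of the best line ax + by = d, or None.
    """
    rng = np.random.default_rng(rng)
    best, best_count = None, 0
    for _ in range(n_iters):
        # 1. Draw a minimal sample: s = 2 points define a line.
        p, q = pts[rng.choice(len(pts), size=2, replace=False)]
        # 2. Hypothesize a model: unit normal perpendicular to q - p.
        nrm = np.array([p[1] - q[1], q[0] - p[0]])
        if not nrm.any():
            continue                     # degenerate (coincident points)
        a, b = nrm / np.linalg.norm(nrm)
        d_line = a * p[0] + b * p[1]
        # 3. Compute the error: point-to-line distances.
        dist = np.abs(pts @ np.array([a, b]) - d_line)
        # 4. Select points consistent with the model.
        inliers = dist < t
        n_in = int(inliers.sum())
        # 5. Keep the largest consensus set; refit on all its inliers.
        if n_in >= d and n_in > best_count:
            best, best_count = fit_line_tls(pts[inliers]), n_in
    return best
```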

  47. Choosing the parameters • Initial number of points s • Typically minimum number needed to fit the model • Distance threshold t • Choose t so probability for inlier is p (e.g. 0.95) • Zero-mean Gaussian noise with std. dev. σ: t² = 3.84σ² • Number of samples N • Choose N so that, with probability p, at least one random sample is free from outliers (e.g. p = 0.99): N = log(1 – p) / log(1 – (1 – e)^s), where e is the outlier ratio Source: M. Pollefeys
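
Plugging in numbers (a sketch; the function name is mine):

```python
import numpy as np

def ransac_num_samples(p, e, s):
    # Smallest N with 1 - (1 - (1 - e)^s)^N >= p: with probability p,
    # at least one of N size-s samples contains no outliers
    return int(np.ceil(np.log(1 - p) / np.log(1 - (1 - e)**s)))

print(ransac_num_samples(p=0.99, e=0.5, s=2))   # -> 17 samples
```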

  49. Adaptively determining the number of samples • Outlier ratio e is often unknown a priori, so pick a worst case, e.g. 50%, and adapt if more inliers are found, e.g. 80% inliers would yield e = 0.2 • Adaptive procedure: • N = ∞, sample_count = 0 • While N > sample_count • Choose a sample and count the number of inliers • If the inlier ratio is the highest found so far, set e = 1 – (number of inliers)/(total number of points) • Recompute N from e: N = log(1 – p) / log(1 – (1 – e)^s) • Increment sample_count by 1 Source: M. Pollefeys
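
A sketch of that loop, assuming numpy (the function and its inputs are illustrative only): N shrinks as better hypotheses raise the estimated inlier ratio.

```python
import numpy as np

def adaptive_sample_count(inlier_counts, total, p=0.99, s=2):
    # inlier_counts: stream of consensus-set sizes, one per hypothesis
    N, sample_count, best = np.inf, 0, 0
    for count in inlier_counts:
        if sample_count >= N:
            break                              # enough samples drawn
        if count > best:                       # best model so far
            best = count
            e = 1.0 - count / total            # updated outlier ratio
            good = (1.0 - e) ** s              # P(sample is all inliers)
            N = 0 if good == 1.0 else np.log(1 - p) / np.log(1 - good)
        sample_count += 1
    return sample_count
```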

  50. RANSAC pros and cons • Pros • Simple and general • Applicable to many different problems • Often works well in practice • Cons • Lots of parameters to tune • Doesn’t work well for low inlier ratios (too many iterations, or can fail completely) • Can’t always get a good initialization of the model based on the minimum number of samples
