Fitting: Presentation Transcript

Fitting
  • We’ve learned how to detect edges, corners, blobs. Now what?
  • We would like to form a higher-level, more compact representation of the features in the image by grouping multiple features according to a simple model

9300 Harris Corners Pkwy, Charlotte, NC

Fitting
  • Choose a parametric model to represent a set of features

simple model: circles

simple model: lines

complicated model: car

Source: K. Grauman

Fitting: Issues
  • Noise in the measured feature locations
  • Extraneous data: clutter (outliers), multiple lines
  • Missing data: occlusions

Case study: Line detection

Fitting: Overview
  • If we know which points belong to the line, how do we find the “optimal” line parameters?
    • Least squares
  • What if there are outliers?
    • Robust fitting, RANSAC
  • What if there are many lines?
    • Voting methods: RANSAC, Hough transform
  • What if we’re not even sure it’s a line?
    • Model selection
Least squares line fitting
  • Data: (x1, y1), …, (xn, yn)
  • Line equation: yi = mxi + b
  • Find (m, b) to minimize the sum of squared vertical distances:

    $E = \sum_{i=1}^{n} (y_i - m x_i - b)^2$

  • In matrix form, with $Y = (y_1, \ldots, y_n)^T$, $X = \begin{pmatrix} x_1 & 1 \\ \vdots & \vdots \\ x_n & 1 \end{pmatrix}$, and $B = (m, b)^T$, this is $E = \|Y - XB\|^2$
  • Normal equations: $X^T X B = X^T Y$, the least squares solution to $XB = Y$
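A minimal NumPy sketch of this fit on synthetic data (the data and variable names here are illustrative, not from the slides):

```python
import numpy as np

# Synthetic points along y = 2x + 1 with Gaussian noise
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, x.size)

# Build X = [x 1] and solve the normal equations X^T X B = X^T Y
X = np.column_stack([x, np.ones_like(x)])
m, b = np.linalg.lstsq(X, y, rcond=None)[0]  # least squares solution to XB = Y
print(f"m = {m:.3f}, b = {b:.3f}")
```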

Problem with “vertical” least squares
  • Not rotation-invariant
  • Fails completely for vertical lines

Total least squares
  • Distance between point $(x_i, y_i)$ and line $ax + by = d$ (with $a^2 + b^2 = 1$, so $N = (a, b)$ is the unit normal): $|ax_i + by_i - d|$
  • Find (a, b, d) to minimize the sum of squared perpendicular distances:

    $E = \sum_{i=1}^{n} (ax_i + by_i - d)^2$

  • Setting $\partial E / \partial d = 0$ gives $d = a\bar{x} + b\bar{y}$ (the line passes through the centroid); substituting back, $E = \|UN\|^2$, where $U$ is the matrix of centered points $(x_i - \bar{x}, y_i - \bar{y})$ and $U^T U$ is their second moment matrix
  • Solution to $(U^T U)N = 0$, subject to $\|N\|^2 = 1$: the eigenvector of $U^T U$ associated with the smallest eigenvalue (least squares solution to the homogeneous linear system $UN = 0$); then $N = (a, b)$ and $d = a\bar{x} + b\bar{y}$
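A sketch of the eigenvector recipe above on the same kind of synthetic data (illustrative, not from the slides):

```python
import numpy as np

# Synthetic points along y = 2x + 1 with Gaussian noise
rng = np.random.default_rng(1)
x = np.linspace(0, 10, 50)
pts = np.column_stack([x, 2.0 * x + 1.0 + rng.normal(0, 0.5, x.size)])

# Center the points and form the second moment matrix U^T U
centroid = pts.mean(axis=0)
U = pts - centroid
M = U.T @ U

# Unit normal N = (a, b): eigenvector of U^T U with the smallest eigenvalue
eigvals, eigvecs = np.linalg.eigh(M)   # eigh returns eigenvalues in ascending order
a, b = eigvecs[:, 0]
d = np.array([a, b]) @ centroid        # d = a*x̄ + b*ȳ
print(f"{a:.3f}x + {b:.3f}y = {d:.3f}")
```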

Least squares as likelihood maximization
  • Generative model: line points are sampled independently and corrupted by Gaussian noise in the direction perpendicular to the line:

    $(x, y) = (u, v) + \varepsilon(a, b)$

    where $(u, v)$ is a point on the line $ax + by = d$, $(a, b)$ is the unit normal direction, and the noise $\varepsilon$ is sampled from a zero-mean Gaussian with std. dev. $\sigma$
  • Likelihood of the points given the line parameters (a, b, d):

    $P(x_1, \ldots, x_n \mid a, b, d) = \prod_{i=1}^{n} P(x_i, y_i \mid a, b, d) \propto \prod_{i=1}^{n} \exp\left(-\frac{(ax_i + by_i - d)^2}{2\sigma^2}\right)$

  • Log-likelihood:

    $L(x_1, \ldots, x_n \mid a, b, d) = -\frac{1}{2\sigma^2} \sum_{i=1}^{n} (ax_i + by_i - d)^2 + \text{const}$

  • Maximizing the likelihood is therefore equivalent to minimizing the sum of squared perpendicular distances, i.e., total least squares

Least squares: Robustness to noise
  • Least squares fit to the red points:
Least squares: Robustness to noise
  • Least squares fit with an outlier:

Problem: squared error heavily penalizes outliers

Robust estimators
  • General approach: find model parameters θ that minimize

    $\sum_i \rho(r_i(x_i, \theta); \sigma)$

    where $r_i(x_i, \theta)$ is the residual of the $i$th point w.r.t. the model parameters $\theta$, and $\rho$ is a robust function with scale parameter $\sigma$

The robust function ρ behaves like squared distance for small values of the residual u but saturates for larger values of u

Choosing the scale: Just right

The effect of the outlier is minimized

Choosing the scale: Too small

The error value is almost the same for every point and the fit is very poor

Choosing the scale: Too large

Behaves much the same as least squares

Robust estimation: Details
  • Robust fitting is a nonlinear optimization problem that must be solved iteratively
  • Least squares solution can be used for initialization
  • Adaptive choice of scale: approx. 1.5 times median residual (F&P, Sec. 15.5.1)
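
A minimal sketch of such an iterative robust fit, assuming a Geman-McClure-style robust function ρ(u; σ) = u²/(u² + σ²) (one common choice with exactly the saturating behavior described above) and a least squares initialization; the data and function names are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

def robust_cost(params, x, y, sigma):
    m, b = params
    r = y - (m * x + b)                       # residuals r_i
    return np.sum(r**2 / (r**2 + sigma**2))   # rho saturates for large |r|

# Synthetic inliers plus one gross outlier
rng = np.random.default_rng(2)
x = np.linspace(0, 10, 30)
y = 2.0 * x + 1.0 + rng.normal(0, 0.3, x.size)
y[-1] += 25.0

# Initialize from the least squares solution, then minimize iteratively
m0, b0 = np.polyfit(x, y, 1)
res = minimize(robust_cost, x0=[m0, b0], args=(x, y, 1.0), method='Nelder-Mead')
print("least squares:", (round(m0, 3), round(b0, 3)), "robust:", res.x.round(3))
```
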
RANSAC
  • Robust fitting can deal with a few outliers – what if we have very many?
  • Random sample consensus (RANSAC): Very general framework for model fitting in the presence of outliers
  • Outline
    • Choose a small subset of points uniformly at random
    • Fit a model to that subset
    • Find all remaining points that are “close” to the model and reject the rest as outliers
    • Do this many times and choose the best model

M. A. Fischler and R. C. Bolles. Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography. Comm. of the ACM, vol. 24, pp. 381–395, 1981.

RANSAC for line fitting example
  • Start from a least-squares fit to all points, which is skewed by the outliers
  • Randomly select minimal subset of points
  • Hypothesize a model
  • Compute error function
  • Select points consistent with model
  • Repeat hypothesize-and-verify loop; an uncontaminated sample (one containing no outliers) yields a model with large consensus

Source: R. Raguram

RANSAC for line fitting
  • Repeat N times:
    • Draw s points uniformly at random
    • Fit line to these s points
    • Find inliers to this line among the remaining points (i.e., points whose distance from the line is less than t)
    • If there are d or more inliers, accept the line and refit using all inliers
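
A sketch of this loop for line fitting (s = 2), under the assumptions of the earlier slides: the line is represented as ax + by = d with a unit normal, the refit uses total least squares, and all parameter defaults are illustrative:

```python
import numpy as np

def ransac_line(points, N=100, t=0.05, d=20, seed=None):
    """Fit a line a*x + b*y = dist to (n, 2) points with RANSAC."""
    rng = np.random.default_rng(seed)
    best, best_inliers, best_count = None, None, -1
    for _ in range(N):
        # Draw s = 2 points uniformly at random (minimal subset for a line)
        p, q = points[rng.choice(len(points), size=2, replace=False)]
        normal = np.array([q[1] - p[1], p[0] - q[0]])  # perpendicular to q - p
        norm = np.linalg.norm(normal)
        if norm == 0:
            continue  # degenerate sample: coincident points
        a, b = normal / norm
        dist = a * p[0] + b * p[1]
        # Inliers: points closer than t to the hypothesized line
        residuals = np.abs(points @ np.array([a, b]) - dist)
        inliers = residuals < t
        if inliers.sum() >= d and inliers.sum() > best_count:
            best, best_inliers, best_count = (a, b, dist), inliers, inliers.sum()
    if best is not None:
        # Refit using all inliers (total least squares via SVD)
        pts = points[best_inliers]
        centroid = pts.mean(axis=0)
        a, b = np.linalg.svd(pts - centroid)[2][-1]  # smallest right singular vector
        best = (a, b, np.array([a, b]) @ centroid)
    return best, best_inliers
```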

Choosing the parameters
  • Initial number of points s
    • Typically minimum number needed to fit the model
  • Distance threshold t
    • Choose t so the probability that an inlier falls within the threshold is p (e.g. 0.95)
    • Zero-mean Gaussian noise with std. dev. σ: t² = 3.84σ²
  • Number of samples N
    • Choose N so that, with probability p, at least one random sample is free from outliers (e.g. p = 0.99), given outlier ratio e:

      $(1 - (1 - e)^s)^N = 1 - p \quad \Rightarrow \quad N = \frac{\log(1 - p)}{\log\left(1 - (1 - e)^s\right)}$

  • Consensus set size d
    • Should match expected inlier ratio

Source: M. Pollefeys
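
As a worked check of these formulas (the numbers are illustrative): for line fitting, s = 2, p = 0.99, and a worst-case outlier ratio e = 0.5 give N = 17:

```python
import numpy as np

def num_samples(p=0.99, e=0.5, s=2):
    # Smallest N with (1 - (1 - e)^s)^N <= 1 - p
    return int(np.ceil(np.log(1 - p) / np.log(1 - (1 - e) ** s)))

sigma = 1.0
t = np.sqrt(3.84) * sigma          # distance threshold from t^2 = 3.84 * sigma^2
print(num_samples(), round(t, 2))  # -> 17 samples, t ~= 1.96
```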

Adaptively determining the number of samples
  • The outlier ratio e is often unknown a priori, so pick the worst case, e.g. 50%, and adapt if more inliers are found; e.g., 80% inliers would yield e = 0.2
  • Adaptive procedure:
    • N=∞, sample_count =0
    • While N >sample_count
      • Choose a sample and count the number of inliers
      • Set e = 1 – (number of inliers)/(total number of points)
      • Recompute N from e: N = log(1 − p) / log(1 − (1 − e)^s)
      • Increment the sample_count by 1

Source: M. Pollefeys
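
A sketch of this adaptive procedure; count_inliers is a placeholder callback that draws one random minimal sample, fits a model, and returns its inlier count (tracking the best count so far is a common refinement over recomputing e from every sample):

```python
import numpy as np

def adaptive_sample_count(count_inliers, n_points, p=0.99, s=2):
    N, sample_count, best = np.inf, 0, 0
    while N > sample_count:
        best = max(best, count_inliers())      # best inlier count so far
        e = 1 - best / n_points                # current outlier-ratio estimate
        w = (1 - e) ** s                       # prob. a minimal sample is outlier-free
        if w >= 1.0:
            N = 0                              # every point is an inlier: stop
        elif w > 0.0:
            N = np.log(1 - p) / np.log(1 - w)  # recompute N from e
        sample_count += 1
    return sample_count
```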

RANSAC pros and cons
  • Pros
    • Simple and general
    • Applicable to many different problems
    • Often works well in practice
  • Cons
    • Lots of parameters to tune
    • Doesn’t work well for low inlier ratios (too many iterations, or can fail completely)
    • Can’t always get a good initialization of the model based on the minimum number of samples