
More on Features


Presentation Transcript


  1. More on Features Digital Visual Effects, Spring 2007 Yung-Yu Chuang 2007/3/27 with slides by Trevor Darrell, Cordelia Schmid, David Lowe, Darya Frolova, Denis Simakov, Robert Collins, Brad Osgood, W W L Chen, and Jiwon Kim

  2. Announcements • Project #1 was due at noon today. You have a total of 10 delay days without penalty, but you are advised to use them wisely. • We reserve the right not to include late homework in the artifact voting. • The Project #2 handout will be available on the web today. • We may not have class next week. I will send out an email if the class is canceled.

  3. Outline • Harris corner detector • SIFT • SIFT extensions • MSOP

  4. Three components for features • Feature detection • Feature description • Feature matching

  5. Harris corner detector

  6. Harris corner detector • Consider all small shifts [u,v] of the window and approximate the intensity change E(u,v) = Σx,y w(x,y) [I(x+u, y+v) − I(x,y)]² by its Taylor expansion

  7. Harris corner detector Equivalently, for small shifts [u,v] we have a bilinear approximation E(u,v) ≈ [u v] M [u v]ᵀ, where M is a 2×2 matrix computed from image derivatives: M = Σx,y w(x,y) [Ix² IxIy; IxIy Iy²]

  8. Harris corner detector (matrix form)

  9. Quadratic forms • A quadratic form (homogeneous polynomial of degree two) of n variables xi: f(x1, …, xn) = Σi Σj aij xi xj • Example in two variables: f(x1, x2) = a x1² + 2b x1x2 + c x2²

  10. Symmetric matrices • A quadratic form can be represented by a real symmetric matrix A, where f(x) = xᵀAx and aij = aji

  11. Eigenvalues of symmetric matrices (slide credit: Brad Osgood)

  12. Eigenvectors of symmetric matrices

  13. Visualize quadratic functions

  14. Visualize quadratic functions

  15. Visualize quadratic functions

  16. Visualize quadratic functions
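
The next slides characterize the Harris error surface through the eigenvalues of M, which is exactly the machinery of quadratic forms reviewed above. The NumPy sketch below is an added illustration, not part of the original slides (the matrix A and the point x are arbitrary example values): it evaluates f(x) = xᵀAx and shows how the eigendecomposition of a symmetric matrix describes the level-set ellipses being visualized.

```python
import numpy as np

# A real symmetric matrix representing the quadratic form f(x) = x^T A x
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
x = np.array([1.0, -2.0])

print("f(x) =", x @ A @ x)        # evaluate the quadratic form at x

# For a real symmetric matrix the eigenvalues are real and the eigenvectors
# are orthogonal; they are the principal axes of the level sets f(x) = const.
eigvals, eigvecs = np.linalg.eigh(A)
print("eigenvalues:", eigvals)               # both positive -> elliptical level sets
print("eigenvectors (columns):\n", eigvecs)
print("axis lengths of f(x) = 1:", eigvals ** -0.5)   # lambda^(-1/2) along each axis
```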

  17. Harris corner detector Intensity change in the shifting window: eigenvalue analysis. λ1, λ2 – eigenvalues of M. The ellipse E(u,v) = const has its axes along the eigenvectors of M, with axis length (λmax)^(-1/2) in the direction of fastest change and (λmin)^(-1/2) in the direction of slowest change

  18. Harris corner detector Classification of image points using the eigenvalues of M: • Flat: λ1 and λ2 are small; E is almost constant in all directions • Edge: λ1 >> λ2 (or λ2 >> λ1) • Corner: λ1 and λ2 are both large, λ1 ~ λ2; E increases in all directions

  19. Harris corner detector Measure of corner response: R = det M − k (trace M)², with det M = λ1λ2 and trace M = λ1 + λ2 (k – empirical constant, k = 0.04-0.06)

  20. Harris corner detector

  21. Summary of Harris detector
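
The Harris computation summarized above fits in a few lines of NumPy/SciPy. This is a minimal sketch added for illustration, not code from the lecture: the function name, the Gaussian window width sigma, and the default k = 0.04 (within the 0.04-0.06 range from slide 19) are assumed values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def harris_response(img, sigma=1.0, k=0.04):
    """Harris corner response R = det(M) - k * trace(M)^2 at every pixel."""
    img = img.astype(np.float64)
    Iy, Ix = np.gradient(img)          # derivatives along rows (y) and columns (x)

    # Entries of the 2x2 matrix M, accumulated with a Gaussian window w(x, y)
    Ixx = gaussian_filter(Ix * Ix, sigma)
    Iyy = gaussian_filter(Iy * Iy, sigma)
    Ixy = gaussian_filter(Ix * Iy, sigma)

    det_M = Ixx * Iyy - Ixy * Ixy
    trace_M = Ixx + Iyy
    return det_M - k * trace_M ** 2

# Corners are local maxima of R above a threshold; non-maximum
# suppression is omitted here to keep the sketch short.
```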

  22. Now we know where features are • But, how do we match them? • What is the descriptor for a feature? The simplest solution is the intensities of its spatial neighbors. This might not be robust to brightness change or small shift/rotation.
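
A sketch of the "simplest solution" just mentioned: the descriptor is the raw intensities of a square neighborhood around the feature, and two features are matched by the sum of squared differences (SSD). The function names and the patch radius are illustrative assumptions; as the slide notes, this descriptor is not robust to brightness changes or to small shifts/rotations.

```python
import numpy as np

def patch_descriptor(img, y, x, radius=4):
    """Raw-intensity descriptor: pixel values in a (2*radius+1)^2 window around (y, x)."""
    patch = img[y - radius:y + radius + 1, x - radius:x + radius + 1]
    return patch.astype(np.float64).ravel()

def match_ssd(desc, candidate_descs):
    """Index of the candidate descriptor closest to desc under SSD."""
    ssd = np.sum((candidate_descs - desc) ** 2, axis=1)
    return int(np.argmin(ssd))
```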

  23. Harris Detector: Some Properties • Rotation invariance: the ellipse rotates but its shape (i.e., the eigenvalues) remains the same, so the corner response R is invariant to image rotation

  24. Harris Detector: Some Properties • But: non-invariant to image scale! At a fine scale all points along the curve are classified as edges; at a coarser scale the same structure is detected as a corner

  25. Scale invariant detection • The problem: how do we choose corresponding circles independently in each image? • Aperture problem

  26. SIFT (Scale Invariant Feature Transform)

  27. SIFT • SIFT is a carefully designed procedure with empirically determined parameters for extracting invariant and distinctive features.

  28. SIFT stages (detector followed by local descriptor): • Scale-space extrema detection • Keypoint localization • Orientation assignment • Keypoint descriptor The descriptors are then used for matching. A 500x500 image gives about 2000 features.
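
For reference, the whole pipeline listed above is available behind a single call in OpenCV. This usage sketch is an addition for context, not part of the slides; it assumes OpenCV 4.4 or newer (where SIFT_create lives in the main cv2 module) and uses a placeholder filename.

```python
import cv2

# Placeholder input image; any grayscale photo will do.
img = cv2.imread("example.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)

# Each keypoint carries a location, scale, and orientation;
# each descriptor is a 128-dimensional vector.
print(len(keypoints), "keypoints, descriptor array shape:", descriptors.shape)
```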

  29. 1. Detection of scale-space extrema • For scale invariance, search for stable features across all possible scales using a continuous function of scale, the scale space. • SIFT uses the DoG filter for the scale space because it is efficient and as stable as the scale-normalized Laplacian of Gaussian.

  30. DoG filtering • Convolution with a variable-scale Gaussian: L(x, y, σ) = G(x, y, σ) * I(x, y) • Difference-of-Gaussian (DoG) filter: D(x, y, σ) = (G(x, y, kσ) − G(x, y, σ)) * I(x, y) = L(x, y, kσ) − L(x, y, σ)

  31. Scale space σ doubles for the next octave; adjacent scales differ by k = 2^(1/s). Dividing into octaves is for efficiency only.
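
A sketch of building one octave of the DoG scale space with SciPy, following the construction described on this slide (k = 2^(1/s) between adjacent levels, octaves for efficiency). The choice of s + 3 Gaussian images, so that s + 2 DoG images cover extrema detection across the octave, and sigma = 1.6 (the pre-smoothing value from slide 37) are standard SIFT settings assumed here; blurring each level directly from the octave's base image is a simplification of the usual incremental blurring.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_octave(base, sigma=1.6, s=3):
    """Gaussian and DoG images for one octave; adjacent scales differ by k = 2**(1/s)."""
    k = 2.0 ** (1.0 / s)
    gaussians = [gaussian_filter(base, sigma * (k ** i)) for i in range(s + 3)]
    dogs = [g2 - g1 for g1, g2 in zip(gaussians, gaussians[1:])]
    return gaussians, dogs

# The next octave starts from the image whose blur has doubled,
# downsampled by a factor of 2 in each dimension.
```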

  32. Detection of scale-space extrema

  33. Keypoint localization X is selected if it is larger or smaller than all 26 neighbors (8 at its own scale plus 9 at each adjacent scale)
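
A small sketch of the 26-neighbor test on a list of adjacent DoG images in one octave; the function name and the array layout are assumptions made for illustration.

```python
import numpy as np

def is_extremum(dogs, i, y, x):
    """True if dogs[i][y, x] is larger or smaller than all 26 neighbors in the
    3x3x3 cube spanning the scale below, the same scale, and the scale above."""
    cube = np.stack([d[y - 1:y + 2, x - 1:x + 2] for d in dogs[i - 1:i + 2]]).ravel()
    center = dogs[i][y, x]
    neighbors = np.delete(cube, 13)   # flat index 13 is the center sample itself
    return center > neighbors.max() or center < neighbors.min()
```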

  34. Decide scale sampling frequency • It is impossible to sample the whole space; trade off efficiency against completeness. • Decide the best sampling frequency by experimenting on 32 real images subjected to synthetic transformations (rotation, scaling, affine stretch, brightness and contrast change, added noise, …).

  35. Decide scale sampling frequency For the detector, measure repeatability; for the descriptor, measure distinctiveness. s = 3 is the best choice; for larger s, too many unstable features are detected.

  36. Decide scale sampling frequency

  37. Pre-smoothing σ = 1.6, plus an initial doubling of the image size

  38. Scale invariance

  39. 2. Accurate keypoint localization • Reject points with low contrast (flat) and poorly localized along an edge (edge) • Fit a 3D quadratic function for sub-pixel maxima

  40. 2. Accurate keypoint localization • Reject points with low contrast and poorly localized along an edge • Fit a 3D quadratic function for sub-pixel maxima

  41. 2. Accurate keypoint localization • Taylor series of a function of several variables • Two variables: f(x+δx, y+δy) ≈ f(x,y) + fx δx + fy δy + ½ (fxx δx² + 2 fxy δx δy + fyy δy²)

  42. Accurate keypoint localization • Taylor expansion in matrix form, where x is a vector and f maps x to a scalar: f(x0 + x) ≈ f(x0) + gᵀx + ½ xᵀHx, with gradient g and Hessian matrix H (often symmetric)

  43. 2D illustration

  44. 2D example

  45. Derivation of matrix form

  46. Derivation of matrix form

  47. Derivation of matrix form

  48. Accurate keypoint localization • x̂ = −H⁻¹g is a 3-vector (offset in x, y, and scale) • Change the sample point if the offset is larger than 0.5 in any dimension • Throw out low contrast (|D(x̂)| < 0.03)
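
A sketch of one refinement step implied by these slides: solve x̂ = −H⁻¹g from the 3D quadratic fit, move to a neighboring sample if any component of the offset exceeds 0.5, and reject the point if the interpolated contrast |D(x̂)| falls below 0.03. The helper name is hypothetical, and estimating the gradient g and Hessian H (by finite differences over the DoG stack) is left outside the sketch.

```python
import numpy as np

def refine_keypoint(grad, hessian, value, contrast_thresh=0.03):
    """grad: 3-vector gradient of D; hessian: 3x3 Hessian of D; value: D at the sample."""
    offset = -np.linalg.solve(hessian, grad)      # x_hat = -H^{-1} g
    if np.any(np.abs(offset) > 0.5):
        return None                               # re-fit at the neighboring sample point
    contrast = value + 0.5 * grad @ offset        # D(x_hat) = D + (1/2) g^T x_hat
    if abs(contrast) < contrast_thresh:
        return None                               # low contrast: unstable, throw out
    return offset                                 # sub-pixel offset in (x, y, scale)
```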

  49. Accurate keypoint localization • Throw out low contrast: evaluate D(x̂) = D + ½ gᵀx̂ at the refined offset and reject if |D(x̂)| < 0.03

  50. Eliminating edge responses • Compute the 2×2 Hessian matrix H of D at the keypoint location. • Let r be the ratio of the larger to the smaller eigenvalue of H; then trace(H)²/det(H) = (r+1)²/r. • Keep the points with trace(H)²/det(H) < (r+1)²/r, using r = 10.
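
A sketch of this edge test: estimate the 2×2 Hessian of the DoG image with finite differences and keep the point only when trace(H)²/det(H) < (r+1)²/r, with r = 10 as on the slide. The finite-difference estimates and the function name are illustrative assumptions.

```python
import numpy as np

def passes_edge_test(dog, y, x, r=10.0):
    """Keep the keypoint only if it is not poorly localized along an edge."""
    # Finite-difference estimates of the 2x2 Hessian of D at (y, x)
    dxx = dog[y, x + 1] + dog[y, x - 1] - 2.0 * dog[y, x]
    dyy = dog[y + 1, x] + dog[y - 1, x] - 2.0 * dog[y, x]
    dxy = 0.25 * (dog[y + 1, x + 1] - dog[y + 1, x - 1]
                  - dog[y - 1, x + 1] + dog[y - 1, x - 1])

    trace = dxx + dyy
    det = dxx * dyy - dxy * dxy
    if det <= 0:
        return False          # curvatures have different signs: reject
    return trace * trace / det < (r + 1.0) ** 2 / r
```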
