
Efficiently searching for similar images ( Kristen Grauman )


Presentation Transcript


  1. Efficiently searching for similar images (Kristen Grauman) Universidad Católica San Pablo Cristina Patricia Cáceres Jáuregui cristina.caceres.jauregui@ucsp.edu.pe

  2. Motivation Fast image search is a useful component for a number of vision problems, but images of the same object or scene vary under many nuisance parameters (lighting, pose, background clutter, etc.)

  3. Nuisance parameters

  4. Outline Scalable image search • Fast correspondence-based search with local features • Fast similarity search for learned metrics

  5. Local image features

  6. How to handle sets of features? • Want to compare, index, cluster, etc. with local representations, but: • Each instance is an unordered set of vectors • Varying number of vectors per instance

  7. Comparing sets of local features Previous strategies: • Match features individually, vote on small sets to verify • Explicit search for one-to-one correspondences • Bag-of-words: Compare frequencies of prototype features
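As a rough, hypothetical illustration of the bag-of-words strategy in the last bullet (not code from the talk), the sketch below quantizes each image's local descriptors against a fixed vocabulary of prototype features and compares the resulting frequency histograms; the vocabulary is assumed to be given (e.g., k-means centroids of training descriptors).

```python
import numpy as np

def bow_histogram(descriptors, vocabulary):
    """Quantize local descriptors (m x d) to their nearest prototype
    ("visual word") and return a normalized frequency histogram."""
    d2 = ((descriptors[:, None, :] - vocabulary[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)                      # nearest prototype per descriptor
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / max(hist.sum(), 1.0)

def bow_similarity(desc_a, desc_b, vocabulary):
    """Cosine similarity of the two images' word-frequency histograms."""
    ha = bow_histogram(desc_a, vocabulary)
    hb = bow_histogram(desc_b, vocabulary)
    return float(ha @ hb / (np.linalg.norm(ha) * np.linalg.norm(hb) + 1e-12))
```

Note that this representation discards which feature matched which; the pyramid match below keeps an (approximate) notion of correspondence while staying efficient.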

  8. Pyramid match kernel: approximating the optimal partial matching Optimal match: O(m^3) Pyramid match: O(mL), where m = # features and L = # levels in the pyramid

  9. Pyramid match: main idea Feature space partitions serve to “match” the local descriptors within successively wider regions of the descriptor space.

  10. Pyramid match: main idea Histogram intersection counts number of possible matches at a given partitioning.
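To make the pyramid match idea of slides 8–10 concrete, here is a minimal NumPy sketch: histograms of the two feature sets are computed over grids whose bins double in width at each level, histogram intersection counts the matches possible at that resolution, and the newly formed (coarser) matches are down-weighted by the bin width. It assumes low-dimensional features in [0, feature_range] and dense grid histograms, so it is an illustration of the idea rather than the published kernel (which uses sparse multi-resolution histograms); the function names are my own.

```python
import numpy as np

def _grid_histogram(points, bin_size, feature_range):
    """Histogram of points (m x d, values in [0, feature_range]) over a
    regular grid with bins of side `bin_size`."""
    bins_per_dim = int(np.ceil(feature_range / bin_size))
    idx = np.minimum((points // bin_size).astype(int), bins_per_dim - 1)
    keys = np.ravel_multi_index(idx.T, (bins_per_dim,) * points.shape[1])
    return np.bincount(keys, minlength=bins_per_dim ** points.shape[1])

def pyramid_match(X, Y, num_levels=4, feature_range=1.0):
    """Unnormalized pyramid match score between two feature sets X and Y."""
    score, prev_intersection = 0.0, 0.0
    for i in range(num_levels):                      # i = 0 is the finest level
        bin_size = feature_range * (2 ** i) / (2 ** (num_levels - 1))
        hx = _grid_histogram(X, bin_size, feature_range)
        hy = _grid_histogram(Y, bin_size, feature_range)
        intersection = np.minimum(hx, hy).sum()      # matches possible at this resolution
        score += (intersection - prev_intersection) / (2 ** i)  # new matches, down-weighted
        prev_intersection = intersection
    return score
```

The O(mL) cost quoted on slide 8 reflects that each of the m features only needs its bin index computed at each of the L levels.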

  11. Image search with matching-sensitive hash functions • Main idea: – Map point sets to a vector space in such a way that a dot product reflects partial match similarity (normalized PMK value). – Exploit random hyperplane properties to construct matching-sensitive hash functions. – Perform approximate similarity search on hashed examples.

  12. Locality Sensitive Hashing (LSH) Guarantee “approximate” nearest neighbors in sub-linear time, given appropriate hash functions. Randomized LSH functions r1…rk map the N database points Xi and the query Q to short binary codes (e.g., 110101), so only a candidate set much smaller than N is examined.
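To illustrate the query-time behavior just described, here is a toy single-table LSH index (my own sketch, not code from the talk): k binary hash functions form a bucket key, and a query only scans the examples that landed in its bucket, a set typically far smaller than N. The hash functions are left abstract here; a concrete choice for dot-product similarity appears with the next slide.

```python
from collections import defaultdict

class LSHTable:
    """Toy single-table LSH index with k binary hash functions."""

    def __init__(self, hash_fns):
        self.hash_fns = hash_fns          # k functions mapping a vector to 0 or 1
        self.buckets = defaultdict(list)

    def _key(self, x):
        return tuple(h(x) for h in self.hash_fns)   # the k-bit code, e.g. (1,1,0,1,0,1)

    def insert(self, item_id, x):
        self.buckets[self._key(x)].append((item_id, x))

    def query(self, q):
        """Return only the candidates sharing the query's hash key (<< N)."""
        return self.buckets.get(self._key(q), [])
```

In practice several such tables are built with independently drawn hash functions, and the union of candidates is re-ranked with the exact similarity to obtain the sub-linear approximate-NN guarantee the slide mentions.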

  13. LSH functions for dot products The probability that a random hyperplane separates two unit vectors depends on the angle between them: Pr[h_r(xi) = h_r(xj)] = 1 − (1/π) cos⁻¹(xi · xj). Corresponding hash function: h_r(x) = 1 if r·x ≥ 0, and 0 otherwise. A) High dot product: unlikely to split B) Lower dot product: likely to split
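A minimal sketch of the hash function this slide refers to, the random-hyperplane hash for dot-product (cosine) similarity: draw a random direction r (Gaussian here, by assumption) and record which side of the hyperplane a vector falls on, so two unit vectors collide with probability 1 − θ/π.

```python
import numpy as np

def make_hyperplane_hash(dim, rng):
    """Return h_r(x) = 1 if r.x >= 0 else 0, for a random Gaussian direction r.
    For unit vectors u, v:  Pr[h_r(u) = h_r(v)] = 1 - angle(u, v) / pi."""
    r = rng.normal(size=dim)
    return lambda x: int(np.dot(r, x) >= 0)

rng = np.random.default_rng(0)
hash_fns = [make_hyperplane_hash(128, rng) for _ in range(10)]   # k = 10 bits per key
```

These functions can be dropped straight into the LSHTable sketch above; in the matching-sensitive setting of slide 11, the vectors being hashed would be the embedded histograms whose dot product reflects the normalized PMK value.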

  14. Metric learning There are various ways to judge appearance/shape similarity… but often we know more about (some) data than just their appearance.

  15. Metric learning • Exploit partially labeled data and/or (dis)similarity constraints to construct more useful distance function • Can dramatically boost performance on clustering, indexing, classification tasks. • Various existing techniques

  16. Fast similarity search for learned metrics • Goal: – Maintain query time guarantees while performing approximate search with a learned metric • Main idea: – Learn Mahalanobis distance parameterization – Use it to affect distribution from which random hash functions are selected • LSH functions that preserve the learned metric • Approximate NN search with existing methods
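A minimal sketch of how a learned metric can bias the hash functions, assuming the learned Mahalanobis matrix A is available explicitly; the parameterization below, h(x) = sign(rᵀGx) with A = GᵀG, is one way to realize the slide's idea, and the variable names are mine.

```python
import numpy as np

def make_metric_hash(A, rng):
    """Hash respecting the learned metric d_A(x, y) = (x - y)^T A (x - y).
    With A = G^T G, hashing sign(r . (G x)) is plain random-hyperplane hashing
    in the space where the learned metric becomes Euclidean."""
    G = np.linalg.cholesky(A).T           # A = G^T G (assumes A is positive definite)
    r = rng.normal(size=A.shape[0])
    return lambda x: int(np.dot(r, G @ x) >= 0)
```

Pairs that the learned metric considers close are then unlikely to be split by such a hash, while constrained-dissimilar pairs are likely to be split, which is exactly the behavior the next slide describes.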

  17. Fast Image Search for Learned Metrics Learn a Mahalanobis metric for LSH It should be unlikely that a hash function will split examples like those having similarity constraints… …but likely that it splits those having dissimilarity constraints. That is, h(xi) = h(xj) for similarity-constrained pairs and h(xi) ≠ h(xj) for dissimilarity-constrained pairs.

  18. Summary • Local image features are useful, but must be handled efficiently • Introduced scalable methods that allow fast similarity search with – Local feature matching – Learned Mahalanobis metrics • Key idea: design hash functions that encode the matching process, or the constraints provided
