
Near Duplicate Image Detection: min-Hash and tf-idf weighting


Presentation Transcript


  1. Near Duplicate Image Detection: min-Hash and tf-idf weighting. Ondřej Chum, Center for Machine Perception, Czech Technical University in Prague. Co-authors: James Philbin and Andrew Zisserman.

  2. Outline
  • Near duplicate detection and large databases (find all groups of near duplicate images in a database)
  • min-Hash review
  • Novel similarity measures
  • Results on TrecVid 2006
  • Results on the University of Kentucky database (Nister & Stewenius)
  • Beyond near duplicates

  3. Scalable Near Duplicate Image Detection
  • Images perceptually (almost) identical but not identical (noise, compression level, small motion, small occlusion)
  • Similar images of the same object / scene
  • Large databases
  • Fast – linear in the number of duplicates
  • Store small constant amount of data per image

  4. Image Representation
  Feature detector → SIFT descriptor [Lowe’04] → vector quantization against a visual vocabulary → bag of words (e.g. counts 2 1 0 0 4 1 0 0 …) or set of words

  5. min-Hash
  min-Hash is a locality sensitive hashing (LSH) function m that selects an element m(A1) from set A1 and m(A2) from set A2 so that P{m(A1) == m(A2)} = sim(A1, A2).
  Image similarity is measured as set overlap (using the min-Hash algorithm): sim(A1, A2) = |A1 ∩ A2| / |A1 ∪ A2|.
  Spatially related images share visual words.
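The min-Hash property on this slide can be sketched in a few lines: simulate independent random orderings of the vocabulary and count how often two sets agree on their minimal element. A toy illustration, not the talk's implementation; names and parameters are my own.

```python
import random

def minhash_similarity(set_a, set_b, n_hashes=192, seed=0):
    """Estimate the set-overlap (Jaccard) similarity of two visual-word
    sets by comparing their min-Hashes under n_hashes independent
    random orderings of the vocabulary."""
    rng = random.Random(seed)
    vocabulary = sorted(set_a | set_b)  # enough of the vocabulary for this pair
    matches = 0
    for _ in range(n_hashes):
        # One random ordering: each word gets a value f(w) ~ Un(0,1).
        order = {w: rng.random() for w in vocabulary}
        min_a = min(set_a, key=order.get)
        min_b = min(set_b, key=order.get)
        matches += (min_a == min_b)
    return matches / n_hashes
```

As the number of hash functions grows, the fraction of agreeing min-Hashes converges to the true overlap P{m(A) == m(B)} = |A ∩ B| / |A ∪ B|.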

  6. min-Hash Example
  Vocabulary: {A, B, C, D, E, F}; Set A = {A, B, C}, Set B = {B, C, D}, Set C = {A, E, F}.
  Four independent hash functions f1, …, f4 ~ Un(0,1) induce random orderings of the vocabulary; the min-Hash of a set is its smallest element under each ordering.
  Overlaps estimated from the 4 min-Hashes (true overlap in parentheses):
  overlap (A, B) = 3/4 (1/2), overlap (A, C) = 1/4 (1/5), overlap (B, C) = 0 (0)

  7. min-Hash Retrieval
  A sketch is an s-tuple of min-Hashes; k is the number of hash tables, each indexed by one sketch.
  Probability of a single sketch collision: sim(A, B)^s
  Probability of retrieval (at least one sketch collision across the k hash tables): 1 – (1 – sim(A, B)^s)^k
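The retrieval probability on this slide has a simple closed form that is easy to evaluate; a minimal sketch:

```python
def p_retrieval(sim, s, k):
    """Probability that two images with set-overlap `sim` collide in at
    least one of k hash tables, each keyed by an s-tuple sketch of
    independent min-Hashes.  One sketch collides with probability
    sim**s, so retrieval probability is 1 - (1 - sim**s)**k."""
    return 1.0 - (1.0 - sim ** s) ** k

# With s = 3 and k = 512 (the TrecVid setting later in the talk), near
# duplicates are retrieved almost surely while unrelated images rarely are:
# p_retrieval(0.9, 3, 512) is close to 1.0; p_retrieval(0.05, 3, 512) is about 0.06.
```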

  8. Probability of Retrieving an Image Pair
  [Plot: probability of retrieval vs. similarity (set overlap) for s = 3, k = 512, with regions marked for unrelated images, images of the same object, and near duplicate images]

  9. More Complex Similarity Measures

  10. Document / Image / Object Retrieval
  Term Frequency – Inverse Document Frequency (tf-idf) weighting scheme:
  idf_w = log( # documents / # documents containing X_w )
  • Words common to many documents are less informative
  • Frequency of the words is recorded (good for repeated structures, textures, etc.)
  [1] Baeza-Yates, Ribeiro-Neto. Modern Information Retrieval. ACM Press, 1999.
  [2] Sivic, Zisserman. Video Google: A text retrieval approach to object matching in videos. ICCV’03.
  [3] Nister, Stewenius. Scalable recognition with a vocabulary tree. CVPR’06.
  [4] Philbin, Chum, Isard, Sivic, Zisserman. Object retrieval with large vocabularies and fast spatial matching. CVPR’07.
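A minimal sketch of the idf part of this weighting, on a hypothetical three-document toy corpus (the cited systems use large visual vocabularies, but the formula is the same):

```python
import math
from collections import Counter

def idf_weights(documents):
    """idf_w = log(N / n_w): words occurring in many documents get a
    low weight.  `documents` is a list of visual-word sets."""
    n = len(documents)
    # Document frequency n_w: in how many documents word w appears.
    df = Counter(w for doc in documents for w in set(doc))
    return {w: math.log(n / df[w]) for w in df}

docs = [{"a", "b"}, {"a", "c"}, {"a", "d"}]
idf = idf_weights(docs)
# "a" appears in every document, so idf["a"] = log(3/3) = 0: uninformative.
```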

  11. More Complex Similarity Measures
  • Set of words representation
    • Different importance of visual words: importance dw of word Xw
  • Bag of words representation (frequency is recorded)
    • Histogram intersection similarity measure
    • Different importance of visual words: importance dw of word Xw

  12. Word Weighting for min-Hash
  Each word Xw in the union A ∪ B = {A, C, E, J, Q, R, V, Y, Z} carries an importance weight dw (dA, dC, dE, dJ, dQ, dR, dV, dY, dZ).
  For a hash function fw ~ Un(0,1), all words Xw have the same chance of being the min-Hash (set overlap similarity).
  For a hash function of the form fw = –log(uw) / dw, uw ~ Un(0,1), the probability of Xw being the min-Hash is proportional to dw.
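The slide's hash-function formulas were lost in extraction; a standard construction with exactly this property draws, for each word, the value –log(u)/dw with u ~ Un(0,1), i.e. an Exp(dw) sample, so the minimum lands on word w with probability dw / Σ dv. A sketch under that assumption, with hypothetical weights:

```python
import math
import random
from collections import Counter

def weighted_minhash(word_set, weights, rng):
    """Weighted min-Hash: word w gets value -log(u)/d_w, u ~ Un(0,1)
    (an Exp(d_w) sample).  The minimum falls on word w with
    probability d_w / sum of the weights over the set."""
    return min(word_set, key=lambda w: -math.log(rng.random()) / weights[w])

rng = random.Random(0)
weights = {"common": 1.0, "rare": 4.0}  # hypothetical importance weights
counts = Counter(weighted_minhash({"common", "rare"}, weights, rng)
                 for _ in range(10000))
# "rare" should be the min-Hash in about 4/5 of the draws.
```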

  13. Histogram Intersection Using min-Hash
  Idea: represent a histogram as a set and reuse the min-Hash set machinery.
  Visual words A, B, C, D; bags of words tA = (2, 1, 3, 0) and tB = (0, 2, 3, 1).
  Replicate each word by its count: set A’ = {A1, A2, B1, C1, C2, C3}, set B’ = {B1, B2, C1, C2, C3, D1}; min-Hash vocabulary A’ ∪ B’ = {A1, A2, B1, B2, C1, C2, C3, D1}.
  The set overlap of A’ and B’ is the histogram intersection of A and B.
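The histogram-as-set trick can be sketched directly, using the slide's example counts tA and tB (the (word, index) tuples stand in for the replicated elements A1, A2, …):

```python
def histogram_to_set(hist):
    """Represent a bag of words as a set by replicating each word w
    into t_w distinct elements (w, 1), ..., (w, t_w)."""
    return {(w, i) for w, t in hist.items() for i in range(1, t + 1)}

tA = {"A": 2, "B": 1, "C": 3, "D": 0}   # the slide's tA = (2, 1, 3, 0)
tB = {"A": 0, "B": 2, "C": 3, "D": 1}   # the slide's tB = (0, 2, 3, 1)
A_set, B_set = histogram_to_set(tA), histogram_to_set(tB)
# |A' ∩ B'| = min(2,0) + min(1,2) + min(3,3) + min(0,1) = 4, which is
# exactly the histogram intersection, so min-Hash on A', B' estimates it.
overlap = len(A_set & B_set)
```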

  14. Results
  • Quality of the retrieval
  • Speed – the number of documents considered as near-duplicates

  15. TRECVid Challenge
  • 165 hours of news footage, different channels, different countries
  • 146,588 key-frames, 352×240 pixels
  • No ground truth on near duplicates

  16. min-Hash on TrecVid
  • DoG features
  • vocabulary of 64,635 visual words
  • 192 min-Hashes, 3 min-Hashes per sketch, 64 sketches
  • similarity threshold 35%
  • Examples of images with 24 – 45 near duplicates
  • # common results / set overlap only / weighted set overlap only
  • Quality of the retrieval appears to be similar

  17. Comparison of Similarity Measures
  Images only sharing uninformative visual words do not generate sketch collisions for the proposed similarity measures.
  [Plot: number of sketch collisions vs. image pair similarity for set overlap, weighted set overlap, and weighted histogram intersection]

  18. University of Kentucky Dataset
  • 10,200 images in groups of four
  • Querying by each image in turn
  • Average number of correct retrievals in top 4 is measured

  19. Evaluation
  Vocabulary sizes 30k and 100k; number of min-Hashes 512, 640, 768, and 896; 2 min-Hashes per sketch; number of sketches 0.5, 1, 2, and 3 times the number of min-Hashes.
  Score on average:
  • weighted histogram intersection 4.6% better than weighted set overlap
  • weighted set overlap 1.5% better than set overlap
  Number of considered documents on average:
  • weighted histogram intersection 1.7 times less than weighted set overlap
  • weighted set overlap 1.5 times less than set overlap
  Absolute numbers for weighted histogram intersection:
  • Retrieval with tf-idf flat scoring [Nister & Stewenius]: score 3.16
  • Number of considered documents (non-zero tf-idf): 10,089.9 (30k) and 9,659.4 (100k)

  20. Query Examples
  [Query image and retrieved results shown for set overlap, weighted set overlap, and weighted histogram intersection]

  21. Beyond Near Duplicate Detection

  22. Discovery of Spatially Related Images
  Find and match ALL groups (clusters) of spatially related images in a large database, using only visual information, i.e. not using (Flickr) tags, EXIF info, GPS, …
  Chum, Matas: Large Scale Discovery of Spatially Related Images, TR May 2008, available at http://cmp.felk.cvut.cz/~chum/Publ.htm

  23. Probability of Retrieving an Image Pair
  [Plot: probability of retrieval vs. similarity (set overlap), with regions marked for images of the same object and near duplicate images]

  24. Image Clusters as Connected Components
  Randomized clustering method:
  1. Seed generation – hashing (fast, low recall): characterize images by pseudo-random numbers stored in a hash table; time complexity equal to the sum of second moments of a Poisson random variable – linear for database size D ≈ 2^40
  2. Seed growing – retrieval (thorough, high recall): complete the clusters only for cluster members c << D; complexity O(cD)
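Once seed generation has produced candidate image pairs, merging them into connected components can be sketched with a union-find structure. The image indices and colliding pairs below are hypothetical, not data from the talk:

```python
class UnionFind:
    """Connected components for image-cluster discovery: seed pairs
    found by sketch collisions are merged; seed growing would then
    complete each cluster by retrieval."""
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, x):
        # Path-halving lookup of the component representative.
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x
    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

# Images 0-1-2 linked by colliding sketches, 3-4 linked, 5 isolated.
uf = UnionFind(6)
for a, b in [(0, 1), (1, 2), (3, 4)]:
    uf.union(a, b)
clusters = {}
for i in range(6):
    clusters.setdefault(uf.find(i), []).append(i)
# Three clusters result: [0, 1, 2], [3, 4], [5].
```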

  25. Clustering of 100k Images
  Images downloaded from Flickr; includes 11 Oxford landmarks with manually labelled ground truth:
  All Souls, Hertford, Ashmolean, Keble, Balliol, Magdalen, Bodleian, Pitt Rivers, Christ Church, Radcliffe Camera, Cornmarket

  26. Results on 100k Images
  [Plot: Component Recall (CR) for Chum, Matas TR, May 2008]
  Number of images: 104,844; timing: 17 min + 16 min = 0.019 sec / image

  27. Results on 100k Images
  [Plot: Component Recall (CR), comparing Chum, Matas TR, May 2008 with Philbin, Sivic, Zisserman BMVC 2008 (5,062 ?)]
  Number of images: 104,844; timing: 17 min + 16 min = 0.019 sec / image

  28. Conclusions
  • New similarity measures were derived for the min-Hash framework:
    • weighted set overlap
    • histogram intersection
    • weighted histogram intersection
  • Experiments show that the similarity measures are superior to the state of the art:
    • in the quality of the retrieval (up to 7% on the University of Kentucky dataset)
    • in the speed of the retrieval (up to 2.5 times)
  • min-Hash is a very useful tool for randomized image clustering

  29. Thank you!
