The Power of Comparative Reasoning

Presentation Transcript



The Power of Comparative Reasoning

Jay Yagnik, Dennis Strelow, David Ross, Ruei-sung Lin

@ Google

ICCV 2011

Presented by Relja Arandjelović

29th November 2011

University of Oxford



Overview

  • Ordinal embedding of features based on partial order statistics

    • Non-linear embedding

    • Simple extension for polynomial kernels

  • Data independent

  • Very easy to implement



Idea

  • Compare feature vectors based on the order of dimensions, sorted by magnitude

    • Ranking is invariant to a constant offset, positive scaling, and small noise (see the short check after this list)

    • Use local ordering statistics; an example pair-wise measure counts the dimension pairs that both vectors order the same way (see the Similarity function slide)

  • The WTA (Winner Takes All) hashing scheme produces codes comparable via Hamming distance.

    • The Hamming distance between codes approximates the pair-wise order measure

    • For K = 2, each code element compares a single random pair of dimensions, so the expected Hamming similarity equals the (normalised) pair-wise order measure
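
A quick check of the invariance claim above, using numpy and made-up example values (the vector and noise level are illustrative, not from the slides):

import numpy as np

x = np.array([0.3, 1.7, 0.9, 2.4, 0.1])

# The ordering of dimensions (argsort) is unchanged by a constant offset
# or by positive scaling, and only mildly perturbed by small noise.
order = np.argsort(x)
assert np.array_equal(order, np.argsort(x + 5.0))   # constant offset
assert np.array_equal(order, np.argsort(3.0 * x))   # positive scaling

rng = np.random.default_rng(0)
noisy = x + rng.normal(scale=0.01, size=x.shape)    # small noise
print(np.array_equal(order, np.argsort(noisy)))     # usually still True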



Similarity function
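
The formula on this slide is an image and is not in the transcript. The following is a hedged reconstruction of a pair-wise order similarity consistent with the bullets above: the fraction of dimension pairs that two vectors order the same way. The function name, the normalisation, and the handling of ties are my choices and may differ from the paper's exact definition.

import numpy as np

def pairwise_order_similarity(x, y):
    # Fraction of dimension pairs (i, j), i < j, that x and y order the
    # same way.  Reconstruction of the pair-wise measure; the paper's
    # normalisation may differ.
    x, y = np.asarray(x), np.asarray(y)
    i, j = np.triu_indices(len(x), k=1)               # all pairs with i < j
    agree = np.sign(x[i] - x[j]) == np.sign(y[i] - y[j])
    return float(agree.mean())

# Example with hypothetical vectors: identical ordering gives similarity 1.0
print(pairwise_order_similarity([1, 5, 3], [10, 50, 30]))   # 1.0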



Winner Takes All (WTA)
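
The slide's diagram is not in the transcript. Below is a minimal sketch of WTA hashing as summarised in the Idea bullets: each code element permutes the input, keeps the first K dimensions, and records the position of the maximum; codes are then compared by Hamming similarity. The function names and the choice of 10,000 code elements mirror the experiments mentioned later, but everything else is illustrative.

import numpy as np

def wta_hash(x, permutations, K):
    # For each permutation, look at the first K permuted dimensions of x
    # and output the position (0..K-1) of the maximum.
    x = np.asarray(x)
    return np.array([int(np.argmax(x[perm[:K]])) for perm in permutations])

def hamming_similarity(c1, c2):
    # Fraction of code elements that agree (1 - normalised Hamming distance).
    return float(np.mean(c1 == c2))

# Usage with synthetic data: 10,000 code elements, K = 2
rng = np.random.default_rng(0)
dim, n_codes, K = 128, 10000, 2
perms = [rng.permutation(dim) for _ in range(n_codes)]

x = rng.random(dim)
y = x + rng.normal(scale=0.05, size=dim)   # a noisy copy of x
print(hamming_similarity(wta_hash(x, perms, K), wta_hash(y, perms, K)))

For K = 2 each code element simply records which of two random dimensions is larger, so the expected Hamming similarity equals the pair-wise order similarity sketched above.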



K parameter

  • Increasing K biases the similarity towards the top of the list (illustrated in the sketch below)
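
A small illustration of that bias, using the same random-window construction as the WTA sketch above (all values synthetic): as K grows, the winner of each window is increasingly likely to be one of the globally largest dimensions, so comparisons are dominated by the top of the list.

import numpy as np

rng = np.random.default_rng(0)
dim = 128
perms = [rng.permutation(dim) for _ in range(10000)]
x = rng.random(dim)
top10 = set(np.argsort(x)[-10:])   # the 10 largest dimensions of x

for K in (2, 4, 16):
    winners = [perm[:K][np.argmax(x[perm[:K]])] for perm in perms]
    frac = float(np.mean([w in top10 for w in winners]))
    print(K, round(frac, 2))   # fraction of window winners that are top-10 dimensions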



WTA with polynomial kernel

  • Simple to do WTA on the polynomial expansion of the feature space

  • Computed in O(p), where p is the polynomial kernel degree (see the sketch below)
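
The slide gives no construction, so the following is only a sketch of one way the O(p) cost could be achieved, under the assumption of non-negative features: a size-K^p window in the degree-p product expansion is taken as the Cartesian product of p size-K windows, and since a product of non-negative factors is maximised factor by factor, the argmax over the expansion window is just the tuple of per-window argmaxes. The paper's exact construction may differ.

import numpy as np

def wta_poly_element(x, windows):
    # One WTA code element on the implicit degree-p product expansion of x.
    # `windows` is a list of p index windows (each of size K); the expansion
    # window is their Cartesian product.  For non-negative x the maximising
    # product maximises each factor separately, so the cost is O(p*K)
    # instead of O(K**p).
    x = np.asarray(x)
    assert np.all(x >= 0), "the factorisation trick assumes non-negative features"
    return tuple(int(np.argmax(x[w])) for w in windows)

# Usage with synthetic data: degree p = 2, window size K = 4
rng = np.random.default_rng(0)
x = rng.random(128)
windows = [rng.permutation(128)[:4] for _ in range(2)]
print(wta_poly_element(x, windows))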



Results: Descriptor matching (SIFT / DAISY)

  • Descriptor matching task, Liberty dataset

  • K=2, 10k binary codes

  • RAW: +11.6%

  • SIFT: +10.4%

  • DAISY: +11.2%

  • Note: SIFT is 128-D, so there are only 8128 possible dimension pairs; one might as well compute the pair-wise order (PO) measure exactly in this case (similarly for 200-D DAISY)

  • I tried this briefly with SIFT on a different task: it works



Results: VOC

  • VOC 2010

  • Bag-of-words over their own descriptor (based on Gabor wavelet responses)

  • K=4

  • Linear SVM

  • χ² for 1000-D: 40.1%

  • WTA for 1000-D: +2%



Results: Image retrieval

  • LabelMe dataset: 13,500 images; 512-D Gist descriptor

  • K=4, p=4



Conclusions

  • Partial order statistics could be a good way to compare vectors

  • Data independent: no training stage

  • Non-linear embedding: could use a linear SVM in this space

  • Simple to implement and try out

  • My note for SIFT/DAISY:

    • One could discard the hashing altogether and encode all pair-wise relations explicitly (see the sketch below)
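
A minimal sketch of that note for 128-D SIFT (function names and the encoding are mine): store one bit per dimension pair, i.e. 128 × 127 / 2 = 8128 bits, and compare descriptors by Hamming distance.

import numpy as np

def pairwise_order_code(x):
    # One bit per dimension pair (i < j): bit is True if x[i] > x[j].
    # For 128-D SIFT this gives 128 * 127 / 2 = 8128 bits.
    x = np.asarray(x)
    i, j = np.triu_indices(len(x), k=1)
    return x[i] > x[j]

def hamming_distance(a, b):
    # Number of pair-wise relations on which the two codes disagree.
    return int(np.count_nonzero(a != b))

# Usage with two synthetic 128-D descriptors
rng = np.random.default_rng(0)
d1, d2 = rng.random(128), rng.random(128)
print(hamming_distance(pairwise_order_code(d1), pairwise_order_code(d2)))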

