Presentation Transcript

Similarity Metrics for Categorization: From Monolithic to Category Specific

Boris Babenko, Steve Branson, Serge Belongie

University of California, San Diego

ICCV 2009, Kyoto, Japan

Similarity Metrics for Recognition
  • Recognizing multiple categories
    • Need meaningful similarity metric / feature space
  • Idea: use training data to learn metric
    • Goes by many names:
      • metric learning
      • cue combination/weighting
      • kernel combination/learning
      • feature selection
Similarity Metrics for Recognition
  • Learn a single global similarity metric

[Figure: a query image is compared against the labeled dataset (Category 1 through Category 4) using one monolithic similarity metric shared by all categories.]

[Jones et al. '03, Chopra et al. '05, Goldberger et al. '05, Shakhnarovich et al. '05, Torralba et al. '08]
Similarity Metrics for Recognition
  • Learn a similarity metric for each category (1-vs-all)

[Figure: the same setup, but each category (Category 1 through Category 4) in the labeled dataset gets its own category-specific similarity metric.]

[Varma et al. '07, Frome et al. '07, Weinberger et al. '08, Nilsback et al. '08]
How many should we train?
  • Monolithic:
    • Less powerful… there is no “perfect” metric
    • Can generalize to new categories
  • Per category:
    • More powerful
    • Do we really need thousands of metrics?
    • Have to train for new categories
Multiple Similarity Learning (MuSL)
  • Would like to explore space between two extremes
  • Idea:
    • Group categories together
    • Learn a few similarity metrics, one for each group
Multiple Similarity Learning (MuSL)
  • Learn a few good similarity metrics

[Figure: MuSL sits between the two extremes. Categories (Category 1 through Category 4) are grouped, and each group shares one learned similarity metric, in contrast to one metric for all categories (monolithic) or one per category (category specific).]
Review of Boosting Similarity
  • Need some framework to work with…
  • Boosting has many advantages:
    • Feature selection
    • Easy implementation
    • Performs well
Notation
  • Training data: images with category labels, {(x_i, y_i)}, i = 1, …, N
  • Generate pairs: ((x_i, x_j), ℓ_ij), where ℓ_ij = 1 if y_i = y_j (same category) and ℓ_ij = 0 otherwise
    • Sample negative pairs, since there are far more different-category pairs than same-category ones

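As a concrete illustration of this pair-generation step, here is a minimal sketch (not the authors' code; the function name and the choice of sampling one negative per positive are illustrative assumptions):

```python
import random

def make_pairs(images, labels, neg_per_pos=1, seed=0):
    """Build (i, j, same) training pairs from labeled images.

    Positive pairs share a category label (same = 1); negative pairs
    are sampled at random from different categories (same = 0), since
    enumerating every different-category pair would be intractable.
    """
    rng = random.Random(seed)
    n = len(images)
    pairs = []
    # Enumerate all same-category pairs.
    for i in range(n):
        for j in range(i + 1, n):
            if labels[i] == labels[j]:
                pairs.append((i, j, 1))
    # Sample a proportional number of different-category pairs.
    num_neg = neg_per_pos * len(pairs)
    while num_neg > 0:
        i, j = rng.randrange(n), rng.randrange(n)
        if i != j and labels[i] != labels[j]:
            pairs.append((i, j, 0))
            num_neg -= 1
    return pairs
```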
Boosting Similarity
  • Train a similarity metric/classifier: a boosted classifier H(x_1, x_2) = Σ_t α_t h_t(x_1, x_2) that predicts whether a pair comes from the same category
Boosting Similarity
  • Choose each weak learner to be binary, i.e. h_t(x) ∈ {0, 1}, so an image maps to a binary vector [h_1(x), …, h_T(x)]
  • Distance = L1 distance over binary vectors
    • efficient to compute (XOR and sum)
  • For convenience, write the pairwise weak learner as h_t(x_1, x_2), which fires when h_t agrees on both images

[Shakhnarovich et al. '05, Fergus et al. '08]

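To make the XOR-and-sum trick concrete, here is a small sketch (not the authors' code) that packs the binary embedding into an integer and computes the L1 distance with bitwise operations:

```python
def embed(x, weak_learners):
    """Pack the binary weak-learner outputs h_1(x), ..., h_T(x) into one integer."""
    code = 0
    for t, h in enumerate(weak_learners):
        code |= h(x) << t
    return code

def l1_binary(a, b):
    """L1 distance between two packed binary vectors.

    Each coordinate is 0 or 1, so the L1 distance is simply the number
    of positions where the vectors differ: XOR, then count set bits.
    """
    return bin(a ^ b).count("1")
```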
Gradient Boosting
  • Given some objective function J(H)
  • Boosting = gradient ascent in function space: at each iteration, add the weak classifier best aligned with the gradient of J at the current strong classifier
  • Gradient = example weights for boosting

[Figure: in function space, the chosen weak classifier is the one among the other weak classifiers that lies closest to the gradient direction at the current strong classifier.]

[Friedman '01, Mason et al. '00]

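To make "gradient = example weights" concrete, here is a sketch under an assumed logistic pair objective; the paper's exact loss may differ:

```python
import math

def pair_weights(scores, labels):
    """Per-pair boosting weights = gradient of the objective.

    Assume the objective J = sum_i log sigmoid(z_i * H_i), where
    z_i = +1 for same-category pairs and -1 otherwise. Then
    dJ/dH_i = z_i * sigmoid(-z_i * H_i): pairs on the wrong side of
    the margin get large weight, easy pairs get weight near zero.
    """
    weights = []
    for H_i, ell in zip(scores, labels):
        z = 1.0 if ell == 1 else -1.0
        weights.append(z / (1.0 + math.exp(z * H_i)))
    return weights
```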
MuSL Boosting
  • Goal: train K metrics H_1, …, H_K and recover a mapping g: {1, …, C} → {1, …, K} from categories to metrics
  • At runtime:
    • To compute the similarity of a query image to an image of category c, use H_g(c)

[Figure: the four categories are mapped onto a smaller set of learned metrics.]
Naïve Solution
  • Run pre-processing to group categories (e.g. k-means), then train as usual
  • Drawbacks:
    • Hacky / not elegant
    • Not optimal: pre-processing not informed by class confusions, etc.
  • How can we train & group simultaneously?
MuSL Boosting
  • Definitions:
    • σ(z) = 1 / (1 + e^(−z)) (sigmoid function)
    • λ (scalar parameter, used below to approximate the max)
    • J_k^c: how well metric H_k works with category c, e.g. the sigmoid-smoothed accuracy of H_k on training pairs drawn from category c
MuSL Boosting
  • Objective function: J = Σ_c max_k J_k^c
  • Each category is “assigned” to the classifier that performs best on it
Approximating Max
  • Replace the max with a differentiable approximation, e.g. the softmax
    max_k J_k^c ≈ Σ_k J_k^c e^(λ J_k^c) / Σ_k e^(λ J_k^c),
    where λ is a scalar parameter; the larger λ is, the sharper the approximation of the true max
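A quick numeric check of this approximation, with a made-up vector:

```python
import math

def soft_max_approx(a, lam):
    """Differentiable approximation of max(a): a softmax-weighted average.

    As lam grows, the weights concentrate on the largest entry and the
    result approaches the exact max.
    """
    w = [math.exp(lam * a_k) for a_k in a]
    return sum(a_k * w_k for a_k, w_k in zip(a, w)) / sum(w)

a = [0.2, 0.7, 0.5]
for lam in (1, 10, 100):
    print(lam, soft_max_approx(a, lam))
# lam=1 -> ~0.51, lam=10 -> ~0.67, lam=100 -> ~0.70 (the true max)
```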
Pair Weights
  • Each training pair gets one weight per metric, w_ij^k, obtained as the gradient of the (approximated) objective with respect to H_k(x_i, x_j)
Pair Weights
  • Intuition: each weight is a product of two terms:
    • an approximation of the indicator that the pair's category is assigned to metric k (from the softmax)
    • the difficulty of the pair (like regular boosting)
Evolution of Weights

[Plots: pair weight vs. boosting iteration for a difficult pair and an easy pair, with curves marking the metric each pair's category is assigned to.]
MuSL Boosting Algorithm

for t = 1, …, T boosting iterations
  for k = 1, …, K metrics
    - Compute pair weights w_ij^k
    - Train H_k on the weighted pairs (add one weak learner)
  end
end
Assign each category to its best metric: g(c) = argmax_k J_k^c

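A compact sketch of this loop under the definitions above. The weak-learner pool, the form of J_k^c, and the fixed step size are illustrative assumptions, not the paper's exact recipe:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def musl_boost(pairs, categories, weak_pool, K, T, lam=10.0):
    """Sketch of MuSL boosting: jointly train K metrics and group categories.

    pairs     : list of (x1, x2, cat, same) tuples; `same` is 1 or 0
    weak_pool : candidate pairwise weak learners h(x1, x2) -> {0, 1}
    Returns the K strong classifiers and the category-to-metric mapping g.
    """
    H = [[] for _ in range(K)]                     # each metric: list of (alpha, h)

    def score(k, x1, x2):
        return sum(a * h(x1, x2) for a, h in H[k])

    def J_kc(k, c):
        # How well metric k works with category c (sigmoid-smoothed accuracy).
        pc = [p for p in pairs if p[2] == c]
        return sum(sigmoid((1 if s else -1) * score(k, x1, x2))
                   for x1, x2, _, s in pc) / max(len(pc), 1)

    for t in range(T):
        for k in range(K):
            # Softmax term: how strongly each category prefers metric k.
            pref = {}
            for c in categories:
                Js = [J_kc(kk, c) for kk in range(K)]
                Z = sum(math.exp(lam * j) for j in Js)
                pref[c] = math.exp(lam * Js[k]) / Z
            # Pair weight = assignment term * difficulty term (as in boosting).
            w = []
            for x1, x2, c, s in pairs:
                z = 1.0 if s else -1.0
                w.append(pref[c] * z * sigmoid(-z * score(k, x1, x2)))
            # Greedily pick the weak learner most correlated with the weights.
            h_best = max(weak_pool, key=lambda h: abs(sum(
                wi * h(x1, x2) for wi, (x1, x2, _, _) in zip(w, pairs))))
            H[k].append((0.1, h_best))             # fixed step size for simplicity

    g = {c: max(range(K), key=lambda k: J_kc(k, c)) for c in categories}
    return H, g
```

At runtime this matches the mapping described earlier: the similarity of a query image to an image of category c is read off from the strong classifier H[g[c]].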
MuSL Results
  • Created a dataset with many heterogeneous categories
  • Merged categories from:
    • Caltech 101 [Griffin et al.]
    • Oxford Flowers [Nilsback et al.]
    • UIUC Textures [Lazebnik et al.]
Generalizing to New Categories

[Plot: performance on novel categories vs. the number of metrics trained.]

Training more metrics overfits!

Conclusions
  • Studied categorization performance vs. the number of learned metrics
  • Presented boosting algorithm to simultaneously group categories and train metrics
  • Observed overfitting behavior for novel categories
Thank you!
  • Supported by
    • NSF CAREER Grant #0448615
    • NSF IGERT Grant DGE-0333451
    • ONR MURI Grant #N00014-08-1-0638
    • UCSD FWGrid Project (NSF Infrastructure Grant no. EIA-0303622)