


Part 1: Bag-of-words models

by Li Fei-Fei (UIUC)



Related works

  • Early “bag of words” models: mostly texture recognition

    • Cula et al. 2001; Leung et al. 2001; Schmid 2001; Varma et al. 2002, 2003; Lazebnik et al. 2003

  • Hierarchical Bayesian models for documents (pLSA, LDA, etc.)

    • Hofmann 1999; Blei et al. 2004; Teh et al. 2004

  • Object categorization

    • Dorko et al. 2004; Csurka et al. 2004; Sivic et al. 2005; Sudderth et al. 2005

  • Natural scene categorization

    • Fei-Fei et al. 2005


[Figure: an object image and its bag of 'words' (an unordered collection of local patches)]


Analogy to documents

Of all the sensory impressions proceeding to the brain, the visual experiences are the dominant ones. Our perception of the world around us is based essentially on the messages that reach the brain from our eyes. For a long time it was thought that the retinal image was transmitted point by point to visual centers in the brain; the cerebral cortex was a movie screen, so to speak, upon which the image in the eye was projected. Through the discoveries of Hubel and Wiesel we now know that behind the origin of the visual perception in the brain there is a considerably more complicated course of events. By following the visual impulses along their path to the various cell layers of the optical cortex, Hubel and Wiesel have been able to demonstrate that the message about the image falling on the retina undergoes a step-wise analysis in a system of nerve cells stored in columns. In this system each cell has its specific function and is responsible for a specific detail in the pattern of the retinal image.

Bag of words: sensory, brain, visual, perception, retinal, cerebral cortex, eye, cell, optical nerve, image, Hubel, Wiesel

China is forecasting a trade surplus of $90bn (£51bn) to $100bn this year, a threefold increase on 2004's $32bn. The Commerce Ministry said the surplus would be created by a predicted 30% jump in exports to $750bn, compared with an 18% rise in imports to $660bn. The figures are likely to further annoy the US, which has long argued that China's exports are unfairly helped by a deliberately undervalued yuan. Beijing agrees the surplus is too high, but says the yuan is only one factor. Bank of China governor Zhou Xiaochuan said the country also needed to do more to boost domestic demand so more goods stayed within the country. China increased the value of the yuan against the dollar by 2.1% in July and permitted it to trade within a narrow band, but the US wants the yuan to be allowed to trade freely. However, Beijing has made it clear that it will take its time and tread carefully before allowing the yuan to rise further in value.

Bag of words: China, trade, surplus, commerce, exports, imports, US, yuan, bank, domestic, foreign, increase, trade, value


[Diagram: the bag-of-words pipeline. Both learning and recognition run feature detection & representation, then map patches onto the codewords dictionary to form the image representation; learning fits category models (and/or) classifiers, and recognition outputs a category decision.]


Representation

[Diagram: 1. feature detection & representation → 2. codewords dictionary → 3. image representation]



1. Feature detection and representation

  • Regular grid

    • Vogel et al. 2003

    • Fei-Fei et al. 2005

  • Interest point detector

    • Csurka et al. 2004

    • Fei-Fei et al. 2005

    • Sivic et al. 2005

  • Other methods

    • Random sampling (Ullman et al. 2002)

    • Segmentation-based patches (Barnard et al. 2003)


1. Feature detection and representation

  • Detect patches [Mikolajczyk and Schmid '02], [Matas et al. '02], [Sivic et al. '03]
  • Normalize patch
  • Compute SIFT descriptor [Lowe '99]

Slide credit: Josef Sivic
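
As a concrete illustration, here is a minimal sketch of the detect → normalize → describe pipeline using OpenCV's SIFT, a stand-in for the detectors cited above:

```python
# Sketch: interest-point detection + SIFT description with OpenCV.
import cv2

def extract_sift_features(image_path):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    # detectAndCompute finds scale/rotation-normalized patches and
    # returns one 128-dimensional SIFT descriptor per patch
    keypoints, descriptors = sift.detectAndCompute(gray, None)
    return keypoints, descriptors
```

Each image thus yields a variable-length set of descriptors, which the next step quantizes into codewords.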




2. Codewords dictionary formation

  • Vector quantization: cluster the patch descriptors; each cluster center becomes a codeword, and every descriptor is assigned to its nearest codeword.

Slide credit: Josef Sivic
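
The slides do not fix a clustering algorithm; k-means is the common choice. A minimal sketch (the vocabulary size is an assumption):

```python
# Sketch: build a codeword dictionary by k-means over pooled SIFT descriptors.
from sklearn.cluster import KMeans

def build_dictionary(all_descriptors, n_codewords=300, seed=0):
    # all_descriptors: (num_patches, 128) array pooled over the training images
    kmeans = KMeans(n_clusters=n_codewords, random_state=seed, n_init=10)
    kmeans.fit(all_descriptors)
    return kmeans  # kmeans.cluster_centers_ are the codewords

def quantize(descriptors, kmeans):
    # vector quantization: nearest codeword index for each patch descriptor
    return kmeans.predict(descriptors)
```

New images are then described purely by the codeword indices of their patches.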


2. Codewords dictionary formation

[Figure: example codewords dictionary (Fei-Fei et al. 2005)]


[Figure: image patch examples of codewords (Sivic et al. 2005)]


3. Image representation

[Figure: histogram of codeword frequencies; x-axis: codewords, y-axis: frequency]
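
Given the quantized patches, the image representation is just a (normalized) histogram over the vocabulary; a minimal sketch:

```python
# Sketch: represent an image as a histogram over the codeword vocabulary.
import numpy as np

def bow_histogram(codeword_ids, n_codewords):
    hist = np.bincount(codeword_ids, minlength=n_codewords).astype(float)
    return hist / max(hist.sum(), 1.0)  # normalize to a distribution
```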




Learning and Recognition

[Diagram: image representations and the codewords dictionary feed category models (and/or) classifiers, which produce the category decision]



2 case studies

  • Naïve Bayes classifier

    • Csurka et al. 2004

  • Hierarchical Bayesian text models (pLSA and LDA)

    • Background: Hofmann 2001; Blei et al. 2004

    • Object categorization: Sivic et al. 2005, Sudderth et al. 2005

    • Natural scene categorization: Fei-Fei et al. 2005


First, some notation

  • w_n: the nth patch in an image, coded as an indicator vector over the dictionary
    • w_n = [0, 0, …, 1, …, 0, 0]^T
  • w: the collection of all N patches in an image
    • w = [w_1, w_2, …, w_N]
  • d_j: the jth image in an image collection
  • c: category of the image
  • z: theme or topic of the patch
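
For concreteness, a tiny sketch of this encoding (the vocabulary size and patch indices are made up for illustration):

```python
# Sketch: one-hot patch encoding over a codeword vocabulary.
import numpy as np

n_codewords = 5
codeword_ids = [2, 0, 2]                  # N = 3 patches in one image
w = np.eye(n_codewords)[codeword_ids].T   # column n is the one-hot vector w_n
# w[:, 0] is [0, 0, 1, 0, 0]^T: the first patch is codeword 2
```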


Case #1: the Naïve Bayes model

[Graphical model: c → w, with a plate over the N patches]

Object class decision:

c* = argmax_c p(c|w) ∝ p(c) p(w|c) = p(c) ∏_{n=1..N} p(w_n|c)

  • p(c): prior probability of the object classes
  • ∏ p(w_n|c): image likelihood given the class, with the patches assumed independent

Csurka et al. 2004
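
A minimal sketch of this classifier over codeword counts, using scikit-learn's multinomial Naïve Bayes as a stand-in for the implementation in Csurka et al. (smoothing and other details are assumptions):

```python
# Sketch: Naïve Bayes over codeword counts, c* = argmax_c p(c) ∏_n p(w_n|c).
from sklearn.naive_bayes import MultinomialNB

def train_nb(histograms, labels):
    # histograms: (num_images, n_codewords) codeword counts per training image
    clf = MultinomialNB()  # estimates p(c) and smoothed p(w|c) from counts
    clf.fit(histograms, labels)
    return clf

def classify(clf, histogram):
    return clf.predict(histogram.reshape(1, -1))[0]
```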


[Figures: categorization results (Csurka et al. 2004)]


Case #2: Hierarchical Bayesian text models

[Graphical model: pLSA, d → z → w, with plates over the N patches and D images]
Probabilistic Latent Semantic Analysis (pLSA): Hofmann, 2001

[Graphical model: LDA, c → z → w, with plates over the N patches and D images]
Latent Dirichlet Allocation (LDA): Blei et al., 2001
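
Written out (a standard statement of the two models, consistent with the graphical models above):

```latex
% pLSA (Hofmann): each image d mixes K themes; no prior on the mixture weights
P(w_i \mid d_j) = \sum_{k=1}^{K} P(w_i \mid z_k)\, P(z_k \mid d_j)

% LDA (Blei et al.): the theme proportions \pi get a Dirichlet prior
\pi \sim \mathrm{Dir}(\alpha), \quad z_n \sim \mathrm{Mult}(\pi), \quad w_n \sim P(w \mid z_n)
```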


Case #2: Hierarchical Bayesian text models

[Figure: pLSA discovers a 'face' theme; patches assigned to that topic are highlighted]
Probabilistic Latent Semantic Analysis (pLSA): Sivic et al. ICCV 2005

Case #2: Hierarchical Bayesian text models

[Figure: LDA applied to natural scene categorization; a 'beach' theme]
Latent Dirichlet Allocation (LDA): Fei-Fei et al. ICCV 2005


Case #2: the pLSA model

[Graphical model: pLSA, d → z → w, with plates over the N patches and D images]


Case #2: the pLSA model

P(w_i | d_j) = Σ_k P(w_i | z_k) · P(z_k | d_j)

  • P(w_i | d_j): observed codeword distributions (per image)
  • P(w_i | z_k): codeword distributions per theme (topic)
  • P(z_k | d_j): theme distributions per image

Slide credit: Josef Sivic


Case #2: Recognition using pLSA

To recognize a new image, hold the learned codeword-per-theme distributions P(w|z) fixed and fit only the theme distribution P(z|d_new) that best explains the new image's codewords ("fold-in").

Slide credit: Josef Sivic
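
A hedged statement of that fold-in objective (standard pLSA practice; the exact formula on the slide is not preserved in this transcript):

```latex
\max_{P(z \mid d_{\mathrm{new}})}\;
\sum_{i=1}^{M} n(w_i, d_{\mathrm{new}})\,
\log \sum_{k=1}^{K} P(w_i \mid z_k)\, P(z_k \mid d_{\mathrm{new}})
```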


Case #2: Learning the pLSA parameters

Maximize the likelihood of the data using EM:

max ∏_{i=1..M} ∏_{j=1..N} P(w_i | d_j)^{n(w_i, d_j)}

  • n(w_i, d_j): observed counts of word i in document j
  • M: number of codewords
  • N: number of images

Slide credit: Josef Sivic
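
A compact numpy sketch of the EM loop (a generic pLSA implementation for illustration, not the course's do_plsa code; initialization, smoothing constants, and iteration count are assumptions):

```python
# Sketch: pLSA parameter learning by EM.
import numpy as np

def plsa_em(n, K, n_iter=100, seed=0):
    """n: (M, N) count matrix, n[i, j] = count of codeword i in image j."""
    M, N = n.shape
    rng = np.random.default_rng(seed)
    p_w_z = rng.random((M, K)); p_w_z /= p_w_z.sum(axis=0)  # P(w|z)
    p_z_d = rng.random((K, N)); p_z_d /= p_z_d.sum(axis=0)  # P(z|d)
    for _ in range(n_iter):
        # E-step: responsibilities P(z|w,d) ∝ P(w|z) P(z|d)
        r = p_w_z[:, :, None] * p_z_d[None, :, :]           # (M, K, N)
        r /= r.sum(axis=1, keepdims=True) + 1e-12
        # M-step: re-estimate both factors from expected counts n(w,d) P(z|w,d)
        e = n[:, None, :] * r                               # (M, K, N)
        p_w_z = e.sum(axis=2) + 1e-12
        p_w_z /= p_w_z.sum(axis=0, keepdims=True)
        p_z_d = e.sum(axis=0) + 1e-12
        p_z_d /= p_z_d.sum(axis=0, keepdims=True)
    return p_w_z, p_z_d
```

The (M, K, N) responsibility array keeps the sketch short; a production implementation would loop over documents or use sparse counts.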



Demo

  • Course website


Task: face detection – no labeling


Demo: feature detection

  • Output of a crude feature detector (a sketch follows below):
    • Find edges
    • Draw points randomly from the edge set
    • Draw from a uniform distribution to get scale
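
A minimal sketch of that crude detector (Canny thresholds, the number of points, and the scale range are illustrative assumptions, not the course's values):

```python
# Sketch: crude detector = edges + random edge points + uniform random scales.
import numpy as np
import cv2

def crude_feature_detector(gray, n_points=200, scale_range=(10, 30), seed=0):
    rng = np.random.default_rng(seed)
    edges = cv2.Canny(gray, 100, 200)   # 1. find edges
    ys, xs = np.nonzero(edges)          # candidate edge pixels
    idx = rng.choice(len(xs), size=min(n_points, len(xs)), replace=False)
    scales = rng.uniform(*scale_range, size=len(idx))  # 3. uniform random scale
    return [(int(xs[i]), int(ys[i]), float(s)) for i, s in zip(idx, scales)]
```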


Demo: learnt parameters

  • Learning the model: do_plsa(‘config_file_1’)
  • Evaluate and visualize the model: do_plsa_evaluation(‘config_file_1’)

[Figures: codeword distributions per theme (topic); theme distributions per image]



Demo: recognition examples



Demo: categorization results

  • Performance of each theme



Demo: naïve Bayes

  • Learning the model: do_naive_bayes(‘config_file_2’)

  • Evaluate and visualize the model: do_naive_bayes_evaluation(‘config_file_2’)




Invariance issues

  • Scale and rotation
    • Implicit
    • Detectors and descriptors (Kadir and Brady 2003)
  • Occlusion
    • Implicit in the models
    • Codeword distribution: small variations
    • (In theory) theme (z) distribution: different occlusion patterns
  • Translation
    • Encode (relative) location information (Sudderth et al. 2005)
  • View point (in theory)
    • Codewords: detector and descriptor
    • Theme distributions: different view points (Fergus et al. 2005)


Model properties

  • Intuitive
    • Analogy to documents (see the Hubel and Wiesel example above)
  • (Could use) generative models
    • Convenient for weakly- or un-supervised training
    • Prior information
    • Hierarchical Bayesian framework (Sivic et al. 2005; Sudderth et al. 2005)
  • Learning and recognition are relatively fast
    • Compared to other methods


Weakness of the model

  • No rigorous geometric information about the object components
  • It is intuitive to most of us that objects are made of parts, but the model encodes no such information
  • Not extensively tested yet for
    • View point invariance
    • Scale invariance
  • Segmentation and localization unclear

