Agenda
  • Introduction
  • Bag-of-words model
  • Visual words with spatial location
  • Part-based models
  • Discriminative methods
  • Segmentation and recognition
  • Recognition-based image retrieval
  • Datasets & Conclusions
Object → Bag of ‘words’

[Figure: an object image represented as a bag of visual ‘words’]

Analogy to documents

Just as a text document can be summarized by the unordered set of words it contains, an image can be summarized by its visual words. Two example documents and their bag-of-words summaries:

Document 1: Of all the sensory impressions proceeding to the brain, the visual experiences are the dominant ones. Our perception of the world around us is based essentially on the messages that reach the brain from our eyes. For a long time it was thought that the retinal image was transmitted point by point to visual centers in the brain; the cerebral cortex was a movie screen, so to speak, upon which the image in the eye was projected. Through the discoveries of Hubel and Wiesel we now know that behind the origin of the visual perception in the brain there is a considerably more complicated course of events. By following the visual impulses along their path to the various cell layers of the optical cortex, Hubel and Wiesel have been able to demonstrate that the message about the image falling on the retina undergoes a step-wise analysis in a system of nerve cells stored in columns. In this system each cell has its specific function and is responsible for a specific detail in the pattern of the retinal image.

Bag of words: sensory, brain, visual, perception, retinal, cerebral cortex, eye, cell, optical nerve, image, Hubel, Wiesel

Document 2: China is forecasting a trade surplus of $90bn (£51bn) to $100bn this year, a threefold increase on 2004's $32bn. The Commerce Ministry said the surplus would be created by a predicted 30% jump in exports to $750bn, compared with an 18% rise in imports to $660bn. The figures are likely to further annoy the US, which has long argued that China's exports are unfairly helped by a deliberately undervalued yuan. Beijing agrees the surplus is too high, but says the yuan is only one factor. Bank of China governor Zhou Xiaochuan said the country also needed to do more to boost domestic demand so more goods stayed within the country. China increased the value of the yuan against the dollar by 2.1% in July and permitted it to trade within a narrow band, but the US wants the yuan to be allowed to trade freely. However, Beijing has made it clear that it will take its time and tread carefully before allowing the yuan to rise further in value.

Bag of words: China, trade, surplus, commerce, exports, imports, US, yuan, bank, domestic, foreign, increase, trade, value

A clarification: definition of “BoW”
  • Independent features
  • Histogram representation of image
  • Discrete appearance representation
Representation

[Pipeline diagram: 1. feature detection & representation → 2. codewords dictionary → 3. image representation]

1. Feature detection and representation
  • Regular grid
    • Vogel & Schiele, 2003
    • Fei-Fei & Perona, 2005
  • Interest point detector
    • Csurka et al., 2004
    • Fei-Fei & Perona, 2005
    • Sivic et al., 2005
1. Feature detection and representation

  • Detect patches
    • [Mikolajczyk & Schmid ’02]
    • [Matas, Chum, Urban & Pajdla ’02]
    • [Sivic & Zisserman ’03]
  • Normalize patch
  • Compute SIFT descriptor [Lowe ’99]

Slide credit: Josef Sivic

1. Feature detection and representation
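As a rough illustration of this first step (a minimal sketch, not the authors' code), the following Python snippet assumes OpenCV's SIFT implementation as a stand-in for the detect-patches / normalize-patch / compute-descriptor pipeline; the function name and image path are hypothetical.

```python
# Minimal sketch of step 1 (feature detection and representation), assuming
# opencv-python is available; cv2.SIFT_create() stands in for the
# detect -> normalize -> SIFT-descriptor pipeline described on the slides.
import cv2
import numpy as np

def extract_descriptors(image_path):
    """Detect interest points and compute one 128-D SIFT descriptor per patch."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(img, None)
    # descriptors has shape (num_patches, 128): one row per detected patch
    return descriptors if descriptors is not None else np.empty((0, 128))
```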


2. Codewords dictionary formation

Vector quantization

Slide credit: Josef Sivic
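The vector quantization step is commonly implemented with k-means clustering over all patch descriptors from the training images; each cluster centre becomes a codeword. A minimal sketch, assuming scikit-learn is available (the dictionary size K and the variable names are illustrative choices, not taken from the slides):

```python
# Sketch of step 2 (codewords dictionary formation) via k-means clustering.
# all_descriptors: array of shape (total_patches, descriptor_dim), stacked
# from the training images; K: number of codewords.
from sklearn.cluster import KMeans

def build_codebook(all_descriptors, K=300, seed=0):
    """Cluster the descriptors; the K cluster centres are the visual codewords."""
    kmeans = KMeans(n_clusters=K, random_state=seed, n_init=10)
    kmeans.fit(all_descriptors)
    return kmeans.cluster_centers_  # shape (K, descriptor_dim)
```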

3. Image representation

[Figure: histogram of codeword frequencies for an image (frequency vs. codewords)]
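Each image is then summarized by that histogram: assign every patch descriptor to its nearest codeword and count occurrences. A sketch under the same assumptions as the snippets above (the helper name is hypothetical):

```python
# Sketch of step 3 (image representation): assign each descriptor to its
# nearest codeword and build a normalized frequency histogram.
import numpy as np

def bow_histogram(descriptors, codebook):
    """Normalized histogram of codeword frequencies for one image."""
    K = codebook.shape[0]
    if descriptors is None or len(descriptors) == 0:
        return np.zeros(K)
    # squared distance from every descriptor to every codeword
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    assignments = d2.argmin(axis=1)          # nearest codeword per patch
    hist = np.bincount(assignments, minlength=K).astype(float)
    return hist / hist.sum()
```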

Representation

[Pipeline diagram: 1. feature detection & representation → 2. codewords dictionary → 3. image representation → category models (and/or) classifiers]

Learning and Recognition

[Diagram: codewords dictionary and category models (and/or) classifiers → category decision]

Learning and Recognition

  • Generative method: topic models
  • Discriminative method: SVM

Probabilistic Latent Semantic Analysis (pLSA)

  • Background: Hofmann, 2001; Blei, Ng & Jordan, 2004 (Latent Dirichlet Allocation)
  • Object categorization: Sivic et al. 2005; Sudderth et al. 2005
  • Natural scene categorization: Fei-Fei et al. 2005

In this case, pLSA is used for unsupervised learning from image collections.

Probabilistic Latent Semantic Analysis

[Plate diagram: each of D images d contains N patches; each patch has a latent topic z that generates its visual word w; an example topic is “face”]

  • dj: the jth image in an image collection
  • z: latent theme or topic of the patch
  • N: number of patches per image
  • wi: visual word of patch

Sivic et al. ICCV 2005

[Figure: feature detection and representation applied to an image collection yields the observed word–document distributions P(wi|dj)]

The pLSA model

[Figure: the observed codeword distributions per image are decomposed into theme distributions per image and codeword distributions per theme (topic)]

Slide credit: Josef Sivic
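In symbols, this is the standard pLSA factorization (reconstructed here to match the figure labels; the formula itself is not preserved in the transcript):

```latex
% Observed codeword distribution for image d_j as a mixture of K latent themes:
% P(w|z) = codeword distribution per theme, P(z|d) = theme distribution per image.
P(w_i \mid d_j) \;=\; \sum_{k=1}^{K} P(w_i \mid z_k)\, P(z_k \mid d_j)
```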

Learning the pLSA parameters

Maximize the likelihood of the data using EM, given the observed counts of word i in document j (M = number of codewords, N = number of images).

Slide credit: Josef Sivic
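The equation on the original slide did not survive extraction; the standard pLSA objective and EM updates are reconstructed below, with n(wi, dj) denoting the observed counts:

```latex
% Likelihood of the data, maximized over P(w|z) and P(z|d):
\mathcal{L} \;=\; \prod_{j=1}^{N}\prod_{i=1}^{M} P(w_i \mid d_j)^{\,n(w_i,\, d_j)}

% E-step: posterior over topics for each (word, document) pair
P(z_k \mid w_i, d_j) \;=\;
  \frac{P(w_i \mid z_k)\, P(z_k \mid d_j)}
       {\sum_{l} P(w_i \mid z_l)\, P(z_l \mid d_j)}

% M-step: re-estimate the multinomials (each renormalized to sum to 1)
P(w_i \mid z_k) \;\propto\; \sum_{j} n(w_i, d_j)\, P(z_k \mid w_i, d_j)
\qquad
P(z_k \mid d_j) \;\propto\; \sum_{i} n(w_i, d_j)\, P(z_k \mid w_i, d_j)
```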


Recognition using pLSA

Slide credit: Josef Sivic
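The recognition equation is likewise missing from the transcript. In the fold-in procedure used by Sivic et al. 2005, the learned topic-specific distributions P(w|z) are kept fixed, only the topic mixture of the unseen image is fitted with the same EM iteration, and the image is labelled by its dominant topic; in symbols (reconstructed):

```latex
% Fold-in: P(w_i | z_k) fixed from training; estimate P(z_k | d_test) by EM,
% then assign the image to its dominant topic.
z^{*} \;=\; \arg\max_{k}\; P(z_k \mid d_{\text{test}})
```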


Demo

  • Course website

Demo: learnt parameters

  • Learning the model: do_plsa('config_file_1')
  • Evaluate and visualize the model: do_plsa_evaluation('config_file_1')

[Figure: learnt codeword distributions per theme (topic) and theme distributions per image]

Learning and Recognition

  • Generative method: topic models
  • Discriminative method: SVM

Discriminative methods based on ‘bag of words’ representation
  • Grauman & Darrell, 2005, 2006:
    • SVM w/ Pyramid Match kernels
  • Others
    • Csurka, Bray, Dance & Fan, 2004
    • Serre & Poggio, 2005
Summary: Pyramid match kernel

Approximates the optimal partial matching between sets of features.

  • Pyramid is in feature space; spatial information is not used
  • Efficient to compute – linear in the number of features per image
  • Satisfies the Mercer condition, so it can be used as a kernel in an SVM

Grauman & Darrell, 2005. Slide credit: Kristen Grauman

Pyramid Match (Grauman & Darrell 2005)

Histogram intersection

Slide credit: Kristen Grauman

Pyramid Match (Grauman & Darrell 2005)

[Figure: histogram intersection matches at this level vs. matches already found at the previous level]

The difference in histogram intersections across levels counts the number of new pairs matched at each level.

Slide credit: Kristen Grauman
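Written out in the standard notation of the pyramid match papers (reconstructed rather than copied from the slide), with H_i(X) the level-i histogram of feature set X:

```latex
% Histogram intersection at level i (r indexes the bins):
\mathcal{I}_i \;=\; \sum_{r} \min\!\big(H_i(X)(r),\, H_i(Y)(r)\big)

% Newly matched pairs at level i: matches found at level i that were not
% already matched at the finer level i-1.
N_i \;=\; \mathcal{I}_i - \mathcal{I}_{i-1}
```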

Pyramid match kernel

[Equation figure: the kernel sums, over the histogram pyramids, the number of newly matched pairs at each level i, weighted by a measure of the difficulty of a match at level i]

  • Weights inversely proportional to bin size
  • Normalize kernel values to avoid favoring large sets

Slide credit: Kristen Grauman
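Putting the pieces together, the kernel takes the following standard form (a reconstruction of the missing equation; the 1/2^i weighting is the choice used in Grauman & Darrell 2005, where bin sizes double at each level):

```latex
% Pyramid match kernel between histogram pyramids \Psi(X) and \Psi(Y):
K_{\Delta}\big(\Psi(X), \Psi(Y)\big) \;=\; \sum_{i=0}^{L} w_i\, N_i,
\qquad w_i = \frac{1}{2^{i}}
% w_i is inversely proportional to the bin size at level i (a coarse-level
% match is weaker evidence); the kernel values are then normalized, e.g.
% \widetilde{K}(P, Q) = K(P, Q) / \sqrt{K(P, P)\, K(Q, Q)},
% to avoid favoring large feature sets.
```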


Example pyramid match

Level 0

Slide credit: Kristen Grauman


Example pyramid match

Level 1

Slide credit: Kristen Grauman


Example pyramid match

Level 2

Slide credit: Kristen Grauman

Example pyramid match

[Figure: pyramid match compared with the optimal match]

Slide credit: Kristen Grauman

Object recognition results
  • ETH-80 database, 8 object classes

(Eichhorn and Chapelle 2004)

  • Features:
    • Harris detector
    • PCA-SIFT descriptor, d=10

d = descriptor dimension; m = number of features; L = number of levels in the pyramid

Slide credit: Kristen Grauman