Discriminative and Generative Recognition

CS 636 Computer Vision. Nathan Jacobs. Slides adapted from Lazebnik.
Discriminative and generative methods for bags of features

Example task: zebra vs. non-zebra image classification.

Many slides adapted from Fei-Fei Li, Rob Fergus, and Antonio Torralba

Image classification
  • Given the bag-of-features representations of images from different classes, how do we learn a model for distinguishing them?
Discriminative methods
  • Learn a decision rule (classifier) assigning bag-of-features representations of images to different classes

[Figure: a decision boundary in feature space separating zebra from non-zebra examples]

Classification
  • Assign input vector to one of two or more classes
  • Any decision rule divides input space into decision regions separated by decision boundaries
Nearest Neighbor Classifier
  • Assign label of nearest training data point to each test data point

[Figure: Voronoi partitioning of the feature space for two-category 2D and 3D data, from Duda et al.]

Source: D. Lowe

K-Nearest Neighbors

  • For a new point, find the k closest points from training data
  • Labels of the k points “vote” to classify
  • Works well provided there is lots of data and the distance function is good

[Figure: voting among the k = 5 nearest neighbors]

Source: D. Lowe
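A minimal sketch of the classifier described above, assuming NumPy arrays (the function name is my own). Euclidean distance is a placeholder here; for bag-of-features histograms, one of the distance functions on the next slide is usually a better choice:

```python
import numpy as np

def knn_predict(X_train, y_train, x_test, k=5):
    """Label x_test by majority vote among its k nearest training points."""
    dists = np.linalg.norm(X_train - x_test, axis=1)  # Euclidean; swap in any distance
    nearest = np.argsort(dists)[:k]                   # indices of the k closest points
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]                  # majority vote
```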

Functions for comparing histograms

  • L1 distance: D(h1, h2) = Σ_i |h1(i) − h2(i)|
  • χ2 distance: D(h1, h2) = Σ_i (h1(i) − h2(i))² / (h1(i) + h2(i))
  • Quadratic distance (cross-bin): D(h1, h2) = Σ_{i,j} A_ij (h1(i) − h2(i)) (h1(j) − h2(j))

Jan Puzicha, Yossi Rubner, Carlo Tomasi, Joachim M. Buhmann: Empirical Evaluation of Dissimilarity Measures for Color and Texture. ICCV 1999
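The three distances above as a hedged NumPy sketch (function names are my own; some authors include an extra factor of 1/2 in the χ2 distance, and A is the cross-bin similarity matrix):

```python
import numpy as np

def l1_distance(h1, h2):
    # D(h1, h2) = sum_i |h1(i) - h2(i)|
    return np.sum(np.abs(h1 - h2))

def chi2_distance(h1, h2, eps=1e-10):
    # D(h1, h2) = sum_i (h1(i) - h2(i))^2 / (h1(i) + h2(i)); eps guards empty bins
    return np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def quadratic_distance(h1, h2, A):
    # Cross-bin: D(h1, h2) = (h1 - h2)^T A (h1 - h2), A = bin-similarity matrix
    d = h1 - h2
    return d @ A @ d
```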

Earth Mover's Distance

  • Each image is represented by a signature S consisting of a set of centers {m_i} and weights {w_i}
  • Centers can be codewords from a universal vocabulary, clusters of features in the image, or individual features (in which case quantization is not required)
  • Earth Mover's Distance has the form
      EMD(S1, S2) = (Σ_{i,j} f_ij d(m1_i, m2_j)) / (Σ_{i,j} f_ij)
    where the flows f_ij are given by the solution of a transportation problem

Y. Rubner, C. Tomasi, and L. Guibas: A Metric for Distributions with Applications to Image Databases. ICCV 1998

Moving Earth

Slides by P. Barnum

The Difference?

Difference = (amount moved)?

The Difference?

Difference = (amount moved) × (distance moved)

Earth Mover's Distance

Can be formulated as a linear program: a transportation problem.

Y. Rubner, C. Tomasi, and L. J. Guibas. A Metric for Distributions with Applications to Image Databases. ICCV 1998
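To make the linear-program formulation concrete, here is a hedged sketch (not Rubner et al.'s implementation) that solves the transportation problem with scipy.optimize.linprog for two signatures of equal total weight:

```python
import numpy as np
from scipy.optimize import linprog
from scipy.spatial.distance import cdist

def emd(centers1, w1, centers2, w2):
    """Earth Mover's Distance; assumes sum(w1) == sum(w2)."""
    D = cdist(centers1, centers2)          # ground distances d(m1_i, m2_j)
    n, m = D.shape
    c = D.ravel()                          # objective: sum_ij f_ij * d_ij
    # Transportation constraints: sum_j f_ij = w1[i], sum_i f_ij = w2[j]
    # (one constraint is redundant when totals match; the solver tolerates it)
    A_eq, b_eq = [], []
    for i in range(n):
        row = np.zeros(n * m); row[i * m:(i + 1) * m] = 1
        A_eq.append(row); b_eq.append(w1[i])
    for j in range(m):
        col = np.zeros(n * m); col[j::m] = 1
        A_eq.append(col); b_eq.append(w2[j])
    res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=(0, None))
    return res.fun / np.sum(w1)            # cost normalized by total flow
```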

Why might EMD be better or worse than these?
  • L1 distance
  • χ2 distance
  • Quadratic distance (cross-bin)

Jan Puzicha, Yossi Rubner, Carlo Tomasi, Joachim M. Buhmann: Empirical Evaluation of Dissimilarity Measures for Color and Texture. ICCV 1999

Recall: K-Nearest Neighbors

  • For a new point, find the k closest points from training data
  • Labels of the k points “vote” to classify
  • Works well provided there is lots of data and the distance function is good

[Figure: voting among the k = 5 nearest neighbors]

Source: D. Lowe

Linear classifiers
  • Find linear function (hyperplane) to separate positive and negative examples

Why not just use KNN?

Which hyperplane is best?

Support vector machines
  • Find hyperplane that maximizes the margin between the positive and negative examples

C. Burges, A Tutorial on Support Vector Machines for Pattern Recognition, Data Mining and Knowledge Discovery, 1998

Support vector machines
  • Find hyperplane that maximizes the margin between the positive and negative examples

For support vectors, y_i (w · x_i + b) = 1

Distance between point and hyperplane: |w · x_i + b| / ||w||

Therefore, the margin is 2 / ||w||

[Figure: support vectors and the margin between the two classes]

C. Burges, A Tutorial on Support Vector Machines for Pattern Recognition, Data Mining and Knowledge Discovery, 1998
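A hedged sketch of the margin computation using scikit-learn on a toy 2-D dataset (a large C approximates the hard-margin problem):

```python
import numpy as np
from sklearn.svm import SVC

X = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.5], [3.0, 3.0]])
y = np.array([-1, -1, 1, 1])

clf = SVC(kernel='linear', C=1e6).fit(X, y)   # large C ~ hard margin
w, b = clf.coef_[0], clf.intercept_[0]
print('margin =', 2.0 / np.linalg.norm(w))    # margin is 2 / ||w||
for sv in clf.support_vectors_:               # for support vectors,
    print(np.dot(w, sv) + b)                  # w . x_i + b is close to +/-1
```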

What if they aren't separable?

Separable formulation. Quadratic optimization problem:

    minimize (1/2) ||w||²   subject to   y_i (w · x_i + b) ≥ 1

Non-separable formulation. Introduce slack variables s_i and a penalty weight C. Quadratic optimization problem:

    minimize (1/2) ||w||² + C Σ_i s_i   subject to   y_i (w · x_i + b) ≥ 1 − s_i,   s_i ≥ 0

Introducing the “Kernel Trick”

  • Notice: the solution is a combination of the support vectors, w = Σ_i α_i y_i x_i (α_i: learned weights; x_i: support vectors)

C. Burges, A Tutorial on Support Vector Machines for Pattern Recognition, Data Mining and Knowledge Discovery, 1998

Introducing the “Kernel Trick”

  • Recall: w · x_i + b = y_i for any support vector
  • Classification function (decision boundary): f(x) = Σ_i α_i y_i (x_i · x) + b
  • Notice that it relies on an inner product between the test point x and the support vectors x_i
  • Solving the optimization problem can also be done with only the inner products x_i · x_j between all pairs of training points

C. Burges, A Tutorial on Support Vector Machines for Pattern Recognition, Data Mining and Knowledge Discovery, 1998

Nonlinear SVMs

  • Datasets that are linearly separable work out great
  • But what if the dataset is just too hard?
  • We can map it to a higher-dimensional space, e.g. lifting 1-D points x to (x, x²)

[Figure: 1-D data that is not linearly separable becomes separable after the lifting x → (x, x²)]

Slide credit: Andrew Moore

Nonlinear SVMs

  • General idea: the original input space can always be mapped to some higher-dimensional feature space where the training set is separable, via a lifting transformation Φ: x → φ(x)

Slide credit: Andrew Moore

Nonlinear SVMs

  • The kernel trick: instead of explicitly computing the lifting transformation φ(x), define a kernel function K such that K(x_i, x_j) = φ(x_i) · φ(x_j)
  • This gives a nonlinear decision boundary in the original feature space: f(x) = Σ_i α_i y_i K(x_i, x) + b

C. Burges, A Tutorial on Support Vector Machines for Pattern Recognition, Data Mining and Knowledge Discovery, 1998
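A tiny numeric check of the kernel trick, assuming the degree-2 polynomial kernel K(x, y) = (x · y)² and its explicit 2-D lifting φ(x) = (x1², x2², √2 x1 x2); the kernel value equals the inner product of lifted features, so the lifting never has to be computed:

```python
import numpy as np

def phi(x):
    # explicit lifting for the degree-2 polynomial kernel in 2-D
    return np.array([x[0]**2, x[1]**2, np.sqrt(2) * x[0] * x[1]])

x = np.array([1.0, 2.0])
y = np.array([3.0, 0.5])
assert np.isclose(np.dot(x, y) ** 2, np.dot(phi(x), phi(y)))  # K(x,y) = phi(x).phi(y)
```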

Kernels for bags of features

  • Histogram intersection kernel: I(h1, h2) = Σ_i min(h1(i), h2(i))
  • Generalized Gaussian kernel: K(h1, h2) = exp(−D(h1, h2)² / A)
  • D can be Euclidean distance, χ2 distance, Earth Mover’s Distance, etc.

J. Zhang, M. Marszalek, S. Lazebnik, and C. Schmid, Local Features and Kernels for Classification of Texture and Object Categories: A Comprehensive Study, IJCV 2007
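A hedged sketch of both kernels on L1-normalized histograms (function names are my own; the exponent convention for the generalized Gaussian varies by author, so treat the squaring and the scale A as assumptions):

```python
import numpy as np

def intersection_kernel(h1, h2):
    # I(h1, h2) = sum_i min(h1(i), h2(i))
    return np.sum(np.minimum(h1, h2))

def chi2_distance(h1, h2, eps=1e-10):
    return np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def generalized_gaussian_kernel(h1, h2, A=1.0, D=chi2_distance):
    # K(h1, h2) = exp(-D(h1, h2)^2 / A); D can be Euclidean, chi-squared, EMD, ...
    return np.exp(-D(h1, h2) ** 2 / A)
```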

Summary: SVMs for image classification
  • Pick an image representation (in our case, bag of features)
  • Pick a kernel function for that representation
  • Compute the matrix of kernel values between every pair of training examples
  • Feed the kernel matrix into your favorite SVM solver to obtain support vectors and weights
  • At test time: compute kernel values for your test example and each support vector, and combine them with the learned weights to get the value of the decision function
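The five steps above as a hedged end-to-end sketch with scikit-learn's precomputed-kernel interface (toy random histograms stand in for real bag-of-features vectors):

```python
import numpy as np
from sklearn.svm import SVC

def intersection_kernel(h1, h2):
    return np.sum(np.minimum(h1, h2))

def kernel_matrix(A, B, k):
    """Matrix of kernel values between rows of A and rows of B."""
    return np.array([[k(a, b) for b in B] for a in A])

rng = np.random.default_rng(0)
X_train = rng.dirichlet(np.ones(10), size=20)   # toy L1-normalized histograms
y_train = np.repeat([0, 1], 10)
X_test = rng.dirichlet(np.ones(10), size=5)

K_train = kernel_matrix(X_train, X_train, intersection_kernel)
clf = SVC(kernel='precomputed').fit(K_train, y_train)

# Test time: kernel values between each test example and every training example
K_test = kernel_matrix(X_test, X_train, intersection_kernel)
print(clf.predict(K_test))
```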
What about multi-class SVMs?
  • Unfortunately, there is no “definitive” multi-class SVM formulation
  • In practice, we have to obtain a multi-class SVM by combining multiple two-class SVMs
  • One vs. others
    • Training: learn an SVM for each class vs. the others
    • Testing: apply each SVM to the test example and assign it the class of the SVM that returns the highest decision value
  • One vs. one
    • Training: learn an SVM for each pair of classes
    • Testing: each learned SVM “votes” for a class to assign to the test example
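Both reductions are available as wrappers in scikit-learn; a hedged sketch on toy data:

```python
from sklearn.datasets import make_classification
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=90, n_features=5, n_informative=4,
                           n_redundant=0, n_classes=3,
                           n_clusters_per_class=1, random_state=0)

ovr = OneVsRestClassifier(SVC(kernel='linear')).fit(X, y)  # one vs. others
ovo = OneVsOneClassifier(SVC(kernel='linear')).fit(X, y)   # one vs. one, voting
print(ovr.predict(X[:3]), ovo.predict(X[:3]))
```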
SVMs: Pros and cons
  • Pros
    • Many publicly available SVM packages: http://www.kernel-machines.org/software
    • Kernel-based framework is very powerful, flexible
    • SVMs work very well in practice, even with very small training sample sizes
  • Cons
    • No “direct” multi-class SVM, must combine two-class SVMs
    • Computation, memory
      • During training time, must compute matrix of kernel values for every pair of examples
      • Learning can take a very long time for large-scale problems
Summary: Discriminative methods
  • Nearest-neighbor and k-nearest-neighbor classifiers
    • L1 distance, χ2 distance, quadratic distance, Earth Mover’s Distance
  • Support vector machines
    • Linear classifiers
    • Margin maximization
    • The kernel trick
    • Kernel functions: histogram intersection, generalized Gaussian, pyramid match
    • Multi-class
  • Of course, there are many other classifiers out there
    • Neural networks, boosting, decision trees, …
Generative learning methods for bags of features

  • Model the probability of a bag of features given a class
  • Bayes rule: posterior ∝ likelihood × prior, i.e. p(c | f1, …, fN) ∝ p(f1, …, fN | c) p(c)

Many slides adapted from Fei-Fei Li, Rob Fergus, and Antonio Torralba

Generative methods
  • We will cover two models, both inspired by text document analysis:
    • Naïve Bayes
    • Probabilistic Latent Semantic Analysis
The Naïve Bayes model

  • Start with the likelihood p(f1, …, fN | c)

Csurka et al. 2004

The Naïve Bayes model

  • Assume that each feature is conditionally independent given the class:
    p(f1, …, fN | c) = Π_{i=1..N} p(fi | c)
    (fi: ith feature in the image; N: number of features in the image)

Csurka et al. 2004

The Naïve Bayes model

  • Grouping identical visual words, the likelihood becomes:
    p(f1, …, fN | c) = Π_{i=1..N} p(fi | c) = Π_{j=1..M} p(wj | c)^{n(wj)}
    (wj: jth visual word in the vocabulary; M: size of visual vocabulary; n(wj): number of features of type wj in the image)

Csurka et al. 2004

The Naïve Bayes model

  • Estimate the word distributions by frequency counting:
    p(wj | c) = (no. of features of type wj in training images of class c) / (total no. of features in training images of class c)

Csurka et al. 2004

The Naïve Bayes model

  • Smoothed estimate (pseudocounts to avoid zero counts):
    p(wj | c) = (no. of features of type wj in training images of class c + 1) / (total no. of features in training images of class c + M)

Csurka et al. 2004

The Naïve Bayes model

  • Maximum A Posteriori decision:
    c* = argmax_c p(c) Π_{j=1..M} p(wj | c)^{n(wj)}
    (compute the log of the likelihood instead of the likelihood itself in order to avoid underflow)

Csurka et al. 2004
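A hedged NumPy sketch of the whole recipe: pseudocount estimation of p(wj | c), then the MAP decision in log space to avoid underflow (function names are my own):

```python
import numpy as np

def train_nb(counts, labels, n_classes):
    """counts: (n_images, M) matrix of visual word counts n(wj) per image."""
    M = counts.shape[1]
    log_prior = np.zeros(n_classes)
    log_pw = np.zeros((n_classes, M))
    for c in range(n_classes):
        wc = counts[labels == c].sum(axis=0)             # word counts in class c
        log_pw[c] = np.log((wc + 1.0) / (wc.sum() + M))  # pseudocounts (+1, +M)
        log_prior[c] = np.log(np.mean(labels == c))      # class prior p(c)
    return log_prior, log_pw

def classify_nb(x, log_prior, log_pw):
    # MAP decision: argmax_c log p(c) + sum_j n(wj) log p(wj | c)
    return np.argmax(log_prior + log_pw @ x)
```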

The Naïve Bayes model

  • “Graphical model”: the class c generates each visual word w independently, repeated N times (a plate over the N features in the image)

Csurka et al. 2004

Probabilistic Latent Semantic Analysis

[Figure: an image decomposes as a weighted mixture of “visual topics”:
Image = p1 × zebra + p2 × grass + p3 × tree]

T. Hofmann, Probabilistic Latent Semantic Analysis, UAI 1999

Probabilistic Latent Semantic Analysis

  • Unsupervised technique
  • Two-level generative model: a document is a mixture of topics, and each topic has its own characteristic word distribution
  • Graphical model: document d → topic z → word w, with distributions P(z|d) and P(w|z)
T. Hofmann, Probabilistic Latent Semantic Analysis, UAI 1999


The pLSA model

    p(wi | dj) = Σ_{k=1..K} p(wi | zk) p(zk | dj)

  • p(wi | dj): probability of word i in document j (known)
  • p(wi | zk): probability of word i given topic k (unknown)
  • p(zk | dj): probability of topic k given document j (unknown)

The pLSA model as matrix decomposition

    [ p(wi | dj) ]  =  [ p(wi | zk) ] × [ p(zk | dj) ]

  • Observed codeword distributions per image: M × N matrix (words × documents)
  • Codeword distributions per topic (class): M × K matrix (words × topics)
  • Class (topic) distributions per image: K × N matrix (topics × documents)

Learning pLSA parameters

Maximize the likelihood of the data:

    L = Π_{i=1..M} Π_{j=1..N} p(wi | dj)^{n(wi, dj)}

where n(wi, dj) is the observed count of word i in document j, M is the number of codewords, and N is the number of images. The maximization is carried out with Expectation-Maximization; a sketch follows below.

Slide credit: Josef Sivic
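A hedged NumPy sketch of the EM iterations that perform this maximization (dense arrays, fixed iteration count, no convergence check; a real implementation would monitor the log-likelihood):

```python
import numpy as np

def plsa_em(n_wd, K, iters=100, seed=0):
    """n_wd: (M, N) count matrix n(wi, dj); K: number of topics."""
    rng = np.random.default_rng(seed)
    M, N = n_wd.shape
    p_wz = rng.random((M, K)); p_wz /= p_wz.sum(axis=0)   # p(w|z)
    p_zd = rng.random((K, N)); p_zd /= p_zd.sum(axis=0)   # p(z|d)
    for _ in range(iters):
        # E-step: responsibilities p(z|w,d) proportional to p(w|z) p(z|d)
        p_zwd = p_wz[:, :, None] * p_zd[None, :, :]       # shape (M, K, N)
        p_zwd /= p_zwd.sum(axis=1, keepdims=True) + 1e-12
        # M-step: reweight responsibilities by the observed counts n(w,d)
        nz = n_wd[:, None, :] * p_zwd                     # shape (M, K, N)
        p_wz = nz.sum(axis=2); p_wz /= p_wz.sum(axis=0)   # p(w|z) over words
        p_zd = nz.sum(axis=0); p_zd /= p_zd.sum(axis=0)   # p(z|d) over topics
    return p_wz, p_zd

# Toy usage: 3 topics on a random 50-word x 20-document count matrix
counts = np.random.default_rng(1).integers(0, 5, size=(50, 20))
p_wz, p_zd = plsa_em(counts, K=3)
print(p_zd.argmax(axis=0))   # most likely topic per document (cf. inference slide)
```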

Inference

  • Finding the most likely topic (class) for an image: z* = argmax_z p(z | d)
  • Finding the most likely topic (class) for a visual word in a given image: z* = argmax_z p(z | w, d), where p(z | w, d) ∝ p(w | z) p(z | d)
Topic discovery in images

J. Sivic, B. Russell, A. Efros, A. Zisserman, B. Freeman, Discovering Objects and their Location in Images, ICCV 2005

Application of pLSA: Action recognition

[Figure: space-time interest points]

Juan Carlos Niebles, Hongcheng Wang and Li Fei-Fei, Unsupervised Learning of Human Action Categories Using Spatial-Temporal Words, IJCV 2008.

Application of pLSA: Action recognition

Juan Carlos Niebles, Hongcheng Wang and Li Fei-Fei, Unsupervised Learning of Human Action Categories Using Spatial-Temporal Words, IJCV 2008.

pLSA model

  • wi = spatial-temporal word
  • dj = video
  • n(wi, dj) = co-occurrence table (# of occurrences of word wi in video dj)
  • z = topic, corresponding to an action

    p(wi | dj) = Σ_{k=1..K} p(wi | zk) p(zk | dj)

  • p(wi | dj): probability of word i in video j (known)
  • p(wi | zk): probability of word i given topic k (unknown)
  • p(zk | dj): probability of topic k given video j (unknown)

Summary: Generative models
  • Naïve Bayes
    • Unigram models in document analysis
    • Assumes conditional independence of words given class
    • Parameter estimation: frequency counting
  • Probabilistic Latent Semantic Analysis
    • Unsupervised technique
    • Each document is a mixture of topics (image is a mixture of classes)
    • Can be thought of as matrix decomposition
    • Parameter estimation: Expectation-Maximization
Summary
  • Recognition is the “grand challenge” of computer vision
  • History
    • Geometric methods
    • Appearance-based methods
    • Sliding window approaches
    • Local features
    • Parts-and-shape approaches
    • Bag-of-features approaches
  • Statistical recognition concepts
    • Generative vs. discriminative models
    • Generalization, overfitting, underfitting
    • Supervision
  • Tasks, datasets
Bonus Slides

Empirical Evaluation of Dissimilarity Measures for Color and Texture

Yossi Rubner, Jan Puzicha, Carlo Tomasi, and Joachim M. Buhmann