
Digital Image Processing Lecture 25: Object Recognition June 15, 2005


Presentation Transcript


  1. Digital Image Processing Lecture 25: Object Recognition June 15, 2005 Prof. Charlene Tsai

  2. Review
     • Matching
        • Specified by the mean vector of each class
     • Optimum statistical classifiers
        • Probabilistic approach
        • Bayes classifier for Gaussian pattern classes
           • Specified by the mean vector and covariance matrix of each class
     • Neural network

  3. Foundation
     • The loss $L_{ij}$ is the loss incurred if $x$ actually came from class $\omega_i$ but was assigned to class $\omega_j$
     • The probability that $x$ comes from class $\omega_j$ is $p(\omega_j/x)$
     • The average loss (risk) incurred in assigning $x$ to $\omega_j$ is $r_j(x) = \sum_{k=1}^{W} L_{kj}\, p(\omega_k/x)$
     • Using basic probability theory, $p(A/B)p(B) = p(B/A)p(A)$, we get $r_j(x) = \frac{1}{p(x)} \sum_{k=1}^{W} L_{kj}\, p(x/\omega_k) P(\omega_k)$

  4. (cont'd)
     • Because $1/p(x)$ is positive and common to all $r_j(x)$, it can be dropped without affecting the comparison among the $r_j(x)$:
       $r_j(x) = \sum_{k=1}^{W} L_{kj}\, p(x/\omega_k) P(\omega_k)$   (Eqn #1)
     • The classifier assigns $x$ to the class with the smallest average loss --- the Bayes classifier (see the numeric sketch below)
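
To make Eqn #1 concrete, here is a minimal numeric sketch (not from the lecture; the loss matrix, class-conditional density values, and priors are invented for illustration): the average risk r_j(x) is evaluated for every class and x is assigned to the class with the smallest risk.

```python
import numpy as np

# Hypothetical two-class setup: L[k, j] is the loss of assigning a pattern
# from class k to class j; p_x_given_w[k] is p(x/w_k) evaluated at some x.
L = np.array([[0.0, 1.0],
              [5.0, 0.0]])            # mis-assigning a class-2 pattern costs more
p_x_given_w = np.array([0.30, 0.05])  # class-conditional densities at x (made up)
P_w = np.array([0.5, 0.5])            # prior probabilities P(w_k)

# Eqn #1: r_j(x) = sum_k L[k, j] * p(x/w_k) * P(w_k)   (the 1/p(x) factor dropped)
r = L.T @ (p_x_given_w * P_w)
print("average risks r_j(x):", r)
print("assign x to class", np.argmin(r) + 1)   # Bayes: smallest average loss
```

With a symmetric 0-1 loss this x would be assigned to class 1 by a wide margin; the asymmetric loss above penalizes mistakes on class 2 more heavily and narrows that gap.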

  5. The Loss Function ($L_{ij}$)
     • Zero loss for a correct decision, and the same nonzero loss (say 1) for any incorrect decision:
       $L_{ij} = 1 - \delta_{ij}$, where $\delta_{ij} = 1$ if $i = j$ and $\delta_{ij} = 0$ otherwise   (Eqn #2)

  6. Bayes Classifier
     • Substituting Eqn #2 into Eqn #1 yields $r_j(x) = \sum_{k=1}^{W} (1 - \delta_{kj})\, p(x/\omega_k) P(\omega_k) = p(x) - p(x/\omega_j) P(\omega_j)$
     • The classifier assigns $x$ to class $\omega_i$ if $p(x/\omega_i) P(\omega_i) > p(x/\omega_j) P(\omega_j)$ for all $j \neq i$
     • $p(x)$ is common to all classes, so it is dropped

  7. Decision Function
     • Using the Bayes classifier with a 0-1 loss function, the decision function for class $\omega_j$ is $d_j(x) = p(x/\omega_j) P(\omega_j)$, $j = 1, 2, \dots, W$ (see the sketch below)
     • Now the questions are:
        • How to get $p(x/\omega_j)$?
        • How to estimate $P(\omega_j)$?
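
A short sketch of the resulting decision rule, assuming the class-conditional densities are available as callables and the priors are estimated from training-sample frequencies (one common answer to the second question). The Gaussian parameters and sample counts below are made up for illustration.

```python
import numpy as np

def gauss_pdf(x, m, sigma):
    """1-D Gaussian density, used here as an illustrative p(x/w_j)."""
    return np.exp(-(x - m) ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)

def bayes_decision(x, densities, priors):
    """Assign x to the class with the largest d_j(x) = p(x/w_j) * P(w_j)."""
    d = np.array([p(x) for p in densities]) * priors
    return int(np.argmax(d)) + 1                 # class labels 1..W

# Hypothetical class-conditional densities ...
densities = [lambda x: gauss_pdf(x, 0.0, 1.0),   # p(x/w_1)
             lambda x: gauss_pdf(x, 3.0, 1.0)]   # p(x/w_2)

# ... and priors estimated from the relative frequency of training samples.
n_train = np.array([40, 60])                     # samples per class (made up)
priors = n_train / n_train.sum()                 # P(w_1) = 0.4, P(w_2) = 0.6

print(bayes_decision(1.2, densities, priors))    # -> 1
print(bayes_decision(2.0, densities, priors))    # -> 2
```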

  8. Using Gaussian Distribution
     • The most prevalent (assumed) form for $p(x/\omega_j)$ is the Gaussian probability density function.
     • Consider first a 1-D problem with 2 pattern classes ($W = 2$):
       $p(x/\omega_j) = \frac{1}{\sqrt{2\pi}\,\sigma_j}\, e^{-\frac{(x - m_j)^2}{2\sigma_j^2}}$, where $m_j$ is the mean and $\sigma_j^2$ the variance of class $\omega_j$

  9. Example: where is the decision boundary between the two 1-D Gaussian classes? (The slide shows the two densities and compares three cases; a numerical sketch follows below.)
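
A small numerical sketch (with illustrative means, variances, and priors of my own choosing, not the values from the slide): the decision boundary is the x at which d_1(x) = p(x/ω_1)P(ω_1) and d_2(x) = p(x/ω_2)P(ω_2) are equal.

```python
import numpy as np

def gauss_pdf(x, m, sigma):
    """1-D Gaussian density with mean m and standard deviation sigma."""
    return np.exp(-(x - m) ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)

# Illustrative parameters (not the slide's): class 1 ~ N(0,1), class 2 ~ N(3,1)
m1, s1, m2, s2 = 0.0, 1.0, 3.0, 1.0

for P1 in (0.5, 0.9):                       # equal vs. unequal priors
    P2 = 1.0 - P1
    xs = np.linspace(m1, m2, 100001)
    diff = gauss_pdf(xs, m1, s1) * P1 - gauss_pdf(xs, m2, s2) * P2
    boundary = xs[np.argmin(np.abs(diff))]  # x where d1(x) == d2(x)
    print(f"P(w1)={P1:.1f}: decision boundary near x = {boundary:.3f}")

# With equal priors and equal variances the boundary sits midway between the
# means; a larger P(w1) pushes it toward the class-2 mean.
```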

  10. N-D Gaussian
     • For the $j$th pattern class, $p(\mathbf{x}/\omega_j) = \frac{1}{(2\pi)^{n/2} |\mathbf{C}_j|^{1/2}}\, e^{-\frac{1}{2}(\mathbf{x} - \mathbf{m}_j)^T \mathbf{C}_j^{-1} (\mathbf{x} - \mathbf{m}_j)}$
     • where $\mathbf{m}_j = E_j[\mathbf{x}]$ is the mean vector and $\mathbf{C}_j = E_j[(\mathbf{x} - \mathbf{m}_j)(\mathbf{x} - \mathbf{m}_j)^T]$ is the covariance matrix of class $\omega_j$
     • Remember these from Principal Component Analysis?

  11. (cont'd)
     • Working with the logarithm of the decision function:
       $d_j(\mathbf{x}) = \ln\big[p(\mathbf{x}/\omega_j) P(\omega_j)\big] = \ln P(\omega_j) - \frac{1}{2}\ln|\mathbf{C}_j| - \frac{1}{2}(\mathbf{x} - \mathbf{m}_j)^T \mathbf{C}_j^{-1} (\mathbf{x} - \mathbf{m}_j)$
       (the constant term $\frac{n}{2}\ln 2\pi$ is dropped; see the sketch below)
     • If all covariance matrices are equal (common covariance $\mathbf{C}_j = \mathbf{C}$ for all $j$), then $d_j(\mathbf{x}) = \ln P(\omega_j) + \mathbf{x}^T \mathbf{C}^{-1} \mathbf{m}_j - \frac{1}{2}\mathbf{m}_j^T \mathbf{C}^{-1} \mathbf{m}_j$
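
A sketch of the N-D case (not code from the course): the mean vector and covariance matrix of each class are estimated from training samples, and the logarithmic decision function above is evaluated for a test point. The 2-D synthetic data, class parameters, and equal priors are all invented.

```python
import numpy as np

def train_gaussian(samples):
    """Estimate the mean vector m_j and covariance matrix C_j of one class."""
    m = samples.mean(axis=0)
    C = np.cov(samples, rowvar=False)
    return m, C

def log_decision(x, m, C, prior):
    """d_j(x) = ln P(w_j) - 0.5*ln|C_j| - 0.5*(x - m_j)^T C_j^{-1} (x - m_j)."""
    diff = x - m
    return (np.log(prior)
            - 0.5 * np.log(np.linalg.det(C))
            - 0.5 * diff @ np.linalg.solve(C, diff))

# Synthetic 2-D training data for two classes (for illustration only).
rng = np.random.default_rng(0)
X1 = rng.multivariate_normal([0, 0], [[1, 0.3], [0.3, 1]], size=200)
X2 = rng.multivariate_normal([4, 4], [[1, -0.2], [-0.2, 1]], size=200)

params = [train_gaussian(X1), train_gaussian(X2)]
priors = [0.5, 0.5]

x = np.array([1.0, 0.5])
d = [log_decision(x, m, C, P) for (m, C), P in zip(params, priors)]
print("assign x to class", int(np.argmax(d)) + 1)   # largest d_j(x) wins
```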

  12. For C = I
     • If $\mathbf{C} = \mathbf{I}$ (identity matrix) and $P(\omega_j) = 1/W$, we get $d_j(\mathbf{x}) = \mathbf{x}^T \mathbf{m}_j - \frac{1}{2}\mathbf{m}_j^T \mathbf{m}_j$, which is the minimum distance classifier
     • Gaussian pattern classes satisfying these conditions are spherical clouds of identical shape in N-D space.

  13. Example in Gonzalez (pg 709), showing the resulting decision boundary (figure on the slide).

  14. (cont'd)
     • Dropping $\ln P(\omega_j)$, which is common to all classes when the priors are equal
     • Assuming a common covariance matrix $\mathbf{C}$, we get $d_j(\mathbf{x}) = \mathbf{x}^T \mathbf{C}^{-1} \mathbf{m}_j - \frac{1}{2}\mathbf{m}_j^T \mathbf{C}^{-1} \mathbf{m}_j$
     • The decision surface separating classes $\omega_i$ and $\omega_j$ is $d_i(\mathbf{x}) - d_j(\mathbf{x}) = 0$ (see the sketch below)
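
A minimal sketch of the C = I, equal-prior special case from slide 12 (the mean vectors and test point are invented): it classifies a point with the minimum-distance decision function and prints the linear decision surface d_1(x) − d_2(x) = 0, which for two classes is the perpendicular bisector of the segment joining the two means.

```python
import numpy as np

def d(x, m):
    """Minimum-distance decision function d_j(x) = x^T m_j - 0.5 * m_j^T m_j."""
    return x @ m - 0.5 * m @ m

# Hypothetical mean vectors for two 2-D pattern classes.
m1 = np.array([1.0, 1.0])
m2 = np.array([5.0, 3.0])

x = np.array([2.0, 1.0])
cls = 1 if d(x, m1) > d(x, m2) else 2
print("x assigned to class", cls)

# Decision surface d_1(x) - d_2(x) = 0, i.e.
# (m1 - m2)^T x - 0.5 * (m1^T m1 - m2^T m2) = 0
w = m1 - m2
b = -0.5 * (m1 @ m1 - m2 @ m2)
print(f"decision surface: {w[0]:.1f}*x1 + {w[1]:.1f}*x2 + {b:.1f} = 0")
```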

  15. Neural Network
     • Simulates brain activity, in which the elemental computing elements are treated as neurons.
     • This line of research dates back to the early 1940s.
     • The perceptron learns a linear decision function that separates 2 training sets.

  16. Perceptron for 2 Pattern Classes (figure on the slide): the perceptron forms the linear decision function $d(\mathbf{x}) = \sum_{i=1}^{n} w_i x_i + w_{n+1}$ and thresholds the result.

  17. (cont'd)
     • The coefficients $w_i$ are the weights, which are analogous to synapses in the human neural system.
     • When $d(\mathbf{x}) > 0$ the output is +1 and the pattern $\mathbf{x}$ belongs to class $\omega_1$; the reverse is true when $d(\mathbf{x}) < 0$ (class $\omega_2$).
     • This is as far as we go in this course.
     • This concept has been adopted in many real systems where the underlying distributions are unknown. (A training sketch follows below.)
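
As a closing illustration (not part of the original slides), here is a sketch of the classic fixed-increment perceptron training rule for two linearly separable classes; the toy 2-D training points, targets, and learning rate are my own choices.

```python
import numpy as np

# Toy linearly separable 2-D training sets (invented for illustration).
X = np.array([[0.0, 0.0], [0.5, 1.0], [1.0, 0.5],     # class w1 -> target +1
              [3.0, 3.0], [3.5, 2.5], [4.0, 3.5]])    # class w2 -> target -1
t = np.array([+1, +1, +1, -1, -1, -1])

Xa = np.hstack([X, np.ones((len(X), 1))])  # augment with 1 for the bias w_{n+1}
w = np.zeros(Xa.shape[1])                  # weights (the "synapses")
alpha = 1.0                                # learning rate / correction increment

for _ in range(100):                       # fixed-increment training passes
    errors = 0
    for xa, target in zip(Xa, t):
        if np.sign(xa @ w) != target:      # misclassified (or on the boundary)
            w += alpha * target * xa       # move d(x) toward the correct side
            errors += 1
    if errors == 0:                        # converged: both sets separated
        break

print("weights:", w)
print("d(x) for each sample:", Xa @ w)     # positive for w1, negative for w2
```

For linearly separable training sets this loop stops with a weight vector whose d(x) is positive on one class and negative on the other; if the sets are not separable, it simply exhausts the fixed number of passes.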
