
COMPE 467 - Pattern Recognition



Bayesian Decision Theory


- Covariance matrices for all of the classes are identical, Σi = Σ, but otherwise arbitrary.

In this case the terms (d/2) ln 2π and (1/2) ln |Σi| are independent of i and can be dropped.

So the discriminant functions are linear:

gi(x) = wiᵀx + wi0,  where wi = Σ⁻¹μi and wi0 = −(1/2) μiᵀΣ⁻¹μi + ln P(ωi)

- Covariance matrices are different for each category.

Now only the term (d/2) ln 2π is independent of i.

So the discriminant functions are quadratic:

gi(x) = xᵀWi x + wiᵀx + wi0,  where Wi = −(1/2) Σi⁻¹, wi = Σi⁻¹μi, and wi0 = −(1/2) μiᵀΣi⁻¹μi − (1/2) ln |Σi| + ln P(ωi)
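The general quadratic case can be sketched in a few lines of NumPy. This is a minimal illustration, not code from the lecture; the means, covariances, priors, and the test point are made up:

```python
import numpy as np

# Sketch of the general Gaussian discriminant
# g_i(x) = -1/2 (x - mu_i)' inv(Sigma_i) (x - mu_i) - 1/2 ln|Sigma_i| + ln P(w_i),
# with the class-independent (d/2) ln 2*pi term dropped.
def gaussian_discriminant(x, mu, sigma, prior):
    d = x - mu
    return (-0.5 * d @ np.linalg.inv(sigma) @ d
            - 0.5 * np.log(np.linalg.det(sigma))
            + np.log(prior))

# Made-up two-class problem with different covariance matrices (the quadratic case):
mu1, s1 = np.array([0.0, 0.0]), np.eye(2)
mu2, s2 = np.array([3.0, 3.0]), 2.0 * np.eye(2)
x = np.array([0.5, 0.2])

g1 = gaussian_discriminant(x, mu1, s1, 0.5)
g2 = gaussian_discriminant(x, mu2, s2, 0.5)
label = 1 if g1 > g2 else 2   # decide the class with the larger discriminant
```

Since x lies close to μ1, the first discriminant comes out larger and class 1 is chosen.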

Find the discriminant function for the first class.


Similarly, find the discriminant function for the second class.

The decision boundary:


Using MATLAB we can draw the decision boundary:


>> s = 'x^2-10*x-4*x*y+8*y-1+2*log(2)';

>> ezplot(s)

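For readers without MATLAB, the same boundary can be sketched in NumPy. The expression from the slides is linear in y, so instead of an implicit plotter we can solve for y directly (the x-range chosen below is arbitrary):

```python
import numpy as np

# The decision boundary from the example: x^2 - 10*x - 4*x*y + 8*y - 1 + 2*log(2) = 0
def f(x, y):
    return x**2 - 10*x - 4*x*y + 8*y - 1 + 2*np.log(2)

# Solving the linear-in-y equation for y (valid away from the singularity at x = 2):
#   y = (x^2 - 10*x - 1 + 2*log(2)) / (4*x - 8)
def boundary_y(x):
    return (x**2 - 10*x - 1 + 2*np.log(2)) / (4*x - 8)

xs = np.linspace(3.0, 10.0, 50)
ys = boundary_y(xs)
# (xs, ys) now lie on the decision boundary; matplotlib's plot(xs, ys)
# would render the same curve that ezplot draws.
```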

- Another measure of the distance between two Gaussian distributions.
- It has found great use in medicine, radar detection, and other fields.

- If both diagnosis and test are positive, it is called a true positive (TP). The probability of a TP is estimated by counting the true positives in the sample and dividing by the sample size.
- If the diagnosis is positive and the test is negative, it is called a false negative (FN).
- False positive (FP) and true negative (TN) are defined similarly.

- The values described are used to calculate different measurements of the quality of the test.
- The first one is sensitivity, SE, which is the probability of having a positive test among the patients who have a positive diagnosis: SE = TP / (TP + FN).

- Specificity, SP, is the probability of having a negative test among the patients who have a negative diagnosis: SP = TN / (TN + FP).
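These two estimates are simple ratios of counts. A small sketch with made-up outcome counts from a hypothetical diagnostic study:

```python
# Made-up 2x2 outcome counts (not from the slides):
TP, FN, FP, TN = 90, 10, 15, 85
n = TP + FN + FP + TN

p_tp = TP / n          # estimated probability of a true positive
SE = TP / (TP + FN)    # sensitivity: P(test positive | diagnosis positive)
SP = TN / (TN + FP)    # specificity: P(test negative | diagnosis negative)
```

With these counts the test looks sensitive (SE = 0.90) and slightly less specific (SP = 0.85).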

- Example: [worked example shown as a figure in the original slides]

- Overlap in distributions: [figure]

- Assume x = (x1, …, xd) takes only m discrete values v1, v2, …, vm.
- Components of x are binary or integer valued, so x can take only one of the m discrete values.

- Maximize the class posterior using the Bayes decision rule:

P(ωj | x) = P(x | ωj) P(ωj) / Σj P(x | ωj) P(ωj)

Decide ωi if P(ωi | x) > P(ωj | x) for all j ≠ i

or

- Minimize the overall risk by selecting the action αi = α*:

α* = arg min_i R(αi | x)
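The discrete-feature posterior above is just a normalized product of likelihood and prior. A minimal sketch with made-up priors and likelihoods (m = 3 discrete values, two classes):

```python
import numpy as np

# Made-up discrete problem: x takes one of m = 3 values v1, v2, v3.
priors = np.array([0.6, 0.4])                 # P(w1), P(w2)
likelihood = np.array([[0.5, 0.3, 0.2],       # P(x = v_k | w1)
                       [0.1, 0.3, 0.6]])      # P(x = v_k | w2)

def posterior(k):
    """P(w_j | x = v_k) = P(v_k | w_j) P(w_j) / sum_j P(v_k | w_j) P(w_j)."""
    joint = likelihood[:, k] * priors
    return joint / joint.sum()

post = posterior(0)                 # observe x = v1
decision = int(np.argmax(post))     # 0 -> decide w1, 1 -> decide w2
```

Observing v1 favors class w1 here, since both its likelihood and its prior are larger.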

Bit-matrix for machine-printed characters

Here, each pixel may be taken as a (binary) feature.

For the above problem, P(xk = 1 | ωi) is the probability that pixel k is black when the letter is ωi (A, B, …).

[Figure: example bit-matrices for the letters A and B]
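Under the usual naive-Bayes assumption that pixels are independent given the letter, classification reduces to comparing log-posteriors. A toy sketch (the 3×3 bitmaps, pixel probabilities, and priors below are invented for illustration):

```python
import numpy as np

# Made-up pixel probabilities for two letters on a 3x3 grid, flattened to 9 features.
# p[i, k] = P(pixel k is black | letter i), estimated from training bitmaps.
p = np.array([[0.9, 0.1, 0.9, 0.9, 0.9, 0.9, 0.9, 0.1, 0.9],   # letter 'A'
              [0.9, 0.9, 0.1, 0.9, 0.9, 0.9, 0.9, 0.9, 0.1]])  # letter 'B'
priors = np.array([0.5, 0.5])

def log_posterior_unnormalized(x):
    # log P(w_i) + sum_k [ x_k log p_ik + (1 - x_k) log(1 - p_ik) ],
    # assuming conditionally independent binary pixels (naive Bayes)
    return np.log(priors) + (x * np.log(p) + (1 - x) * np.log(1 - p)).sum(axis=1)

x = np.array([1, 0, 1, 1, 1, 1, 1, 0, 1])                 # an 'A'-like bitmap
decision = int(np.argmax(log_posterior_unnormalized(x)))  # 0 -> 'A', 1 -> 'B'
```

The observed bitmap matches the high-probability pixels of 'A', so the first log-posterior dominates.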

- To minimize the overall risk, choose the action that minimizes the conditional risk R(αi | x).
- To minimize the probability of error, choose the class that maximizes the posterior probability P(ωj | x).
- If there are different penalties for misclassifying patterns from different classes, the posteriors must be weighted according to such penalties before taking action.
- Do not forget that these decisions are the optimal ones under the assumption that the “true” values of the probabilities are known.
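The third point can be made concrete with a small sketch (loss matrix and posteriors are made up): with asymmetric penalties, the minimum-risk action can differ from the maximum-posterior class.

```python
import numpy as np

# Made-up loss matrix: lam[i, j] = loss of taking action a_i when the truth is w_j.
lam = np.array([[0.0, 10.0],   # deciding w1 is very costly if the truth is w2
                [1.0,  0.0]])  # deciding w2 is mildly costly if the truth is w1
post = np.array([0.8, 0.2])    # posteriors P(w1 | x), P(w2 | x)

# Conditional risks R(a_i | x) = sum_j lam[i, j] * P(w_j | x)
risk = lam @ post
best_action = int(np.argmin(risk))   # 0 -> decide w1, 1 -> decide w2
```

Here w1 is the more probable class, yet the heavy penalty for wrongly deciding w1 makes w2 the minimum-risk action.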
