
Chapter 2 (Part 3): Bayesian Decision Theory (Sections 2-6, 2-9)



Presentation Transcript


1. Pattern Classification. All materials in these slides were taken from Pattern Classification (2nd ed.) by R. O. Duda, P. E. Hart and D. G. Stork, John Wiley & Sons, 2000, with the permission of the authors and the publisher

2. Chapter 2 (Part 3): Bayesian Decision Theory (Sections 2-6, 2-9) • Discriminant Functions for the Normal Density • Bayes Decision Theory – Discrete Features

3. Discriminant Functions for the Normal Density • We saw that minimum error-rate classification can be achieved by the discriminant functions gi(x) = ln p(x | ωi) + ln P(ωi) • Case of the multivariate normal, p(x | ωi) ~ N(μi, Σi): gi(x) = -½ (x - μi)^t Σi^-1 (x - μi) - (d/2) ln 2π - ½ ln |Σi| + ln P(ωi)

  4. Case i = 2.I(I stands for the identity matrix) Pattern Classification, Chapter 2 (Part 3)

5. A classifier that uses linear discriminant functions is called a "linear machine" • The decision surfaces for a linear machine are pieces of hyperplanes defined by gi(x) = gj(x)
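
The decision rule of such a linear machine can be sketched in a few lines. This is a minimal illustration for the Σi = σ²·I case above; the means, priors, and σ² are placeholder values, not taken from the slides.

```python
import numpy as np

# Sketch of a linear machine for the case Sigma_i = sigma^2 * I.
# Means, priors, and sigma^2 are illustrative placeholders.
means = np.array([[0.0, 0.0],
                  [3.0, 3.0]])       # mu_i for each class omega_i
priors = np.array([0.5, 0.5])        # P(omega_i)
sigma2 = 1.0                         # shared variance sigma^2

def discriminants(x):
    """g_i(x) = (mu_i / sigma^2)^t x - mu_i^t mu_i / (2 sigma^2) + ln P(omega_i)."""
    w = means / sigma2                                                # weight vectors w_i
    w0 = -np.sum(means ** 2, axis=1) / (2 * sigma2) + np.log(priors)  # bias terms w_i0
    return w @ x + w0

x = np.array([1.0, 2.0])
print(discriminants(x), "-> decide class", int(np.argmax(discriminants(x))))
```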

6. (figure)

7. The hyperplane separating Ri and Rj is always orthogonal to the line linking the means!

8. (figure)

9. (figure)

  10. Case i =  (covariance of all classes are identical but arbitrary!) • Hyperplane separating Ri and Rj (the hyperplane separating Ri and Rj is generally not orthogonal to the line between the means!) Pattern Classification, Chapter 2 (Part 3)

11. (figure)

12. (figure)

  13. Case i = arbitrary • The covariance matrices are different for each category (Hyperquadrics which are: hyperplanes, pairs of hyperplanes, hyperspheres, hyperellipsoids, hyperparaboloids, hyperhyperboloids) Pattern Classification, Chapter 2 (Part 3)

14. (figure)

15. (figure)

16. Bayes Decision Theory – Discrete Features • Components of x are binary or integer valued; x can take only one of m discrete values v1, v2, …, vm • Case of independent binary features in a 2-category problem: let x = [x1, x2, …, xd]^t, where each xi is either 0 or 1, with probabilities pi = P(xi = 1 | ω1) and qi = P(xi = 1 | ω2)

17. The discriminant function in this case is linear: g(x) = Σ_i wi xi + w0, where wi = ln [pi (1 - qi) / (qi (1 - pi))], w0 = Σ_i ln [(1 - pi) / (1 - qi)] + ln [P(ω1) / P(ω2)], and we decide ω1 if g(x) > 0 and ω2 otherwise
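
A minimal sketch of this discriminant; the pi, qi, and priors are illustrative placeholders, not values from the slides.

```python
import numpy as np

# Sketch of the linear discriminant for independent binary features (2 categories).
# p, q, and the priors are illustrative placeholders.
p = np.array([0.8, 0.6, 0.7])   # p_i = P(x_i = 1 | omega_1)
q = np.array([0.3, 0.5, 0.2])   # q_i = P(x_i = 1 | omega_2)
P1, P2 = 0.5, 0.5               # priors P(omega_1), P(omega_2)

w = np.log(p * (1 - q) / (q * (1 - p)))                    # weights w_i
w0 = np.sum(np.log((1 - p) / (1 - q))) + np.log(P1 / P2)   # bias w_0

x = np.array([1, 0, 1])
g = w @ x + w0
print("g(x) =", g, "-> decide", "omega_1" if g > 0 else "omega_2")
```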

18. Bayesian Belief Network • Features • Causal relationships • Statistically independent • Bayesian belief nets • Causal networks • Belief nets

19. x1 and x3 are independent

20. Structure • Node • Discrete variables • Parent, child nodes • Direct influence • Conditional Probability Table • Set by an expert or by learning from a training set • (Sorry, learning is not discussed here)
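
One way such a node and its conditional probability table might be represented in code; the node name, states, parents, and numbers are illustrative, not the slides' example.

```python
# Sketch of a discrete belief-net node with its conditional probability table (CPT).
# Node name, states, parents, and probabilities are illustrative placeholders.
node_X = {
    "states": ["x1", "x2"],
    "parents": ["A", "B"],
    "cpt": {                                  # P(X | A, B), one row per parent-state combination
        ("a1", "b1"): {"x1": 0.5, "x2": 0.5},
        ("a1", "b2"): {"x1": 0.7, "x2": 0.3},
        # ... remaining parent-state combinations
    },
}

def p_node(node, value, parent_values):
    """Look up P(node = value | parents = parent_values) in the CPT."""
    return node["cpt"][parent_values][value]

print(p_node(node_X, "x1", ("a1", "b2")))     # 0.7
```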

21. (figure)

22. Examples

23. (figure)

24. (figure)

25. Evidence e

26. Ex. 4. Belief Network for Fish
• Node A – P(a): a1 = winter 0.25, a2 = spring 0.25, a3 = summer 0.25, a4 = autumn 0.25
• Node B – P(b): b1 = north Atlantic 0.6, b2 = south Atlantic 0.4
• Node X – states x1 = salmon, x2 = sea bass; parents A and B, with table P(x | a, b)
• Node C – P(c | x): c1 = light, c2 = medium, c3 = dark; x1: 0.6, 0.2, 0.2; x2: 0.2, 0.3, 0.5
• Node D – P(d | x): d1 = wide, d2 = thin; x1: 0.3, 0.7; x2: 0.6, 0.4

27. Belief Network for Fish • The fish was caught in the summer in the north Atlantic and is a sea bass that is dark and thin • P(a3, b1, x2, c3, d2) = P(a3) P(b1) P(x2 | a3, b1) P(c3 | x2) P(d2 | x2) = 0.25 × 0.6 × 0.4 × 0.5 × 0.4 = 0.012
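
The product above can be checked in a couple of lines; P(x2 | a3, b1) = 0.4 is the value used on this slide, and the remaining factors come from the tables on slide 26.

```python
# Joint probability P(a3, b1, x2, c3, d2) for the fish belief net.
P_a3 = 0.25             # P(summer)
P_b1 = 0.6              # P(north Atlantic)
P_x2_given_a3_b1 = 0.4  # P(sea bass | summer, north Atlantic), as used on this slide
P_c3_given_x2 = 0.5     # P(dark | sea bass)
P_d2_given_x2 = 0.4     # P(thin | sea bass)

joint = P_a3 * P_b1 * P_x2_given_a3_b1 * P_c3_given_x2 * P_d2_given_x2
print(joint)  # 0.012
```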

28. Light, south Atlantic – which fish? (i.e., given c1 and b2, infer x)

29. Normalize
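
A minimal sketch of the inference behind slides 28-29: sum the joint over the unobserved nodes A and D, then normalize over x. The P(x | a, b) table below is an illustrative placeholder, since the slides show it only in a figure.

```python
import itertools

# Sketch: P(x | c1 = light, b2 = south Atlantic) by marginalizing over A and D and normalizing.
# P_x_given_ab is a uniform placeholder; the other tables are from slide 26.
P_a = {"a1": 0.25, "a2": 0.25, "a3": 0.25, "a4": 0.25}
P_b = {"b1": 0.6, "b2": 0.4}
P_x_given_ab = {(a, b): {"x1": 0.5, "x2": 0.5} for a in P_a for b in P_b}  # placeholder CPT
P_c_given_x = {"x1": {"c1": 0.6, "c2": 0.2, "c3": 0.2},
               "x2": {"c1": 0.2, "c2": 0.3, "c3": 0.5}}
P_d_given_x = {"x1": {"d1": 0.3, "d2": 0.7},
               "x2": {"d1": 0.6, "d2": 0.4}}

def posterior_x(c_obs, b_obs):
    """Return P(x | c_obs, b_obs) by summing out a and d, then normalizing over x."""
    scores = {}
    for x in ("x1", "x2"):
        scores[x] = sum(P_a[a] * P_b[b_obs] * P_x_given_ab[(a, b_obs)][x]
                        * P_c_given_x[x][c_obs] * P_d_given_x[x][d]
                        for a, d in itertools.product(P_a, P_d_given_x[x]))
    total = sum(scores.values())
    return {x: s / total for x, s in scores.items()}

print(posterior_x("c1", "b2"))
```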

30. Conditionally Independent

31. Medical Application • Medical diagnosis • Uppermost nodes: biological agents (virus or bacteria) • Intermediate nodes: diseases (flu or emphysema) • Lowermost nodes: symptoms (high temperature or coughing) • Finds the most likely disease or cause by entering measured values

32. Exercise 50 (based on Ex. 4) • (a) December 20, north Atlantic, thin; P(a1) = P(a4) = 0.5, P(b1) = 1, P(d2) = 1; Fish? Error rate? • (b) Thin, medium lightness; Season? Probability? • (c) Thin, medium lightness, north Atlantic; Season? Probability?
