
Introduction to Bayesian Learning: Basic Terms and Notations

Learn the key terms and notations in Bayesian learning, such as prior probability, conditional probability, posterior probability, and Bayes' theorem, and understand how these concepts are used in machine learning problems.


Presentation Transcript


  1. CO2 Session 1

  2. Bayesian Learning

  3. Since Bayesian learning is based on probability, we first need to learn the basic terms and notations: • Prior probability • Conditional probability • Posterior probability • Bayes' theorem

  4. Prior probability • The prior probability of an event is its probability calculated from prior information about the event, before any new evidence is taken into account. • Example: P(A) = 0.4 (40%).

  5. Prior information: • Number of customers = 100 • 20% of customers buy a computer. • 50% of customers are older than 35. So P(buys computer) = 0.2 and P(age > 35) = 0.5.
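
A minimal sketch of how these priors come out of the counts (the customer figures are the slide's hypothetical numbers), in Python:

    # Priors estimated from counts, using the slide's hypothetical figures.
    total_customers = 100
    buys_computer = 20   # 20% of customers buy a computer
    age_over_35 = 50     # 50% of customers are older than 35

    p_buys_computer = buys_computer / total_customers  # P(buys computer) = 0.2
    p_age_over_35 = age_over_35 / total_customers      # P(age > 35) = 0.5

    print(p_buys_computer, p_age_over_35)  # 0.2 0.5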

  6. Conditional probability • A conditional probability expresses the probability that event A will occur when you assume (or know) that event B has already occurred. • Notation: P(A|B)
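
By definition, P(A|B) = P(A and B) / P(B) when P(B) > 0. A small Python sketch continuing the customer example, with a made-up joint count purely for illustration:

    # Conditional probability from joint counts; the count of 15 is invented
    # to illustrate the formula P(A|B) = P(A and B) / P(B).
    n_total = 100
    n_b = 50         # event B: age > 35
    n_a_and_b = 15   # both: age > 35 AND buys a computer (hypothetical)

    p_b = n_b / n_total
    p_a_and_b = n_a_and_b / n_total
    p_a_given_b = p_a_and_b / p_b

    print(p_a_given_b)  # P(buys computer | age > 35) = 0.3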

  7. Posterior probability (inference): • A posterior probability is a conditional probability that represents the prior probability updated to take a new piece of information into account: P(A|B).

  8. We write P(h) to denote the initial probability that hypothesis h holds, before we have observed the training data. • P(h) is often called the prior probability of h. • P(D) denotes the prior probability that training data D will be observed, i.e., the probability of D given no knowledge about which hypothesis holds. • P(D|h) denotes the probability of observing data D in a world in which hypothesis h holds.

  9. But in machine learning problems we are interested in the probability P(h|D) that h holds given the observed training data D. • P(h|D) is called the posterior probability of h, because it reflects our confidence that h holds after we have seen the training data D. • The posterior probability P(h|D) reflects the influence of the training data D, in contrast to the prior probability P(h), which is independent of D.
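
Bayes' theorem, which the transcript names but never states, ties these quantities together: P(h|D) = P(D|h) * P(h) / P(D). A minimal Python sketch with two hypothetical hypotheses and illustrative numbers (none of these values come from the slides):

    # Posterior P(h|D) = P(D|h) * P(h) / P(D) for two competing hypotheses.
    # All probabilities below are illustrative.
    prior = {"h1": 0.7, "h2": 0.3}       # P(h): belief before seeing data D
    likelihood = {"h1": 0.1, "h2": 0.6}  # P(D|h): how well each h explains D

    # P(D) by the law of total probability: sum of P(D|h) * P(h) over all h.
    p_d = sum(likelihood[h] * prior[h] for h in prior)

    posterior = {h: likelihood[h] * prior[h] / p_d for h in prior}
    print(posterior)  # approximately {'h1': 0.28, 'h2': 0.72}

Note how the data D shifts belief: h2 starts with the lower prior (0.3) but explains the data far better (0.6 vs 0.1), so its posterior overtakes h1's.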
