
Naïve Bayes Classification


Presentation Transcript


  1. Naïve Bayes Classification Debapriyo Majumdar Data Mining – Fall 2014 Indian Statistical Institute Kolkata August 14, 2014

  2. Bayes’ Theorem • Thomas Bayes (1701-1761) • Simple form of Bayes’ Theorem, for two random variables C and X: P(C|X) = P(X|C) P(C) / P(X) • Here P(C|X) is the posterior probability, P(X|C) is the likelihood, P(C) is the class prior probability, and P(X) is the predictor prior probability (the evidence)
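
A minimal numeric sketch of the theorem in Python (the prior, likelihood and evidence values below are made up purely for illustration):

prior_c = 0.3       # P(C): class prior (assumed value)
likelihood = 0.8    # P(X|C): likelihood of the observed X given C (assumed value)
evidence = 0.5      # P(X): predictor prior / evidence (assumed value)
posterior = likelihood * prior_c / evidence   # P(C|X) by Bayes' theorem
print(posterior)    # 0.48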

  3. Probability Model • Probability model: for a target class variable C which is dependent on features X1, …, Xn, Bayes’ Theorem gives P(C|X1,…,Xn) = P(C) P(X1,…,Xn|C) / P(X1,…,Xn) • The values of the features are given, so the denominator is effectively constant • Goal: calculating probabilities for the possible values of C • We are therefore interested only in the numerator: P(C) P(X1,…,Xn|C)

  4. Probability Model • The numerator is equivalent to the joint probability: P(C) P(X1,…,Xn|C) = P(C, X1,…,Xn) • Applying the chain rule for joint probability: P(C, X1,…,Xn) = P(C) P(X1|C) P(X2|C, X1) ⋯ P(Xn|C, X1,…,Xn−1)

  5. Strong Independence Assumption (Naïve) • Assume the features X1, …, Xn are conditionally independent given C • Given C, the occurrence of Xi does not influence the occurrence of Xj, for i ≠ j: P(Xi|C, Xj) = P(Xi|C) • Similarly, P(Xi|C, Xj, Xk) = P(Xi|C), and so on • Hence: P(C, X1,…,Xn) = P(C) P(X1|C) P(X2|C) ⋯ P(Xn|C)

  6. Naïve Bayes Probability Model • Class posterior probability: P(C|X1,…,Xn) = (1/Z) P(C) ∏i P(Xi|C) • Z = P(X1,…,Xn) depends only on the known feature values, so it is constant

  7. Classifier based on Naïve Bayes • Decision rule: pick the hypothesis (value c of C) that has the highest probability • Maximum A-Posteriori (MAP) decision rule: classify(x1,…,xn) = argmax over c of P(C = c) ∏i P(Xi = xi | C = c) • P(C = c) is approximated from the class frequencies in the training set • Each P(Xi = xi | C = c) is approximated from relative frequencies in the training set • The values of the features are known for the new observation
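
A minimal sketch of the MAP rule in Python, assuming the probabilities have already been estimated (the dictionaries priors and cond are illustrative names, not part of the slides):

def classify_map(x, priors, cond):
    """Return the class c maximizing P(C = c) * prod_i P(X_i = x_i | C = c)."""
    best_class, best_score = None, -1.0
    for c, p_c in priors.items():
        score = p_c                                 # class prior P(C = c)
        for i, value in enumerate(x):
            score *= cond[c][i].get(value, 0.0)     # P(X_i = value | C = c)
        if score > best_score:
            best_class, best_score = c, score
    return best_class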

  8. Example of Naïve Bayes: Text Classification with Naïve Bayes • Reference: The IR Book by Raghavan et al., Chapter 6

  9. The Text Classification Problem • Set of class labels / tags / categories: C • Training set: a set D of documents with labels, <d,c> ∈ D × C • Example of a document with its class label: <Beijing joins the World Trade Organization, China> • Set of all terms: V • Given a new document d ∈ D′ (a set of new documents), the goal is to find the class c(d) of the document

  10. Multinomial Naïve Bayes Model • Probability of a document d being in class c: P(c|d) ∝ P(c) ∏k P(tk|c), where the product is over the terms tk occurring in d and P(tk|c) = probability of a term tk occurring in a document of class c • Intuitively: • P(tk|c) ~ how much evidence tk contributes to the class c • P(c) ~ prior probability of a document being labeled by class c
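
A minimal sketch of this score in Python, assuming the per-class probabilities have already been estimated (priors and cond_prob are illustrative names):

def multinomial_score(doc_tokens, c, priors, cond_prob):
    """Unnormalized P(c|d) ~ P(c) * prod_k P(t_k|c) over the tokens of d."""
    score = priors[c]                       # P(c)
    for t in doc_tokens:
        score *= cond_prob[c].get(t, 0.0)   # P(t|c); zero for unseen terms (see smoothing later)
    return score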

  11. Multinomial Naïve Bayes Model • The expression is a product of many probabilities, each at most 1 • Multiplying them may result in a floating point underflow • Instead, add the logarithms of the probabilities rather than multiplying the original probability values: log P(c|d) ∝ log P(c) + Σk log P(tk|c) • log P(tk|c) acts as the term weight of tk in class c • To do: estimating these probabilities
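
The same score computed in log space (a sketch; priors and cond_prob are the same illustrative estimates as above):

import math

def multinomial_log_score(doc_tokens, c, priors, cond_prob):
    """log P(c) + sum_k log P(t_k|c): summing logs avoids floating point underflow."""
    log_score = math.log(priors[c])
    for t in doc_tokens:
        log_score += math.log(cond_prob[c][t])   # log P(t|c), the term weight of t in c
    return log_score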

  12. Maximum Likelihood Estimate • Based on relative frequencies • Class prior probability: P(c) = (number of documents labeled as class c in the training data) / (total number of documents in the training data) • Estimate P(t|c) as the relative frequency of t occurring in documents labeled as class c: P(t|c) = (total number of occurrences of t in documents of class c) / (total number of occurrences of all terms in documents of class c)
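
A minimal sketch of these maximum likelihood estimates in Python (train_docs as a list of (token_list, class_label) pairs is an assumed input format):

from collections import Counter, defaultdict

def mle_estimates(train_docs):
    """Return class priors P(c) and conditional term probabilities P(t|c)."""
    doc_count = Counter()                 # number of documents per class
    term_count = defaultdict(Counter)     # term occurrence counts per class
    for tokens, c in train_docs:
        doc_count[c] += 1
        term_count[c].update(tokens)
    n_docs = sum(doc_count.values())
    priors = {c: doc_count[c] / n_docs for c in doc_count}
    cond_prob = {}
    for c, counts in term_count.items():
        total = sum(counts.values())      # total term occurrences in class c
        cond_prob[c] = {t: counts[t] / total for t in counts}
    return priors, cond_prob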

  13. Handling Rare Events • What if a term t did not occur in any document belonging to class c in the training data? • This is quite common, since terms are sparse in documents • Problem: P(t|c) becomes zero, so the whole expression becomes zero • Remedy: use add-one (Laplace) smoothing
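
A sketch of the add-one smoothed estimate, P(t|c) = (count of t in c + 1) / (total term count in c + |V|), where vocab plays the role of the vocabulary V (names are illustrative):

def smoothed_cond_prob(term_count_c, vocab):
    """Add-one (Laplace) smoothing of P(t|c) for one class c."""
    total = sum(term_count_c.values())
    return {t: (term_count_c.get(t, 0) + 1) / (total + len(vocab)) for t in vocab}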

  14. Example • Training documents and their classes: • d1 (India): Indian Delhi Indian • d2 (India): Indian Indian TajMahal • d3 (India): Indian Goa • d4 (UK): London Indian Embassy • Test document to classify: Indian Embassy London Indian Indian
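
A self-contained sketch that classifies the test document with the multinomial model and add-one smoothing (the split of the training terms into the four documents above is a reconstruction of the slide and should be checked against the original):

import math
from collections import Counter

train = [("India", "Indian Delhi Indian"),
         ("India", "Indian Indian TajMahal"),
         ("India", "Indian Goa"),
         ("UK",    "London Indian Embassy")]
test = "Indian Embassy London Indian Indian"

classes = {c for c, _ in train}
vocab = {t for _, d in train for t in d.split()}

def log_posterior(c):
    docs_c = [d.split() for cl, d in train if cl == c]
    counts = Counter(t for d in docs_c for t in d)               # term counts in class c
    total = sum(counts.values())
    score = math.log(len(docs_c) / len(train))                   # log prior P(c)
    for t in test.split():
        score += math.log((counts[t] + 1) / (total + len(vocab)))  # smoothed log P(t|c)
    return score

print(max(classes, key=log_posterior))   # expected output: India

With these counts, the three occurrences of "Indian" in the test document outweigh the UK terms, so the multinomial model assigns the document to class India.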

  15. Bernoulli Naïve Bayes Model • Uses a binary indicator of occurrence instead of the term frequency • Estimate P(t|c) as the fraction of documents in class c containing the term t • Models the absence of terms explicitly: Xi = 1 if ti is present in the document, 0 otherwise, and absent terms contribute factors (1 − P(ti|c)) • Question: what is the difference between the Multinomial model with frequencies truncated to 1, and the Bernoulli Naïve Bayes model?

  16. Example • The same training documents and test document as before: • d1 (India): Indian Delhi Indian • d2 (India): Indian Indian TajMahal • d3 (India): Indian Goa • d4 (UK): London Indian Embassy • Test document to classify, now with the Bernoulli model: Indian Embassy London Indian Indian
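
A self-contained sketch for the Bernoulli model with add-one smoothing over document counts, P(t|c) = (documents of c containing t + 1) / (documents of c + 2); every vocabulary term contributes a factor, whether present or absent (same reconstructed data as above):

import math

train = [("India", "Indian Delhi Indian"),
         ("India", "Indian Indian TajMahal"),
         ("India", "Indian Goa"),
         ("UK",    "London Indian Embassy")]
test_terms = set("Indian Embassy London Indian Indian".split())

classes = {c for c, _ in train}
vocab = {t for _, d in train for t in d.split()}

def bernoulli_log_posterior(c):
    docs_c = [set(d.split()) for cl, d in train if cl == c]
    n_c = len(docs_c)
    score = math.log(n_c / len(train))            # log prior P(c)
    for t in vocab:                               # all vocabulary terms, present or absent
        n_ct = sum(1 for d in docs_c if t in d)
        p = (n_ct + 1) / (n_c + 2)                # smoothed P(U_t = 1 | c)
        score += math.log(p if t in test_terms else 1 - p)
    return score

print(max(classes, key=bernoulli_log_posterior))  # expected output: UK

Because the Bernoulli model counts the repeated "Indian" only once, the presence of London and Embassy tips the decision toward class UK, the opposite of the multinomial result.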

  17. Naïve Bayes as a Generative Model • The probability model (Multinomial): P(d|c) is generated from the terms as they occur in d, excluding all other terms • Example for class UK: X1 = London, X2 = India, X3 = Embassy • Xi is the random variable for position i in the document, and takes terms of the vocabulary as values • Positional independence assumption → bag of words model: the distribution of Xi is the same for every position i

  18. Naïve Bayes as a Generative Model • The probability model (Bernoulli): all terms in the vocabulary are modeled • Example for class UK: ULondon = 1, UEmbassy = 1, UIndia = 1, UDelhi = 0, UTajMahal = 0, UGoa = 0 • P(Ui = 1 | c) is the probability that term ti will occur in any position in a document of class c

  19. Multinomial vs Bernoulli

  20. On Naïve Bayes • Text classification • Spam filtering (email classification) [Sahami et al. 1998] • Adapted by many commercial spam filters: SpamAssassin, SpamBayes, CRM114, … • Simple: the conditional independence assumption is very strong (naïve) • Naïve Bayes may not estimate the probabilities accurately in many cases, but it ends up classifying correctly quite often • It is difficult to understand the dependencies between features in real problems
