
Naïve Bayes based Model


Presentation Transcript


  1. Naïve Bayes based Model Billy Doran 09130985

  2. “If the model does what people do, do people do what the model does?”

  3. Bayesian Learning • Determines the probability of a hypothesis H given a set of data D: P(H|D) = P(D|H) P(H) / P(D)

  4. P(H|D) = P(D|H) P(H) / P(D) • P(H) is the prior probability of H: the probability of observing H over the whole data set. • P(H|D) is the posterior probability of H: given the data D, the probability of the hypothesis H. • P(D) is the prior probability of observing D. It is constant across hypotheses and can be ignored when comparing them. • P(D|H) is the likelihood of observing the data given the hypothesis: does the hypothesis reproduce the data?
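As a minimal sketch, Bayes' rule can be computed directly. The numbers below are illustrative only and are not taken from the slides:

```python
def posterior(prior_h, likelihood, prior_d):
    """Bayes' rule: P(H|D) = P(D|H) * P(H) / P(D)."""
    return likelihood * prior_h / prior_d

# Illustrative values: P(H) = 0.375, P(D|H) = 0.2, P(D) = 0.15
print(posterior(0.375, 0.2, 0.15))  # 0.5
```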

  5. Maximum a Posteriori Probability • In order to classify an example as belonging to one category or another, we look for the hypothesis that maximises P(H|D). • For example, taking the training pattern <A X C>: to find the probability that this example belongs to category A, the posterior probability is P(Category A|A,X,C), as sketched below.
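In code, MAP classification is simply an argmax over the posterior scores of the candidate categories. The scores below are hypothetical:

```python
# MAP: pick the category with the highest posterior probability.
posteriors = {"A": 0.52, "B": 0.24, "C": 0.24}  # hypothetical scores
print(max(posteriors, key=posteriors.get))  # 'A'
```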

  6. Naïve Bayes • The Naïve Bayes algorithm allows us to assume conditional independence of the dimensions. • This means that we consider each dimension in terms of its probability given the category: P(A,B|Cat A) = P(A|Cat A)P(B|Cat A) • Using this information we are able to build a table of the Conditional Probabilities for each dimension
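A sketch of how such a table can be built from labelled patterns. The training set below is made up for illustration and is not the slides' actual data:

```python
from collections import Counter, defaultdict

# Hypothetical labelled patterns: (dimension values, category).
training = [
    (("A", "X", "C"), "A"),
    (("A", "B", "C"), "A"),
    (("X", "B", "C"), "B"),
    (("A", "B", "X"), "B"),
]

cat_counts = Counter(cat for _, cat in training)
cond_counts = defaultdict(Counter)  # (category, dimension index) -> value counts
for values, cat in training:
    for dim, val in enumerate(values):
        cond_counts[(cat, dim)][val] += 1

def p_value_given_cat(dim, val, cat):
    """Conditional probability P(Dimension dim = val | Category cat)."""
    return cond_counts[(cat, dim)][val] / cat_counts[cat]

print(p_value_given_cat(0, "A", "A"))  # 2/2 = 1.0 on this toy data
```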

  7. Conditional Probability Table P(Dimension1=A|Category A) is 4/6 ≈ 0.667

  8. Calculation • To get the scores for the pattern <A B C> we first find • P(A|A,B,C) = P(A|A)P(B|A)P(C|A)P(A) • = 0.666*0.1666*0.1666*0.375 ≈ 0.00693 • P(B|A,B,C) = 0.166*0.5*0.1*0.375 = 0.0031125 • P(C|A,B,C) = 0.1*0.1*0.833*0.375 ≈ 0.0031238 • Next we normalise the scores to get values in the range [0, 1] • A = 0.00693/(0.00693 + 0.0031125 + 0.0031238) ≈ 0.526
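The same arithmetic written out in code; the conditional probabilities and the prior of 0.375 are the values quoted on the slide:

```python
# Unnormalised scores for the pattern <A B C>, using the slide's table values.
unnormalised = {
    "A": 0.666 * 0.1666 * 0.1666 * 0.375,  # ~0.00693
    "B": 0.166 * 0.5 * 0.1 * 0.375,        # 0.0031125
    "C": 0.1 * 0.1 * 0.833 * 0.375,        # ~0.0031238
}
total = sum(unnormalised.values())
scores = {cat: p / total for cat, p in unnormalised.items()}
print(round(scores["A"], 3))  # ~0.526
```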

  9. Conjunctions • In order to calculate the conjunction of categories we find the joint probability of the two categories P(A&B) = P(A)P(B) • This is similar to the Prototype Theory for conjunctions.
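A sketch of the conjunction rule applied to a full set of per-category scores; the scores themselves are hypothetical:

```python
from itertools import combinations

scores = {"A": 0.52, "B": 0.24, "C": 0.24}  # hypothetical per-category scores
conjunctions = {f"{a}&{b}": scores[a] * scores[b]
                for a, b in combinations(scores, 2)}
print(conjunctions)  # A&B = 0.1248, A&C = 0.1248, B&C = 0.0576 (up to float rounding)
```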

  10. Training Data

  11. Training Data • The model is an almost perfectly consistent learner: it reproduces the categories of the original training data with 100% accuracy. • For the conjunction examples #5 and #6 it classifies them as B and A respectively; they obtain a significantly higher score in the AB conjunction than in the AC or BC conjunctions. • This suggests that each of these two examples is more representative of one member of the conjunction than of the other.

  12. Test Data

  13. Graphs: Comparing Experimental results to Model results

  14. Test Data • The results are generally consistent with the experimental data, except for #3 and #4: • For #3 the experiment gives AC>AB>BC, while the model generates AC>BC>AB • For #4 the experimental data give C>B>A, while the model gives B>C>A

  15. Statistical Analysis • The average correlation between the model and the experimental data was R = 0.88. • At alpha = 0.05 and df = n - 2, this was significant. • Per pattern: #1 0.82, #2 0.93, #3 0.85, #4 0.84, #5 0.88, #6 0.92
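As a sketch of how one such per-pattern correlation can be checked (assuming SciPy is available), with placeholder arrays rather than the study's data:

```python
from scipy.stats import pearsonr

model = [0.52, 0.24, 0.24, 0.12, 0.06, 0.12]  # placeholder model scores
human = [0.60, 0.20, 0.20, 0.10, 0.05, 0.15]  # placeholder experimental scores
r, p = pearsonr(model, human)
print(f"R = {r:.2f}, significant at alpha = 0.05: {p < 0.05}")
```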

  16. Unusual Predictions • How would the model handle <A B C>? • Output: A > B > C, AC > AB > BC • Is it possible to ask the model about a triple conjunction? • Example: <X X B> • Model predicts: C > B = A, AB = AC > ABC > BC

  17. Conclusion • Naïve Bayes produces a good hypothesis of how people learn category classification. • The use of probabilities matches well with the underlying logic of the correlations between the dimensions and the categories. • Creating a causal network might be an informative way to investigate further the interactions between the individual dimensions.

  18. Limitations • As the model uses a version of prototype theory to calculate its conjunctions, it is not able to capture overextension. To rectify this, a correction formula, where C is the category and KC is the set of non-C categories, can be used to approximate overextension.

  19. Limitations • The model also does not take negative evidence into account. While it captures the general trend of the categories, it does not, for example, represent the strength of the negative response to Category C in test pattern #5. • This pattern is very similar to the conjunction patterns given in the training data; the strong negative reaction seems to be caused by the association between those conjunctions and categories A and B.

  20. The End
