
# Bayesian Classifiers



### Bayesian Classifiers

Muhammad Ali Yousuf, ITM

(Based on: Notes by David Squire, Monash University)

• What are Bayesian Classifiers?

• Bayes Theorem

• Naïve Bayesian Classification

• Bayesian Belief Networks

• Training Bayesian Belief Networks

• Why use Bayesian Classifiers?

• Example Software: Netica

• Bayesian Classifiers are statistical classifiers

• based on Bayes Theorem (see following slides)

• They can predict the probability that a particular sample is a member of a particular class

• Perhaps the simplest Bayesian Classifier is known as the Naïve Bayesian Classifier

• based on a (usually incorrect) independence assumption

• performance is still often comparable to decision tree and neural network classifiers

P(AB)

P(A)

P(B)

Bayes Theorem

• Consider the Venn diagram above. The area of the rectangle is 1, and the area of each region gives the probability of the event(s) associated with that region

• P(A|B) means “the probability of observing event A given that event B has already been observed”, i.e.

• how much of the time that we see B do we also see A? (i.e. the ratio of the area of the overlap P(A∩B) to the area of P(B))

P(A|B) = P(AB)/P(B), and alsoP(B|A) = P(AB)/P(A), therefore

P(A|B) = P(B|A)P(A)/P(B) (Bayes formula for two events)
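
As a quick numeric check of the two-event formula, here is a minimal sketch in Python; the probabilities are made up for illustration:

```python
# A numeric check of Bayes' formula for two events; the probabilities
# are made up for illustration.
p_a = 0.30    # P(A)
p_b = 0.40    # P(B)
p_ab = 0.12   # P(A∩B), the overlap region in the Venn diagram

p_a_given_b = p_ab / p_b   # P(A|B) = P(A∩B)/P(B)  -> 0.3
p_b_given_a = p_ab / p_a   # P(B|A) = P(A∩B)/P(A)  -> 0.4

# Bayes' formula recovers P(A|B) from P(B|A), P(A) and P(B):
assert abs(p_a_given_b - p_b_given_a * p_a / p_b) < 1e-12
print(p_a_given_b)  # 0.3
```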

More formally,

• Let X be the sample data

• Let H be a hypothesis that X belongs to class C

• In classification problems we wish to determine the probability that H holds given the observed sample data X

• i.e. we seek P(H|X), which is known as the posterior probability of H conditioned on X

• e.g. the probability that X is a kangaroo, given that X jumps and is nocturnal

• P(H) is the prior probability

• i.e. the probability that any given data sample is a kangaroo, regardless of its method of locomotion or night-time behaviour - i.e. before we know anything about X

• Similarly, P(X|H) is the posterior probability of X conditioned on H

• i.e. the probability that X jumps and is nocturnal, given that we know X is a kangaroo

• Bayes Theorem (from the earlier slide) is then: P(H|X) = P(X|H)P(H)/P(X)

• Assumes that the effect of an attribute value on a given class is independent of the values of other attributes. This assumption is known as class conditional independence

• This makes the calculations involved easier, but makes a simplistic assumption - hence the term “naïve”

• Can you think of a real-life example where the class conditional independence assumption would break down?

• Consider each data instance to be an n-dimensional vector of attribute values (i.e. features): X = (x1, x2, …, xn)

• Given m classes C1, C2, …, Cm, a data instance X is assigned to the class for which it has the greatest posterior probability conditioned on X, i.e. X is assigned to Ci if and only if P(Ci|X) > P(Cj|X) for all j ≠ i

• According to Bayes Theorem: P(Ci|X) = P(X|Ci)P(Ci)/P(X)

• Since P(X) is constant for all classes, only the numerator P(X|Ci)P(Ci) needs to be maximized

• If the class probabilities P(Ci) are not known, they can be assumed to be equal, so that we need only maximize P(X|Ci)

• Alternatively (and preferably), we can estimate the P(Ci) from the proportions in some training sample

• It can be very expensive to compute the P(X|Ci)

• if each component xk can take one of c values, there are c^n possible values of X to consider

• Consequently, the (naïve) assumption of class conditional independence is often made, giving P(X|Ci) = P(x1|Ci) × P(x2|Ci) × … × P(xn|Ci)

• The P(x1|Ci), …, P(xn|Ci) can be estimated from a training sample (using the proportions if the variable is categorical; using a normal distribution with the calculated mean and standard deviation of each class if it is continuous)
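
A minimal sketch of such a classifier for categorical attributes, using exactly the counting scheme above (class proportions for the P(Ci), per-class value proportions for the P(xk|Ci)); it omits smoothing and the normal-distribution handling of continuous attributes, and all names and data are illustrative:

```python
import math
from collections import Counter, defaultdict

def train(samples, labels):
    """samples: list of attribute tuples; labels: list of class labels."""
    n = len(samples)
    class_counts = Counter(labels)
    # value_counts[c][k][v] = number of class-c samples whose attribute k has value v
    value_counts = defaultdict(lambda: defaultdict(Counter))
    for x, c in zip(samples, labels):
        for k, v in enumerate(x):
            value_counts[c][k][v] += 1
    priors = {c: cnt / n for c, cnt in class_counts.items()}  # P(Ci)
    return priors, value_counts, class_counts

def classify(x, priors, value_counts, class_counts):
    """Return the class maximizing P(X|Ci)P(Ci), using log-probabilities."""
    best_class, best_score = None, -math.inf
    for c, prior in priors.items():
        score = math.log(prior)
        for k, v in enumerate(x):
            # class conditional independence: multiply per-attribute estimates
            p = value_counts[c][k][v] / class_counts[c]
            if p == 0.0:
                score = -math.inf  # unseen value; real code would smooth instead
                break
            score += math.log(p)
        if score > best_score:
            best_class, best_score = c, score
    return best_class

# Tiny usage example with the kangaroo attributes from the slides:
X = [("jumps", "nocturnal"), ("jumps", "nocturnal"),
     ("walks", "diurnal"), ("walks", "nocturnal")]
y = ["kangaroo", "kangaroo", "other", "other"]
model = train(X, y)
print(classify(("jumps", "nocturnal"), *model))  # -> "kangaroo"
```

The scores are accumulated as log-probabilities, which avoids numerical underflow when n is large; real implementations also smooth the counts so that a single unseen attribute value does not zero out an entire class.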

• Fully computed Bayesian classifiers are provably optimal

• i.e. under the assumptions given, no other classifier can give better performance

• In practice, assumptions are made to simplify calculations, so optimal performance is not achieved

• Sub-optimal performance is due to inaccuracies in the assumptions made

• Nevertheless, the performance of the Naïve Bayes Classifier is often comparable to that of decision trees and neural networks [p. 299, HaK2000], and it has been shown to be optimal under conditions somewhat broader than class conditional independence [DoP1996]

• Problem with the naïve Bayesian classifier: dependencies do exist between attributes

• Bayesian Belief Networks (BBNs) allow for the specification of the joint conditional probability distributions: the class conditional dependencies can be defined between subsets of attributes

• i.e. we can make use of prior knowledge

• A BBN consists of two components. The first is a directed acyclic graph where

• each node represents a variable; variables may correspond to actual data attributes or to “hidden variables”

• each arc represents a probabilistic dependence

• each variable is conditionally independent of its non-descendants, given its parents

[Figure: a simple BBN with six binary nodes: FamilyHistory and Smoker at the top, with arcs to LungCancer and Emphysema, which in turn have arcs to PositiveXRay and Dyspnea]

• A simple BBN (from [HaK2000]). Nodes have binary values. Arcs allow a representation of causal knowledge

• The second component of a BBN is a conditional probability table (CPT) for each variable Z, which gives the conditional distribution P(Z|Parents(Z))

• i.e. the conditional probability of each value of Z for each possible combination of values of its parents

• e.g. for the node LungCancer we may have

• P(LungCancer = “True” | FamilyHistory = “True” ∧ Smoker = “True”) = 0.8

• P(LungCancer = “False” | FamilyHistory = “False” ∧ Smoker = “False”) = 0.9 …

• The joint probability of any tuple (z1, …, zn) corresponding to variables Z1, …, Zn is P(z1, …, zn) = P(z1|Parents(Z1)) × … × P(zn|Parents(Zn))
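
To make the CPT and joint-probability definitions concrete, here is a sketch for a fragment of the example network (FamilyHistory and Smoker as the parents of LungCancer); apart from the 0.8 taken from the slide above, every probability is an assumed, illustrative value:

```python
# Parent sets for a fragment of the example network.
parents = {
    "FamilyHistory": (),
    "Smoker": (),
    "LungCancer": ("FamilyHistory", "Smoker"),
}

# cpt[node][parent_values] = P(node = True | parents = parent_values)
cpt = {
    "FamilyHistory": {(): 0.1},       # assumed prior
    "Smoker": {(): 0.3},              # assumed prior
    "LungCancer": {
        (True, True): 0.8,            # value given on the slide
        (True, False): 0.5,           # assumed
        (False, True): 0.4,           # assumed
        (False, False): 0.02,         # assumed
    },
}

def joint(assignment):
    """P(z1, ..., zn) = product over nodes of P(zi | Parents(Zi))."""
    p = 1.0
    for node, value in assignment.items():
        parent_values = tuple(assignment[q] for q in parents[node])
        p_true = cpt[node][parent_values]
        p *= p_true if value else 1.0 - p_true
    return p

print(joint({"FamilyHistory": True, "Smoker": True, "LungCancer": True}))
# 0.1 * 0.3 * 0.8 = 0.024
```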

• A node in the BBN can be selected as an output node

• output nodes represent class label attributes

• there may be more than one output node

• The classification process, rather than returning a single class label (as a decision tree does), can return a probability distribution over the class labels

• i.e. an estimate of the probability that the data instance belongs to each class

• A machine learning algorithm is needed to find the CPTs, and possibly the network structure

• If the network structure is known and all the variables are observable, then training the network simply requires the calculation of the conditional probability tables (as in naïve Bayesian classification)
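
A sketch of that counting step, assuming fully observed training records represented as dicts mapping node names to boolean values (names and data illustrative):

```python
from collections import Counter

def estimate_cpt(records, node, node_parents):
    """Estimate P(node = True | parent values) by conditional counting."""
    seen, true_count = Counter(), Counter()
    for r in records:
        key = tuple(r[q] for q in node_parents)   # observed parent values
        seen[key] += 1
        if r[node]:
            true_count[key] += 1
    # Conditional relative frequency for each observed parent combination
    return {key: true_count[key] / seen[key] for key in seen}

data = [
    {"FamilyHistory": True, "Smoker": True, "LungCancer": True},
    {"FamilyHistory": True, "Smoker": True, "LungCancer": False},
    {"FamilyHistory": False, "Smoker": False, "LungCancer": False},
]
print(estimate_cpt(data, "LungCancer", ("FamilyHistory", "Smoker")))
# {(True, True): 0.5, (False, False): 0.0}
```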

• When the network structure is given but some of the variables are hidden (variables believed to exert influence, but not observable), a gradient descent method can be used to train the BBN on the training data. The aim is to learn the values of the CPT entries

• Let S be a set of s training examples X1,…,Xs

• Let wijk be a CPT entry for the variable Yi = yij having parents Ui = uik

• e.g. from our example, Yi may be LungCancer, yij its value “True”, Ui lists the parents of Yi, e.g. {FamilyHistory, Smoker}, and uik lists the values of the parent nodes, e.g. {“True”, “True”}

• The wijk are analogous to weights in a neural network, and can be optimized using gradient descent (the same optimization technique that backpropagation is based on)
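
For reference, one concrete form of this update, following the treatment in [HaK2000] (l is the learning rate, s the number of training examples): each step moves the weights along the gradient of the log-likelihood ln Pw(S), and the wijk are then renormalized so that, for each parent-value combination uik, the entries sum to one:

```latex
% Gradient of the log-likelihood with respect to a CPT entry, and the
% resulting ascent step (l is the learning rate); see [HaK2000].
\frac{\partial \ln P_w(S)}{\partial w_{ijk}}
  = \sum_{d=1}^{s} \frac{P(Y_i = y_{ij},\, U_i = u_{ik} \mid X_d)}{w_{ijk}},
\qquad
w_{ijk} \leftarrow w_{ijk} + l \,\frac{\partial \ln P_w(S)}{\partial w_{ijk}}
```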

• An important advance in the training of BBNs was the development of Markov Chain Monte Carlo methods [Nea1993]

• Algorithms also exist for learning the network structure from the training data given observable variables (this is a discrete optimization problem)

• In this sense they are an unsupervised technique for discovery of knowledge

• A tutorial on Bayesian AI, including Bayesian networks, is available at http://www.csse.monash.edu.au/~korb/bai/bai.html

• No classification method has been found to be superior to all others in every case (i.e. for every data set drawn from a particular domain of interest)

• indeed, it can be shown that no such classifier can exist (this is known as the “No Free Lunch” theorem)

• Methods can be compared based on:

• accuracy

• interpretability of the results

• robustness of the method with different datasets

• training time

• scalability

• e.g. neural networks are more computationally intensive than decision trees

• BBNs offer advantages based upon a number of these criteria (all of them in certain domains)

• Netica is an application for belief networks and influence diagrams from Norsys Software Corp., Canada

• http://www.norsys.com/

• It can build, learn, modify, transform and store networks, and find optimal solutions using an inference engine