
Bayesian networks



  1. Bayesian networks Chapter 14

  2. Outline • Syntax • Semantics

  3. Bayesian networks • A simple, graphical notation for conditional independence assertions and hence for compact specification of full joint distributions • Syntax: • a set of nodes, one per variable • a directed, acyclic graph (link ≈ "directly influences") • a conditional distribution for each node given its parents: P (Xi | Parents (Xi)) • In the simplest case, conditional distribution represented as a conditional probability table (CPT) giving the distribution over Xi for each combination of parent values
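
As a minimal sketch (not from the slides), a CPT for a Boolean node can be stored as a table keyed by tuples of parent values; the node name and the numbers below are only illustrative:

```python
# CPT for Alarm with parents (Burglary, Earthquake); numbers are illustrative.
# Each key is a combination of parent values, each value is P(Alarm=true | parents).
alarm_cpt = {
    (True, True):   0.95,
    (True, False):  0.94,
    (False, True):  0.29,
    (False, False): 0.001,
}

def p_alarm(value, burglary, earthquake):
    """P(Alarm = value | Burglary = burglary, Earthquake = earthquake)."""
    p_true = alarm_cpt[(burglary, earthquake)]
    return p_true if value else 1.0 - p_true

print(p_alarm(True, True, False))    # 0.94
print(p_alarm(False, False, False))  # 0.999
```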

  4. Example • Topology of network encodes conditional independence assertions: • Weather is independent of the other variables • Toothache and Catch are conditionally independent given Cavity

  5. Example • I'm at work, neighbor John calls to say my alarm is ringing, but neighbor Mary doesn't call. Sometimes it's set off by minor earthquakes. Is there a burglar? • Variables: Burglary, Earthquake, Alarm, JohnCalls, MaryCalls • Network topology reflects "causal" knowledge: • A burglar can set the alarm off • An earthquake can set the alarm off • The alarm can cause Mary to call • The alarm can cause John to call

  6. Example contd.

  7. Compactness • A CPT for Boolean Xi with k Boolean parents has 2^k rows for the combinations of parent values • Each row requires one number p for Xi = true (the number for Xi = false is just 1 - p) • If each variable has no more than k parents, the complete network requires O(n · 2^k) numbers • I.e., grows linearly with n, vs. O(2^n) for the full joint distribution • For burglary net, 1 + 1 + 4 + 2 + 2 = 10 numbers (vs. 2^5 - 1 = 31)
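
A tiny sketch of the same count in Python, using the parent counts of the burglary network from the example above:

```python
# Number of parents of each node in the burglary network.
parents_per_node = {"Burglary": 0, "Earthquake": 0, "Alarm": 2,
                    "JohnCalls": 1, "MaryCalls": 1}

# Each Boolean node with k Boolean parents needs 2**k numbers in its CPT.
cpt_numbers = sum(2 ** k for k in parents_per_node.values())
full_joint_numbers = 2 ** len(parents_per_node) - 1

print(cpt_numbers)         # 1 + 1 + 4 + 2 + 2 = 10
print(full_joint_numbers)  # 2**5 - 1 = 31
```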

  8. Semantics The full joint distribution is defined as the product of the local conditional distributions: P(X1, …, Xn) = ∏_{i=1}^{n} P(Xi | Parents(Xi)) e.g., P(j ∧ m ∧ a ∧ ¬b ∧ ¬e) = P(j | a) P(m | a) P(a | ¬b, ¬e) P(¬b) P(¬e)
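
A minimal sketch of this product for the burglary network; the CPT numbers are illustrative (they are not given on this slide) and the function names are my own:

```python
def p(p_true, value):
    """Return p_true if value is True, otherwise 1 - p_true."""
    return p_true if value else 1.0 - p_true

# Illustrative CPT for Alarm given (Burglary, Earthquake).
ALARM = {(True, True): 0.95, (True, False): 0.94,
         (False, True): 0.29, (False, False): 0.001}

def joint(b, e, a, j, m):
    """P(B=b, E=e, A=a, J=j, M=m) as the product of the local conditionals."""
    return (p(0.001, b) * p(0.002, e) * p(ALARM[(b, e)], a)
            * p(0.90 if a else 0.05, j) * p(0.70 if a else 0.01, m))

# P(j ∧ m ∧ a ∧ ¬b ∧ ¬e): no burglary, no earthquake, alarm rings, both call.
print(joint(False, False, True, True, True))  # ≈ 0.00063 with these numbers
```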

  9. Constructing Bayesian networks • 1. Choose an ordering of variables X1, …, Xn • 2. For i = 1 to n • add Xi to the network • select parents from X1, …, Xi-1 such that P(Xi | Parents(Xi)) = P(Xi | X1, …, Xi-1) This choice of parents guarantees: P(X1, …, Xn) = ∏_{i=1}^{n} P(Xi | X1, …, Xi-1) (chain rule) = ∏_{i=1}^{n} P(Xi | Parents(Xi)) (by construction)

  10. Example • Suppose we choose the ordering M, J, A, B, E P(J | M) = P(J)?

  11. Example • Suppose we choose the ordering M, J, A, B, E P(J | M) = P(J)? No P(A | J, M) = P(A | J)? P(A | J, M) = P(A)?

  12. Example • Suppose we choose the ordering M, J, A, B, E P(J | M) = P(J)? No P(A | J, M) = P(A | J)? P(A | J, M) = P(A)? No P(B | A, J, M) = P(B | A)? P(B | A, J, M) = P(B)?

  13. Example • Suppose we choose the ordering M, J, A, B, E P(J | M) = P(J)? No P(A | J, M) = P(A | J)? P(A | J, M) = P(A)? No P(B | A, J, M) = P(B | A)? Yes P(B | A, J, M) = P(B)? No P(E | B, A, J, M) = P(E | A)? P(E | B, A, J, M) = P(E | A, B)?

  14. Example • Suppose we choose the ordering M, J, A, B, E P(J | M) = P(J)? No P(A | J, M) = P(A | J)? P(A | J, M) = P(A)? No P(B | A, J, M) = P(B | A)? Yes P(B | A, J, M) = P(B)? No P(E | B, A, J, M) = P(E | A)? No P(E | B, A, J, M) = P(E | A, B)? Yes

  15. Example contd. • Deciding conditional independence is hard in noncausal directions • (Causal models and conditional independence seem hardwired for humans!) • Network is less compact: 1 + 2 + 4 + 2 + 4 = 13 numbers needed

  16. Inference • Complexity of inference in Bayes nets • A Bayes net reduces space complexity • A Bayes net does not reduce time complexity in the general case

  17. Conditional independence and D-separation • Two sets of nodes, X and Y, are conditionally independent given an evidence set of nodes E if every undirected path from a node in X to a node in Y is d-separated by E • A set of nodes E d-separates two sets of nodes, X and Y, if every undirected path from a node in X to a node in Y is blocked by E • A path is blocked given E if there is a node Z on the path for which one of the following holds: • Z is in E and the path passes through Z with one arrow in and one arrow out (chain) • Z is in E and the path meets at Z with both arrows leaving Z (common cause) • the path meets at Z with both arrows entering Z (common effect) and neither Z nor any of its descendants is in E

  18. Conditional independence and D-separation - example

  19. Inference • Inference by enumeration P(B | j, m) = P(B, j, m) / P(j, m) = α P(B, j, m) = α Σ_e Σ_a P(B, e, a, j, m) = α P(B) Σ_e P(e) Σ_a P(a | B, e) P(j | a) P(m | a)
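
A sketch of this enumeration for P(B | j, m) on the burglary network; as before, the CPT numbers are illustrative rather than taken from the slides:

```python
def p(p_true, value):
    """Return p_true if value is True, otherwise 1 - p_true."""
    return p_true if value else 1.0 - p_true

ALARM = {(True, True): 0.95, (True, False): 0.94,
         (False, True): 0.29, (False, False): 0.001}

def enumerate_burglary(j, m):
    """Return [P(b | j, m), P(not b | j, m)] by summing out e and a."""
    unnormalized = []
    for b in (True, False):
        total = 0.0
        for e in (True, False):
            for a in (True, False):
                total += (p(0.001, b) * p(0.002, e) * p(ALARM[(b, e)], a)
                          * p(0.90 if a else 0.05, j)
                          * p(0.70 if a else 0.01, m))
        unnormalized.append(total)
    alpha = 1.0 / sum(unnormalized)          # normalization constant
    return [alpha * x for x in unnormalized]

print(enumerate_burglary(True, True))  # ≈ [0.284, 0.716] with these numbers
```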

  20. Inference for polytrees

  21. Time complexity • Dynamic programming • Works well for polytrees (where there is at most one undirected path between any two nodes) • Doesn’t work for multiply connected networks

  22. Inference in multiply connected networks • Clustering • Cutset conditioning • Stochastic methods • Monte Carlo • likelihood weighting • Markov Chain Monte Carlo • Will be skipped in this course

  23. Clustering • Merge variables, replace their CPTs by their combined CPT

  24. Cutset conditioning • Cut the loops by instantiating a set of variables (the cutset) so that the remaining network is a polytree • Sum over all instantiations of the cutset variables

  25. Cutset conditioning - example P(W | S) = Σ_R Σ_C P(W | S, R, C) P(R, C | S) = Σ_R Σ_C P(W | S, R) P(R, C | S) P(R, C | S) = P(R, S | C) P(C) / P(S) = P(R | C) P(S | C) P(C) / P(S) (Notice here that node C d-separates R and S) The polytree computations for the two instantiations of the cutset variable are then combined, weighted by P(C = +) and P(C = -)

  26. Stochastic Inference • It’s expensive to work with the full joint distribution… whether as a table or as a Bayes Network • Is approximation good enough? • Monte Carlo

  27. Monte Carlo • Use samples to approximate the solution • Simulated annealing used Monte Carlo arguments to justify why random guesses, and sometimes moving uphill, can still lead to optimality • More samples = better approximation • How many are needed? • Where should you take the samples?

  28. Prior sampling • Generate samples from the network’s prior joint distribution: sample each variable in topological order, conditioning on the values already drawn for its parents
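
A minimal prior-sampling sketch on the sprinkler network used in the rejection-sampling example below; the CPT numbers are illustrative and the variable order (Cloudy, Sprinkler, Rain, WetGrass) is assumed:

```python
import random

CLOUDY_P = 0.5
SPRINKLER = {True: 0.10, False: 0.50}            # P(Sprinkler=true | Cloudy)
RAIN = {True: 0.80, False: 0.20}                 # P(Rain=true | Cloudy)
WET = {(True, True): 0.99, (True, False): 0.90,  # P(WetGrass=true | Sprinkler, Rain)
       (False, True): 0.90, (False, False): 0.0}

def prior_sample():
    """Sample every variable in topological order, parents before children."""
    c = random.random() < CLOUDY_P
    s = random.random() < SPRINKLER[c]
    r = random.random() < RAIN[c]
    w = random.random() < WET[(s, r)]
    return {"Cloudy": c, "Sprinkler": s, "Rain": r, "WetGrass": w}

samples = [prior_sample() for _ in range(10000)]
# The fraction of samples with Rain=true approaches P(Rain=true) = 0.5.
print(sum(s["Rain"] for s in samples) / len(samples))
```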

  29. Approximating true distribution • With enough samples, the empirical distribution of the samples converges to the true joint distribution

  30. Rejection sampling • Compute P(X | e) • Use PriorSample (SPS) to create N samples • Denote by N_e the number of samples consistent with the evidence (E = e) • Denote by N_ex the number of samples consistent with E = e AND X = x • P(x | e) can then be estimated from N_ex / N_e

  31. Example • P(Rain | Sprinkler = true) • Use the Bayes net to generate 100 samples • Suppose 73 have Sprinkler=false • Suppose 27 have Sprinkler=true • Of those 27: 8 have Rain=true • 19 have Rain=false • P(Rain | Sprinkler=true) = <8/27; 19/27>
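
A sketch of rejection sampling for this query, assuming the same illustrative sprinkler-network CPTs as in the prior-sampling sketch above:

```python
import random

SPRINKLER = {True: 0.10, False: 0.50}
RAIN = {True: 0.80, False: 0.20}
WET = {(True, True): 0.99, (True, False): 0.90,
       (False, True): 0.90, (False, False): 0.0}

def prior_sample():
    c = random.random() < 0.5
    s = random.random() < SPRINKLER[c]
    r = random.random() < RAIN[c]
    w = random.random() < WET[(s, r)]
    return {"Cloudy": c, "Sprinkler": s, "Rain": r, "WetGrass": w}

def estimate_rain_given_sprinkler(n):
    """Estimate P(Rain=true | Sprinkler=true): keep only consistent samples."""
    n_e = n_ex = 0
    for _ in range(n):
        sample = prior_sample()
        if sample["Sprinkler"]:       # consistent with the evidence E = e
            n_e += 1
            if sample["Rain"]:        # also consistent with X = x
                n_ex += 1
    return n_ex / n_e if n_e else None

print(estimate_rain_given_sprinkler(100000))  # ≈ 0.30 with these CPT numbers
```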

  32. Problems with rejection sampling • Standard deviation of the error in probability is proportional to 1/sqrt(n), where n is the number of samples consistent with evidence • As problems become complex, number of samples consistent with evidence becomes small and it becomes harder to construct accurate estimates

  33. Likelihood weighting • We only want to generate samples that are consistent with the evidence, e • We’ll sample the Bayes net, but not every random variable is sampled: the evidence variables are forced to their observed values

  34. Example • P (Rain | Sprinkler=true, WetGrass=true)

  35. Example • P (Rain | Sprinkler=true, WetGrass=true) • First, the weight w is set to 1.0

  36. Example • Notice that the weight is multiplied down according to how likely each evidence variable’s observed value is given its parents • So the final probability combines what comes from sampling the free variables with the weights contributed by the constrained evidence variables
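
A sketch of likelihood weighting for P(Rain | Sprinkler=true, WetGrass=true) under the same illustrative CPTs; the evidence variables are fixed and contribute to the weight instead of being sampled:

```python
import random

SPRINKLER = {True: 0.10, False: 0.50}
RAIN = {True: 0.80, False: 0.20}
WET = {(True, True): 0.99, (True, False): 0.90,
       (False, True): 0.90, (False, False): 0.0}

def weighted_sample():
    """One weighted sample for P(Rain | Sprinkler=true, WetGrass=true)."""
    w = 1.0
    c = random.random() < 0.5        # Cloudy: non-evidence, sample it
    w *= SPRINKLER[c]                # Sprinkler=true: evidence, weight instead
    r = random.random() < RAIN[c]    # Rain: non-evidence, sample it
    w *= WET[(True, r)]              # WetGrass=true: evidence, weight instead
    return r, w

def likelihood_weighting(n):
    totals = {True: 0.0, False: 0.0}
    for _ in range(n):
        r, w = weighted_sample()
        totals[r] += w
    return totals[True] / (totals[True] + totals[False])

print(likelihood_weighting(100000))  # ≈ 0.32 with these CPT numbers
```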

  37. Comparing techniques • In likelihood weighting, the evidence is taken into account while each sample is being generated • In rejection sampling, the evidence is only checked after a full sample has been generated • Likelihood weighting still falls short of the true posterior P(z | e), because each sampled variable takes into account only the evidence among its ancestors, ignoring evidence among its non-ancestors

  38. Likelihood weighting • Uses all the samples (none are rejected) • As the number of evidence variables increases, most samples receive very small weights, the estimate is dominated by a few high-weight samples, and its quality drops

  39. Gibbs sampling • Start with an arbitrary state of the network, with the evidence variables set to their observed values • Repeatedly resample a value for a non-evidence variable, conditioned on its Markov blanket: its parents, its children, and its children’s other parents • Each state sampled contributes equally to the estimate of the query
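
A sketch of Gibbs sampling for the same query, with the evidence fixed and each non-evidence variable resampled from its Markov blanket (illustrative CPTs; burn-in handling is omitted):

```python
import random

SPRINKLER = {True: 0.10, False: 0.50}
RAIN = {True: 0.80, False: 0.20}
WET = {(True, True): 0.99, (True, False): 0.90,
       (False, True): 0.90, (False, False): 0.0}

def flip(p_true, p_false):
    """Sample True with probability p_true / (p_true + p_false)."""
    return random.random() < p_true / (p_true + p_false)

def gibbs_rain(n):
    """Estimate P(Rain=true | Sprinkler=true, WetGrass=true)."""
    c, r = True, True          # arbitrary start; evidence stays fixed at true
    rain_count = 0
    for _ in range(n):
        # Resample Cloudy given its Markov blanket (Sprinkler, Rain):
        # P(C | mb) ∝ P(C) P(Sprinkler=true | C) P(Rain=r | C)
        c = flip(0.5 * SPRINKLER[True] * (RAIN[True] if r else 1 - RAIN[True]),
                 0.5 * SPRINKLER[False] * (RAIN[False] if r else 1 - RAIN[False]))
        # Resample Rain given its Markov blanket (Cloudy, WetGrass, Sprinkler):
        # P(R | mb) ∝ P(R | C=c) P(WetGrass=true | Sprinkler=true, R)
        r = flip(RAIN[c] * WET[(True, True)],
                 (1 - RAIN[c]) * WET[(True, False)])
        rain_count += r
    return rain_count / n

print(gibbs_rain(100000))  # ≈ 0.32 with these CPT numbers
```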

  40. Some Applications of BN • Medical diagnosis, e.g., lymph-node diseases • Troubleshooting of hardware/software systems • Fraud/uncollectible debt detection • Data mining • Analysis of genetic sequences • Data interpretation, computer vision, image understanding

  41. Identification of buildings in aerial photographs

  42. Evaluation of insurance risks
