17th October

Presentation Transcript


  1. 17th October --Project 2 can be submitted until Thursday (in-class) --Homework 3 due Thursday --Midterm next Thursday (10/26)

  2. Need to know this! If n evidence variables, we will need 2^n probabilities! What happens if there are multiple symptoms…? Conditional independence to the rescue. Suppose P(TA,Catch|Cavity) = P(TA|Cavity) * P(Catch|Cavity). A patient walked in and complained of a toothache; you assess P(Cavity|Toothache). Now you probe the patient's mouth with that steel thingie, and it catches… How do we update our belief in Cavity? P(Cavity|TA,Catch) = P(TA,Catch|Cavity) * P(Cavity) / P(TA,Catch) = a * P(TA,Catch|Cavity) * P(Cavity)
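To make the update concrete, here is a minimal numeric sketch in Python. All the probability values (P(Cavity)=0.2 and so on) are made up for illustration; only the shape of the computation comes from the slide.

```python
# Hypothetical numbers for illustration; only the structure of the
# computation comes from the slide.
p_cavity = 0.2
p_ta_given_cavity, p_ta_given_not = 0.6, 0.1
p_catch_given_cavity, p_catch_given_not = 0.9, 0.2

# Conditional independence: P(TA,Catch|Cavity) = P(TA|Cavity) * P(Catch|Cavity)
num_cavity = p_ta_given_cavity * p_catch_given_cavity * p_cavity
num_not = p_ta_given_not * p_catch_given_not * (1 - p_cavity)

# a = 1 / P(TA,Catch) just normalizes the two unnormalized terms
p_cavity_given_both = num_cavity / (num_cavity + num_not)
print(p_cavity_given_both)  # ~0.87 with these made-up numbers
```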

  3. Conditional Independence Assertions
  • We write X || Y | Z to say that the set of variables X is conditionally independent of the set of variables Y given evidence on the set of variables Z (where X, Y, Z are subsets of the set of all random variables in the domain model)
  • We saw that Bayes rule computations can exploit conditional independence assertions. Specifically, X || Y | Z implies
    • P(X & Y | Z) = P(X|Z) * P(Y|Z)
    • P(X | Y, Z) = P(X|Z)
    • P(Y | X, Z) = P(Y|Z)
  • Idea: Why not write down all conditional independence assertions that hold in a domain?
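A small sanity check of these implications: build a toy joint that satisfies X || Y | Z by construction (P(x,y,z) = P(z)P(x|z)P(y|z)) and verify that P(X|Y,Z) = P(X|Z). All the numbers below are hypothetical.

```python
import itertools

# Toy joint P(X,Y,Z) that satisfies X || Y | Z by construction.
p_z = {0: 0.3, 1: 0.7}
p_x_z = {(0, 0): 0.2, (1, 0): 0.8, (0, 1): 0.5, (1, 1): 0.5}  # P(X=x|Z=z)
p_y_z = {(0, 0): 0.9, (1, 0): 0.1, (0, 1): 0.4, (1, 1): 0.6}  # P(Y=y|Z=z)
joint = {(x, y, z): p_z[z] * p_x_z[(x, z)] * p_y_z[(y, z)]
         for x, y, z in itertools.product((0, 1), repeat=3)}

def p(pred):
    # Sum of joint entries whose assignment satisfies the predicate.
    return sum(v for k, v in joint.items() if pred(*k))

# Check P(X=1 | Y=1, Z=0) == P(X=1 | Z=0)
lhs = p(lambda x, y, z: x == 1 and y == 1 and z == 0) / \
      p(lambda x, y, z: y == 1 and z == 0)
rhs = p(lambda x, y, z: x == 1 and z == 0) / p(lambda x, y, z: z == 0)
assert abs(lhs - rhs) < 1e-12
```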

  4. Cond. Indep. Assertions (Contd)
  • Idea: Why not write down all conditional independence assertions (CIA) (X || Y | Z) that hold in a domain?
  • Problem: There can be exponentially many conditional independence assertions that hold in a domain (recall that X, Y and Z are all subsets of the domain variables).
  • Brilliant Idea: Maybe we should implicitly specify the CIA by writing down the "dependencies" between variables using a graphical model.
  • A Bayes network is a way of doing just this.
  • The Bayes net is a directed acyclic graph whose nodes are random variables, and the immediate dependencies between variables are represented by directed arcs.
  • The topology of a Bayes network shows the inter-variable dependencies. Given the topology, there is a way of checking whether any conditional independence assertion holds in the network (the Bayes Ball algorithm and the D-Sep idea).

  5. Cond. Indep. Assertions (contd)
  • We said that a Bayes net implicitly represents a bunch of CIA.
  • Qn. If I tell you exactly which CIA hold in a domain, can you give me a Bayes net that exactly models those and only those CIA?
  • Unfortunately, NO. (See the X, Y, Z, W blog example.)
  • This is why there is another type of graphical model called "undirected graphical models".
  • In an undirected graphical model, also called a Markov random field, nodes correspond to random variables, and the immediate dependencies between variables are represented by undirected edges.
  • The CIA modeled by an undirected graphical model are different: X || Y | Z in an undirected graph if every path from a node in X to a node in Y must pass through a node in Z (so if we remove the nodes in Z, then X and Y will be disconnected). A sketch of this test appears below.
  • Undirected models are good for representing "soft constraints" between random variables (e.g. the correlation between different pixels in an image), while directed models are good for representing causal influences between variables.
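Here is the promised sketch of the undirected-graph test: X || Y | Z iff deleting the nodes in Z disconnects X from Y. The function is just breadth-first reachability that refuses to enter Z; the example graph is the X/Y/W/Z diamond from the blog question later in this transcript.

```python
# Undirected-graph CIA test: X || Y | Z iff removing Z disconnects X from Y.
def separated(graph, xs, ys, zs):
    """True if every path from xs to ys passes through zs."""
    blocked = set(zs)
    frontier = list(set(xs) - blocked)
    seen = set(frontier)
    while frontier:
        node = frontier.pop()
        if node in ys:
            return False          # found a path that avoids Z
        for nbr in graph[node]:
            if nbr not in blocked and nbr not in seen:
                seen.add(nbr)
                frontier.append(nbr)
    return True

# The diamond from the blog question: X and Y each connected to W and Z.
diamond = {'X': ['W', 'Z'], 'Y': ['W', 'Z'],
           'W': ['X', 'Y'], 'Z': ['X', 'Y']}
print(separated(diamond, ['W'], ['Z'], ['X', 'Y']))  # True: W || Z | {X,Y}
print(separated(diamond, ['W'], ['Z'], ['X']))       # False: path W-Y-Z remains
```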

  6. CIA implicit in Bayes Nets
  • So, what conditional independence assumptions are implicit in Bayes nets?
  • Local Markov Assumption:
    • A node N is independent of its non-descendants (including ancestors) given its immediate parents. (So if P are the immediate parents of N, and A is the set of ancestors, then {N} || A | P.)
    • (Equivalently) A node N is independent of all other nodes given its Markov blanket (parents, children, children's parents).
  • Given this assumption, many other conditional independencies follow. For a full answer, we need to appeal to the D-Sep condition and/or Bayes Ball reachability.
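A quick sketch of computing a Markov blanket from a parent-list representation of the DAG. The Pearl alarm network used as the example is the one referenced later in this transcript; the dict-of-parent-lists representation is just one convenient choice.

```python
# Markov blanket of a node: its parents, its children, and its
# children's other parents.
def markov_blanket(parents, node):
    children = [n for n, ps in parents.items() if node in ps]
    blanket = set(parents[node]) | set(children)
    for c in children:
        blanket |= set(parents[c])   # children's parents
    blanket.discard(node)
    return blanket

# Pearl's alarm network: B, E -> A; A -> J, M
parents = {'B': [], 'E': [], 'A': ['B', 'E'], 'J': ['A'], 'M': ['A']}
print(markov_blanket(parents, 'A'))  # {'B', 'E', 'J', 'M'}
```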

  7. Topological Semantics
  • Independence from every node holds given the Markov blanket.
  • Independence from non-descendants holds given just the parents.
  • These two conditions are equivalent; many other conditional independence assertions follow from them.

  8. Three ways to answer queries of type P(D|Evidence):
  • Directly using the joint distribution: takes O(2^n) time for most natural queries; NEEDS O(2^n) probabilities as input; probabilities are of type P(w_k), where w_k is a world.
  • Directly using Bayes rule: can take much less than O(2^n) time for most natural queries; STILL NEEDS O(2^n) probabilities as input; probabilities are of type P(X1..Xn|Y).
  • Using Bayes rule with Bayes nets: can take much less than O(2^n) time for most natural queries; can get by with anywhere between O(n) and O(2^n) probabilities, depending on the conditional independences that hold; probabilities are of type P(X1..Xn|Y).
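The counts in the last bullet can be checked mechanically. The sketch below counts CPT entries for a boolean network as one number per parent configuration per node; on the Pearl alarm network this comes out to 10 entries, versus 2^5 - 1 = 31 independent numbers for the full joint over five booleans.

```python
# Count CPT entries for a boolean Bayes net, versus the full joint.
def cpt_entries(parents):
    # One number P(X=true | parent config) per parent configuration.
    return sum(2 ** len(ps) for ps in parents.values())

pearl = {'B': [], 'E': [], 'A': ['B', 'E'], 'J': ['A'], 'M': ['A']}
print(cpt_entries(pearl))    # 10 CPT entries for the network
print(2 ** len(pearl) - 1)   # 31 for the full joint over 5 booleans
```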

  9. Review

  10. Review

  11. Bayes Ball Alg: Shade the evidence nodes. Put one ball each at the X nodes. See if any of them can find their way to any of the Y nodes.

  12. 10/19

  13. Happy Deepavali! 10/19 4th Nov, 2002.

  14. Blog Questions. You have been given the topology of a Bayes network, but haven't yet gotten the conditional probability tables (to be concrete, you may think of the Pearl alarm/earthquake scenario Bayes net). Your friend shows up and says he has the joint distribution all ready for you. You don't quite trust your friend and think he is making these numbers up. Is there any way you can prove that your friend's joint distribution is not correct? Answer: Check to see if the joint distribution given by your friend satisfies all the conditional independence assumptions. For example, in the Pearl network, compute P(J|A,M,B) and P(J|A). These two numbers should come out the same! Notice that your friend could pass all the conditional independence assertions and still be cheating re: the probabilities. For example, he may have filled up the CPTs of the network with made-up numbers (e.g. P(B)=0.9; P(E)=0.7, etc.) and computed the joint probability by multiplying the CPTs. This will satisfy all the conditional independence assertions..! The main point to understand here is that the network topology does put restrictions on the joint distribution.
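A sketch of this test in code, using made-up CPT numbers (the 0.001/0.29/0.94/0.95 values are illustrative, not from the slides): build an honestly-factored joint for the Pearl network, then compare P(J|A,M,B) against P(J|A).

```python
import itertools

# Hypothetical CPTs for the Pearl network (B, E -> A; A -> J, M).
pB, pE = 0.01, 0.02
pA = {(0, 0): 0.001, (0, 1): 0.29, (1, 0): 0.94, (1, 1): 0.95}  # P(A=1|B,E)
pJ = {0: 0.05, 1: 0.90}   # P(J=1|A)
pM = {0: 0.01, 1: 0.70}   # P(M=1|A)

def bern(p, v):  # P(V=v) for a boolean with P(V=1)=p
    return p if v else 1 - p

joint = {(b, e, a, j, m):
         bern(pB, b) * bern(pE, e) * bern(pA[(b, e)], a)
         * bern(pJ[a], j) * bern(pM[a], m)
         for b, e, a, j, m in itertools.product((0, 1), repeat=5)}

def marginal(fixed):  # sum of joint entries matching the fixed indices
    return sum(p for w, p in joint.items()
               if all(w[i] == v for i, v in fixed.items()))

# Indices: 0=B, 1=E, 2=A, 3=J, 4=M.
# P(J=1 | A=1, M=1, B=1) should equal P(J=1 | A=1) for an honest joint.
lhs = marginal({3: 1, 2: 1, 4: 1, 0: 1}) / marginal({2: 1, 4: 1, 0: 1})
rhs = marginal({3: 1, 2: 1}) / marginal({2: 1})
print(abs(lhs - rhs) < 1e-9)  # True: this joint respects the topology
```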

  15. Blog Questions (2). Continuing bad friends: in the question above, suppose a second friend comes along and says that he can give you the conditional probabilities that you want to complete the specification of your Bayes net. You ask him a CPT entry, and pat comes a response: some number between 0 and 1. This friend is well meaning, but you are worried that the numbers he is giving may lead to some sort of inconsistent joint probability distribution. Is your worry justified (i.e., can your friend give you numbers that lead to an inconsistency)? (To understand "inconsistency", consider someone who insists on giving you P(A), P(B), P(A&B) as well as P(AvB), and the numbers wind up not satisfying P(AvB) = P(A) + P(B) - P(A&B); or alternately, they insist on giving you P(A|B), P(B|A), P(A) and P(B), and the four numbers don't satisfy Bayes rule.) Answer: No. As long as we only ask the friend to fill up the CPTs in the Bayes network, there is no way the numbers won't make up a consistent joint probability distribution. This should be seen as a feature. Also, we had a digression about personal probabilities: John may be an optimist and believe that P(burglary)=0.01, and Tom may be a pessimist and believe that P(burglary)=0.99. Bayesians consider both John and Tom to be fine (they don't insist on an objective frequentist interpretation of probabilities). However, Bayesians do think that John and Tom should act consistently with their own beliefs. For example, it makes no sense for John to go about installing tons of burglar alarms given his belief, just as it makes no sense for Tom to put all his valuables on his lawn.
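The "no inconsistency" claim can be illustrated directly: fill the CPTs of a tiny network with arbitrary random numbers in [0,1] and check that the product of CPTs still sums to 1 over all worlds. The B,E -> A fragment below is just a convenient example.

```python
import itertools
import random

# Whatever values the CPT entries take, the product of CPTs is a valid
# joint distribution (non-negative and summing to 1).  Network: B, E -> A.
random.seed(0)
p_b, p_e = random.random(), random.random()                     # made-up priors
p_a = {be: random.random()                                      # P(A=1|B,E)
       for be in itertools.product((0, 1), repeat=2)}

total = 0.0
for b, e, a in itertools.product((0, 1), repeat=3):
    pb = p_b if b else 1 - p_b
    pe = p_e if e else 1 - p_e
    pa = p_a[(b, e)] if a else 1 - p_a[(b, e)]
    total += pb * pe * pa
print(abs(total - 1.0) < 1e-12)  # True for any CPT values in [0,1]
```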

  16. Blog Questions (3). Your friend heard your claims that Bayes nets can represent any possible conditional independence assertions exactly. He comes to you and says he has four random variables, X, Y, W and Z, and only TWO conditional independence assertions: X || Y | {W,Z} and W || Z | {X,Y}. He dares you to give him a Bayes network topology on these four nodes that exactly represents these and only these conditional independencies. Can you? (Note that you only need to look at 4-vertex directed graphs.) Answer: No, this is not possible. Here are two "wrong" answers: (1) Consider a disconnected graph where X, Y, W, Z are all unconnected. In this graph, the two CIA hold; unfortunately, so do many other CIA. (2) Consider a graph where W and Z are both immediate parents of X and Y. In this case, clearly X || Y | {W,Z}; however, W and Z are definitely dependent given X and Y ("explaining away"). Undirected models can capture these CIA exactly: consider a graph where X is connected to W and Z, and Y is connected to W and Z (sort of a diamond). In undirected models, CIA is defined in terms of graph separability. Since X and Y separate W and Z (i.e., every path between W and Z must pass through X or Y), W || Z | {X,Y}; similarly for the other CIA. Undirected graphs will be unable to model some scenarios that directed ones can, so you need both…
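The explaining-away point in wrong answer (2) can be seen numerically. The sketch below uses a single v-structure W -> X <- Z with made-up CPTs: W and Z are marginally independent, but once X is observed, learning Z changes our belief in W.

```python
# Explaining away in the v-structure W -> X <- Z (all numbers made up).
p_w = p_z = 0.5
p_x = {(0, 0): 0.01, (0, 1): 0.9, (1, 0): 0.9, (1, 1): 0.99}  # P(X=1|W,Z)

def joint(w, z, x):
    pw = p_w if w else 1 - p_w
    pz = p_z if z else 1 - p_z
    px = p_x[(w, z)] if x else 1 - p_x[(w, z)]
    return pw * pz * px

# Compare P(W=1 | X=1) with P(W=1 | X=1, Z=1): observing that Z holds
# "explains away" the evidence for W.
p_x1 = sum(joint(w, z, 1) for w in (0, 1) for z in (0, 1))
p_w1_x1 = sum(joint(1, z, 1) for z in (0, 1)) / p_x1
p_w1_x1_z1 = joint(1, 1, 1) / sum(joint(w, 1, 1) for w in (0, 1))
print(p_w1_x1, p_w1_x1_z1)  # ~0.675 vs ~0.524: W and Z dependent given X
```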

  17. Blog Comments.. • arc said... • I'd just like to say that this project is awesome. And not solely because it doesn't involve LISP, either! “Can’t leave well enough alone” rejoinder

  18. CIA implicit in Bayes Nets
  • So, what conditional independence assumptions are implicit in Bayes nets?
  • Local Markov Assumption:
    • A node N is independent of its non-descendants (including ancestors) given its immediate parents. (So if P are the immediate parents of N, and A is the set of ancestors, then {N} || A | P.)
    • (Equivalently) A node N is independent of all other nodes given its Markov blanket (parents, children, children's parents).
  • Given this assumption, many other conditional independencies follow. For a full answer, we need to appeal to the D-Sep condition and/or Bayes Ball reachability.

  19. Topological Semantics
  • Independence from every node holds given the Markov blanket.
  • Independence from non-descendants holds given just the parents.
  • These two conditions are equivalent; many other conditional independence assertions follow from them.

  20. Bayes Ball Alg: Shade the evidence nodes. Put one ball each at the X nodes. See if any of them can find their way to any of the Y nodes.

  21. D-Sep (Direction-Dependent Separation), for those who don't like Bayes Balls
  • X || Y | E if every undirected path from X to Y is blocked by E.
  • A path is blocked if there is a node Z on the path s.t.
    • Z is in E and Z has one arrow coming in and another going out, or
    • Z is in E and Z has both arrows going out, or
    • neither Z nor any of its descendants are in E, and both path arrows lead into Z.
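For reference, here is a compact d-separation test equivalent to the path-blocking definition above. It uses the standard "moralized ancestral graph" formulation rather than a literal Bayes Ball implementation: keep only ancestors of X, Y and E, marry co-parents and drop arrow directions, delete E, and check reachability.

```python
import itertools

def d_separated(parents, xs, ys, es):
    # 1. Restrict to the ancestral subgraph of X ∪ Y ∪ E.
    keep, stack = set(), list(xs | ys | es)
    while stack:
        n = stack.pop()
        if n not in keep:
            keep.add(n)
            stack.extend(parents[n])
    # 2. Moralize: connect co-parents, drop arrow directions.
    adj = {n: set() for n in keep}
    for n in keep:
        for p in parents[n]:
            adj[n].add(p); adj[p].add(n)
        for p, q in itertools.combinations(parents[n], 2):
            adj[p].add(q); adj[q].add(p)
    # 3. Delete evidence nodes and test reachability from X to Y.
    frontier = list(xs - es)
    seen = set(frontier)
    while frontier:
        n = frontier.pop()
        if n in ys:
            return False
        for m in adj[n] - es - seen:
            seen.add(m); frontier.append(m)
    return True

parents = {'B': [], 'E': [], 'A': ['B', 'E'], 'J': ['A'], 'M': ['A']}
print(d_separated(parents, {'B'}, {'E'}, set()))   # True: B || E a priori
print(d_separated(parents, {'B'}, {'E'}, {'A'}))   # False: explaining away
```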

  22. [Figure: the Burglary/Earthquake/Alarm network.] P(A|J,M) = P(A)? How many probabilities are needed? 13 for the new network; 10 for the old. Is this the worst case?

  23. Making the network sparse by introducing intermediate variables
  • Consider a network of boolean variables where n parent nodes are connected to m children nodes (with each parent influencing each child).
  • You will need n + m*2^n conditional probabilities.
  • Suppose you realize that what is really influencing the child nodes is some single aggregate function of the parents' values (e.g. the sum of the parents).
  • We can introduce a single intermediate node called "sum" which has links from all the n parent nodes, and separately influences each of the m child nodes.
  • You will wind up needing only n + 2^n + 2m conditional probabilities to specify this new network! (A quick numeric check follows below.)
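The promised numeric check, for hypothetical n = 5 parents and m = 10 children:

```python
# Direct n-to-m bipartite connections vs. a single intermediate "sum"
# node, for boolean variables (sizes are hypothetical).
n, m = 5, 10
direct = n + m * 2 ** n        # each of m children has a 2^n-row CPT
with_sum = n + 2 ** n + 2 * m  # "sum" has 2^n rows; each child just 2
print(direct, with_sum)        # 325 vs 57
```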

  24. Noisy-OR: we only consider the failure probabilities of the causes that hold. If r_i is the probability that X fails to hold even though the i-th parent holds, and parents j+1 .. k are the ones that hold, then P(~X | parents) = prod_{i=j+1..k} r_i. How about Noisy-AND? (Hint: A & B ≡ ~(~A v ~B).)
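A minimal sketch of this computation; r[i] is the failure probability for parent i, and only the parents that hold contribute to the product.

```python
# Noisy-OR CPT entry: X fails only if every holding cause independently
# fails to trigger it.
def noisy_or(r, holds):
    """P(X=true | parent truth values), given failure probabilities r."""
    p_fail = 1.0
    for ri, hi in zip(r, holds):
        if hi:                 # only parents that hold contribute
            p_fail *= ri
    return 1.0 - p_fail

print(noisy_or([0.1, 0.2, 0.3], [True, True, False]))  # 1 - 0.1*0.2 = 0.98
```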

  25. Constructing Belief Networks: Summary
  • Decide on what sorts of queries you are interested in answering.
    • This in turn dictates what factors to model in the network.
  • Decide on a vocabulary of the variables and their domains for the problem.
  • Introduce "hidden" variables into the network as needed to make the network "sparse".
  • Decide on an order of introduction of variables into the network.
    • Introducing variables in the causal direction leads to fewer connections (sparse structure) AND easier-to-assess probabilities.
  • Try to use canonical distributions to specify the CPTs:
    • Noisy-OR
    • Parameterized discrete/continuous distributions, such as Poisson, Normal (Gaussian), etc.
