
CS498-EA Reasoning in AI Lecture #5

Presentation Transcript


  1. CS498-EA Reasoning in AI, Lecture #5. Instructor: Eyal Amir. Fall Semester 2009

  2. Last Time • Propositional Logic • Inference in different representations • CNF: SAT hard; small representation sometimes • DNF: SAT easy; large representation • OBDDs: SAT easy; large representation sometimes • NNF: SAT hard; fewest large representations • Applications: • Circuit and program verification; computational biology
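
A minimal illustration, not from the slides, of what "SAT hard" means operationally for CNF: with no structure to exploit, a naive satisfiability check enumerates all 2^n assignments (the example formula below is made up).

    from itertools import product

    # CNF as a list of clauses; each clause is a list of signed variable indices,
    # e.g. -2 means "not x2". Example: (x1 OR not x2) AND (x2 OR x3) AND (not x1 OR not x3)
    cnf = [[1, -2], [2, 3], [-1, -3]]
    n = 3

    def satisfiable(cnf, n):
        # Brute force over all 2^n assignments; exponential time in the worst case.
        for bits in product([False, True], repeat=n):
            if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
                   for clause in cnf):
                return True
        return False

    print(satisfiable(cnf, n))   # True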

  3. Pop Quiz (5 min) • Prove or disprove: for some propositional theories on n variables, every CNF representation takes O(2^n) space (hint: how many non-equivalent theories on n variables are there?) • Give me your answer; NO IMPACT on your final score in this class
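
A small sketch, my own and not part of the quiz, that just unpacks the hint: a theory over n variables is determined by its set of satisfying assignments, so there are 2^(2^n) non-equivalent theories.

    from itertools import product

    def count_nonequivalent_theories(n):
        # A theory is determined by which of the 2^n assignments it satisfies,
        # so the count is the number of subsets of assignments: 2^(2^n).
        assignments = list(product([False, True], repeat=n))
        return 2 ** len(assignments)

    for n in range(1, 5):
        print(n, count_nonequivalent_theories(n))   # 4, 16, 256, 65536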

  4. Today • Probabilistic graphical models • Treewidth methods: • Variable elimination • Clique tree algorithm • Applications du jour: Sensor Networks

  5. Probability • A sample space Ω is the set of outcomes of a random experiment • A probability P is a function from a σ-field A on Ω (e.g., all measurable subsets of Ω; the events) to [0,1] • A random variable X is a function X: Ω → R such that for every Borel set B in R, X⁻¹(B) is in A
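
A minimal discrete illustration of these definitions (the coin-flip example is mine, not from the slides): a finite sample space, a probability given by weights on outcomes, and a random variable as a function on outcomes.

    from fractions import Fraction

    # Sample space for two fair coin flips
    omega = [("H", "H"), ("H", "T"), ("T", "H"), ("T", "T")]
    p_outcome = {w: Fraction(1, 4) for w in omega}

    def prob(event):
        # An event is a subset of the sample space
        return sum(p_outcome[w] for w in event)

    # Random variable: number of heads
    def X(w):
        return sum(1 for flip in w if flip == "H")

    # P(X = 1) = P({w : X(w) = 1})
    print(prob([w for w in omega if X(w) == 1]))   # 1/2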

  6. Independent Random Variables • Two variables X and Y are independent if • P(X = x | Y = y) = P(X = x) for all values x, y • That is, learning the value of Y does not change the prediction of X • If X and Y are independent then • P(X,Y) = P(X|Y)P(Y) = P(X)P(Y) • In general, if X1,…,Xn are independent, then P(X1,…,Xn) = P(X1)···P(Xn) • Requires O(n) parameters
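
A numeric sketch (the marginal values are made up): under independence the joint over n binary variables is the product of the marginals, so n numbers determine a table that would otherwise need 2^n − 1 parameters.

    from itertools import product

    # P(Xi = 1) for three independent binary variables (hypothetical values)
    p = [0.9, 0.2, 0.5]

    def joint(assignment, p):
        # P(X1 = x1, ..., Xn = xn) = product of P(Xi = xi) under independence
        prob = 1.0
        for xi, pi in zip(assignment, p):
            prob *= pi if xi == 1 else 1 - pi
        return prob

    # The full joint table has 2^n entries but is determined by only n parameters.
    for x in product([0, 1], repeat=len(p)):
        print(x, round(joint(x, p), 4))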

  7. Conditional Independence • Unfortunately, most random variables of interest are not independent of each other • A more suitable notion is that of conditional independence • Two variables X and Y are conditionally independent given Z if • P(X = x | Y = y, Z = z) = P(X = x | Z = z) for all values x, y, z • That is, learning the value of Y does not change the prediction of X once we know the value of Z • Notation: I(X, Y | Z)
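
A small sketch of checking I(X, Y | Z) directly from a joint table; the numbers are mine, constructed as P(Z) P(X|Z) P(Y|Z) so that the conditional independence holds.

    # Joint P(X, Y, Z) over binary variables (hypothetical numbers)
    pz = {0: 0.4, 1: 0.6}    # P(Z = z)
    px_z = {0: 0.3, 1: 0.8}  # P(X = 1 | Z = z)
    py_z = {0: 0.5, 1: 0.1}  # P(Y = 1 | Z = z)

    def p(x, y, z):
        return (pz[z]
                * (px_z[z] if x == 1 else 1 - px_z[z])
                * (py_z[z] if y == 1 else 1 - py_z[z]))

    def cond_indep(tol=1e-12):
        # I(X, Y | Z): P(X = x | Y = y, Z = z) equals P(X = x | Z = z) for all x, y, z
        for z in (0, 1):
            for y in (0, 1):
                for x in (0, 1):
                    p_x_given_yz = p(x, y, z) / sum(p(v, y, z) for v in (0, 1))
                    p_x_given_z = sum(p(x, w, z) for w in (0, 1)) / pz[z]
                    if abs(p_x_given_yz - p_x_given_z) > tol:
                        return False
        return True

    print(cond_indep())   # True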

  8. Example: Family trees • Noisy stochastic process; example: pedigree [figure: pedigree over Homer, Marge, Bart, Lisa, Maggie] • A node represents an individual's genotype • Modeling assumption: ancestors can affect descendants' genotypes only by passing genetic material through intermediate generations

  9. Markov Assumption [figure: DAG showing a node X with parents Y1 and Y2, an ancestor, non-descendants, and a descendant] • We now make this independence assumption more precise for directed acyclic graphs (DAGs) • Each random variable X is independent of its non-descendants, given its parents Pa(X) • Formally, I(X, NonDesc(X) | Pa(X))

  10. Markov Assumption Example [figure: DAG over Burglary, Earthquake, Radio, Alarm, Call] • In this example: • I(E, B) • I(B, {E, R}) • I(R, {A, B, C} | E) • I(A, R | B, E) • I(C, {B, E, R} | A)
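
A sketch that derives the statements above from the Markov assumption, assuming the usual edge set for this example (Burglary → Alarm, Earthquake → Alarm, Earthquake → Radio, Alarm → Call), which is consistent with the independencies listed on the slide.

    parents = {
        "Burglary": [],
        "Earthquake": [],
        "Radio": ["Earthquake"],
        "Alarm": ["Burglary", "Earthquake"],
        "Call": ["Alarm"],
    }
    children = {x: [c for c, ps in parents.items() if x in ps] for x in parents}

    def descendants(x):
        # All nodes reachable from x through child edges
        found, stack = set(), list(children[x])
        while stack:
            c = stack.pop()
            if c not in found:
                found.add(c)
                stack.extend(children[c])
        return found

    # Local Markov statements: I(X, NonDesc(X) \ Pa(X) | Pa(X))
    for x in parents:
        nondesc = set(parents) - {x} - descendants(x) - set(parents[x])
        print(f"I({x}, {sorted(nondesc)} | {sorted(parents[x])})")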

  11. I-Maps • A DAG G is an I-Map of a distribution P if all Markov assumptions implied by G are satisfied by P (assuming G and P are over the same set of random variables) • Examples: [figure: two small example graphs over nodes X and Y]
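
A tiny sketch for the two-variable case (the distribution values are mine): the edgeless DAG over {X, Y} implies exactly I(X, Y), so it is an I-map of P iff X and Y are independent in P; a DAG with the edge X → Y implies no independencies and is an I-map of any P over {X, Y}.

    # Hypothetical joint P(X, Y) over binary variables
    joint = {(0, 0): 0.3, (0, 1): 0.3,
             (1, 0): 0.2, (1, 1): 0.2}

    def marginal(idx, val):
        return sum(p for xy, p in joint.items() if xy[idx] == val)

    def empty_graph_is_imap():
        # The edgeless DAG is an I-map of P iff P(x, y) = P(x) P(y) for all x, y
        return all(abs(p - marginal(0, x) * marginal(1, y)) < 1e-12
                   for (x, y), p in joint.items())

    print(empty_graph_is_imap())   # True: X and Y are independent under this joint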

  12. Factorization [figure: graph over nodes X and Y] • Given that G is an I-Map of P, can we simplify the representation of P? • Example: • Since I(X, Y), we have that P(X|Y) = P(X) • Applying the chain rule, P(X,Y) = P(X|Y) P(Y) = P(X) P(Y) • Thus, we have a simpler representation of P(X,Y)
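
For context (a standard result, not stated on this slide): applying the same argument variable by variable along a topological ordering of an I-map G gives the general factorization

    P(X1, …, Xn) = Π_i P(Xi | Pa(Xi))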

  13. THE END
