
CS 188: Artificial Intelligence Spring 2007



  1. CS 188: Artificial Intelligence, Spring 2007 • Lecture 16: Review • 3/8/2007 • Srini Narayanan – ICSI and UC Berkeley

  2. Midterm Structure • 5 questions • Search (HW 1 and HW 2) • CSP (Written 2) • Games (HW 4) • Logic (HW 3) • Probability/BN (Today’s lecture) • One-page cheat sheet and calculator allowed. • Midterm weight: 15% of your total grade. • Today: Review 1, mostly Probability/BN • Sunday: Review 2, all topics review and Q/A

  3. Probabilities • What you are expected to know • Basics • Conditional and Joint Distributions • Bayes Rule • Converting from conditional to joint and vice-versa • Independence and conditional independence • Graphical Models/Bayes Nets • Building nets from descriptions of problems • Conditional Independence • Inference by Enumeration from the net • Approximate inference (Prior sampling, rejection sampling, likelihood weighting)

  4. Review: Useful Rules • Conditional Probability (definition): P(x | y) = P(x, y) / P(y) • Chain Rule: P(x1, …, xn) = P(x1) P(x2 | x1) ⋯ P(xn | x1, …, xn−1) • Bayes Rule: P(x | y) = P(y | x) P(x) / P(y)
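
These three rules are easy to sanity-check numerically. Below is a minimal Python sketch with made-up numbers for a rain/wet example (none of the values come from the lecture):

```python
# Made-up probabilities for illustration.
p_rain = 0.3                      # P(rain)
p_wet_given_rain = 0.9            # P(wet | rain)
p_wet_given_dry = 0.2             # P(wet | not rain)

# Product/chain rule: P(wet, rain) = P(wet | rain) P(rain)
p_wet_and_rain = p_wet_given_rain * p_rain

# Marginalization: P(wet) = sum over both values of rain
p_wet = p_wet_given_rain * p_rain + p_wet_given_dry * (1 - p_rain)

# Bayes rule: P(rain | wet) = P(wet | rain) P(rain) / P(wet)
p_rain_given_wet = p_wet_and_rain / p_wet
print(round(p_rain_given_wet, 3))  # 0.27 / 0.41 ≈ 0.659
```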

  5. Marginalization • Marginalization (or summing out) is projecting a joint distribution down to a sub-distribution over a subset of the variables: e.g. P(x) = Σy P(x, y)
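
A small sketch of summing out, with the joint stored as a dictionary from assignments to probabilities; the table and variable names below are invented for illustration:

```python
from collections import defaultdict

# Made-up joint distribution over (Weather, Temperature); entries sum to 1.
joint = {
    ('sun', 'hot'): 0.4, ('sun', 'cold'): 0.2,
    ('rain', 'hot'): 0.1, ('rain', 'cold'): 0.3,
}

def marginalize(joint, keep_index):
    """Sum out every variable except the one at position keep_index."""
    marginal = defaultdict(float)
    for assignment, p in joint.items():
        marginal[assignment[keep_index]] += p
    return dict(marginal)

print(marginalize(joint, 0))  # P(Weather): sun 0.6, rain 0.4 (up to float rounding)
```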

  6. The Product Rule • P(x, y) = P(x | y) P(y) • Sometimes the joint P(X, Y) is easy to get • Sometimes it is easier to get the conditional P(X | Y) • Example: P(sun, dry)?

  7. Conditional Independence • Reminder: independence • X and Y are independent (X ⊥ Y) iff P(x, y) = P(x) P(y) for all x, y, or equivalently, P(x | y) = P(x) • X and Y are conditionally independent given Z (X ⊥ Y | Z) iff P(x, y | z) = P(x | z) P(y | z) for all x, y, z, or equivalently, P(x | y, z) = P(x | z) • (Conditional) independence is a property of a distribution

  8. Conditional Independence • For each statement about distributions over X, Y, and Z, if the statement is not always true, state a conditional independence assumption which makes it true. • P(x | y) = P(x, y) / P(y) • P(x, y) = P(x) P(y) • P(x, y, z) = P(x | z) P(y | z) P(z) • P(x, y, z) = P(x) P(y) P(z | x, y) • P(x, y) = Σz P(x, y, z)

  9. Conditional Independence • For each statement about distributions over X, Y, and Z, if the statement is not always true, state a conditional independence assumption which makes it true. • P(x | y) = P(x, y) / P(y) • always true • P(x, y) = P(x) P(y) • true if X and Y are independent • P(x, y, z) = P(x | z) P(y | z) P(z) • true if X and Y are independent given Z • P(x, y, z) = P(x) P(y) P(z | x, y) • true if X and Y are independent • P(x, y) = Σz P(x, y, z) • always true
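
To see statements 2 and 3 concretely, here is a sketch that constructs a joint in which X and Y are conditionally independent given Z by design, then checks that marginal independence (statement 2) does not follow; all numbers are invented:

```python
import itertools

# Build a joint over binary X, Y, Z where X ⊥ Y | Z by construction.
p_z = {0: 0.5, 1: 0.5}
p_x_given_z = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}   # p_x_given_z[z][x]
p_y_given_z = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.1, 1: 0.9}}   # p_y_given_z[z][y]

joint = {(x, y, z): p_x_given_z[z][x] * p_y_given_z[z][y] * p_z[z]
         for x, y, z in itertools.product((0, 1), repeat=3)}

def marg(positions):
    """Marginal over a subset of the (x, y, z) positions."""
    out = {}
    for (x, y, z), p in joint.items():
        key = tuple((x, y, z)[i] for i in positions)
        out[key] = out.get(key, 0.0) + p
    return out

p_xy, p_x, p_y = marg((0, 1)), marg((0,)), marg((1,))
# Statement 3 holds here by construction; statement 2 generally fails:
print(all(abs(p_xy[x, y] - p_x[(x,)] * p_y[(y,)]) < 1e-9
          for x, y in itertools.product((0, 1), repeat=2)))  # False: X and Y not marginally independent
```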

  10. Conditional and Joint Distributions • Suppose I want to determine the joint distribution P(W, X, Y, Z). • Assume I know P(X, Y, Z) and P(W | X, Y) • What assumptions do I need to make to compute P(W, X, Y, Z)?

  11. Conditional and Joint Distributions • Suppose I want to determine the joint distribution P(W, X, Y, Z). • Assume I know P(X, Y, Z) and P(W | X, Y) • What assumptions do I need to make to compute P(W, X, Y, Z)? • ANS: • P(X, Y, Z, W) = P(X, Y, Z | W) P(W) = P(W | X, Y, Z) P(X, Y, Z) • If I assume P(W | X, Y, Z) = P(W | X, Y), i.e. W is independent of Z given both X and Y, I can compute the joint.
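
A sketch of the construction under that assumption, with placeholder binary tables standing in for the given P(X, Y, Z) and P(W | X, Y):

```python
import itertools

# Placeholder tables (uniform P(X, Y, Z) and a fixed P(W | X, Y)), purely for illustration.
p_xyz = {(x, y, z): 0.125 for x, y, z in itertools.product((0, 1), repeat=3)}
p_w_given_xy = {(x, y): {0: 0.3, 1: 0.7} for x, y in itertools.product((0, 1), repeat=2)}

# Assuming P(W | X, Y, Z) = P(W | X, Y), the joint is P(W | X, Y) * P(X, Y, Z).
p_wxyz = {(w, x, y, z): p_w_given_xy[x, y][w] * p_xyz[x, y, z]
          for w, x, y, z in itertools.product((0, 1), repeat=4)}

print(abs(sum(p_wxyz.values()) - 1.0) < 1e-9)  # sanity check: True, the joint sums to 1
```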

  12. Graphical Model Notation • Nodes: variables (with domains) • Can be assigned (observed) or unassigned (unobserved) • Arcs: interactions • Similar to CSP constraints • Indicate “direct influence” between variables • An arrow from a to b means b “depends on” a; often the arrows indicate causation

  13. Bayes’ Net Semantics • Let’s formalize the semantics of a Bayes’ net • A set of nodes, one per variable X • A directed, acyclic graph • A conditional distribution for each node • A distribution over X, for each combination of parents’ values • CPT: conditional probability table • Description of a noisy “causal” process (figure: parents A1 … An with arcs into X) • A Bayes net = Topology (graph) + Local Conditional Probabilities

  14. Probabilities in BNs • Bayes’ nets implicitly encode joint distributions • As a product of local conditional distributions • To see what probability a BN gives to a full assignment, multiply all the relevant conditionals together: P(x1, …, xn) = Πi P(xi | parents(Xi)) • Example: the alarm network on the next slide • This lets us reconstruct any entry of the full joint • Not every BN can represent every full joint • The topology enforces certain conditional independencies
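
A minimal sketch of this product-of-CPTs computation; the two-node Rain → Traffic net and its numbers are hypothetical, not the alarm network from the next slide:

```python
# A full assignment's probability is the product of each node's CPT entry,
# conditioned on that node's parents' values in the same assignment.
def joint_probability(assignment, parents, cpts):
    """assignment: {var: value}; parents: {var: [parent vars]};
    cpts: {var: {(tuple of parent values, value): probability}}."""
    p = 1.0
    for var, value in assignment.items():
        parent_values = tuple(assignment[p_var] for p_var in parents[var])
        p *= cpts[var][parent_values, value]
    return p

# Hypothetical two-node net Rain -> Traffic (numbers are made up):
parents = {'Rain': [], 'Traffic': ['Rain']}
cpts = {
    'Rain':    {((), True): 0.1, ((), False): 0.9},
    'Traffic': {((True,), True): 0.8, ((True,), False): 0.2,
                ((False,), True): 0.3, ((False,), False): 0.7},
}
print(joint_probability({'Rain': True, 'Traffic': True}, parents, cpts))  # 0.1 * 0.8 ≈ 0.08
```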

  15. Example: Alarm Network • One entry of the joint is a product of CPT entries: .001 × .002 × .05 × .05 × .01 = 5 × 10^−11

  16. Analyzing Independence • Arc between nodes ==> (possible) dependence • What if there is no direct arc? • To answer this question in general, we only need to understand 3-node graphs with 2 arcs • Cast of characters: “Causal Chain” (X → Y → Z), “Common Cause” (X ← Y → Z), “Common Effect” (X → Z ← Y)

  17. Example (figure: independence questions on small example networks, with answers marked Yes)

  18. Example • Variables: • R: Raining • T: Traffic • D: Roof drips • S: I’m sad • Questions: (figure: a network over R, T, D, S with independence questions and answers)

  19. Question • Which nets guarantee each statement: (figure: statements 1 and 2, and three candidate nets X, Y, Z over variables A, B, C)

  20. Approximate Inference: Prior Sampling (figure: the Cloudy → Sprinkler, Cloudy → Rain, Sprinkler/Rain → WetGrass network, with values sampled top-down from each CPT)

  21. Example (the Cloudy/Sprinkler/Rain/WetGrass network) • We’ll get a bunch of samples from the BN: five full assignments of (C, S, R, W) • If we want to know P(W) • We have counts <+w: 4, ¬w: 1> • Normalize to get P(W) = <+w: 0.8, ¬w: 0.2> • This will get closer to the true distribution with more samples • Can estimate anything else, too • What about P(C | r)? P(C | r, w)?
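
A sketch of prior sampling for this Cloudy/Sprinkler/Rain/WetGrass net; the CPT numbers below are illustrative stand-ins, not necessarily the ones on the slide:

```python
import random

def sample(p_true):
    """Draw True with probability p_true."""
    return random.random() < p_true

def prior_sample():
    # Sample each variable in topological order, given its parents' sampled values.
    c = sample(0.5)
    s = sample(0.1 if c else 0.5)
    r = sample(0.8 if c else 0.2)
    w = sample({(True, True): 0.99, (True, False): 0.90,
                (False, True): 0.90, (False, False): 0.01}[s, r])
    return c, s, r, w

samples = [prior_sample() for _ in range(10000)]
# Estimate P(+w) by counting and normalizing, as on the slide.
p_w = sum(w for *_, w in samples) / len(samples)
print(round(p_w, 2))
```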

  22. Rejection Sampling (same network) • Let’s say we want P(C | s) • Same thing: tally C outcomes, but ignore (reject) samples which don’t have S = s • This is rejection sampling • It is also consistent (correct in the limit) • Example: from the samples on the previous slide, keep only those with S = s
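
Rejection sampling is a small change on top of the prior-sampling sketch above (this snippet reuses prior_sample() from it):

```python
# Estimate P(+c | +s): discard samples that don't match the evidence S = true,
# then tally Cloudy among the survivors.
def rejection_sample_p_c_given_s(n):
    kept = [c for c, s, r, w in (prior_sample() for _ in range(n)) if s]
    return sum(kept) / len(kept) if kept else None  # fraction of surviving samples with C true

print(round(rejection_sample_p_c_given_s(20000), 2))
```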

  23. Likelihood Weighting • Problem with rejection sampling: • If evidence is unlikely, you reject a lot of samples • You don’t exploit your evidence as you sample • Consider P(B | a) in a Burglary → Alarm network • Idea: fix evidence variables and sample the rest • Problem: the sample distribution is not consistent! • Solution: weight each sample by the probability of the evidence given its parents
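
Continuing the same sketch, a hedged implementation of likelihood weighting for the sprinkler net; it reuses sample() and the same illustrative CPTs, fixing the evidence variables instead of sampling them:

```python
def weighted_sample(evidence):
    """Sample non-evidence variables; weight by P(evidence value | parents)."""
    weight = 1.0
    c = sample(0.5)
    p_s = 0.1 if c else 0.5
    if 'S' in evidence:
        s = evidence['S']; weight *= p_s if s else (1 - p_s)
    else:
        s = sample(p_s)
    r = sample(0.8 if c else 0.2)
    p_w = {(True, True): 0.99, (True, False): 0.90,
           (False, True): 0.90, (False, False): 0.01}[s, r]
    if 'W' in evidence:
        w = evidence['W']; weight *= p_w if w else (1 - p_w)
    else:
        w = sample(p_w)
    return (c, s, r, w), weight

def lw_estimate_p_c(evidence, n=20000):
    # Weighted average of the query variable over the weighted samples.
    num = den = 0.0
    for _ in range(n):
        (c, *_), weight = weighted_sample(evidence)
        num += weight * c
        den += weight
    return num / den

print(round(lw_estimate_p_c({'S': True, 'W': True}), 2))  # estimate of P(+c | +s, +w)
```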

  24. Likelihood Sampling (figure: the Cloudy/Sprinkler/Rain/WetGrass network, sampled with the evidence variables fixed and weights accumulated)

  25. Design of BN • When designing a Bayes net, why do we not make every variable depend on as many other variables as possible?

  26. Design of a BN • You are considering founding a startup to make AI-based robots to do household chores, and you want to reason about your future. There are three ways you can possibly get rich (R): your company can go public via an IPO (I), it can be acquired (A), or you can win the lottery (L). Your company cannot go public if it gets acquired. Of course, in order for your company to either go public or get acquired, your robot has to actually work (W). You decide that if you do strike it rich then you will probably retire to Hawaii (H) to live the good life. • Draw a graphical model for the problem that reflects the causal structure as stated.

  27. Bayes Net for the Question

  28. Independence • Which of the following independence properties are true for your network? • A ind I • L ind I • L ind I|R • L ind W|H • W ind H|L • W ind H|R

  29. Independence • Which of the following independence properties are true for your network? • A ind I • L ind I True • L ind I|R • L ind W|H • W ind H|L • W ind H|R True

  30. Inference • Write out an expression for an entry P(a, h, i, l, r, w) of the joint distribution encoded by your network, P(A, H, I, L, R, W), in terms of quantities provided by the network.

  31. Inference • Write out an expression for an entry P(a, h, i, l, r, w) of the joint distribution encoded by your network, P(A, H, I, L, R, W), in terms of quantities provided by the network. • P(a, h, i, l, r, w) = P(w) P(a | w) P(i | w, a) P(r | a, i, l) P(h | r)

  32. The three prisoners • Three prisoners A, B, and C have been tried for murder. • Their verdicts will be read and the sentence executed tomorrow. • They know that only one of them will be declared guilty and hanged; the other two will be set free. • The identity of the guilty prisoner is not known to the prisoners, only to a prison guard. • In the middle of the night, prisoner A calls the guard over and makes the following request. • A to Guard: Please take this letter to one of my friends, the one who is to be released. You and I know that at least one of the others (B, C) will be freed. • The guard agrees. • An hour later, A calls the guard and asks, “Can you tell me which person (B or C) you gave the letter to? This should give me no clue about my chances, since either of them had an equal chance of receiving the letter.” • The guard answers, “I gave the letter to B. B will be released tomorrow.” • A thinks, “Before I talked to the guard, my chances of being executed were 1 in 3. Now that he has told me that B will be released, only C and I remain, so my chances are 1 in 2. What did I do wrong? I made certain not to ask for any information relevant to my own fate.” • Question: What is A’s chance of perishing at dawn, 1 in 2 or 1 in 3? Why?
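
One way to check your answer is to simulate the guard's protocol. This sketch assumes, as stated, that the guard picks uniformly between B and C when A is the guilty one:

```python
import random

# Estimate P(A is guilty | the guard names B) by Monte Carlo simulation.
def trial():
    guilty = random.choice('ABC')
    if guilty == 'A':
        named = random.choice('BC')   # both B and C will be freed; guard picks one at random
    elif guilty == 'B':
        named = 'C'                   # guard must name someone being freed
    else:
        named = 'B'
    return guilty, named

results = [trial() for _ in range(100000)]
named_b = [guilty for guilty, named in results if named == 'B']
print(round(sum(g == 'A' for g in named_b) / len(named_b), 2))  # compare against 1/2 vs 1/3
```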

  33. Topic Review • Search • CSP • Games • Logic

  34. Search • Uninformed Search • DFS, BFS • Uniform Cost • Iterative Deepening • Informed Search • Best first greedy • A* • Admissibility • Consistency • Coming up with admissible heuristics • relaxed problem • Local Search

  35. CSP • Formulating problems as CSPs • Basic solution with DFS with backtracking • Heuristics (Min Remaining Value, LCV) • Forward Checking • Arc consistency for CSP

  36. Games • Problem formulation • Minimax and zero sum two player games • Alpha-Beta pruning

  37. Logic • Basics: Entailment, satisfiability, validity • Prop Logic • Truth tables, enumeration • converting propositional sentences to CNF • Propositional resolution • First Order Logic • Basics: Objects, relations, functions, quantifiers • Converting NL sentences into FOL

  38. Search Review • Uninformed Search • DFS, BFS • Uniform Cost • Iterative Deepening • Informed Search • Best first greedy • A* • Admissible • Consistency • Relaxed problem for heuristics • Local Search

  39. Combining UCS and Greedy • Uniform-cost orders by path cost, or backward cost g(n) • Best-first orders by goal proximity, or forward cost h(n) • A* Search orders by the sum: f(n) = g(n) + h(n) • (figure: example graph with edge costs and heuristic values, due to Teg Grenager)
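
A compact A* sketch that orders the frontier by f(n) = g(n) + h(n); the small graph and heuristic at the bottom are hypothetical, not the example from the slide:

```python
import heapq

def a_star(start, goal, graph, h):
    """graph: node -> list of (neighbor, step_cost); h: node -> heuristic estimate."""
    frontier = [(h(start), 0, start, [start])]   # entries are (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for neighbor, cost in graph.get(node, []):
            new_g = g + cost
            if new_g < best_g.get(neighbor, float('inf')):
                best_g[neighbor] = new_g
                heapq.heappush(frontier, (new_g + h(neighbor), new_g, neighbor, path + [neighbor]))
    return None

# Hypothetical graph and (admissible) heuristic, not the one on the slide:
graph = {'S': [('a', 2), ('b', 1)], 'a': [('G', 6)], 'b': [('c', 1)], 'c': [('G', 3)]}
h = {'S': 4, 'a': 5, 'b': 4, 'c': 3, 'G': 0}.get
print(a_star('S', 'G', graph, h))  # (5, ['S', 'b', 'c', 'G'])
```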

  40. Admissible Heuristics • A heuristic h is admissible (optimistic) if: 0 ≤ h(n) ≤ h*(n), where h*(n) is the true cost to a nearest goal • E.g. Euclidean distance on a map problem • Coming up with admissible heuristics is most of what’s involved in using A* in practice.

  41. Trivial Heuristics, Dominance • Dominance: ha dominates hc if ∀n: ha(n) ≥ hc(n) • Heuristics form a semi-lattice: • Max of admissible heuristics is admissible • Trivial heuristics • Bottom of lattice is the zero heuristic (what does this give us?) • Top of lattice is the exact heuristic

  42. Constraint Satisfaction Problems • Standard search problems: • State is a “black box”: any old data structure • Goal test: any function over states • Successors: any map from states to sets of states • Constraint satisfaction problems (CSPs): • State is defined by variables Xi with values from a domain D (sometimes D depends on i) • Goal test is a set of constraints specifying allowable combinations of values for subsets of variables • Simple example of a formal representation language • Allows useful general-purpose algorithms with more power than standard search algorithms

  43. Constraint Graphs • Binary CSP: each constraint relates (at most) two variables • Constraint graph: nodes are variables, arcs show constraints • General-purpose CSP algorithms use the graph structure to speed up search. E.g., Tasmania is an independent subproblem!

  44. Improving Backtracking • General-purpose ideas can give huge gains in speed: • Which variable should be assigned next? • In what order should its values be tried? • Can we detect inevitable failure early? • Can we take advantage of problem structure?

  45. Minimum Remaining Values • Minimum remaining values (MRV): • Choose the variable with the fewest legal values • Why min rather than max? • Called most constrained variable • “Fail-fast” ordering

  46. Degree Heuristic • Tie-breaker among MRV variables • Degree heuristic: • Choose the variable with the most constraints on remaining variables • Why most rather than fewest constraints?

  47. Least Constraining Value • Given a choice of variable: • Choose the least constraining value • The one that rules out the fewest values in the remaining variables • Note that it may take some computation to determine this! • Why least rather than most? • Combining these heuristics makes 1000 queens feasible

  48. Forward Checking (Australia map-coloring example: WA, NT, SA, Q, NSW, V) • Idea: Keep track of remaining legal values for unassigned variables • Idea: Terminate when any variable has no legal values
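
The ordering heuristics and forward checking from the last few slides fit together in a single backtracking solver. A sketch on a toy map-coloring CSP (the adjacency list mirrors the Australia example but is only illustrative):

```python
# Backtracking with MRV variable ordering, LCV value ordering, and forward checking,
# on a binary "neighbors must differ" CSP.
NEIGHBORS = {
    'WA': ['NT', 'SA'], 'NT': ['WA', 'SA', 'Q'], 'SA': ['WA', 'NT', 'Q', 'NSW', 'V'],
    'Q': ['NT', 'SA', 'NSW'], 'NSW': ['Q', 'SA', 'V'], 'V': ['SA', 'NSW'], 'T': [],
}
COLORS = ['red', 'green', 'blue']

def backtrack(assignment, domains):
    if len(assignment) == len(NEIGHBORS):
        return assignment
    # MRV: pick the unassigned variable with the fewest remaining legal values.
    var = min((v for v in NEIGHBORS if v not in assignment), key=lambda v: len(domains[v]))
    # LCV: try the values that rule out the fewest options in unassigned neighbors first.
    def conflicts(value):
        return sum(value in domains[n] for n in NEIGHBORS[var] if n not in assignment)
    for value in sorted(domains[var], key=conflicts):
        # Forward checking: remove this value from unassigned neighbors' domains.
        pruned = [n for n in NEIGHBORS[var] if n not in assignment and value in domains[n]]
        for n in pruned:
            domains[n].remove(value)
        if all(domains[n] for n in pruned):        # no neighbor's domain was wiped out
            result = backtrack({**assignment, var: value}, domains)
            if result:
                return result
        for n in pruned:                           # undo pruning before trying the next value
            domains[n].append(value)
    return None

print(backtrack({}, {v: list(COLORS) for v in NEIGHBORS}))
```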

  49. Constraint Propagation (Australia map-coloring example) • Forward checking propagates information from assigned to unassigned variables, but doesn't provide early detection for all failures: • NT and SA cannot both be blue! • Why didn’t we detect this yet? • Constraint propagation repeatedly enforces constraints (locally)

  50. Arc Consistency (Australia map-coloring example) • Simplest form of propagation makes each arc consistent • X → Y is consistent iff for every value x of X there is some allowed y of Y • If X loses a value, neighbors of X need to be rechecked! • Arc consistency detects failure earlier than forward checking • What’s the downside of arc consistency? • Can be run as a preprocessor or after each assignment
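
A sketch of AC-3-style propagation for binary constraints; the tiny map-coloring instance at the bottom is made up:

```python
from collections import deque

# An arc X -> Y is consistent iff every value x in X's domain has some y in Y's
# domain with constraint(x, y) true; prune unsupported values and recheck neighbors.
def ac3(domains, neighbors, constraint):
    queue = deque((x, y) for x in domains for y in neighbors[x])
    while queue:
        x, y = queue.popleft()
        removed = [vx for vx in domains[x]
                   if not any(constraint(vx, vy) for vy in domains[y])]
        if removed:
            domains[x] = [vx for vx in domains[x] if vx not in removed]
            if not domains[x]:
                return False                      # failure detected early
            for z in neighbors[x]:                # arcs into X must be rechecked
                if z != y:
                    queue.append((z, x))
    return True

# Toy "not equal" instance: one region already observed to be red.
domains = {'WA': ['red'], 'NT': ['red', 'green', 'blue'], 'SA': ['red', 'green', 'blue']}
neighbors = {'WA': ['NT', 'SA'], 'NT': ['WA', 'SA'], 'SA': ['WA', 'NT']}
print(ac3(domains, neighbors, lambda a, b: a != b), domains)  # red is pruned from NT and SA
```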
