Chapter 16



  1. Chapter 16 March 25, 2004

  2. Probability Theory: What an agent should believe based on the evidence • Utility Theory: What the agent wants • Decision Theory: Combines the above two theories to decide what the agent should do

  3. 16.4 Multiattribute Utility Functions • X = X1, … Xn • x = <x1, … xn> • By convention, higher values mean higher utilities • Strict Dominance, Figure 16.3 (no uncertainty) • Stochastic Dominance, Figure 16.4
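
The strict-dominance check behind Figure 16.3 can be sketched in a few lines (a minimal illustration, assuming tuple-valued outcomes and the slide's higher-is-better convention; the function name is invented):

```python
def strictly_dominates(a, b):
    """True if outcome a is at least as good as b on every attribute
    and strictly better on at least one (higher values = higher utility)."""
    return (all(ai >= bi for ai, bi in zip(a, b))
            and any(ai > bi for ai, bi in zip(a, b)))

print(strictly_dominates((3, 5), (2, 5)))   # a dominates b
print(strictly_dominates((3, 4), (2, 5)))   # neither dominates: incomparable
```

When neither outcome dominates the other, dominance alone cannot rank them, which is why a utility function over the attributes is needed.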

  4. If A stochastically dominates B, then for any monotonically nondecreasing utility function U(X), the expected utility of A is at least as high as the expected utility of B. • U(x1, … xn) = f [ f1(x1), …, fn(xn) ]
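
Stochastic dominance (Figure 16.4) can be tested on discrete lotteries by comparing cumulative distributions over a shared outcome scale sorted in ascending order: A dominates B when A's CDF never exceeds B's, i.e. A never puts more probability mass on the low end. A toy sketch with hypothetical distributions:

```python
from itertools import accumulate

def stochastically_dominates(p_a, p_b):
    """First-order stochastic dominance for two discrete distributions
    over the same ascending outcome scale: A's CDF must lie at or
    below B's at every point."""
    cdf_a = list(accumulate(p_a))
    cdf_b = list(accumulate(p_b))
    return all(ca <= cb + 1e-12 for ca, cb in zip(cdf_a, cdf_b))

# A shifts probability mass toward higher outcomes, so A dominates B.
p_a = [0.1, 0.3, 0.6]
p_b = [0.3, 0.4, 0.3]
print(stochastically_dominates(p_a, p_b))  # True
print(stochastically_dominates(p_b, p_a))  # False
```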

  5. Definition: Two attributes X and Y are preferentially independent of a third attribute Z if the preference between outcomes <x, y, z> and <x’, y’, z> does not depend on z • Definition: Attributes X1, …, Xn exhibit Mutual Preferential Independence (MPI) if every pair of attributes is preferentially independent of all the remaining attributes

  6. Theorem: If attributes X1, …, Xn are MPI then the agent’s preference behavior can be described as maximizing the function V(x1, …, xn) = ∑ vi (xi) where each vi is a value function referring only to the attribute Xi
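
Under MPI the theorem licenses an additive value function, which is straightforward to evaluate (a sketch; the attribute value functions here are hypothetical stand-ins):

```python
def additive_value(outcome, value_fns):
    """V(x1, ..., xn) = sum of vi(xi); valid when X1, ..., Xn are MPI."""
    return sum(v(x) for v, x in zip(value_fns, outcome))

# Hypothetical two-attribute example (say, cost and noise, where lower
# raw values are better, so each vi negates its attribute):
value_fns = [lambda cost: -cost, lambda noise: -2 * noise]
print(additive_value((100, 10), value_fns))  # -100 + -20 = -120
```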

  7. 16.5 Decision Networks • Augmented Bayesian Networks • Figure 16.5 • Components • chance nodes (ovals) represent random variables • decision nodes (rectangles) represent choices of the decision maker • utility nodes (diamonds) represent the utility function

  8. Evaluation of Network • Set evidence variables for current state • For each possible value of decision node • Set decision node to value • Calculate posterior probability for parent nodes of utility node • Calculate resulting utility
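
The evaluation procedure on this slide can be sketched on a toy network with one chance node, one decision node, and one utility node (all names and numbers here are hypothetical; a real decision network would compute the posterior by Bayesian inference rather than assume it):

```python
# Posterior for the chance node, assumed already conditioned on evidence E.
P_RAIN = 0.3

def utility(umbrella, rain):
    """Utility node: hypothetical payoffs for each (decision, outcome) pair."""
    table = {(True, True): 70, (True, False): 80,
             (False, True): 0, (False, False): 100}
    return table[(umbrella, rain)]

def expected_utility(umbrella):
    """Sum utility over the chance node's values, weighted by posterior."""
    return P_RAIN * utility(umbrella, True) + (1 - P_RAIN) * utility(umbrella, False)

# Loop over each possible value of the decision node and keep the best.
best = max([True, False], key=expected_utility)
print(best, expected_utility(best))  # True 77.0
```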

  9. 16.6 The Value of Information • Typically, not everything is known. • Information Value Theory: Helps the agent decide what information to acquire • VPI: Value of Perfect Information

  10. Information has value to the extent that it is likely to cause a change of plan, and to the extent that the new plan will be significantly better than the old plan • EU(α | E) = maxA Σi U(Resulti(A)) P(Resulti(A) | E, Do(A)), where α is the current best action given evidence E • EU(αEj | E, Ej) = maxA Σi U(Resulti(A)) P(Resulti(A) | E, Ej, Do(A)) • VPIE(Ej) = [ Σk P(Ej = ejk | E) EU(αejk | E, Ej = ejk) ] − EU(α | E) • Figure 16.7

  11. Theorem: ∀ j, E: VPIE(Ej) ≥ 0, i.e. the value of information is non-negative • Theorem: VPIE(Ej, Ek) = VPIE(Ej) + VPIE,Ej(Ek) = VPIE(Ek) + VPIE,Ek(Ej), i.e. collecting evidence is order independent • Figure 16.8, a myopic information-gathering agent
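
For a single discrete variable and a small action set, VPI can be computed directly by re-optimizing the action for each possible observation (a toy sketch with hypothetical utilities; note the result is non-negative, as the first theorem requires):

```python
def vpi(prior, utilities):
    """Value of perfect information about one discrete variable.
    prior[v] is P(variable = v | E); utilities[a][v] is the utility of
    action a when the variable turns out to have value v.
    Toy sketch: a real network would recompute posteriors per observation."""
    # Best single action committed to before observing the variable.
    eu_now = max(sum(p * u[v] for v, p in enumerate(prior))
                 for u in utilities.values())
    # With the observation, choose the best action separately per value.
    eu_informed = sum(prior[v] * max(u[v] for u in utilities.values())
                      for v in range(len(prior)))
    return eu_informed - eu_now

# Hypothetical umbrella-style problem; columns are (dry, rain).
utilities = {'take': [80, 70], 'leave': [100, 0]}
print(vpi([0.7, 0.3], utilities))  # 91.0 - 77.0 = 14.0
```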

  12. 16.7 Decision Theoretic Expert Systems • Create causal model • Simplify (Figure 16.9) • Assign probabilities • Assign utilities • Verify and refine model • Perform sensitivity analysis
