
Chapter 14 Decision Analysis

Presentation Transcript


  1. Chapter 14 Decision Analysis

  2. Decision Making • Much decision making occurs under conditions of uncertainty • Two classes of decision situations: • Probabilities cannot be assigned to future occurrences • Probabilities can be assigned to future occurrences

  3. Chapter Topics • Components of Decision Making • The decisions themselves • States of nature: the actual events that may occur in the future • Payoffs: the payoffs resulting from the different decisions given the various states of nature

  4. Decision-Making Tools • Decision Making without Probabilities • Decision-making criteria: maximax, maximin, minimax, Hurwicz, and equal likelihood • Decision Making with Probabilities • Expected value • Expected opportunity loss • Expected value of perfect information (EVPI) • Decision trees • Some other decision analysis tools

  5. Decision Making without Probabilities • Maximax: selects the decision that will result in the maximum of the maximum payoffs (optimistic criterion) • Example • Maximin: selects the decision that will result in the maximum of the minimum payoffs (pessimistic criterion) • Example • Hurwicz criterion: a compromise between the maximax and maximin criteria • Multiplies the best payoff by α and the worst payoff by 1 - α • α, the coefficient of optimism, is a measure of the decision maker's optimism • Example • Equal likelihood (or Laplace): multiplies the decision payoff for each state of nature by an equal weight
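
A minimal sketch of the four criteria in Python, assuming the payoff table that is implied by the regret calculations on slide 7 (decisions A, B, C under good/poor conditions); the value 0.4 for the Hurwicz coefficient α is an illustrative assumption, not taken from the slides.

```python
# Payoff table implied by slide 7: decisions A, B, C under (good, poor) conditions.
payoffs = {
    "A": (50_000, 30_000),
    "B": (100_000, -40_000),
    "C": (30_000, 10_000),
}

alpha = 0.4  # Hurwicz coefficient of optimism (illustrative value, not from the slides)

# Maximax: the decision whose best payoff is largest (optimistic).
maximax = max(payoffs, key=lambda d: max(payoffs[d]))

# Maximin: the decision whose worst payoff is largest (pessimistic).
maximin = max(payoffs, key=lambda d: min(payoffs[d]))

# Hurwicz: weight the best payoff by alpha and the worst payoff by 1 - alpha.
hurwicz = max(payoffs, key=lambda d: alpha * max(payoffs[d]) + (1 - alpha) * min(payoffs[d]))

# Equal likelihood (Laplace): weight every state of nature equally.
laplace = max(payoffs, key=lambda d: sum(payoffs[d]) / len(payoffs[d]))

print(maximax, maximin, hurwicz, laplace)  # B A A A (with alpha = 0.4)
```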

  6. Decision Making with Probabilities • Expected value: computed by multiplying each decision outcome by the probability of its occurrence and summing these products • Example • Expected opportunity loss: the expected value of the regret for each decision • Example • Expected value of perfect information (EVPI): the maximum amount a decision maker would pay for additional information • EVPI = (expected value given perfect information) - (expected value without perfect information) • EVPI = the expected opportunity loss (EOL) of the best decision alternative • Example
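
A minimal sketch of the expected value criterion, using the payoff table implied by slide 7 and the 0.6/0.4 probabilities used there; the 44,000 result for decision B is the expected value reused on slide 9.

```python
# Payoff table from slide 7 with P(good) = 0.6, P(poor) = 0.4.
payoffs = {"A": (50_000, 30_000), "B": (100_000, -40_000), "C": (30_000, 10_000)}
probs = (0.6, 0.4)

# Expected value: weight each outcome by the probability of its state of nature and sum.
ev = {d: sum(p * x for p, x in zip(probs, outcomes)) for d, outcomes in payoffs.items()}

print(ev)                   # {'A': 42000.0, 'B': 44000.0, 'C': 22000.0}
print(max(ev, key=ev.get))  # B  (the best expected value, 44,000, used again on slide 9)
```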

  7. Expected Opportunity Loss (EOL) • Select the maximum payoff under each state of nature and then subtract every other payoff under that state of nature from it:
     Good Conditions                     Poor Conditions
     A: 100,000 - 50,000 = 50,000        A: 30,000 - 30,000 = 0
     B: 100,000 - 100,000 = 0            B: 30,000 - (-40,000) = 70,000
     C: 100,000 - 30,000 = 70,000        C: 30,000 - 10,000 = 20,000
  • These regrets represent what the decision maker would lose by making a decision that results in less than the maximum payoff • Assume the DM estimates a 0.6 probability that good conditions will prevail and a 0.4 probability that poor conditions will prevail • EOL(A) = 50,000(0.6) + 0(0.4) = 30,000 • EOL(B) = 0(0.6) + 70,000(0.4) = 28,000 (best, i.e., minimum) • EOL(C) = 70,000(0.6) + 20,000(0.4) = 50,000
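
A minimal sketch of the EOL computation above: the regret is the column maximum minus each payoff, and the expected opportunity loss is the probability-weighted regret, reproducing the 30,000 / 28,000 / 50,000 values.

```python
# Same payoff table and probabilities as above.
payoffs = {"A": (50_000, 30_000), "B": (100_000, -40_000), "C": (30_000, 10_000)}
probs = (0.6, 0.4)

# Best payoff under each state of nature (column maxima: 100,000 and 30,000).
best_per_state = [max(col) for col in zip(*payoffs.values())]

# Regret = column maximum minus the payoff; EOL = probability-weighted sum of regrets.
eol = {
    d: sum(p * (best - x) for p, best, x in zip(probs, best_per_state, outcomes))
    for d, outcomes in payoffs.items()
}

print(eol)                    # {'A': 30000.0, 'B': 28000.0, 'C': 50000.0}
print(min(eol, key=eol.get))  # B  (the minimum expected opportunity loss)
```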

  8. Expected Value of Perfect Information • It may be possible to purchase additional information about the future • The DM should not pay more for that information than the additional expected return it provides • Thus there is a maximum value for such information • This maximum is computed as the expected value of perfect information (EVPI) • If the DM knows for sure that good conditions will prevail, decision B is chosen (100,000) • If the DM knows for sure that poor conditions will prevail, decision A is chosen (30,000)

  9. Expected Value of Perfect Information (EVPI) • The probabilities still tell us the likelihood of good or poor conditions (0.6 and 0.4) • This means each state of nature will occur only a certain portion of the time • Thus, each decision outcome must be weighted accordingly: (100,000)(0.6) + (30,000)(0.4) = 72,000 • 72,000 is the expected value of the decision given perfect information, not the EVPI • EVPI is computed by subtracting the expected value without perfect information (44,000) from the expected value given perfect information (72,000) • EVPI = 72,000 - 44,000 = 28,000 • This is the maximum amount the DM would pay for additional information, though in practice the DM usually pays less
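
A minimal sketch of the EVPI calculation on this slide, again assuming the payoff table implied by slide 7.

```python
# Same payoff table and probabilities as above.
payoffs = {"A": (50_000, 30_000), "B": (100_000, -40_000), "C": (30_000, 10_000)}
probs = (0.6, 0.4)

# With perfect information the DM always picks the best payoff in whichever
# state of nature actually occurs, so weight each column maximum by its probability.
ev_with_pi = sum(p * max(col) for p, col in zip(probs, zip(*payoffs.values())))

# Without perfect information the DM can only take the best ordinary expected value.
ev_without_pi = max(
    sum(p * x for p, x in zip(probs, outcomes)) for outcomes in payoffs.values()
)

print(ev_with_pi)                  # 72000.0
print(ev_without_pi)               # 44000.0
print(ev_with_pi - ev_without_pi)  # 28000.0 = EVPI, equal to the best decision's EOL
```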

  10. Decision Making with Probabilities (cont.) • Decision trees: a diagram consisting of decision nodes (squares), probability nodes (circles), and branches representing the decision alternatives • Example • Sequential decision tree: used to illustrate a situation requiring a series of decisions • Example • Bayesian analysis: uses additional information to alter the marginal probability of the occurrence of an event • Uses conditional probability: the probability that an event will occur given that another event has already occurred • Also uses posterior probability: the altered marginal probability of an event based on additional information • Example
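
A minimal sketch of rolling expected values back through a single-stage decision tree (the specific tree examples the slide points to are not reproduced here): each probability node takes the probability-weighted sum of its branches, and the decision node takes the maximum. The branch data simply reuses the chapter's payoff table.

```python
# Single-stage decision tree: a decision node (square) whose branches lead to
# probability nodes (circles); each probability node branches to payoffs.
# Branch data reuses the chapter's payoff table with P(good) = 0.6, P(poor) = 0.4.
tree = {
    "A": [(0.6, 50_000), (0.4, 30_000)],
    "B": [(0.6, 100_000), (0.4, -40_000)],
    "C": [(0.6, 30_000), (0.4, 10_000)],
}

# Value at each probability node: expected value of its outgoing branches.
node_values = {d: sum(p * payoff for p, payoff in branches) for d, branches in tree.items()}

# Value at the decision node: take the branch with the largest expected value.
best = max(node_values, key=node_values.get)

print(node_values)  # {'A': 42000.0, 'B': 44000.0, 'C': 22000.0}
print(best)         # B
```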

  11. Decision Analysis Example • (a) Determine the best decision using the five decision-making criteria • (b) Determine the best decision with probabilities, assuming a .70 probability of good conditions and a .30 probability of poor conditions; use the expected value and expected opportunity loss criteria • (c) Compute the expected value of perfect information • (d) Develop a decision tree with the expected value at the nodes • (e) Given P(Pg) = .70, P(Ng) = .30, P(Pp) = .20, P(Np) = .80, determine the posterior probabilities using Bayes' rule • (f) Perform a decision tree analysis using the posterior probabilities obtained in part (e)
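
A minimal sketch of part (e), under the assumption that g and p denote good and poor conditions with the priors from part (b) (.70 and .30), and P and N denote positive and negative indications from the additional information, so that P(Pg) is read as P(P | g); the slide's "20" is read as .20 so that P(Pp) + P(Np) = 1. These readings are interpretations, not stated explicitly on the slide.

```python
# Posterior probabilities via Bayes' rule (notation assumed: P(Pg) = P(P | g), etc.).
prior_g, prior_p = 0.70, 0.30          # priors from part (b)
p_P_given_g, p_N_given_g = 0.70, 0.30  # P(Pg), P(Ng)
p_P_given_p, p_N_given_p = 0.20, 0.80  # P(Pp) read as .20, P(Np)

# P(g | P) = P(P | g) P(g) / [P(P | g) P(g) + P(P | p) P(p)]
p_P = p_P_given_g * prior_g + p_P_given_p * prior_p
post_g_given_P = p_P_given_g * prior_g / p_P

# P(g | N) = P(N | g) P(g) / [P(N | g) P(g) + P(N | p) P(p)]
p_N = p_N_given_g * prior_g + p_N_given_p * prior_p
post_g_given_N = p_N_given_g * prior_g / p_N

print(round(post_g_given_P, 4))  # 0.8909 -> P(g | positive indication)
print(round(post_g_given_N, 4))  # 0.4667 -> P(g | negative indication)
```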
