
BAMS 517 Decision Analysis – II

BAMS 517 Decision Analysis – II. Martin L. Puterman, UBC Sauder School of Business, Winter Term 2011.


Presentation Transcript


  1. BAMS 517Decision Analysis – II Martin L. Puterman UBC Sauder School of Business Winter Term 2011

  2. Complex decision trees

  3. Complex decisions • So far, the decisions we have studied have been quite simple, usually consisting of a single decision with two choices and a single uncertain event with two outcomes • We’ll now show how to solve more complex decision problems, by repeatedly using the principles we have developed for simple problems • The basic tool we will use to structure complex problems is a decision tree • The major new principle used in solving complex decisions is the idea of backward induction

  4. Example: Hatton Realty • Imagine you are a real estate agent in Vancouver. One day, a new client comes to you and says: • “I currently own several properties in the West Side of Vancouver that I would like to sell. I would like to list three of my properties in Dunbar, Kitsilano, and Point Grey through your agency, but on the following conditions: • You sell the Dunbar property for $20,000, the Kitsilano property for $40,000, and the Point Grey property for $80,000 • You will receive a 5% commission on each sale • You must sell the Dunbar property first, and within a month • If you sell the Dunbar property, then you may either sell the Kitsilano or Point Grey property next; otherwise, you will lose the right to sell these properties • If you sell this second property within a month, then you may list the third property; otherwise, you lose the right to sell the third property” • In considering this proposition, you assess the promotional cost of listing each property, as well as the probability of selling it, to be:

Property     Listing Cost   Prob. of Sale
Dunbar       $800           .6
Kitsilano    $200           .7
Point Grey   $400           .5

  5. Structuring complex decision problems • In the Hatton Realty problem, several events occur and decisions are made in sequence • The first step in analyzing a more complex decision such as this is to map the chronology of decisions and events in a timeline. Trace every possible progression of decisions and events that may occur • What is the first decision that needs to be made? • For each action that you might take at this point, what uncertain events may follow that impact future decisions or outcomes? What are their probabilities of occurrence? • For every possible event realization, what subsequent decisions or events can take place? Etc. • It is usually helpful to think of there being a sequence of decision points, between each of which may occur one or more uncertain events • Once you come to the end of each possible chain of decisions and events, you need to enter the total value of reaching that point.

  6. Decision trees • A useful way to represent complex decisions is through a decision tree • Every decision point is represented by a square node in the tree. Every “branch” leading out of such a node represents a possible decision • Each uncertain event is represented by a round node, and every “branch” leading out of such a node represents a possible realization of the event. These branches are labeled with the probabilities of each of these realizations • The “root” of the tree corresponds to the first decision that must be taken • The “leaves” of the tree represent final outcomes

  7. The Hatton Realty Decision Problem [Decision tree figure. The root decision is List Dunbar (listing cost $800) versus Refuse to List Dunbar ($0). If listed, Dunbar sells with probability .6; if it fails to sell, the payoff is -$800. If it sells, the next decision is to refuse the next property ($200), list Kitsilano (cost $200, sells with probability .7), or list Point Grey (cost $400, sells with probability .5); after a second sale the remaining property may be listed in turn. Leaf payoffs run from -$800 (Dunbar fails to sell) up to $5,600 (all three sell).]

  8. Backward induction • Once we have structured the decision as a decision tree, how do we determine the correct actions to take at each decision node? • We use the following node replacement method to reduce the size and complexity of the tree: • Replace any terminal event node, whose branches lead with probability p to a leaf of value a and with probability 1-p to a leaf of value b, with a single leaf of value pa + (1-p)b • Replace any terminal decision node, whose branches lead to leaves of values a and b, with a single leaf of value max(a, b) • Repeating these replacements, we work backwards from the end of the tree, computing expected utilities and decision values as we go, until we replace the entire tree with a single leaf • In replacing the decision nodes, we need to record which decisions achieve the maximum – these are the decisions we want to take if we ever reach this node
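The node-replacement method above can be sketched in a few lines of Python. The tree encoding is my own (not from the slides): a node is a plain number (a leaf payoff), a chance node ("E", [(prob, child), ...]), or a decision node ("D", {action: child, ...}); branch names are illustrative.

```python
def solve(node):
    """Backward induction: return (value, chosen_action_or_None)."""
    if isinstance(node, (int, float)):
        return float(node), None                     # leaf: payoff is its value
    kind, branches = node
    if kind == "E":                                  # chance node -> expected value
        return sum(p * solve(child)[0] for p, child in branches), None
    # decision node -> value of the best action, and which action achieves it
    best = max(branches, key=lambda a: solve(branches[a])[0])
    return solve(branches[best])[0], best

# The two replacement rules from the slide: a chance node with branches
# p -> a and 1-p -> b collapses to pa + (1-p)b, and a decision node over
# leaves a and b collapses to max(a, b).
demo_value, _ = solve(("E", [(0.5, 10.0), (0.5, ("D", {"a": 4.0, "b": 6.0}))]))

# Applied to the Hatton Realty tree (leaf payoffs are the totals on slide 7):
kits_then = ("D", {"refuse_pg": 2000.0,
                   "list_pg": ("E", [(0.5, 5600.0), (0.5, 1600.0)])})
pg_then = ("D", {"refuse_kits": 3800.0,
                 "list_kits": ("E", [(0.7, 5600.0), (0.3, 3600.0)])})
after_dunbar = ("D", {"refuse_next": 200.0,
                      "list_kits": ("E", [(0.7, kits_then), (0.3, 0.0)]),
                      "list_pg": ("E", [(0.5, pg_then), (0.5, -200.0)])})
root = ("D", {"refuse": 0.0,
              "list_dunbar": ("E", [(0.6, after_dunbar), (0.4, -800.0)])})
value, action = solve(root)
```

Running this recovers the value computed on the later slides, $1,192, with "list Dunbar" as the root decision.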

  9. Solving Hatton Realty [The full decision tree from slide 7, repeated here as the starting point for backward induction.]

  10. Solving Hatton Realty [Tree after replacing the terminal chance nodes: accepting the Point Grey listing after Kitsilano sells is worth .5 × $5,600 + .5 × $1,600 = $3,600, and accepting the Kitsilano listing after Point Grey sells is worth .7 × $5,600 + .3 × $3,600 = $5,000.]

  11. Solving Hatton Realty [Tree after replacing the third-property decision nodes: after Dunbar and Kitsilano sell, listing Point Grey ($3,600) beats refusing it ($2,000); after Dunbar and Point Grey sell, listing Kitsilano ($5,000) beats refusing it ($3,800).]

  12. Solving Hatton Realty [Tree after replacing the second-sale chance nodes: listing Kitsilano after Dunbar sells is worth .7 × $3,600 + .3 × $0 = $2,520, and listing Point Grey after Dunbar sells is worth .5 × $5,000 + .5 × (-$200) = $2,400.]

  13. Solving Hatton Realty [Tree after replacing the decision node following a Dunbar sale: max($200, $2,520, $2,400) = $2,520, achieved by listing Kitsilano.]

  14. Solving Hatton Realty [Final reduction: listing Dunbar is worth .6 × $2,520 + .4 × (-$800) = $1,192, versus $0 for refusing to list, so the root decision is to list Dunbar.]

  15. Solving the realty problem • The value of the proposition is $1,192 • You will choose the following policy: • List Dunbar • If Dunbar sells, list Kitsilano • If Kitsilano sells, list Point Grey • Note that we used expected monetary values as outcomes • In this problem, none of the probabilities assigned to each event depended on prior decisions • We assumed that the probabilities of each sale were independent of the order in which the properties went on the market • This will not always be the case – sometimes the probabilities of events will depend on previous decisions and previous events • The probability assigned to any event realization should be the probability of that event conditioned on all decisions and events that preceded it

  16. Payoff distribution under the optimal policy • What is the mean payoff? • What is the standard deviation of the payoff? • What is the probability distribution of the payoff under the other strategies?
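These questions can be answered by enumerating the terminal outcomes of the optimal policy; the sketch below reads the four possible payoffs and their path probabilities off the solved tree.

```python
# Payoff distribution under "list Dunbar, then Kitsilano, then Point Grey".
# Each path probability is the product of the sale probabilities along it.
outcomes = [
    (-800, 0.4),               # Dunbar does not sell
    (0,    0.6 * 0.3),         # Dunbar sells, Kitsilano does not
    (1600, 0.6 * 0.7 * 0.5),   # Dunbar and Kitsilano sell, Point Grey does not
    (5600, 0.6 * 0.7 * 0.5),   # all three properties sell
]
assert abs(sum(p for _, p in outcomes) - 1.0) < 1e-12   # sanity check

mean = sum(x * p for x, p in outcomes)                   # $1,192
variance = sum(p * (x - mean) ** 2 for x, p in outcomes)
std = variance ** 0.5                                    # roughly $2,441
```

The mean matches the backward-induction value, and the large standard deviation relative to the mean shows how risky the proposition is despite its positive expected value.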

  17. The Newsvendor Problem • Items cost c, you sell them for p, and any units that can’t be sold fetch a scrap value s • For every unit you sell, you make a profit of G = p - c • For every unit you order and do not sell, you incur a loss L = c - s • Demand D is unknown and either discrete with distribution P(D = d) or continuous with density f(d) • How many units should you order to maximize expected profit? • See newsvendor.doc
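A sketch of the classical discrete answer (the standard critical-fractile result, not taken from newsvendor.doc): order the smallest q with P(D ≤ q) ≥ G/(G+L). The demand distribution below is purely illustrative.

```python
def newsvendor_order(G, L, pmf):
    """Return the profit-maximizing order quantity.

    G: per-unit profit on a sale; L: per-unit loss on unsold stock;
    pmf: dict mapping each demand value to its probability.
    """
    critical = G / (G + L)          # the critical fractile
    cumulative = 0.0
    for d in sorted(pmf):           # find smallest q with F(q) >= critical
        cumulative += pmf[d]
        if cumulative >= critical:
            return d
    return max(pmf)

# e.g. p = 5, c = 2, s = 1 gives G = 3, L = 1, so order to the 75th percentile
q = newsvendor_order(G=3, L=1, pmf={1: 0.25, 2: 0.25, 3: 0.25, 4: 0.25})
```

With demand uniform on {1, 2, 3, 4} and a .75 fractile, the sketch orders 3 units.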

  18. Value of Information

  19. Value of Information • Suppose in the Hatton Realty case you could do some market research before accepting the offer. • What would it be worth to know in advance whether or not the Dunbar property would sell? • Or how much would you pay a clairvoyant to get this information? • Let’s simplify the Hatton Realty decision tree to see how to take the availability of this information into account. • Assume for now the only option available is to list the Dunbar property by itself.

  20. Revised realty problem – Dunbar only [Decision tree: List Dunbar leads to a sale with probability .6 and payoff $200, or no sale with probability .4 and payoff -$800; Refuse to List Dunbar pays $0.]

  21. Solving revised problem [Listing Dunbar has expected value .6 × $200 + .4 × (-$800) = -$200, versus $0 for not listing, so the best choice is not to list.]

  22. Revised realty problem – Dunbar only; but with perfect information [Decision tree: with probability .6 we learn in advance that Dunbar will sell and then choose between listing ($200) and not listing ($0); with probability .4 we learn it won’t sell and choose between listing (-$800) and not listing ($0).]

  23. Revised realty problem – Dunbar only; but with perfect information [Solved tree: knowing Dunbar will sell, we list it ($200); knowing it won’t, we don’t list ($0); the expected value is .6 × $200 + .4 × $0 = $120.]

  24. Selling Dunbar and the Value of Perfect Information • If we knew in advance that Dunbar would sell, we would list it and make $200 • If we knew in advance that Dunbar would not sell, we would not list it and earn $0 (saving $800) • Thus the expected value under this information would be .6 × $200 + .4 × $0 = $120 • If we didn’t know the outcome of this chance event, we would not list Dunbar and have an expected value of $0 • Thus knowing in advance whether or not Dunbar will sell is worth $120 • This is called the expected value of perfect information (EVPI) • In general, the EVPI is the difference between the expected value when the uncertain event is resolved before deciding and the expected value without this information (the base case) • Usually our base case is the no-information case • The EVPI is the most we would pay for any information about the event, be it a survey, market research or …. • Exercise: What is the EVPI for the event “Point Grey sells”? “Kitsilano sells”? “They both sell”? … • What is the EVPI for the newsvendor problem?
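The Dunbar-only EVPI calculation is small enough to write out directly; the numbers below are the slide's.

```python
# EVPI for the Dunbar-only listing decision.
p_sell = 0.6
payoff = {"sell": 200, "no_sale": -800}   # payoffs if we list Dunbar

# Without information: choose between listing (its expected value) and not listing.
ev_list = p_sell * payoff["sell"] + (1 - p_sell) * payoff["no_sale"]   # -$200
ev_no_info = max(ev_list, 0)                                           # $0

# With perfect information: the listing decision is made after learning the outcome.
ev_perfect = (p_sell * max(payoff["sell"], 0)
              + (1 - p_sell) * max(payoff["no_sale"], 0))              # $120

evpi = ev_perfect - ev_no_info                                         # $120
```

Note the general pattern: with perfect information the max moves inside the expectation, which is why EVPI can never be negative.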

  25. Expected value of perfect information • The difference in expected utility between a decision made under uncertainty and the same decision made with the outcomes of the uncertainties known prior to the decision is the expected value of perfect information (EVPI) • It is not difficult to see that by removing the uncertainties from a decision problem, the value of that decision problem cannot decrease, so that the EVPI is always nonnegative: • Under normal conditions, when the outcomes of the uncertain event are not known, the value of the decision is DU = maxi Σj Cij P(j) • When the uncertainties are removed prior to choosing di, the value of the decision becomes DPI = Σj [maxi Cij] P(j) • Since maxi Cij ≥ Ci*j for every j, where i* is the optimal choice in the no-information problem, the second sum dominates the first term by term, so DPI ≥ DU

  26. Bayes’ Theorem

  27. The “Monty Hall” problem • Monty Hall was the host of the once-popular game show “Let’s Make a Deal” • In the show, contestants were shown three doors, behind each of which was a prize. The contestant chose a door and received the prize behind that door • This setup was behind one of the most notorious problems in probability • Suppose you are the contestant, and Monty tells you that there is a car behind one of the doors, and a goat behind each of the other doors. (Of course, Monty knows where the car is) • Suppose you choose door #1

  28. The “Monty Hall” problem • Before revealing what’s behind door #1, Monty says “Now I’m going to reveal to you one of the other doors you didn’t choose” and opens door #3 to show that there is a goat behind the door. • Monty now says: “Before I open door #1, I’m going to allow you to change your choice. Would you rather that I open door #2 instead, or do you want to stick with your original choice of door #1?” • What do you do? • Switch • Do not switch • It doesn’t matter?

  29. A Simple Analysis • Suppose you don’t switch • Then the probability you win is 1/3. • Suppose you switch • Then the probability you win is 2/3. • Why? • Monty will never reveal the car, so switching is equivalent to picking both of the other two doors at the beginning. • If that’s not clear, consider the case of 100 doors, where Monty opens 98 goat doors among those you didn’t choose and then asks if you want to switch. • A more rigorous analysis is based on Bayes’ rule.
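Before the Bayesian analysis, the 1/3 versus 2/3 claim can be checked by simulation; this is a quick Monte Carlo sketch (door labels and the seed are arbitrary).

```python
import random

def play(switch, rng):
    """One round: you pick door 1; Monty opens a goat door; optionally you switch."""
    car = rng.choice([1, 2, 3])
    pick = 1
    # Monty opens a door that is neither your pick nor the car
    opened = rng.choice([d for d in (2, 3) if d != car])
    if switch:
        pick = next(d for d in (2, 3) if d != opened)
    return pick == car

rng = random.Random(517)                     # seeded for reproducibility
n = 100_000
win_stay = sum(play(False, rng) for _ in range(n)) / n    # close to 1/3
win_switch = sum(play(True, rng) for _ in range(n)) / n   # close to 2/3
```

With 100,000 trials the simulated frequencies land within about half a percent of 1/3 and 2/3.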

  30. Bayes’ rule – repeated in a more general form • Bayes’ Rule can be written in general as P(Ai | B) = P(B | Ai) P(Ai) / Σj P(B | Aj) P(Aj), where A1, A2, …, An is a partition of the sample space. • Note there is also a continuous version involving integrals. • In the above: • P(Ai) for i = 1, …, n are called the prior probabilities • P(B | Ai) for i = 1, …, n are called the likelihoods • P(Ai | B) for i = 1, …, n are called the posterior probabilities

  31. The “Monty Hall” problem - analysis • It does not appear at first that Monty’s showing you a goat behind door #3 is at all relevant to whether the car is behind door 1 or 2. So why should you switch? • A careful analysis is needed here. What is important to remember is that you chose door 1 and Monty knows where the car is. • Let C1, C2, C3 denote the events of the car being behind doors 1, 2, and 3 • Initially, you believe P(C1) = P(C2) = P(C3) = 1/3 • Since you chose door #1, Monty could have opened either door #2 or door #3. Denote these events by M2 and M3. • He could never open door 1, since that would defeat the purpose of the show. • Suppose the car is really behind door #2. Then in order to show you a goat, Monty would have had to open door #3. Thus P(M3|C2) = 1 • Suppose the car is really behind door #1. Then Monty had the choice of opening either door #2 or door #3. Assume the probability of Monty’s opening door 3 in this case is ½, so P(M3|C1) = P(M2|C1) = ½. (The conclusion below in fact holds for any probability p with 0 < p < 1.) • To make a decision, we need to find P(C1 | M3) and P(C2 | M3). • To do this we apply Bayes’ Rule.

  32. The “Monty Hall” problem - solution • By Bayes’ Rule, P(C1|M3) = P(M3|C1)P(C1) / [P(M3|C1)P(C1) + P(M3|C2)P(C2) + P(M3|C3)P(C3)] = (½ × 1/3) / (½ × 1/3 + 1 × 1/3 + 0 × 1/3) = (1/6) / (1/2) = 1/3 • Hence P(C2|M3) = 1 - 1/3 = 2/3. Of course P(C3|M3) = 0. • Thus your probability of getting the car is actually better if you switch doors! • It’s 2/3, rather than 1/3 if you stay with door #1 • The information Monty gives you is relevant because it is more likely Monty will choose door #3 when the car is behind door #2 (assuming you picked door #1). • Ignoring the denominator in Bayes’ rule, we can state it as: posterior ∝ prior × likelihood • In this problem the prior probabilities are all equal, so the posterior probability is directly proportional to the likelihood • The likelihood of Monty opening door 3 is higher when the car is behind door 2 than when it is behind door 1 • Therefore the posterior probability that the car is behind door 2 is higher than for door 1 • We can also use this relationship to compute exact probabilities.

  33. Two other forms for Bayes’ Theorem • Discrete parameter: p(θi | x) = f(x | θi) p(θi) / Σj f(x | θj) p(θj) • Continuous parameter: p(θ | x) = f(x | θ) p(θ) / ∫ f(x | θ′) p(θ′) dθ′ • In the above, f(x | θ) is either a density or a probability mass function and p(θ) is the prior

  34. Another and quite different Bayes’ example • Suppose demand for a product is Poisson distributed but the rate parameter λ is not known with certainty. For example, a newsvendor problem where the rate is not known. • The Poisson pmf is given by: f(x | λ) = e^(-λ) λ^x / x! • For simplicity assume a prior distribution for λ given by P(λ=10) = .4 and P(λ=15) = .6 • In general we can assume any discrete or continuous distribution for λ. • Picking a gamma distribution gives nice formulas for the posterior. • Suppose we observe one week’s demand and it equals 11. How does this change our assessment of P(λ=10) and P(λ=15)? • The likelihoods are f(11|10) = .1137 and f(11|15) = .0663 (just plug into the Poisson formula above) • So the posterior probabilities are obtained from Bayes’ Rule in the proportional form posterior ∝ prior × likelihood: • P(λ=10 | x=11) ∝ .1137 × .4 = .0455 • P(λ=15 | x=11) ∝ .0663 × .6 = .0398 • Since these two posterior probabilities have to sum to one: • P(λ=10 | x=11) = .0455 / (.0455 + .0398) = 0.534 • P(λ=15 | x=11) = .0398 / (.0455 + .0398) = 0.466 • Another quantity of interest in the Bayesian framework is the marginal distribution. It gives the unconditional probability of observing different demand values.
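The slide's numbers can be reproduced directly; this sketch computes the Poisson likelihoods, the unnormalized posterior, and the normalizing constant (which is also the marginal probability of observing demand 11).

```python
from math import exp, factorial

def poisson_pmf(x, lam):
    """Poisson probability mass function f(x | lambda)."""
    return exp(-lam) * lam ** x / factorial(x)

prior = {10: 0.4, 15: 0.6}      # the slide's two-point prior on lambda
x = 11                          # the observed week's demand

likelihood = {lam: poisson_pmf(x, lam) for lam in prior}       # .1137 and .0663
unnormalized = {lam: likelihood[lam] * prior[lam] for lam in prior}
marginal = sum(unnormalized.values())                          # m(11)
posterior = {lam: u / marginal for lam, u in unnormalized.items()}
```

The posterior comes out to about .534 on λ = 10 and .466 on λ = 15, matching the slide.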

  35. Marginal distributions • In some decision problems, we might need the marginal distribution m(x) to determine relevant event probabilities. For example, it could provide the demand distribution in a newsvendor problem. • In a discrete setting, m(x) = Σλ∈Λ f(x|λ) p(λ) (*) where • p(λ) is the prior • f(x|λ) is the likelihood • Λ is the set of possible values for λ • In settings when λ is known and equals λ0, p(λ0) = 1, so the marginal equals the likelihood. • In the above example the marginal distribution is given by m(x) = .4 f(x|10) + .6 f(x|15) • Suppose we have observed data (i.e., x = 11) and wish to estimate the demand distribution taking this new information into account. To do this, we replace the prior p(λ) in (*) by the posterior probability p(λ | x = 11). • Note that the updated marginal would be different for different observed values. • Also, we could repeat this updating many times!

  36. Beta priors and the binomial likelihood • Suppose our event has a Binomial distribution: f(x | θ) = (n choose x) θ^x (1-θ)^(n-x), x = 0, 1, …, n • Special case: when n = 1 this is a Bernoulli distribution. • Recall the binomial can be viewed as the distribution of the sum of n independent Bernoulli trials. • Now suppose that the prior distribution on θ is a Beta distribution, which has density p(θ) = [Γ(α+β) / (Γ(α)Γ(β))] θ^(α-1) (1-θ)^(β-1), 0 ≤ θ ≤ 1, where α > 0 and β > 0 • Emphasis: this is a distribution on θ and has support [0,1] • It is highly flexible and can represent a wide range of shapes • We denote this distribution by Beta(α,β). • Its mean is α/(α+β) and its variance is αβ/[(α+β)²(α+β+1)] • Very roughly speaking, we can think of α+β as the prior number of trials and α as the prior number of successes.

  37. Beta priors and binomial likelihood – ctd. • Suppose we obtain one sample from the binomial and observe x successes. • Then the posterior distribution is a Beta distribution with parameters x+α and n-x+β • This is quite amazing. • Why? • The posterior is proportional to likelihood × prior: θ^x (1-θ)^(n-x) × θ^(α-1) (1-θ)^(β-1) = θ^(x+α-1) (1-θ)^(n-x+β-1) • This has exactly the form of Beta(x+α, n-x+β) up to a constant. • Thumbtack experiment revisited: • In an earlier class we thought the prior looked pretty flat • For example, suppose it is Beta(2,2). • This has mean ½ and variance 1/20 • After observing one “tack up” in one toss, our posterior is Beta(3,2), which is sharper and less spread out. • This has mean 3/5 and variance 1/25 • Note that the posterior variance doesn’t depend on whether we saw the pin up or pin down. • We could have fit a prior distribution to our data and then done a formal analysis.
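Because the update is conjugate, it reduces to parameter arithmetic; this sketch checks the thumbtack numbers from the slide.

```python
# Conjugate Beta-Binomial update: observing x successes in n Bernoulli trials
# turns a Beta(a, b) prior into a Beta(a + x, b + n - x) posterior.
def beta_update(a, b, x, n):
    return a + x, b + (n - x)

def beta_mean_var(a, b):
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, var

prior_mean, prior_var = beta_mean_var(2, 2)      # 0.5 and 1/20
a, b = beta_update(2, 2, x=1, n=1)               # one "tack up" -> Beta(3, 2)
post_mean, post_var = beta_mean_var(a, b)        # 0.6 and 1/25
```

Observing "tack down" instead would give Beta(2, 3), with mean 0.4 but the same variance 1/25, which is the slide's point about the spread.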

  38. What is our new assessment of probabilities of observing outcomes after our observation? • This is a bit more complicated • The likelihood remains as before, but what we need is the marginal distribution of x: m(x) = ∫ f(x|θ) p(θ) dθ • In our example, the prior is Beta(α,β) and the likelihood is Binomial(n,θ). • It is simple to show that the marginal distribution of x successes is then m(x) = (n choose x) B(x+α, n-x+β) / B(α,β), where B(a,b) = Γ(a)Γ(b)/Γ(a+b) and Γ(x) = (x-1)! when x is an integer. • If α and β are integers, the above simplifies.

  39. Using marginals and priors in decision analysis • Suppose in a decision problem we know the likelihood and the prior only. • Then our no information decision will be based on our marginal. • Now suppose we can take a sample. • Then we compute the posterior distribution and integrate the likelihood with the posterior to get the new marginal distribution. • Then we evaluate the decision with the new marginal. • In Bayesian dynamic programming we repeat this process many times.

  40. Conjugate priors • A prior distribution is said to be conjugate to a likelihood if the posterior distribution has the same form as the prior distribution. • Important examples include: • Beta prior and Binomial likelihood • Normal prior and Normal likelihood • Gamma prior and Poisson likelihood • Standard tables of conjugate priors list many more examples. • Certain cases have nice marginal distributions too.

  41. Example: Balls and Urns (Raiffa, 1970) • A room contains 1000 urns: 800 of Type 1 and 200 of Type 2 • Type 1 urns contain 4 red and 6 white balls • Type 2 urns contain 9 red and 1 white ball • Decisions: • A1 - Pick an urn and say Type 1 • A2 - Pick an urn and say Type 2 • A3 - Do not play • Payoffs: • A1 - if correct, win $40; if wrong, lose $20 • A2 - if correct, win $100; if wrong, lose $5 • A3 - $0 • For $8 you can draw and observe a ball from the urn before guessing • What should you do?

  42. Balls and urns • Draw a decision tree for the problem without sampling. • The expected values are A1 = $28, A2 = $16 and A3 = $0 • So the best strategy is to guess Type 1. • Expected value under PI = .8 × 40 + .2 × 100 = $52 • So EVPI = $52 - $28 = $24, which is the most we would pay for any information
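The no-sampling analysis is a one-stage version of the earlier EVPI calculations; this sketch reproduces the slide's figures.

```python
# No-sampling analysis of the balls-and-urns game (payoffs from the slide).
p = {1: 0.8, 2: 0.2}                       # prior probabilities of the urn types
ev = {
    "A1": p[1] * 40 + p[2] * (-20),        # guess Type 1: $28
    "A2": p[1] * (-5) + p[2] * 100,        # guess Type 2: $16
    "A3": 0.0,                             # do not play
}
ev_no_info = max(ev.values())              # $28, achieved by guessing Type 1
ev_perfect = p[1] * 40 + p[2] * 100        # $52: with perfect info, always guess right
evpi = ev_perfect - ev_no_info             # $24
```

Since the $8 draw is (well) below the $24 EVPI, the sample is at least worth evaluating; whether it is actually worth buying depends on the posterior analysis on the next slides.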

  43. Draw a ball sub tree (without probabilities) [Subtree figure. After paying $8 to draw a ball, payoffs are net of the fee: guessing Type 1 pays $32 if the urn is Type 1 and -$28 if Type 2; guessing Type 2 pays -$13 if Type 1 and $92 if Type 2; not playing pays -$8. The same three decisions follow both a red and a white draw.]

  44. Probabilities for sub tree • Knowns: • Prior: P(1) = .8; P(2) = .2 • Likelihoods: P(red|1) = .4; P(red|2) = .9 • Unknowns: • Posteriors: P(1|red); P(1|white) • Marginals: P(red); P(white) • Use Bayes’ Theorem to find them: • P(red) = P(white) = .5 • P(1|red) = .64; P(1|white) = .96
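The marginals and posteriors follow from one application of Bayes' theorem; this sketch reproduces the slide's numbers.

```python
# Bayes' theorem for the draw-a-ball subtree.
prior = {1: 0.8, 2: 0.2}                   # P(Type 1), P(Type 2)
p_red = {1: 0.4, 2: 0.9}                   # likelihood of a red ball by urn type

m_red = sum(prior[t] * p_red[t] for t in prior)           # marginal P(red) = .5
m_white = sum(prior[t] * (1 - p_red[t]) for t in prior)   # marginal P(white) = .5
post_red = prior[1] * p_red[1] / m_red                    # P(Type 1 | red)  = .64
post_white = prior[1] * (1 - p_red[1]) / m_white          # P(Type 1 | white) = .96
```

A red draw pulls belief toward Type 2 (from .2 up to .36), while a white draw makes Type 1 nearly certain, which is exactly why the optimal guess flips with the ball's color.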

  45. Draw a ball sub tree (with probabilities) [The slide-43 subtree with probabilities attached: P(red) = P(white) = .5; after a red draw the urn is Type 1 with probability .64 and Type 2 with probability .36; after a white draw, Type 1 with probability .96 and Type 2 with probability .04.]

  46. Draw a ball sub tree (with decisions) [Solved subtree: after a red draw, guessing Type 2 is worth .64 × (-$13) + .36 × $92 = $24.80, beating guessing Type 1 ($10.40); after a white draw, guessing Type 1 is worth .96 × $32 + .04 × (-$28) = $29.60, beating guessing Type 2 (-$8.80). The value of drawing is .5 × $24.80 + .5 × $29.60 = $27.20.]

  47. Analysis • The optimal strategy after drawing is: if red, guess Type 2; if white, guess Type 1. The expected value of this strategy is $27.20 ($0.80 less than the no-information case). • So the optimal strategy for the combined problem is to not draw a ball and guess Type 1. • What is the most you would pay for drawing a ball? • Suppose instead you could draw 2 balls (with or without replacement) for $12? • Suppose for $9 you could draw one ball, look at it and decide whether or not to draw a second ball (with or without replacement) for $4.50. • These last two problems are included in HW #1. • Note this game is a bit unusual in that the gamble is with respect to the prior or posterior distribution and not the marginal distribution; i.e., the only use of sampling is to update our assessment of what kind of urn we have. • Suppose instead the payoff was with respect to the color of the ball drawn. In this case the prior and posterior would be auxiliary information and the decision would be based on the marginal distribution.
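Rolling up the sampling subtree confirms the $27.20 figure; this is a sketch using the posteriors computed on slide 44, with payoffs already net of the $8 drawing fee as on slide 43.

```python
def best_guess_value(post1):
    """Expected value of the best guess given P(Type 1 | observation)."""
    post2 = 1 - post1
    guess1 = post1 * 32 + post2 * (-28)    # win $40 / lose $20, minus the $8 fee
    guess2 = post1 * (-13) + post2 * 92    # lose $5 / win $100, minus the $8 fee
    return max(guess1, guess2, -8)         # or quit and forfeit the fee

# P(red) = P(white) = .5; P(Type 1 | red) = .64, P(Type 1 | white) = .96
ev_draw = 0.5 * best_guess_value(0.64) + 0.5 * best_guess_value(0.96)   # $27.20
```

Since $27.20 falls short of the $28 no-information value, the $8 sample is overpriced here; the break-even fee would be $8 minus the $0.80 shortfall, i.e. $7.20.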
