
Chapter 3




  1. Chapter 3 Sequential Decisions “Life must be understood backward, but … it must be lived forward.” - Søren Kierkegaard

  2. Terminology • tree • terminal node (leaf) • backward graph (edges reversed) • decision graph – each arc represents a choice. Must be acyclic • Payoff can be at terminal node or along edges • Theorem: any subpath of an optimal path is optimal

  3. Games of Chicken • A monopolist faces a potential entrant • Monopolist can accommodate or fight • Potential entrant can enter or stay out [Payoff table: Monopolist vs. Potential Entrant.]

  4. Equilibrium • Use the best-response method to find equilibria [Payoff table: Monopolist vs. Potential Entrant.]

  5. Importance of Order • Two equilibria exist • (In, Accommodate) • (Out, Fight) • Only one makes temporal sense • Fight is a threat, but not a credible one: once the entrant is in, the monopolist loses by fighting! • Not sequentially rational • Simultaneous-move outcomes may not make sense for sequential games.

  6. Sequential Games: The Extensive Form [Game tree: E chooses Out, ending the game with payoffs (0, 100), or In; after In, M chooses Fight, giving (-50, -50), or Accommodate, giving (50, 50). Payoffs are (Entrant, Monopolist).]

  7. Looking Forward… [Subtree after entry: M chooses Fight (-50, -50) or Accommodate (50, 50).] • The entrant makes the first move: • Must consider how the monopolist will respond • If the entrant enters: • The monopolist accommodates

  8. … And Reasoning Back [Trimmed tree: E chooses Out (0, 100) or In, which now leads to (50, 50) via Accommodate.] • Now consider the entrant’s move • Only (In, Accommodate) is sequentially rational

  9. Sequential Rationality COMMANDMENT Look forward and reason back: anticipate what your rivals will do tomorrow in response to your actions today.

  10. Solving Sequential Games • Start with the last move in the game • Determine what that player will do • Trim the tree • Eliminate the dominated strategies • This results in a simpler game • Repeat the procedure – called roll back.

  11. Example 3.9 [Decision graph: root R, terminal node, and edge costs on the arcs; the exact layout is not recoverable from the slide.] Pick the nodes one up from the leaves, select the best choice at each, and reduce the graph.

  12. Example 3.9 [The graph after one reduction step.] Pick the nodes at the last choice points and select the best (lowest-cost) choice at each; reduce the graph. The cost of a path is the sum of the costs along its edges.

  13. Example 3.9 [The graph after a further reduction.] Repeat: pick the nodes at the last choice points, select the best (lowest-cost) choice, and reduce the graph.

  14. Example 3.9 – backward induction, also called roll back [The fully reduced graph: only the root’s best choice and the optimal path remain.] Repeat until the graph is reduced to the optimal path from the root.
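The same roll-back idea works for cost-minimizing decision graphs like Example 3.9: compute the cheapest cost-to-go at each node, starting from the terminal node. The exact arcs of Example 3.9 are not recoverable from the slide, so the small graph below is illustrative only; the procedure is the one the slides describe.

```python
# Roll back a cost-minimizing decision graph: cheapest cost-to-go per node.

def roll_back(edges, root, terminal):
    """edges: {node: [(successor, edge_cost), ...]} for an acyclic graph."""
    cost = {terminal: 0}
    choice = {}

    def cost_to_go(node):
        if node not in cost:
            # best (lowest) total cost among this node's choices
            nxt, c = min(((s, w + cost_to_go(s)) for s, w in edges[node]),
                         key=lambda sc: sc[1])
            cost[node], choice[node] = c, nxt
        return cost[node]

    cost_to_go(root)
    return cost, choice

# Illustrative graph (NOT the actual arcs of Example 3.9):
edges = {
    "R": [("A", 2), ("B", 4)],
    "A": [("C", 1), ("B", 3)],
    "B": [("C", 2)],
    "C": [("T", 2)],
}
cost, choice = roll_back(edges, "R", "T")
print(cost["R"], choice["R"])   # 5 A  (optimal path R → A → C → T)
```

Because every subpath of an optimal path is optimal (the theorem on slide 2), each node's cost-to-go can be computed once and reused, which is what the `cost` dictionary does.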

  15. Voting • Majority rule yields no conclusion: • B>G>R G>R>B R>B>G • B beats G; G beats R; R beats B • What if you want “R” to win? • B vs. G (B wins), then winner vs. R → R wins • Problem: • Everyone knows you want “R”. B vs. G, then winner vs. R? Good luck! • Better chance: R vs. G, then winner vs. B. Interesting how the voting order produces a winner in a case with no majority winner!
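The majority cycle and the agenda effect can be checked directly. This sketch (my own encoding, with each voter's ranking written best-first) verifies the cycle and runs one agenda with sincere voting:

```python
# Three voters with the cyclic preferences from the slide, rankings best-first.
voters = [["B", "G", "R"], ["G", "R", "B"], ["R", "B", "G"]]

def beats(x, y):
    """True if a strict majority of voters rank x above y."""
    wins = sum(v.index(x) < v.index(y) for v in voters)
    return wins > len(voters) / 2

# The majority relation cycles: B beats G, G beats R, R beats B.
assert beats("B", "G") and beats("G", "R") and beats("R", "B")

def agenda_winner(first, second, third):
    """Sincere voting: first vs. second, then the winner vs. third."""
    w = first if beats(first, second) else second
    return w if beats(w, third) else third

print(agenda_winner("B", "G", "R"))  # B beats G, then R beats B: R wins
```

With sincere voters, whichever alternative is held out of the first round wins, which is why the agenda setter has so much power.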

  16. Extensive Form • B>G>R G>R>B R>B>G [Voting tree for the agenda: a first-round vote, then the winner faces the remaining alternative; the branch ending in R vs. B has R winning.]

  17. Looking Forward [Final-round subtrees:] • B vs. R: a majority prefers R to B • B vs. G: a majority prefers B to G

  18. Trim the Tree [Trimmed tree: each final-round match-up is replaced by its majority winner.]

  19. Rollback in Voting and “Being Political” • Not necessarily good to vote your true preferences • Amendments to make bad bills worse • Crossing over in open primaries • “Centrist” voting in primaries • Supporting your second-best option • STILL – Outcome predetermined • AGENDA SETTING!

  20. Predatory Pricing • An incumbent firm operates in three markets, and faces entry in each • Market 1 in year 1, Market 2 in year 2, etc. • Each time, I can slash prices, or accommodate the new entry • What should I do the first year?

  21. Predatory Pricing [Game tree: three entry games in sequence. In each, entrant E1, E2, or E3 chooses Out or In; after In, the monopolist M chooses Fight or Accommodate.]

  22. Predatory Pricing • The end of the tree: year 3 [Subtree for E3: Out gives (0, 100) plus previous payoffs; In then Fight gives (-50, -50) plus previous; In then Accommodate gives (50, 50) plus previous.] • In year 3: (In, Accommodate)

  23. Predatory Pricing • Since the incumbent will not fight Entrant 3, he will not fight Entrant 2 • Same for Entrant 1 • Only one “rollback equilibrium” • All entrants play In • The incumbent plays Accommodate • Why, then, do we see predatory pricing? • Predatory pricing: an anti-competitive measure employed by a dominant company to protect market share from new or existing competitors, by temporarily pricing a product low enough to end a competitive threat.

  24. Sophie’s choice • Sophie has $100 and a long, boring holiday without exciting university lectures • She can watch videos or play Nintendo games • Videos are $4 each • Nintendo games are $5 each • What is Sophie’s choice?

  25. Standard price-taker budget set [Graph: Qvideos on the vertical axis with intercept 25, Qgames on the horizontal axis with intercept 20; the budget line joins the two intercepts.] First find Sophie’s choice set and budget line.

  26. Convex, smooth preferences [Graph: indifference curves U=80, U=100, U=120, U=140.] Then show her preferences. Note that from Sophie’s perspective both videos and Nintendo games are ‘goods’ (desirables). The utility function is U.

  27. Put them together [Graph: the budget line (intercepts 25 videos, 20 games) overlaid on the indifference curves U=80 to U=140; the solution space is the budget line.] First, note that as both videos and games are goods and there is nothing else for Sophie to spend her money on, she will consume on her budget line.

  28. Where on the budget line? [Same graph.] Start off with 25 videos and 0 games. This is a bundle on her budget line. But can she do better? Yes! If she buys fewer videos and uses some of the money to buy games, she moves to a higher indifference curve, so she is better off.

  29. Where on the budget line? [Graph: indifference curves U=85, U=90, U=95, U=100.] What if we start with 20 games and no videos? Can Sophie make a better choice for herself? Yes! If she buys fewer games and uses some of her money to buy videos, she moves to higher indifference curves.

  30. So she prefers a mixture of videos and games. But what mix is best? The best bundle for Sophie is where her indifference curve is just tangent to her budget line. Here that is where she has 10 videos and 12 games. [Graph: tangency point at 10 videos and 12 games on the budget line.]

  31. Tangency condition To see this, let’s magnify her budget line and indifference curves around the tangency point. [Graph: the budget line with the tangency region highlighted.]

  32. Tangency condition Here is the magnified version. Notice that she can move anywhere on her budget line, but if Sophie stops before she reaches the tangency bundle, she is not maximising her utility.

  33. Tangency condition Only when she reaches her ‘tangency’ bundle is she on her highest indifference curve (U=95).

  34. Tangency condition Further she cannot do better than this bundle. For example, she cannot reach the U=95.5 indifference curve. She doesn’t have enough money.

  35. Summary so far • So: • Sophie will choose her optimal bundle where her indifference curve is just tangent to her budget line. • This gets her on her highest possible indifference curve given her budget. • But why does this make economic sense?
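Sophie's problem can be solved numerically. Her actual utility function is never given on the slides, so the Cobb-Douglas form below is an assumption, with exponents chosen only so the optimum reproduces the slide's bundle of 10 videos and 12 games; the budget data ($100 income, $4 videos, $5 games) are from the slides.

```python
# Grid search over bundles on the budget line for an ASSUMED utility function.
income, p_video, p_game = 100, 4, 5

def utility(v, g):
    # Assumed Cobb-Douglas preferences (not stated on the slides); the
    # exponents 0.4/0.6 are picked to match the slide's optimum.
    return (v ** 0.4) * (g ** 0.6)

# Every integer-video bundle that spends all income lies on the budget line.
best = max(
    ((v, (income - p_video * v) / p_game) for v in range(0, 26)),
    key=lambda bundle: utility(*bundle),
)
print(best)  # optimal bundle under the assumed utility: 10 videos, 12 games
```

At this bundle the tangency condition holds: the marginal rate of substitution (0.4/0.6) × (12/10) = 0.8 equals the price ratio $4/$5, which is the "indifference curve tangent to the budget line" condition in the slides.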

  36. Founders of Probability Theory Pierre Fermat (1601-1665, France) and Blaise Pascal (1623-1662, France) laid the foundations of probability theory in a correspondence about a dice game.

  37. Prior, Joint and Conditional Probabilities P(A) = prior probability of A P(B) = prior probability of B P(A, B) = joint probability of A and B P(A|B) = conditional (posterior) probability of A given B P(B|A) = conditional (posterior) probability of B given A

  38. Probability Rules Product rule: P(A, B) = P(A|B) P(B), or equivalently P(A, B) = P(B|A) P(A). Sum rule: P(A) = ΣB P(A, B) = ΣB P(A|B) P(B): if A is conditioned on B, the total probability of A is the sum of its joint probabilities over all values of B.

  39. Statistical Independence Two random variables A and B are independent iff: • P(A, B) = P(A) P(B) • P(A|B) = P(A) • P(B|A) = P(B) knowing the value of one variable does not yield any information about the value of the other
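The sum rule, product rule, and independence test can all be checked on a tiny joint distribution. The numbers below are illustrative, not from the slides:

```python
# A hand-made joint distribution over A in {0,1} and B in {0,1}.
joint = {(0, 0): 0.1, (0, 1): 0.2, (1, 0): 0.3, (1, 1): 0.4}

def p_A(a):                       # sum rule: P(A) = sum over B of P(A, B)
    return sum(p for (x, b), p in joint.items() if x == a)

def p_B(b):
    return sum(p for (a, y), p in joint.items() if y == b)

def p_A_given_B(a, b):            # conditional: P(A|B) = P(A, B) / P(B)
    return joint[(a, b)] / p_B(b)

# Product rule: P(A, B) = P(A|B) P(B)
assert abs(joint[(1, 1)] - p_A_given_B(1, 1) * p_B(1)) < 1e-12

# Independence would require P(A, B) = P(A) P(B); here it fails
# (0.4 vs. 0.7 * 0.6), so knowing B does tell us something about A.
print(joint[(1, 1)], p_A(1) * p_B(1))
```

To make A and B independent, every cell would have to equal the product of its marginals, e.g. joint[(1, 1)] would need to be 0.42 given these marginals.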

  40. Statistical Dependence - Bayes Thomas Bayes (1702-1761, England) “Essay towards solving a problem in the doctrine of chances” published in the Philosophical Transactions of the Royal Society of London in 1764.

  41. P(A|B) = P(AB) / P(B) P(B|A) = P(AB) / P(A) Bayes Theorem => P(AB) = P(A|B) P(B) = P(B|A) P(A) P(B|A) P(A) => P(A|B) = P(B)

  42. Bayes Theorem – Causality P(A|B) = P(B|A) P(A) / P(B). Diagnostic: P(Cause|Effect) = P(Effect|Cause) P(Cause) / P(Effect). Pattern recognition: P(Class|Feature) = P(Feature|Class) P(Class) / P(Feature)

  43. Bayes Formula and Classification P(Class|Data) = P(Data|Class) P(Class) / P(Data): the posterior probability of the class after seeing the data equals the likelihood of the data given the class, times the prior probability of the class before seeing anything, divided by the unconditional probability of the data.

  44. Medical Example • Probability you have a disease is .002 • If you have the disease, the probability that the test is positive is .97 • If you don’t have the disease, the probability that the test is positive is .04 • What is the probability of a positive test? • p(+test) = .002*.97 + .998*.04

  45. Medical example
p(+disease) = 0.002
p(+test | +disease) = 0.97
p(+test | -disease) = 0.04
p(+test) = p(+test | +disease) p(+disease) + p(+test | -disease) p(-disease) = 0.97 × 0.002 + 0.04 × 0.998 = 0.00194 + 0.03992 = 0.04186
p(+disease | +test) = p(+test | +disease) p(+disease) / p(+test) = 0.00194 / 0.04186 ≈ 0.046
p(-disease | +test) = p(+test | -disease) p(-disease) / p(+test) = 0.03992 / 0.04186 ≈ 0.954
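The arithmetic above is easy to verify in code. All numbers come from the slides; the variable names are my own:

```python
# The medical Bayes example, computed directly.
p_disease = 0.002
p_pos_given_disease = 0.97
p_pos_given_healthy = 0.04

# Total probability of a positive test (sum rule over the two causes)
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

# Bayes' theorem: posterior probability of disease given a positive test
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos

print(round(p_pos, 5), round(p_disease_given_pos, 3))  # 0.04186 0.046
```

Note the classic base-rate effect: even with a fairly accurate test, a positive result leaves only about a 4.6% chance of disease, because the disease itself is so rare.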

  46. Bayesian Decision Theory cont. • Fish example: • Each fish is in one of 2 states: sea bass or salmon • Let w denote the state of nature • w = w1 for sea bass • w = w2 for salmon

  47. Bayesian Decision Theory cont. • The state of nature is unpredictable: w is a variable that must be described probabilistically. • If the catch produced as much salmon as sea bass, the next fish is equally likely to be sea bass or salmon. • a priori: before the event • ex post: after the event • Define • P(w1): a priori probability that the next fish is sea bass • P(w2): a priori probability that the next fish is salmon

  48. Bayesian Decision Theory cont. • If other types of fish are irrelevant: P(w1) + P(w2) = 1. • Prior probabilities reflect our prior knowledge (e.g. time of year, fishing area, …) • Simple decision rule: • Make a decision (about the next fish caught) without seeing the fish • Decide w1 if P(w1) > P(w2); w2 otherwise • OK if deciding for one fish • If deciding for several fish, all are assigned to the same class • If we knew something about the fish (like how light it looked), could we make a better decision?

  49. Bayesian Decision Theory cont. •  In general, we will have some features we can use to help us predict. •  Feature: lightness reading = x • Different fish yield different lightness readings (x is a random variable)

  50. Bayesian Decision Theory cont. • Define p(x|w1) = class-conditional probability density: the probability density function for x given that the state of nature is w1, i.e. the density of lightness reading x when the fish is w1. • The difference between p(x|w1) and p(x|w2) describes the difference in lightness between sea bass and salmon.
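Putting the pieces together, here is a sketch of the fish classifier the slides are building toward. The Gaussian form of the class-conditional densities, and all the means, spreads, and priors below, are assumptions for illustration; the slides only say the densities differ between classes.

```python
# Bayes decision rule with ASSUMED Gaussian class-conditional densities.
import math

def gaussian_pdf(x, mean, std):
    """Density of a normal distribution at x."""
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

# Illustrative p(x | w_i) parameters and priors P(w_i) -- not from the slides.
classes = {
    "sea bass": {"mean": 3.0, "std": 1.0, "prior": 0.5},
    "salmon":   {"mean": 7.0, "std": 1.0, "prior": 0.5},
}

def classify(x):
    """Pick the class with the larger posterior P(w|x); since the
    evidence p(x) is the same for both classes, it cancels out and we
    can compare likelihood * prior directly."""
    return max(classes,
               key=lambda c: gaussian_pdf(x, classes[c]["mean"],
                                          classes[c]["std"]) * classes[c]["prior"])

print(classify(2.5), classify(7.5))   # sea bass salmon
```

With equal priors this reduces to the maximum-likelihood rule; with unequal priors (say, salmon season) the decision boundary shifts toward the less likely class, which is exactly the role of the prior in Bayes' formula.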
