
Common Knowledge: The Math



  1. Common Knowledge: The Math

  2. We need a way to talk about “private information”

  3. We will use an information structure < Ω, π1, π2, μ >

  4. Ω is the (finite) set of “states” of the world; ω ∈ Ω is a possible state of the world; E ⊆ Ω is an event. Examples: Ω = {(hot, rainy), (hot, sunny), (cold, rainy), (cold, sunny)}; ω = (hot, rainy); E = {(hot, rainy), (hot, sunny)}

  5. πi “partitions” the set of states for player i into cells: states in the same cell are ones i cannot distinguish. E.g., suppose player 1 is in a basement with a thermostat but no window: π1 = {{(hot, rainy), (hot, sunny)}, {(cold, rainy), (cold, sunny)}} We write: π1((hot, sunny)) = π1((hot, rainy)) and π1((cold, sunny)) = π1((cold, rainy))

  6. Suppose player 2 is in a high-rise with a window but no thermostat π2 = {{(hot, rainy), (cold, rainy)}, {(hot, sunny), (cold, sunny)}} π2((hot, sunny)) = π2((cold, sunny)) π2((hot, rainy)) = π2((cold, rainy))
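The partitions above can be written out directly. A minimal sketch in Python, assuming the (temperature, sky) tuple encoding of states; the names `PI` and `cell` are illustrative, not from the slides:

```python
# States of the world: Ω from slide 4.
OMEGA = {("hot", "rainy"), ("hot", "sunny"), ("cold", "rainy"), ("cold", "sunny")}

# Player 1 (basement, thermostat) sees only the temperature;
# player 2 (high-rise, window) sees only the sky.
PI = {
    1: [frozenset({("hot", "rainy"), ("hot", "sunny")}),
        frozenset({("cold", "rainy"), ("cold", "sunny")})],
    2: [frozenset({("hot", "rainy"), ("cold", "rainy")}),
        frozenset({("hot", "sunny"), ("cold", "sunny")})],
}

def cell(i, omega):
    """The cell of player i's partition that contains state omega, i.e. πi(ω)."""
    return next(c for c in PI[i] if omega in c)

# Player 1 cannot tell (hot, sunny) from (hot, rainy):
assert cell(1, ("hot", "sunny")) == cell(1, ("hot", "rainy"))
# Player 2 can:
assert cell(2, ("hot", "sunny")) != cell(2, ("hot", "rainy"))
```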

  7. We let μ represent the “common prior” probability distribution over Ω. I.e., μ: Ω → [0, 1] s.t. Σω∈Ω μ(ω) = 1. We interpret μ(ω) as the probability that state ω occurs. E.g., μ((hot, sunny)) = .45, μ((hot, rainy)) = .05, μ((cold, sunny)) = .05, μ((cold, rainy)) = .45

  8. We can likewise write μ(E) or μ(E|F), using Bayes’ Rule. E.g., μ((hot, sunny)|hot) = μ((hot, sunny))/μ(hot) = .45/.5 = .9
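The event probability μ(E) and the conditional μ(E|F) are a few lines of code. A sketch with the prior from slide 7; the helper names `prob` and `cond` are my own:

```python
# Common prior μ over the four states (slide 7).
MU = {("hot", "sunny"): .45, ("hot", "rainy"): .05,
      ("cold", "sunny"): .05, ("cold", "rainy"): .45}

def prob(E):
    """μ(E): total prior probability of the event E ⊆ Ω."""
    return sum(MU[w] for w in E)

def cond(E, F):
    """μ(E|F) = μ(E ∩ F) / μ(F), by Bayes' Rule."""
    return prob(E & F) / prob(F)

HOT = {("hot", "sunny"), ("hot", "rainy")}
print(cond({("hot", "sunny")}, HOT))  # .45 / .5 = 0.9
```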

  9. Now, we want to investigate how this private information can influence play in a game. We assume that in every state of the world the players play the same coordination game. (But they may play different actions in different states!)

  10.
            A       B
       A  a, a    b, c
       B  c, b    d, d
       a > c, d > b (Interpret?)

  11. What are the strategies in this new game? The payoffs? si: πi → {A, B}, i.e., a strategy assigns one action to each cell of player i’s partition. E.g., s1({(hot, rainy), (hot, sunny)}) = A, s1({(cold, rainy), (cold, sunny)}) = B; s2({(hot, sunny), (cold, sunny)}) = A, s2({(hot, rainy), (cold, rainy)}) = B. Not s1({(cold, rainy)}) = B or s1({(hot, rainy), (hot, sunny), (cold, sunny)}) = A: these condition on sets that are not cells of π1.

  12. Ui: S1 × S2 → ℝ s.t. Ui(s1, s2) = Σω μ(ω) Ui(s1(π1(ω)), s2(π2(ω))) How did we get this? Expected utility = weighted average of the payoff in each state (given common priors, and the prescribed action in each state)

  13. E.g.
            A       B
       A  1, 1    0, 0
       B  0, 0    5, 5
       s1({(hot, rainy), (hot, sunny)}) = A, s1({(cold, rainy), (cold, sunny)}) = B
       s2({(hot, sunny), (cold, sunny)}) = A, s2({(hot, rainy), (cold, rainy)}) = B

  14. U1(s1, s2) = μ((hot, sunny)) U1(s1(π1((hot, sunny))), s2(π2((hot, sunny)))) + … = μ((hot, sunny)) U1(s1({(hot, rainy), (hot, sunny)}), s2({(hot, sunny), (cold, sunny)})) + … = μ((hot, sunny)) U1(A, A) + … = .45×1 + .05×0 + .05×0 + .45×5 = 2.7
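The expected-utility computation above can be run directly. A sketch assuming the payoff matrix of slide 13 and the prior of slide 7; encoding each strategy as a function of the coordinate its player observes is my own shorthand for "one action per partition cell":

```python
# Common prior (slide 7) and player 1's payoffs in the game of slide 13.
MU = {("hot", "sunny"): .45, ("hot", "rainy"): .05,
      ("cold", "sunny"): .05, ("cold", "rainy"): .45}
U1 = {("A", "A"): 1, ("A", "B"): 0, ("B", "A"): 0, ("B", "B"): 5}

# Strategies depend only on what each player can observe:
# player 1 conditions on temperature, player 2 on the sky.
s1 = lambda w: "A" if w[0] == "hot" else "B"
s2 = lambda w: "A" if w[1] == "sunny" else "B"

# U1(s1, s2) = Σω μ(ω) U1(s1(π1(ω)), s2(π2(ω)))
eu1 = sum(p * U1[(s1(w), s2(w))] for w, p in MU.items())
print(eu1)  # .45*1 + .05*0 + .05*0 + .45*5 = 2.7
```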

  15. What is the condition for NE? Same as before… (s1, s2) is NE iff U1(s1, s2) ≥ U1(s1’, s2) for all s1’ and U2(s1, s2) ≥ U2(s1, s2’) for all s2’

  16. E.g.
            A       B
       A  1, 1    0, 0
       B  0, 0    5, 5
       s1({(hot, rainy), (hot, sunny)}) = A, s1({(cold, rainy), (cold, sunny)}) = B
       s2({(hot, sunny), (cold, sunny)}) = A, s2({(hot, rainy), (cold, rainy)}) = B
       Is (s1, s2) NE?

  17. U1(s1, s2) = 2.7. Let’s consider all possible deviations for player 1:
       Let s’1({(hot, rainy), (hot, sunny)}) = s’1({(cold, rainy), (cold, sunny)}) = A. Then U1(s’1, s2) = .45×1 + .05×0 + .05×1 + .45×0 = .5 < U1(s1, s2)
       Let s’1({(hot, rainy), (hot, sunny)}) = s’1({(cold, rainy), (cold, sunny)}) = B. Then U1(s’1, s2) = 2.5 < U1(s1, s2)
       Let s’1({(hot, rainy), (hot, sunny)}) = B, s’1({(cold, rainy), (cold, sunny)}) = A. Then U1(s’1, s2) = .3 < U1(s1, s2)
       (Similarly for player 2) ⇒ (s1, s2) is NE

  18. Now assume μ((hot, sunny)) = .35 μ((hot, rainy)) = .15 μ((cold, sunny)) = .15 μ((cold, rainy)) = .35 Is (s1,s2) still NE?

  19. U1(s1,s2)=.35*1+.15*0+.15*0+.35*5=2.1 Consider: s’1({(hot, rainy), (hot, sunny)})=s’1({(cold, rainy), (cold, sunny)})=B U1(s’1,s2)=.35*0+.15*5+.15*0+.35*5=2.5 U1(s’1,s2)>U1(s1,s2) ⇒ (s1,s2) isn’t NE (in fact, one can similarly argue that no (s1, s2) that conditions actions on information is NE here!)
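The deviation checks of slides 17 and 19 amount to enumerating every action profile player 1 can assign to his two partition cells. A sketch, assuming the payoffs from slide 13; the second prior uses the state weights that slide 19's arithmetic implies (.35, .15, .15, .35 in the usual state order), and the function names are mine:

```python
from itertools import product

# Player 1's payoffs from the coordination game of slide 13.
U1 = {("A", "A"): 1, ("A", "B"): 0, ("B", "A"): 0, ("B", "B"): 5}

# Player 2 plays A when sunny, B when rainy (slide 13's s2).
s2 = lambda w: "A" if w[1] == "sunny" else "B"

def eu1(mu, s1):
    """Player 1's expected utility: Σω μ(ω) U1(s1(ω), s2(ω))."""
    return sum(p * U1[(s1(w), s2(w))] for w, p in mu.items())

def is_best_reply(mu):
    """Is 'A on hot, B on cold' a best reply for player 1 against s2?"""
    base = lambda w: "A" if w[0] == "hot" else "B"
    # A deviation picks one action per partition cell (hot cell, cold cell).
    devs = [lambda w, h=h, c=c: h if w[0] == "hot" else c
            for h, c in product("AB", repeat=2)]
    return all(eu1(mu, base) >= eu1(mu, d) for d in devs)

mu_17 = {("hot", "sunny"): .45, ("hot", "rainy"): .05,
         ("cold", "sunny"): .05, ("cold", "rainy"): .45}
mu_19 = {("hot", "sunny"): .35, ("hot", "rainy"): .15,
         ("cold", "sunny"): .15, ("cold", "rainy"): .35}

print(is_best_reply(mu_17))  # True  (2.7 beats .5, 2.5, and .3)
print(is_best_reply(mu_19))  # False (always-B earns 2.5 > 2.1)
```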

  20. So sometimes it is possible to condition one’s action on one’s information, and sometimes it isn’t Can we characterize, for any coordination game and information structure, when this is possible? It turns out the answer will have to do with “higher order beliefs.” To see that we will need to define concepts called p-beliefs and common p-beliefs

  21. We say i p-believes E at ω if μ(E|πi(ω)) ≥ p. E.g., consider our original information structure and let E = {(hot, sunny), (cold, sunny)}. Then player 1 .7-believes E at (hot, sunny): μ({(hot, sunny), (cold, sunny)}|π1((hot, sunny))) = μ({(hot, sunny), (cold, sunny)}|{(hot, sunny), (hot, rainy)}) = (.45 + 0)/.5 = 9/10 > .7
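That worked example is a one-line conditional probability. A quick sketch with the original prior; the variable names are mine:

```python
# Does player 1 .7-believe the "sunny" event at (hot, sunny)?
MU = {("hot", "sunny"): .45, ("hot", "rainy"): .05,
      ("cold", "sunny"): .05, ("cold", "rainy"): .45}
CELL1_HOT = {("hot", "sunny"), ("hot", "rainy")}   # π1((hot, sunny))
E = {("hot", "sunny"), ("cold", "sunny")}

# μ(E | π1(ω)) = μ(E ∩ π1(ω)) / μ(π1(ω)) = (.45 + 0) / .5
belief = sum(MU[w] for w in CELL1_HOT & E) / sum(MU[w] for w in CELL1_HOT)
print(belief, belief >= 0.7)  # 0.9 True
```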

  22. We say E is common p-belief at ω if, at ω: Both p-believe E; Both p-believe that both p-believe E; Both p-believe that both p-believe that both p-believe E; …
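The iterated definition can be computed on a finite Ω by applying the "both p-believe" operator until the sequence of events repeats. A sketch using the original structure and prior; the function names and the fixed-point loop are my own construction, not from the slides:

```python
MU = {("hot", "sunny"): .45, ("hot", "rainy"): .05,
      ("cold", "sunny"): .05, ("cold", "rainy"): .45}
PI = {
    1: [frozenset({("hot", "rainy"), ("hot", "sunny")}),
        frozenset({("cold", "rainy"), ("cold", "sunny")})],
    2: [frozenset({("hot", "rainy"), ("cold", "rainy")}),
        frozenset({("hot", "sunny"), ("cold", "sunny")})],
}

def cell(i, w):
    return next(c for c in PI[i] if w in c)

def p_believe(i, E, p):
    """States ω where μ(E | πi(ω)) ≥ p."""
    return {w for w in MU
            if sum(MU[x] for x in cell(i, w) & E)
               / sum(MU[x] for x in cell(i, w)) >= p}

def common_p_belief(E, p):
    """States where E is common p-belief: intersect the iterates B^n(E),
    where B(F) = 'both players p-believe F'.  The loop stops at the first
    repeated event, which must occur since Ω is finite."""
    both = lambda F: p_believe(1, F, p) & p_believe(2, F, p)
    F, seen, C = both(E), set(), set(MU)
    while frozenset(F) not in seen:
        seen.add(frozenset(F))
        C &= F
        F = both(F)
    return C

SUNNY = {("hot", "sunny"), ("cold", "sunny")}
print(common_p_belief(SUNNY, 0.7))  # {('hot', 'sunny')}
```

At (hot, sunny), player 1 assigns probability .9 to sunny and player 2 probability 1, and those facts are themselves .7-believed at every iteration, so the sunny event is common .7-belief exactly there.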

  23. Suppose (s1, s2) is a Nash equilibrium such that for i = 1, 2: si(ω) = A for all ω ∈ E, and si(ω) = B for all ω ∈ F. Then Ω\F is common p-belief at E, and Ω\E is common (1−p)-belief at F

  24. Intuition… If 1 is playing A when she observes the event E, then she had better be quite sure it isn’t F (b/c 2 plays B on F). How sure? At least p! Is this enough? What if 1 p-believes that it isn’t F, but doesn’t think 2 p-believes it isn’t F? Well, then 1 thinks 2 will play B! How confident does 1 have to be, therefore, that 2 p-believes it isn’t F? At least p! …

  25. Conversely: if Ω\F is common p-belief at E, and Ω\E is common (1−p)-belief at F, then there exists a Nash equilibrium (s1, s2) s.t. si(ω) = A for all ω ∈ E and si(ω) = B for all ω ∈ F

  26. Note: higher-order beliefs matter IFF my optimal choice depends on your choice (coordination game, hawk-dove game, but not a signaling game!). This holds even if the game itself is state dependent!
