## Problems from Chapter 12 - rafi

Presentation Transcript

Find a separating equilibrium

- Trial and error. Two possible separating strategies for Player 1:
- Choose A if type s, B if type t.
- Choose B if type s, A if type t.
- Player 2 sees action A or B but does not see the type. If Player 2 believes that Player 1 will choose A if s and B if t, then Player 2's best response strategy is y if A, x if B.
- Given Player 2's strategy,
- Type s would rather have Player 2 do y than x, and so would choose A.
- Type t would rather have Player 2 do x than y, and so would choose B.
- So the strategies A if type s, B if type t for Player 1 and y if A, x if B for Player 2 constitute a separating Bayes-Nash equilibrium.

Beef or horse?

- In a restaurant, some customers prefer beef, some prefer horse.
- Type s likes beef, type t likes horse
- Waitress asks if you want beef or horse.
- For Player 1, Action A—say beef, Action B—say horse.
- For Player 2, Action x—bring horse, Action y—bring beef.
- If she brings you the kind you like, you are happier. If you are happier, you leave a bigger tip and the waitress is happier.
- This case is simple because Player 1's payoff, given Player 2's action, depends only on Player 1's type, not on whether A or B was said.

Complications

- What if the customer is a foreigner and doesn’t know the words for beef and horse?
- Babbling equilibrium?

Bayesian waitress

- Customer is an American in Japan.
- Waitress believes that only fraction p of American customers know the words for horsemeat and beef.
- She believes that 9/10 of Americans like beef and 1/10 like horse.
- Suppose that Americans who know the words order what they prefer. Those who don't are equally likely to say the word for horse as the word for beef.
- What should she do when an American orders beef? orders horse?

If Americans who know the words order the meat they prefer and those who don’t order randomly, what is the probability that an American who orders horse wants horse?

- Try Bayes' law. Let H mean he likes horse and B mean he prefers beef. Let h mean he orders horse. Then P(H|h)=P(H and h)/P(h).
- The event "H and h" occurs if the American likes horse and either knows the words or guesses correctly.
- Here p=1/4, so P(H and h)=(1/10)(1/4)+(1/10)(3/4)(1/2)=5/80.
- The probability that the customer prefers beef but orders horse is P(B and h)=(9/10)(3/4)(1/2)=27/80.
- Then P(h)=P(H and h)+P(B and h)=32/80.
- So P(H|h)=5/32.
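The arithmetic above can be checked with exact fractions. This is a sketch that assumes, as the computation does, that the fraction of Americans who know the words is p = 1/4:

```python
from fractions import Fraction as F

# Assumed numbers from the slides: p = 1/4 know the words,
# 1/10 like horse, 9/10 like beef; those who don't know the
# words say each word with probability 1/2.
p_know = F(1, 4)
p_likes_horse = F(1, 10)
p_likes_beef = F(9, 10)

# P(H and h): likes horse AND orders horse (knows the words, or guesses horse)
p_H_and_h = p_likes_horse * (p_know + (1 - p_know) * F(1, 2))

# P(B and h): likes beef but orders horse (doesn't know the words, guesses horse)
p_B_and_h = p_likes_beef * (1 - p_know) * F(1, 2)

p_h = p_H_and_h + p_B_and_h      # total probability of ordering horse
p_H_given_h = p_H_and_h / p_h    # Bayes' law

print(p_H_and_h, p_B_and_h, p_h, p_H_given_h)  # 1/16 27/80 2/5 5/32
```

Note that 1/16 and 2/5 are the reduced forms of 5/80 and 32/80.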

Babble and Horsemeat

- Since P(H|h)<1/2, the waitress will have a higher expected payoff from serving the customer beef even if he orders horsemeat.
- In equilibrium, waitress always brings beef for Americans, no matter what they say.
- Americans therefore would be indifferent about what they say.

Find a separating Bayes-Nash equilibrium

- Candidate for equilibrium behavior by Sender: Say m1 if type 1, m2 if type 2.
- If Receiver believes that this is Sender's strategy, the best response is B if m1, C if m2.
- If Receiver plays B if m1, C if m2, then Sender's best response is m1 if type 1, m2 if type 2.
- This is a Bayes-Nash equilibrium. The beliefs of each player about what the other player will do are confirmed by the responses.

Find a pooling equilibrium

- Suppose that Player 1 says m1 if type 1 and m1 if type 2.
- What is the best response for Player 2?
- Expected payoff from A is .6*3+.4*4=3.4
- Expected payoff from B is .6*4+.4*1=2.8
- Expected payoff from C is .6*0+.4*5=2
- So A if m1, A if m2, is a best response for 2.
- If Player 2 always goes to A, then it doesn’t matter what Player 1 says, so m1 if type 1 and m1 if type 2 is a best response.
- So is flipping a coin about what to say.
- This is a pooling equilibrium. Beliefs are confirmed by outcomes.
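A quick sketch of the expected-payoff comparison above, using the slide's prior of 0.6 for type 1 and the payoff entries quoted there:

```python
# Receiver's expected payoffs against the pooling strategy
# (both Sender types send m1), with prior P(type 1) = 0.6.
p = 0.6
payoffs = {
    "A": p * 3 + (1 - p) * 4,   # 3.4
    "B": p * 4 + (1 - p) * 1,   # 2.8
    "C": p * 0 + (1 - p) * 5,   # 2.0
}
best = max(payoffs, key=payoffs.get)
print(best)  # A
```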

Answer to part c

- Suppose probability that Sender is type 1 is p. When is there a pooling Bayes-Nash equilibrium where receiver always plays B?
- Suppose Sender plays m1 if type 1 and m1 if type 2.
- Expected payoffs to Receiver from strategies:
- From A 3p+4(1-p)
- From B 4p+(1-p)
- From C 0p+5(1-p)
- For what p’s is payoff from B the largest?
- B beats A, 4p+1-p>3p+4(1-p) if p>3/4.
- B beats C, 4p+1-p>0+5(1-p) if p>1/2
- So "always play B" is a best response to the Sender's strategy m1 if type 1 and m1 if type 2, provided the probability of type 1 is p>3/4.

- If Receiver always plays B, Sender is indifferent about what signal to send and might always say m1 or always say m2 or just “babble”.
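The threshold analysis above can be sketched as a function of p. Exact fractions avoid rounding trouble at the p = 3/4 boundary:

```python
from fractions import Fraction as F

def receiver_payoffs(p):
    """Receiver's expected payoffs from A, B, C against the pooling
    strategy, when the prior probability of a type-1 Sender is p."""
    return {"A": 3*p + 4*(1-p), "B": 4*p + 1*(1-p), "C": 0*p + 5*(1-p)}

def best_response(p):
    pay = receiver_payoffs(p)
    return max(pay, key=pay.get)

# B is the unique best response exactly when p > 3/4
print(best_response(F(7, 10)))  # A  (below the 3/4 threshold)
print(best_response(F(4, 5)))   # B  (above the threshold)
```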

Finitely Repeated Game

- Take any game play it, then play it again, for a specified number of times.
- A single play of the game that is repeated is known as the stage game.
- Let players observe all previous play.
- For every history that you have observed, you could have a different response.

Prisoners’ dilemma 3 times

- How many strategies vary with the other player's actions on previous moves?

First move: C or D

Second move:

- C always
- C if C, D if D
- D if C, C if D
- D always
- Third move: There are 4 possible histories of the other player's moves. For each, there are two things you can do on your next move. That gives 2x2x2x2=16 possible third-move strategies.
- So you have 2 possible first-move strategies, 4 possible second-move strategies, and 16 possible third-move strategies.
- That is 2x4x16=128 strategies that depend on the observed behavior of the other player.
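The counting argument above can be sketched directly:

```python
# Strategies for one player in 3-round repeated Prisoners' Dilemma,
# conditioning only on the other player's observed moves.
first_move = 2        # C or D
second_move = 2 ** 2  # a reply to each of the other's 2 possible first moves
third_move = 2 ** 4   # a reply to each of the other's 4 possible two-move histories
total = first_move * second_move * third_move
print(total)  # 128
```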

Prisoners’ Dilemma

Payoff matrix (Player 1 chooses the row, Player 2 the column; each cell lists Player 1's payoff, then Player 2's):

|               | Cooperate | Defect |
| ------------- | --------- | ------ |
| **Cooperate** | R, R      | S, T   |
| **Defect**    | T, S      | P, P   |

T > R > P > S

("Temptation", "Reward", "Punishment", "Sucker")

Twice Repeated Prisoners’ Dilemma

Two players play two rounds of Prisoners’ dilemma. Before second round, each knows what other did on the first round.

Payoff is the sum of earnings on the two rounds.

Two-Stage Prisoners’ Dilemma—Working back

[Game-tree figure: Player 1 and Player 2 each choose Cooperate or Defect in round 1; after each of the four first-round outcomes, each player again chooses C or D. Terminal payoffs are sums of the two stage payoffs, e.g. 2R if both cooperate in both rounds, R+T or R+S after mutual cooperation followed by one-sided defection, 2P if both defect in both rounds, and so on.]

Two-Stage Prisoners’ Dilemma—Working back further

[Game-tree figure with numeric payoffs filled in: in every second-round subgame, Defect yields the higher payoff regardless of the first-round history, so backward induction selects Defect in round 2 at every node.]

Longer Game

- What is the subgame perfect outcome if the Prisoners' dilemma is repeated 100 times?

- Work backwards: In last round, nothing you do affects future, so you play the dominant strategy for stage game: defect.
- Since last round is determined, nothing you do in next to last round affects future, so you play dominant strategy for stage game: defect
- Work your way back. Only subgame perfect outcome is “Defect always”.

In a repeated game that consists of four repetitions of a stage game that has a unique Nash equilibrium

- There are four subgame perfect Nash equilibria
- There are 2^4=16 subgame perfect Nash equilibria
- There is only one subgame perfect Nash equilibrium.
- The number of subgame perfect Nash equilibria varies, depending on the details of the game.

More generally

- In a subgame perfect equilibrium of a finitely repeated game whose stage game has a unique Nash equilibrium, each player's move in the last stage is determined: it must be the stage-game equilibrium move. Since the moves in the last stage do not depend on anything that happened before, equilibrium play in the previous stage is also uniquely determined to be the stage-game equilibrium.
- And so it goes…All the way back to the beginning.

Games without a last round

- Two kinds of models
- Games that continue forever
- Games that end at a random, unknown time

Infinitely repeated game

- Wouldn’t make sense to add payoffs.
- You would be comparing infinities.
- Usual trick. Discounted sums.
- Just like in calculating present values.
- We will see that cooperative outcomes can often be sustained as Nash equilibria in infinitely repeated games.

Why consider infinite games? We only have finite lives.

- Many games do not have known end time.
- Just like many human relationships.
- Simple example—A favorite of game theorists
- After each time the stage game is played there is some probability d<1 that it will be played again and probability 1-d that play will stop.
- Expected payoff “discounts” payoffs in later rounds, because game is less likely to last until then.

Cleaning house as a Repeated Prisoners’ Dilemma

- Maybe a finite game if you have a fixed lease and don’t expect to see roommate again after lease expires.
- Most relationships don’t have a known last time.
- Usually some room for “residual good will.”

In a repeated game, after each round of play, a fair coin is tossed. If it comes up heads, the game continues to another round. If it comes up tails, the game stops. What is the probability that the game is played for at least three rounds?

- 1/3
- 2/3
- 1/4
- 1/2
- 1/8
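One way to check your answer: reaching round k requires the coin to come up heads after each of the first k-1 rounds, which is the d^(k-1) formula used on the next slide.

```python
from fractions import Fraction as F

d = F(1, 2)  # continuation probability: heads after each round

def p_reach(k):
    """Probability that round k is played: the coin must continue k-1 times."""
    return d ** (k - 1)

print(p_reach(3))  # 1/4
```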

Calculating sums

- In a repeated game with probability d of continuation after each round, the probability that the game is still going at round k is d^(k-1).
- Calculate your expected winnings if you receive R so long as the game continues:

R + dR + d^2 R + d^3 R + d^4 R + …

= R(1 + d + d^2 + d^3 + d^4 + …)

- What is this infinite sum?

Adding forever

- The series 1 + d + d^2 + d^3 + d^4 + …

is known as a geometric series.

When |d|<1, this series converges. That is to say, the limit as n approaches infinity of

1 + d + d^2 + d^3 + d^4 + … + d^n exists.

Let S = 1 + d + d^2 + d^3 + d^4 + …

Then dS = d + d^2 + d^3 + d^4 + …

and S - dS = 1.

So S(1-d) = 1,

S = 1/(1-d).
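A numerical sanity check of S = 1/(1-d): for |d| < 1, the partial sums approach the closed form.

```python
# Partial sum of the geometric series 1 + d + d^2 + ... versus 1/(1-d).
d = 0.9
partial = sum(d**k for k in range(1000))  # first 1000 terms
closed_form = 1 / (1 - d)
print(partial, closed_form)  # both ~10.0
```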

Infinitely repeated prisoners’ dilemma and the “Grim Trigger Strategy”

- Suppose 2 players play the repeated Prisoners' Dilemma, where the probability is d<1 that you will play another round after the end of each round.
- The grim trigger strategy is to cooperate on the first round and keep cooperating on every round so long as the other player doesn't defect.
- If the other player defects, the grim trigger strategy plays defect on all future rounds.

When is there a symmetric SPNE where all play Grim Trigger?

- Suppose that the other player is playing Grim Trigger.
- If you play Grim Trigger as well, then you will cooperate as long as the game continues and you will receive a payoff of R in every round.

Your expected payoff from playing Grim Trigger if the other player is playing Grim Trigger is therefore

R(1 + d + d^2 + d^3 + d^4 + …) = R/(1-d)

What if you defect against Grim Trigger

- If you defect and the other player is playing Grim Trigger, you will get a payoff of T>R the first time that you defect. But after this, the other player will always play defect. The best you can do then is to always defect as well.
- Your expected payoff from defecting is therefore T + P(d + d^2 + d^3 + d^4 + …)

= T + Pd/(1-d)

Cooperate vs Defect

- If the other player is playing Grim Trigger and nobody has yet defected, your expected payoff from playing cooperate is R/(1-d).
- If the other player is playing Grim Trigger and nobody has yet defected, your expected payoff from playing defect is T + Pd/(1-d).
- Cooperate is better for you if

R/(1-d) > T + Pd/(1-d), which implies d > (T-R)/(T-P).

- Example: If T=10, R=5, P=2, then the condition is d > 5/8.
- If d is too small, it pays to "take the money and run."
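The Grim Trigger comparison can be sketched with the slide's example numbers T=10, R=5, P=2:

```python
# Lifetime payoffs against a Grim Trigger opponent.
# Cooperating forever pays R/(1-d); defecting pays T now
# and P per round (discounted) thereafter.
T, R, P = 10, 5, 2

def cooperate_value(d):
    return R / (1 - d)

def defect_value(d):
    return T + P * d / (1 - d)

threshold = (T - R) / (T - P)  # 5/8 = 0.625
print(threshold)
print(cooperate_value(0.7) > defect_value(0.7))  # True: 0.7 above threshold
print(cooperate_value(0.5) > defect_value(0.5))  # False: 0.5 below threshold
```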

Other equilibria?

- Grim trigger is a SPNE if d is large enough.
- Are there other SPNEs?
- Yes, for example both play Always Defect is an equilibrium.
- If other guy is playing Always Defect, what is your best response in any subgame?
- Another is Play Defect the first 10 rounds, then play Grim Trigger.

Tit for Tat

- What if both players play the following strategy in the infinitely repeated P.D.?
- Cooperate on the first round. Then on any round do what the other guy did on the previous round.
- Suppose other guy plays tit for tat.
- If I play tit for tat too, what will happen?

Payoffs

- If you play tit for tat when the other player is playing tit for tat, you get an expected payoff of

R(1 + d + d^2 + d^3 + d^4 + …) = R/(1-d)

- Suppose instead that you choose to play "always defect" when the other player plays tit for tat.
- You will get T + P(d + d^2 + d^3 + d^4 + …)

= T + Pd/(1-d)

Same comparison as with Grim Trigger. Tit for tat is a better response to tit for tat than always defect if

d>(T-R)/(T-P)

Another try

- Sucker punch him and then get him to forgive you.
- If the other player is playing tit for tat and you play D on the first round, then C ever after, you will get a payoff of T on the first round, S on the second round, and then R forever.

Your expected payoff is T + Sd + d^2 R(1 + d + d^2 + d^3 + …) = T + Sd + d^2 R/(1-d).

Which is better?

- Tit for tat and "cheat and ask forgiveness" give the same payoff from round 3 on.
- Cheat and ask forgiveness gives T in round 1 and S in round 2.
- Tit for tat gives R in both of those rounds.
- So tit for tat is better if

R + dR > T + dS, which means

d(R-S) > T-R, or

d > (T-R)/(R-S)

If T=10, R=6, and S=1, this holds if d > 4/5.

But if T=10, R=5, and S=1, it would require d > 5/4, which can't happen. In this case, tit for tat could not be a Nash equilibrium.
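The comparison can be sketched directly. Since the two strategies coincide from round 3 on, only R + dR versus T + dS matters:

```python
# Tit for tat versus "cheat and ask forgiveness" (D, then C forever)
# when the opponent plays tit for tat. The payoffs differ only in
# the first two rounds: R + d*R versus T + d*S.
def tft_better(T, R, S, d):
    return R + d * R > T + d * S

# Slide's examples: threshold is (T-R)/(R-S)
print(tft_better(T=10, R=6, S=1, d=0.9))   # True: 0.9 > 4/5
print(tft_better(T=10, R=5, S=1, d=0.99))  # False: threshold 5/4 is unreachable
```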
