Maximum Entropy Correlated Equilibria by L. Ortiz, R. Schapire and S. Kakade


Course:

Applications of Information Theory to Computer Science

CSG195, Fall 2008

CCIS Department, Northeastern University

Presenter: Dimitrios Kanoulas

Information Theory

Algorithmic Game Theory

Game Theory:

Studies the behavior of players in competitive and collaborative situations

[Christos Papadimitriou in SODA 2001]

Problem (Game):

Two cars, a red one and a white one [the players of the game], arrive at a road intersection with no traffic light at the same time.

Each driver decides to stop (S) or go (G) [the two pure strategies of the game]

Payoffs for the red/white car are defined by the payoff matrix:

GOAL for each player: Maximize his payoff

Equilibrium in a Game:

Each player picks a strategy such that no one wants to unilaterally deviate from it.

[Figure: payoff matrix for the two-car game. An illustrative version appears in the code sketch below.]

Nash Equilibria:

White car stops and red car goes (pure NE)

Red car stops and white car goes (pure NE)

Both cars go with probability ½ and stop with probability ½ (mixed NE)
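To make the example concrete, here is a minimal sketch that checks the three equilibria above. The payoff numbers are an assumption: the slide's actual matrix is an image not preserved in this transcript, so a standard chicken-game matrix is used, chosen so the mixed NE comes out at ½/½ as stated. The helper `is_nash` is mine, for illustration only.

```python
# A minimal sketch checking the three Nash equilibria of the two-car game.
# Illustrative payoffs (the slide's matrix is not preserved):
# crash = -1, passing first = 1, stopping = 0.
import numpy as np

# R[i, j] = red's payoff, W[i, j] = white's payoff,
# with i = red's action and j = white's action (G = 0, S = 1).
R = np.array([[-1.0, 1.0],
              [ 0.0, 0.0]])
W = R.T  # symmetric game: white's payoffs mirror red's

def is_nash(p_red, p_white, tol=1e-9):
    """True if neither player can gain by unilaterally deviating."""
    u_red, u_white = p_red @ R @ p_white, p_red @ W @ p_white
    best_red = (R @ p_white).max()    # red's best pure response to p_white
    best_white = (p_red @ W).max()    # white's best pure response to p_red
    return u_red >= best_red - tol and u_white >= best_white - tol

G, S = np.array([1.0, 0.0]), np.array([0.0, 1.0])
half = np.array([0.5, 0.5])
print(is_nash(G, S))        # True: red goes, white stops
print(is_nash(S, G))        # True: red stops, white goes
print(is_nash(half, half))  # True: the 1/2-1/2 mixed NE
```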

John Nash

Movie: A Beautiful Mind

There always exists a mixed-strategy Nash equilibrium.

Now suppose a traffic light makes a suggestion to each car individually:

- Correlated Equilibrium:
- The suggestion: go if you see a green light, stop if you see a red light. [This is a mixture of the two pure NE: probability ½ on (white goes, red stops) and ½ on (red goes, white stops). Each car's marginal strategy is ½ go / ½ stop, but the joint actions are perfectly anti-correlated, so a crash never happens.]

The general problem of equilibrium computation is fundamental in Computer Science.

[Christos Papadimitriou]

Game:

n players; each player i picks a strategy ai from his set of actions Ai, and his payoff depends on the joint choice a = (a1, . . . , an).

Equilibrium:

Every player is “happy” with the [pure or mixed] strategy he plays, which means that he cannot increase his payoff by unilaterally deviating from it.

Correlated equilibrium (CE):

A joint probability distribution P(a1, . . . , an) over the players' actions such that:

• Every player individually receives a “suggestion” ai drawn from P

• Knowing P, every player is happy with this “suggestion” and does not want to deviate from it.
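Concretely, “does not want to deviate” is the standard set of linear incentive constraints on P (with ui denoting player i's payoff): for every player i and every pair of actions ai, ai′ in Ai,

Σ_{a−i} P(ai, a−i) [ ui(ai, a−i) − ui(ai′, a−i) ] ≥ 0

Here a−i ranges over the other players' joint actions; the inequality says that, conditioned on being told ai, switching to ai′ cannot increase player i's expected payoff.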

Nash Equilibrium (NE):

A special case of CE in which P is a product distribution: P = Πi P(ai)

A NE always exists, but the problem of finding one is hard (PPAD-complete) even for 2-player games.

[Chen & Deng]

Is the equilibrium “good” or “bad”?

What if I want my equilibrium to have some additional properties?

- Every game has at least one correlated equilibrium P.
- P is a joint mixed strategy.
- Given P, let H(P) = Σ_{a ∈ A} P(a) ln(1/P(a)) be its (Shannon) entropy.
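As a quick sketch of this quantity (natural log, matching the slide; the `entropy` helper is mine for illustration):

```python
# Shannon entropy H(P) = sum_a P(a) ln(1/P(a)), skipping zero entries
# (by the usual convention 0 * ln(1/0) = 0).
import numpy as np

def entropy(P):
    p = np.asarray(P, dtype=float).ravel()
    p = p[p > 0]
    return float(np.sum(p * np.log(1.0 / p)))

print(entropy([0.5, 0.5, 0.0, 0.0]))  # ln 2 ~ 0.693 (the traffic-light CE)
print(entropy([0.25] * 4))            # ln 4 ~ 1.386 (uniform: the maximum)
```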

Changed Game:

A player is willing to negotiate and agree to some form of “joint” strategy with the other players.

BUT

At the same time, each player wants to hide his own behavior as much as he can, by making it difficult to predict.

OR

We want to suggest a joint strategy that satisfies all the players but makes it hard for them to predict each other's individual strategies.

Conditional entropy, from information theory, provides a measure of how predictable one random process is from another:

the larger the conditional entropy, the harder the prediction.

[Cover and Thomas]

Ai: the strategy of player i (a random variable)

A−i: the strategies of the rest of the players (a random variable)

P(ai | a−i): the conditional mixed strategy, i.e. the probability that player i picks ai given that the rest of the players pick a−i

H_{Ai|A−i}(P) = − Σ_{a−i ∈ A−i} P(a−i) Σ_{ai ∈ Ai} P(ai | a−i) log P(ai | a−i)

the conditional entropy of the strategy of player i, given the strategies of the rest of the players

SO: the larger the conditional entropy, the harder the prediction.
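A small sketch of this formula for a two-player joint mixed strategy (an assumption: P is given as a 2×2 matrix indexed by red's and white's actions; the `conditional_entropy` helper is mine). Note how the traffic-light CE scores zero: once you see the other car's action, yours is fully determined, which is exactly the predictability a MaxEnt CE avoids.

```python
# Conditional entropy H_{A_i | A_-i}(P) for a two-player joint distribution,
# following the slide's formula (natural log).
import numpy as np

def conditional_entropy(P, player=0):
    """Entropy of `player`'s action given the other player's action."""
    P = np.asarray(P, dtype=float)
    if player == 1:
        P = P.T                       # condition player 1 on player 0 instead
    H = 0.0
    for j in range(P.shape[1]):       # each action a_-i of the other player
        p_other = P[:, j].sum()       # P(a_-i)
        if p_other > 0:
            cond = P[:, j] / p_other  # P(a_i | a_-i)
            cond = cond[cond > 0]
            H -= p_other * float(np.sum(cond * np.log(cond)))
    return H

traffic_light = np.array([[0.0, 0.5], [0.5, 0.0]])  # ½ on (G,S), ½ on (S,G)
uniform = np.full((2, 2), 0.25)
print(conditional_entropy(traffic_light))  # 0.0: perfectly predictable
print(conditional_entropy(uniform))        # ln 2: maximally unpredictable
```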

MaxEnt CE:

The joint mixed strategy P* = argmax_{P ∈ CE} H_{Ai|A−i}(P)

[The probability distribution over the joint strategies that is a CE and maximizes this entropy]

A MaxEnt CE satisfies all the players while maximizing the hardness of prediction.

- MaxEnt CE has other interesting properties as well, concerning its representation, which is much more succinct than that of an arbitrary CE.
- The paper proposes two algorithms that converge to a MaxEnt CE, using LP to solve the maximization problem involved (a generic sketch of that optimization follows below).
- There is also another algorithm in which, at each iteration, every player “learns” from the previous iteration and updates his payoffs; it also converges to a MaxEnt CE [but not to a NE].
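For intuition only, here is a generic convex-programming sketch, NOT the paper's algorithm: maximize the joint entropy H(P) over the CE polytope of the illustrative chicken game, as a simpler stand-in for the slide's conditional-entropy objective (for this symmetric game, both are maximized at the same point). The entropy objective is concave and the constraints linear, so any convex solver works; SLSQP is used here purely for convenience.

```python
# A generic sketch (not the paper's algorithm): maximum-entropy CE as a
# convex program, i.e. maximize H(P) subject to the linear CE constraints.
import numpy as np
from scipy.optimize import minimize

R = np.array([[-1.0, 1.0], [0.0, 0.0]])  # illustrative chicken payoffs
W = R.T

def ce_matrix():
    """CE incentive constraints as rows of A, with feasibility A @ p >= 0."""
    rows = []
    for a in range(2):                   # recommended action
        for d in range(2):               # possible deviation
            if d == a:
                continue
            r_red, r_white = np.zeros(4), np.zeros(4)
            for j in range(2):
                r_red[2 * a + j] = R[a, j] - R[d, j]
                r_white[2 * j + a] = W[j, a] - W[j, d]
            rows += [r_red, r_white]
    return np.array(rows)

A = ce_matrix()
neg_H = lambda p: float(np.sum(p[p > 1e-12] * np.log(p[p > 1e-12])))
cons = [{"type": "ineq", "fun": lambda p: A @ p},        # CE constraints
        {"type": "eq",   "fun": lambda p: p.sum() - 1.0}]  # sums to 1
res = minimize(neg_H, np.full(4, 0.25), bounds=[(0, 1)] * 4,
               constraints=cons, method="SLSQP")
print(res.x.reshape(2, 2))  # ~uniform: for this game, uniform is itself a CE
```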

A mathematician is a device

for turning coffee into theorems.

~Paul Erdős