
Maximum Entropy Correlated Equilibria by L. Ortiz, R. Schapire and S. Kakade


### Maximum Entropy Correlated Equilibria by L. Ortiz, R. Schapire and S. Kakade

Course:

Applications of Information Theory to Computer Science

CSG195, Fall 2008

CCIS Department, Northeastern University

Dimitrios Kanoulas

Maximum Entropy Correlated Equilibria

Information Theory

Algorithmic Game Theory

Definitions

Game Theory:

Studies the behavior of players in competitive and collaborative situations

[Christos Papadimitriou in SODA 2001]

Game (example: Road Intersection)

Problem (Game):

Two cars, a red one and a white one [the players of the game], arrive at a road intersection without a traffic light at the same time.

Each driver decides to stop (S) or go (G) [the two pure strategies of the game].

Payoffs for the red/white car are defined by the matrix:

GOAL for each player: Maximize his payoff

Game (example: Road Intersection)

Equilibrium in a Game:

Each player picks a strategy such that no player wants to unilaterally deviate from it.

Game (example: Road Intersection)

Payoff matrix

Nash Equilibria:

- White car stops and red car goes (pure NE)
- Red car stops and white car goes (pure NE)
- Both cars go with probability 1/2 and stop with probability 1/2 (mixed NE)

Game (example: Road Intersection)

There always exists a mixed strategy Nash Equilibrium.
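The slide's payoff matrix did not survive extraction, so the sketch below assumes illustrative chicken-style payoffs (crash = −1 each, going while the other stops = 1, stopping = 0), chosen so that the three equilibria listed above come out exactly as stated:

```python
# Road-intersection game with assumed payoffs (illustrative; the original
# payoff matrix is not shown on the extracted slide).
# Strategies: 0 = Go (G), 1 = Stop (S).
GO, STOP = 0, 1
# u[player][red_action][white_action] = payoff to that player
u = [
    [[-1, 1], [0, 0]],   # red's payoffs: crash -1, go alone 1, stop 0
    [[-1, 0], [1, 0]],   # white's payoffs (mirror image)
]

def expected(player, p_red_go, p_white_go):
    """Expected payoff to `player` when red/white go with the given probabilities."""
    total = 0.0
    for a, pa in ((GO, p_red_go), (STOP, 1 - p_red_go)):
        for b, pb in ((GO, p_white_go), (STOP, 1 - p_white_go)):
            total += pa * pb * u[player][a][b]
    return total

def is_nash(p_red_go, p_white_go, eps=1e-9):
    """Nash condition: no player gains by unilaterally switching to a pure strategy."""
    base_red = expected(0, p_red_go, p_white_go)
    base_white = expected(1, p_red_go, p_white_go)
    return (expected(0, 1.0, p_white_go) <= base_red + eps and
            expected(0, 0.0, p_white_go) <= base_red + eps and
            expected(1, p_red_go, 1.0) <= base_white + eps and
            expected(1, p_red_go, 0.0) <= base_white + eps)

print(is_nash(1.0, 0.0))   # True: red goes, white stops (pure NE)
print(is_nash(0.0, 1.0))   # True: white goes, red stops (pure NE)
print(is_nash(0.5, 0.5))   # True: both mix 1/2-1/2 (mixed NE)
print(is_nash(1.0, 1.0))   # False: both go is not an equilibrium
```

With these payoffs the mixed equilibrium is exactly 1/2-1/2 because each player is indifferent between Go and Stop when the other goes with probability 1/2.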

Game (example: Road Intersection)

There is a traffic light that makes a suggestion individually to each car:

- Correlated Equilibria:
- The suggestion: go if you see a green light, stop if you see a red light. [A mixture of the two pure NE. For each car: 1/2 to go and 1/2 to stop]
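One can check the correlated-equilibrium condition directly for the traffic light's joint distribution, mass 1/2 each on (Go, Stop) and (Stop, Go): conditional on its recommendation, no car gains by deviating. A minimal sketch with assumed illustrative payoffs (crash = −1, going alone = 1, stopping = 0; the slide's actual matrix is not shown):

```python
# Strategies: 0 = Go (G), 1 = Stop (S).
GO, STOP = 0, 1
# u[player][red_action][white_action], assumed payoffs
u = [
    [[-1, 1], [0, 0]],   # red
    [[-1, 0], [1, 0]],   # white
]
# Joint distribution induced by the light: half the time (Go, Stop),
# half the time (Stop, Go).
P = {(GO, STOP): 0.5, (STOP, GO): 0.5}

def is_correlated_eq(P, u, eps=1e-9):
    """For each player and each recommended action, following the suggestion
    must beat every deviation, conditional on what the recommendation reveals
    about the other player's action."""
    actions = (GO, STOP)
    for player in (0, 1):
        for rec in actions:
            for dev in actions:
                gain = 0.0   # expected gain from deviating rec -> dev
                for (a_red, a_white), p in P.items():
                    if (a_red, a_white)[player] != rec:
                        continue   # outcome inconsistent with this recommendation
                    joint_dev = (dev, a_white) if player == 0 else (a_red, dev)
                    gain += p * (u[player][joint_dev[0]][joint_dev[1]]
                                 - u[player][a_red][a_white])
                if gain > eps:
                    return False
    return True

print(is_correlated_eq(P, u))   # True: the traffic light is a CE
```

Note that the check conditions on the recommendation: told "go", a car knows the other was told "stop", and deviating to Stop only lowers its payoff.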

Quote

The general problem of equilibrium computation is fundamental in Computer Science.

[Christos Papadimitriou]

Some Definitions

Game: n players; each player i has a finite set of pure strategies Ai and a payoff function over the joint strategies a = (a1, . . . , an).

Some Definitions

Equilibrium:

Every player is “happy” playing his [pure or mixed] strategy, meaning that he cannot increase his payoff by unilaterally deviating from it.

Correlated equilibrium (CE):

A joint probability distribution P(a1, . . . , an) such that:

- Every player individually receives a “suggestion” drawn from P.

- Knowing P, every player is happy with this “suggestion” and does not want to deviate from it.

Nash Equilibria (NE):

A NE is the special case of a CE in which P is a product distribution: P = Π_i P(ai).

A NE always exists, but the problem of finding one is hard (PPAD-complete) even for 2-player games.

[Chen & Deng]
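The traffic-light distribution from the example makes the distinction concrete: it is a CE but not a product distribution, hence not a NE. A quick illustrative check:

```python
# The traffic light's joint distribution over (red, white) actions.
light = {("G", "S"): 0.5, ("S", "G"): 0.5, ("G", "G"): 0.0, ("S", "S"): 0.0}

def marginal(P, player):
    """Marginal distribution of one player's action under joint P."""
    m = {}
    for joint, p in P.items():
        m[joint[player]] = m.get(joint[player], 0.0) + p
    return m

m_red, m_white = marginal(light, 0), marginal(light, 1)
# The product of the marginals that a NE would require:
product = {(a, b): m_red[a] * m_white[b] for a in "GS" for b in "GS"}

print(product == light)   # False: the product puts 1/4 on (G, G),
                          # but the light never sends both cars through
```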

Good vs. Bad Equilibria

Is the equilibrium “good” or “bad”?

What if I want to add some properties to my equilibrium?

Connection between Algorithmic Game Theory and Information Theory

Maximum Entropy Correlated Equilibria [MaxEnt CE]

- In a game there always exists at least one correlated equilibrium P.
- P is the joint mixed strategy.
- Given P, let H(P) = Σ_{a ∈ A} P(a) ln(1/P(a)) be its (Shannon) entropy.
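As a small worked example, the entropy of the two joint distributions from the intersection game (the traffic-light CE and the 1/2-1/2 mixed NE), in nats since the slide uses the natural logarithm:

```python
import math

def entropy(P):
    """Shannon entropy H(P) = sum_a P(a) ln(1/P(a)), in nats.
    Zero-probability outcomes contribute nothing."""
    return sum(p * math.log(1.0 / p) for p in P.values() if p > 0)

# Traffic-light CE: mass 1/2 on (Go, Stop) and 1/2 on (Stop, Go).
light = {("G", "S"): 0.5, ("S", "G"): 0.5}
# Mixed NE: both players go/stop independently with probability 1/2.
mixed = {(a, b): 0.25 for a in "GS" for b in "GS"}

print(entropy(light))   # ln 2 ≈ 0.693
print(entropy(mixed))   # ln 4 ≈ 1.386
```

The independent mixed NE already has strictly higher joint entropy than the fully coordinated traffic light.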

The property of this Equilibrium

Changed Game:

A player is willing to negotiate and agree to some form of “joint” strategy with the other players.

BUT

At the same time, the player wants to hide his own behavior as much as he can, by making it difficult to predict.

OR

We want to suggest a joint strategy that satisfies all the players but complicates their prediction of each other’s individual strategies.

The property of this Equilibrium

The conditional entropy in information theory provides a measure of the predictability of one random process from another.

The larger the conditional entropy, the harder the prediction.

[Cover and Thomas]

Ai: the strategy of player i (a random variable)

A−i: the strategies of the rest of the players (a random variable)

P(ai|a−i): the conditional mixed strategy, where player i picks ai given that the rest of the players pick a−i

The property of this Equilibrium

H_{Ai|A−i}(P) = − Σ_{a−i ∈ A−i} P(a−i) Σ_{ai ∈ Ai} P(ai|a−i) log P(ai|a−i)

the conditional entropy of the strategy of player i

given the strategies of the rest of the players

So: the larger the conditional entropy, the harder the prediction.
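This quantity can be computed directly for the two distributions from the intersection example. A sketch for the two-player case, where a−i is simply the other player's action:

```python
import math

def cond_entropy(P, player):
    """H(A_i | A_-i) for a two-player joint distribution P, in nats.
    Uses H(A_i|A_-i) = - sum_{a_i, a_-i} P(a_i, a_-i) ln P(a_i | a_-i)."""
    other = 1 - player
    # Marginal distribution of the other player's action.
    marg = {}
    for joint, p in P.items():
        marg[joint[other]] = marg.get(joint[other], 0.0) + p
    h = 0.0
    for joint, p in P.items():
        if p > 0:
            cond = p / marg[joint[other]]   # P(a_i | a_-i)
            h -= p * math.log(cond)
    return h

# Traffic-light CE vs. independent 1/2-1/2 mixing (assumed example game).
light = {("G", "S"): 0.5, ("S", "G"): 0.5}
mixed = {(a, b): 0.25 for a in "GS" for b in "GS"}

print(cond_entropy(light, 0))   # 0.0: given the other car's action, your
                                # own recommendation is fully determined
print(cond_entropy(mixed, 0))   # ln 2 ≈ 0.693: independent mixing stays
                                # unpredictable even knowing the other action
```

The traffic-light CE is perfectly predictable from the other player's action, while the independent mixture achieves the maximum possible conditional entropy for a binary action, illustrating what a MaxEnt CE tries to preserve.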

The property of this Equilibrium

MaxEnt CE:

is the joint mixed strategy P* = argmax_{P ∈ CE} H_{Ai|A−i}(P)

[The probability distribution over strategies that is a CE and maximizes this conditional entropy]

MaxEnt CE satisfies all the players and maximizes the hardness of predictions.

Additional Info

- MaxEnt CE has some other interesting properties, which have to do with its representation: it is much better than that of an arbitrary CE.
- Two algorithms are proposed that converge to a MaxEnt CE; they use LP to solve the resulting maximization problem.
- There is also another algorithm to compute a MaxEnt CE in which, at each iteration, every player “learns” from the previous iteration and updates his payoff. It also converges to a MaxEnt CE [but not to a NE].
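For a toy game the maximization can even be done by exhaustive search, which makes the object concrete (the paper's actual algorithms are LP- and learning-based; this brute force only works at 2x2 scale). With the assumed illustrative payoffs used earlier (crash = −1, going alone = 1, stopping = 0), the CE constraints reduce to four linear inequalities over the joint probabilities p_GG, p_GS, p_SG, p_SS:

```python
import math

# CE constraints for the assumed payoffs:
#   red recommended G:   p_GS - p_GG >= 0     red recommended S:   p_SG - p_SS >= 0
#   white recommended G: p_SG - p_GG >= 0     white recommended S: p_GS - p_SS >= 0

def entropy(ps):
    """Shannon entropy in nats; zero entries contribute nothing."""
    return sum(-p * math.log(p) for p in ps if p > 0)

best, best_h = None, -1.0
N = 40   # grid resolution: probabilities in multiples of 1/40
for i in range(N + 1):
    for j in range(N + 1 - i):
        for k in range(N + 1 - i - j):
            g, a, b = i / N, j / N, k / N     # p_GG, p_GS, p_SG
            s = 1.0 - g - a - b               # p_SS
            if a - g >= 0 and b - s >= 0 and b - g >= 0 and a - s >= 0:
                h = entropy((g, a, b, s))
                if h > best_h:
                    best, best_h = (g, a, b, s), h

print(best)     # (0.25, 0.25, 0.25, 0.25): the uniform joint distribution
print(best_h)   # ln 4 ≈ 1.386
```

For this particular game the uniform distribution happens to satisfy all CE constraints, so the entropy maximizer over CEs coincides with the global entropy maximizer; in general the CE constraints bind and the MaxEnt CE is a non-uniform distribution found by the paper's LP-based methods.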
