Presentation Transcript


1. Simple search methods for finding a Nash equilibrium
Ryan Porter, Eugene Nudelman, and Yoav Shoham
Games and Economic Behavior, Vol. 63, Issue 2, pp. 642-662, 2008
Georgia Kastidou, David R. Cheriton School of Computer Science, University of Waterloo

2. Outline
• Problem
• Contribution
• Algorithm for 2-player games
• Experimental results
• Algorithm for n-player games
• Experimental results
• Conclusions
• Future work
• Take-home message

3. Problem we will tackle…
• Consider an n-player normal-form game G = <N, (Ai), (ui)> where:
  • N = {1, …, n} is the set of players
  • Ai = {ai1, …, aimi} is the set of actions available to player i
  • ui : A1 x … x An → R is the utility function of player i
• Each player selects a mixed strategy pi from the set of available strategies:
  • The "support" of a mixed strategy pi is the set of all actions ai in Ai such that pi(ai) > 0.
  • x = (x1, …, xn), where xi is the size of agent i's support
• The expected utility of player i under a strategy profile p = (p1, …, pn) is
  ui(p) = Σ_{a in A} ui(a) · Π_{j in N} pj(aj), where A = A1 x … x An.
• Problem: design an algorithm that finds a Nash equilibrium of a normal-form game.
• A strategy profile p* in P is a Nash equilibrium (NE) if, for every player i and every mixed strategy pi,
  ui(pi, p*-i) ≤ ui(p*).
• A small computational illustration of these definitions follows below.
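To make these definitions concrete, here is a minimal Python sketch, assuming each player's payoffs are stored as an n-dimensional numpy array indexed by action profiles and each mixed strategy as a probability vector; the names expected_utility and is_nash are illustrative, not taken from the paper.

```python
import itertools
import numpy as np

def expected_utility(payoff, profile):
    """Expected utility u_i(p) = sum over action profiles a of
    u_i(a) * prod_j p_j(a_j), for one player's payoff tensor."""
    total = 0.0
    for a in itertools.product(*(range(len(p)) for p in profile)):
        prob = np.prod([profile[j][a[j]] for j in range(len(profile))])
        total += prob * payoff[a]
    return total

def is_nash(payoffs, profile, eps=1e-6):
    """NE check: no player can gain more than eps by deviating.
    Checking pure-strategy deviations suffices, since the best
    deviation can always be taken to be a pure strategy."""
    n = len(profile)
    for i in range(n):
        current = expected_utility(payoffs[i], profile)
        for ai in range(len(profile[i])):
            pure = np.zeros(len(profile[i]))
            pure[ai] = 1.0
            deviation = list(profile)
            deviation[i] = pure
            if expected_utility(payoffs[i], deviation) > current + eps:
                return False
    return True
```

For instance, in matching pennies the uniform profile [np.array([0.5, 0.5]), np.array([0.5, 0.5])] passes this check, while any pure profile fails.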

4. Development of algorithms
• Two extremes:
  • 1st: aim to design a low-complexity algorithm
    • Gain deep insight into the structure of the problem
    • Design highly specialized algorithms
  • 2nd: aim to design simpler algorithms with an "acceptable" complexity
    • Identify shallow heuristics, hoping that, given increasing computing power, they will be sufficient
    • Focus on more "common" problem instances
• Which one is better?
  • Although the first is more interesting, in practice the second is often preferred, either because it is simpler or because it outperforms the first.
  • Note: there are optimal algorithms that have never been implemented because they are too complicated.

5. Background/Related work
• Nash equilibrium is an important concept in game theory
• "Little is known about the problem of computing a sample NE in a normal-form game"
• A normal-form game is guaranteed to have at least one NE
• The problem does not fall into a standard complexity class (Papadimitriou, 2001)
  • It cannot be cast as a decision problem

6. Background/Related work
• 2-player games:
  • Lemke-Howson (1964)
  • Dickhaut and Kaplan (1991)
    • Enumerates all possible pairs of supports for a 2-player game
    • For each pair, it solves a feasibility program
• n-player games:
  • Simplicial Subdivision (van der Laan et al., 1987)
    • Approximates a fixed point of a function defined on a simplotope
  • Govindan and Wilson (2003)
    • First perturbs the game to one that has a known equilibrium, and
    • then traces the solution back to the original game as the magnitude of the perturbation approaches zero

7. Background/Related work
• GAMUT: a "recently" (2004) introduced computational testbed for game theory
  • "Run the GAMUT: A Comprehensive Approach to Evaluating Game-Theoretic Algorithms" by E. Nudelman et al.

8. What do the authors propose and what is their contribution?
• Propose:
  • Heuristic-based algorithms for 2-player games and for n-player games
    • Explore the space of support profiles, using a backtracking procedure to instantiate the support of each player separately
  • Test using a variety of different distributions
    • Use GAMUT (a computational testbed for game theory)
• Contribution:
  • In a large number of cases, the proposed algorithms outperform the Lemke-Howson algorithm on 2-player games and Simplicial Subdivision on n-player games.

9. The proposed algorithms
• Explore "support" profiles (the pure strategies played with nonzero probability)
• Use backtracking procedures
• Are biased towards simple solutions:
  • Preference for small supports
  • Based on the observation that many games studied in the past proved to have at least one simple solution
  • E.g., for n = 2, the probability that there exists a NE consistent with a particular support profile varies inversely with the size of the supports, and is zero for unbalanced support profiles
• A sketch of this ordering bias follows below.
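As an illustration of the "small and balanced first" bias, here is a minimal Python sketch, assuming (as in the paper's 2-player algorithm) that support-size profiles are ordered first by how unbalanced they are and then by their total size; the callback test_given_supports stands in for the feasibility program described on the following slides and is a hypothetical name.

```python
from itertools import combinations, product

def support_size_profiles(m1, m2):
    """All support-size profiles (x1, x2), ordered to favor balanced and
    small supports: sort by |x1 - x2| first, then by x1 + x2."""
    sizes = [(x1, x2) for x1 in range(1, m1 + 1) for x2 in range(1, m2 + 1)]
    return sorted(sizes, key=lambda x: (abs(x[0] - x[1]), x[0] + x[1]))

def search(m1, m2, test_given_supports):
    """Skeleton of the 2-player search: try small, balanced support pairs
    first; the paper additionally prunes conditionally dominated actions
    inside this loop, which this sketch omits."""
    for x1, x2 in support_size_profiles(m1, m2):
        for s1, s2 in product(combinations(range(m1), x1),
                              combinations(range(m2), x2)):
            result = test_given_supports(set(s1), set(s2))
            if result is not None:   # feasibility program found an equilibrium
                return result
    return None
```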

10. Proposed algorithm for 2-player games

11. Proposed algorithm for 2-player games
• vi: the expected utility of agent i in an equilibrium

12. Proposed algorithm for 2-player games
• The first two classes of constraints require that:
  • each player is indifferent between all actions within his/her support, and
  • no player strictly prefers an action outside of his/her support.
• A sketch of this feasibility program appears below.
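These constraints are linear once the opponent's mixed strategy is treated as a variable, so for two players the support test can be posed as a linear feasibility problem. Below is a minimal sketch, assuming payoff matrices A and B for the row and column player and using scipy.optimize.linprog with a zero objective purely as a feasibility solver; the function name test_given_supports and the variable layout are illustrative, not the paper's code.

```python
import numpy as np
from scipy.optimize import linprog

def test_given_supports(A, B, S1, S2):
    """Feasibility check for a candidate support pair (S1, S2), given as
    sets of action indices. Variables: p1 (m1), p2 (m2), v1, v2.
    Constraints (per the slide): actions inside a support earn exactly v_i,
    actions outside earn at most v_i, probabilities form valid
    distributions, and actions outside the support get probability zero."""
    m1, m2 = A.shape
    nvar = m1 + m2 + 2                     # layout: [p1, p2, v1, v2]
    iv1, iv2 = m1 + m2, m1 + m2 + 1
    A_eq, b_eq, A_ub, b_ub = [], [], [], []

    # Each probability vector sums to one.
    row = np.zeros(nvar); row[:m1] = 1.0
    A_eq.append(row); b_eq.append(1.0)
    row = np.zeros(nvar); row[m1:m1 + m2] = 1.0
    A_eq.append(row); b_eq.append(1.0)

    # Player 1: payoff of each action against p2, compared with v1.
    for a1 in range(m1):
        row = np.zeros(nvar)
        row[m1:m1 + m2] = A[a1, :]
        row[iv1] = -1.0
        if a1 in S1:
            A_eq.append(row); b_eq.append(0.0)   # indifferent inside the support
        else:
            A_ub.append(row); b_ub.append(0.0)   # no strict preference outside it

    # Player 2: payoff of each action against p1, compared with v2.
    for a2 in range(m2):
        row = np.zeros(nvar)
        row[:m1] = B[:, a2]
        row[iv2] = -1.0
        if a2 in S2:
            A_eq.append(row); b_eq.append(0.0)
        else:
            A_ub.append(row); b_ub.append(0.0)

    # Probability bounds: zero outside the supports, [0, 1] inside; v1, v2 free.
    bounds = [(0, 1) if a in S1 else (0, 0) for a in range(m1)] \
           + [(0, 1) if a in S2 else (0, 0) for a in range(m2)] \
           + [(None, None), (None, None)]

    res = linprog(np.zeros(nvar),
                  A_ub=np.array(A_ub) if A_ub else None,
                  b_ub=np.array(b_ub) if b_ub else None,
                  A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=bounds, method="highs")
    if not res.success:
        return None
    return res.x[:m1], res.x[m1:m1 + m2]
```

If this program is feasible, the recovered (p1, p2) satisfy the equilibrium conditions for those supports; bound to fixed payoff matrices (e.g. via functools.partial), it can serve as the callback in the search skeleton sketched earlier.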

13. Experimental results
• The authors consider games drawn from a number of different distributions
• D18 is the most common one
• D5, D6, and D7 are also important distributions

14. Experimental results: Algorithms for 2-player games
• Experiment setup:
  • 2-player, 300-action games drawn from 24 of GAMUT's 2-player distributions
  • Executed on 100 games drawn from each distribution
• First diagram:
  • Compares the unconditional median running times of the algorithms
  • Might reflect the fact that there is a greater than 50% chance that the distribution will generate a game with a pure-strategy equilibrium
• Second diagram:
  • Compares the percentage of instances solved
• Third diagram:
  • Compares the average running time conditional on solving an instance

15. Experimental results: Algorithms for 2-player games
Compares the unconditional median running times of the algorithms. ("Might reflect the fact that there is a greater than 50% chance that the distribution will generate a game with a pure-strategy equilibrium.")

16. Experimental results: Algorithms for 2-player games
Compares the percentage of instances solved.

17. Experimental results: Algorithms for 2-player games
Compares the average running time conditional on solving an instance.

18. Experimental results: Algorithms for 2-player games
Compares the scaling behavior as the number of actions increases (unconditional average running time).

19. Experimental results: Algorithms for 2-player games
• Covariance Games
• Neither of the algorithms solved any of the games in another "Covariance Game" distribution, in which ρ = −0.9.

20. Proposed algorithm for n-player games
• Uses a general backtracking algorithm to solve a constraint satisfaction problem (CSP) for each support size profile
• The variables in each CSP are:
  • the supports Si, and
  • the domain of each Si is the set of supports of size xi.
• Constraint:
  • no agent plays a conditionally dominated action.

21. Proposed algorithm for n-player games
• IRSDS:
  • Input: a domain for each player's support.
    • For each agent whose support has been instantiated, the domain contains only that instantiated support;
    • for each other agent i, it contains all supports of size xi that were not eliminated in a previous call to this procedure.
  • On each pass of the repeat-until loop, every action found in at least one support in a player's domain is checked for conditional domination.
  • If a domain becomes empty after the removal of a conditionally dominated action, the current instantiations of the Recursive-Backtracking are inconsistent, and IRSDS returns failure.
  • IRSDS repeats until it either returns failure or iterates through all actions of all players without finding a dominated action.
• A sketch of the dominance check appears below.
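For concreteness, here is a minimal Python sketch of the conditional-dominance test that IRSDS applies repeatedly, assuming payoff_i maps full action profiles (tuples) to player i's payoff (e.g. an n-dimensional numpy array or a dict) and that available[j] is the set of actions that still appear in some support in player j's domain; the names are illustrative.

```python
from itertools import product

def full_profile(a_rest, i, ai):
    """Rebuild a complete action profile by inserting player i's action."""
    profile = list(a_rest)
    profile.insert(i, ai)
    return tuple(profile)

def conditionally_dominated(payoff_i, i, ai, available):
    """True if action ai of player i is strictly dominated by some other
    action in available[i], conditional on the other players choosing
    only actions from their `available` sets."""
    others = [sorted(available[j]) for j in range(len(available)) if j != i]
    for alt in available[i]:
        if alt == ai:
            continue
        if all(payoff_i[full_profile(rest, i, alt)] >
               payoff_i[full_profile(rest, i, ai)]
               for rest in product(*others)):
            return True
    return False
```

IRSDS would call this check for every action occurring in some support in a player's domain, remove dominated actions (and the candidate supports containing them) from the domains, and repeat until nothing changes or some domain becomes empty.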

22. Proposed algorithm for n-player games

23. Algorithm 2
• For all support size profiles x = (x1, …, xn), sorted in increasing order first by the total support size Σi xi and then by the imbalance maxi xi − mini xi:
  • Alternate between Recursive-Backtracking (R-B), which instantiates one player's support at a time, and IRSDS, which prunes the remaining domains; if IRSDS fails, R-B backtracks, and if all instantiations for this size profile fail, move on to the next profile.
• R-B: Recursive Backtracking
• IRSDS: Iterated Removal of Strictly Dominated Strategies
• A sketch of the outer loop appears below.
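Here is a minimal Python sketch of the outer loop depicted above, assuming the ordering just described (total size first, then imbalance); `backtrack` stands in for the Recursive-Backtracking routine that interleaves support instantiation, IRSDS pruning, and the feasibility test, and is passed in as a hypothetical callback.

```python
from itertools import combinations, product

def size_profiles(num_actions):
    """All support-size profiles x = (x1, ..., xn), sorted in increasing
    order first by total size and then by imbalance."""
    profiles = product(*(range(1, m + 1) for m in num_actions))
    return sorted(profiles, key=lambda x: (sum(x), max(x) - min(x)))

def find_equilibrium(num_actions, backtrack):
    """Outer loop: for each size profile, seed every player's domain with
    all supports of that size and hand the domains to `backtrack`, which
    returns an equilibrium or None."""
    for x in size_profiles(num_actions):
        domains = [list(combinations(range(m), xi))
                   for m, xi in zip(num_actions, x)]
        result = backtrack(domains)
        if result is not None:      # success; otherwise try the next profile
            return result
    return None
```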

24. Experimental results: n-player games
• Experiment setup:
  • 6-player, 5-action games drawn from 22 of GAMUT's n-player distributions
  • 15,625 outcomes and 93,750 payoffs per game
  • Executed on 100 games drawn from each distribution
• First diagram:
  • Compares the unconditional median running times of the algorithms
  • Might reflect the fact that there is a greater than 50% chance that the distribution will generate a game with a pure-strategy equilibrium
• Second diagram:
  • Compares the percentage of instances solved
• Third diagram:
  • Compares the average running time conditional on solving an instance

25. Experimental results: n-player games
• Compares the unconditional median running times of the algorithms
• Compares the percentage of instances solved

  26. Experimental Results: n-player games Compares the average running time conditional on solving an instance

27. Experimental results: n-player games
• Compares the scaling behavior with the number of players held constant at 6 and the number of actions varying (unconditional average running time)
• Compares the scaling behavior with the number of players varying and the number of actions held constant at 5 (unconditional average running time)

28. Experimental results: Percentage of pure-strategy NE
• 2-player games
• n-player games

29. Experimental results: Average measure of support balance
• 6-player, 5-action games
• 2-player, 300-action games

30. Conclusions
• The authors propose algorithms that use backtracking approaches to search the space of support profiles, favoring supports that are small and balanced.
• Both algorithms outperform the current state of the art.
• The most difficult games come from the "Covariance Game" model as the covariance approaches its minimal value.
  • They are hard because the authors found that:
    • as the covariance decreases, the number of equilibria decreases, and
    • the equilibria that do exist are more likely to have support sizes near one half of the number of actions.

31. Future work
• Employ more sophisticated CSP techniques
• Explore local search, in which the state space is the set of all possible supports, and the available moves are to add or delete an action from the support of a player
• Study the games that are generated by the Covariance Game distribution

32. Take-home message
• Studying the results of complicated problems can lead to observations that, although they might not suggest optimal solutions, can provide insights into how to improve current approaches.
• The selection of the tests and of the parameters to be examined is very important:
  • not only because they can show that your algorithm is working…
  • e.g., the "Covariance Game" model might prove a good starting point for evaluating game-theoretic algorithms.
