
Better automated abstraction techniques for imperfect information games


Presentation Transcript


  1. Better automated abstraction techniques for imperfect information games Andrew Gilpin and Tuomas Sandholm Carnegie Mellon University Computer Science Department

  2. Games and information • Games can be differentiated based on the information available to the players • Perfect information games: players have complete knowledge about the state of the world • Examples: Chess, Go, Checkers • Imperfect information games: players face uncertainty about the state of the world • Examples: • A robot facing adversaries in an uncertain, stochastic environment • Almost any economic situation in which the other participants possess private information (e.g. valuations, quality information) • Almost any card game in which the other players’ cards are hidden • This class of games presents several challenges for AI • Imperfect information • Risk assessment and management • Speculation and counter-speculation

  3. Game theory • In multi-agent systems, an agent’s outcome depends on the actions of the other agents • an agent’s optimal strategy depends on the strategies of the other agents • Game theory defines how agents should play • Definition: A Nash equilibrium is a strategy for each agent such that no agent benefits by deviating to another strategy

  4. Computing equilibrium • In two-person zero-sum games, • Nash equilibria are minimax equilibria, so there is no equilibrium selection problem • Equilibrium can be found using LP • Any extensive form game (satisfying perfect recall) can be converted into a matrix game • Create one pure strategy in the matrix game for every possible pure contingency plan in the sequential game (set product of actions at information sets) • Leads to exponential blowup in number of strategies, even in the reduced normal form • Sequence form: More compact representation based on sequences of moves rather than pure strategies [von Stengel 96, Koller & Megiddo 92, Romanovskii 62] • Two-person zero-sum games with perfect recall can be solved in time polynomial in size of game tree • Not enough to solve Rhode Island Hold’em (3.1 billion nodes) or Texas Hold’em (10^18 nodes)
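
For intuition, here is a minimal sketch of the matrix-game LP mentioned above, solved with SciPy (an assumed dependency). This is not the sequence-form solver needed for large poker games, and the payoff matrix is a toy example:

```python
# Minimal sketch: solve a small two-person zero-sum matrix game by LP.
# A[i, j] is the row player's payoff; matching pennies is used as the example.
import numpy as np
from scipy.optimize import linprog

def solve_zero_sum(A):
    """Return (game value, row player's mixed strategy) for payoff matrix A."""
    m, n = A.shape
    # Variables: x_1..x_m (row player's mixed strategy) and v (game value).
    c = np.zeros(m + 1)
    c[-1] = -1.0                                  # linprog minimizes, so minimize -v
    # For every column j:  v - sum_i A[i, j] * x_i <= 0
    A_ub = np.hstack([-A.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    # The probabilities x_i must sum to one (v gets a zero coefficient).
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
    b_eq = np.array([1.0])
    bounds = [(0, 1)] * m + [(None, None)]        # x_i in [0, 1], v unbounded
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[-1], res.x[:m]

# Matching pennies: value 0, uniform strategy for the row player.
value, strategy = solve_zero_sum(np.array([[1.0, -1.0], [-1.0, 1.0]]))
print(value, strategy)
```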

  5. Automated abstraction [EC-06] • Automatic method for performing abstractions in a broad class of sequential games of imperfect information • Equilibrium-preserving game transformation, where certain information sets are merged and certain nodes within an information set are collapsed • GameShrink, algorithm for identifying and applying all the game transformations • Õ(n²) time • n = #nodes in the signal tree. In poker, these are possible card deals in the game • Run-time tends to be highly sublinear in the size of the game tree • Used these techniques to solve Rhode Island Hold’em • Largest poker game solved to date by over four orders of magnitude • Also developed approximate (lossy) version of GameShrink • Uses a similarity metric on nodes in the signal tree (e.g., |#wins1 - #wins2| + |#losses1 - #losses2|) and a similarity threshold
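
A sketch of the lossy similarity test only, not the full GameShrink transformation; the hand profiles and threshold below are invented for illustration:

```python
# Sketch: group signal-tree nodes whose win/loss profiles fall within a
# similarity threshold, using the metric from the slide:
#   |#wins1 - #wins2| + |#losses1 - #losses2|
# The hands, counts, and threshold here are hypothetical.

def similar(node_a, node_b, threshold):
    """True if the two win/loss profiles are within the similarity threshold."""
    return (abs(node_a["wins"] - node_b["wins"])
            + abs(node_a["losses"] - node_b["losses"])) <= threshold

def greedy_merge(nodes, threshold):
    """Greedily place each node into the first group whose representative is similar."""
    groups = []
    for node in nodes:
        for group in groups:
            if similar(group[0], node, threshold):
                group.append(node)
                break
        else:
            groups.append([node])
    return groups

hands = [
    {"name": "AKs", "wins": 920, "losses": 60},
    {"name": "AKo", "wins": 910, "losses": 70},
    {"name": "72o", "wins": 310, "losses": 650},
]
print([[h["name"] for h in g] for g in greedy_merge(hands, threshold=50)])
```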

  6. Illustration of our approach [diagram]: Original game → Abstraction → Abstracted game → Compute Nash → Nash equilibrium of the abstracted game → Nash equilibrium of the original game

  7. Example: Applying the ordered game isomorphic abstraction transformation

  8. Application to Texas Hold’em • Two-person game tree has ~10^18 leaves • Too large to run lossless GameShrink • Even after that, LP would be too large • Already too large when we applied this to first two rounds • We split the 4 betting rounds into two phases • Phase I (first 3 rounds) solved offline using new approximate version of GameShrink followed by LP • Phase II (last 2 rounds): • abstractions computed offline • real-time equilibrium computation using updated hand probabilities and anytime LP

  9. Optimized approximate abstractions • Original version of GameShrink yielded lopsided abstractions when used as an approximation algorithm • Now we instead find an abstraction via clustering: • For each level of the tree (starting from root): • For each group of hands: • use k-means clustering to split group i into k_i abstract “states” • win probability as the similarity metric (ties count as half a win) • for each value of k_i, compute expected error (considering hand probs) • We find, using integer programming, an abstraction (split of K into k_i’s) that minimizes this expected error, subject to a constraint on the total number of states, K, at that level • (= size of the resulting LP in the zero-sum case) • Solving this class of integer programs is quite easy in practice
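
A sketch of the clustering-plus-allocation idea under stated assumptions: weighted k-means (scikit-learn, an assumed dependency) scores each candidate k_i, and a small dynamic program stands in here for the integer program that splits the budget K among the groups; all data below is hypothetical:

```python
# Sketch: per-group 1-D k-means on win probability, then a budget split of K.
import numpy as np
from sklearn.cluster import KMeans

def candidate_errors(win_probs, hand_probs, max_k):
    """Weighted k-means error for each candidate number of abstract states k."""
    X = np.asarray(win_probs).reshape(-1, 1)
    errors = {}
    for k in range(1, min(max_k, len(win_probs)) + 1):
        km = KMeans(n_clusters=k, n_init=10, random_state=0)
        km.fit(X, sample_weight=hand_probs)
        errors[k] = km.inertia_            # weighted squared error for this split
    return errors

def allocate_states(per_group_errors, K):
    """Choose one k_i per group minimizing total error subject to sum k_i <= K."""
    best = {0: (0.0, [])}                  # states used -> (total error, chosen k_i's)
    for errors in per_group_errors:
        nxt = {}
        for used, (err, ks) in best.items():
            for k, e in errors.items():
                if used + k <= K and (used + k not in nxt or err + e < nxt[used + k][0]):
                    nxt[used + k] = (err + e, ks + [k])
        best = nxt
    return min(best.values())              # (total error, [k_1, k_2, ...])

# Example (hypothetical numbers): two groups, a budget of K = 5 abstract states.
g1 = candidate_errors([0.82, 0.80, 0.55, 0.53], [0.3, 0.3, 0.2, 0.2], max_k=3)
g2 = candidate_errors([0.40, 0.38, 0.10], [0.4, 0.4, 0.2], max_k=3)
print(allocate_states([g1, g2], K=5))
```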

  10. Phase I (first three rounds) • Automated abstraction using approximate version of GameShrink • Round 1 • There are 1,326 hands, of which 169 are strategically different • We consider 15 strategically different hands • Round 2 • There are 25,989,600 distinct possible hands • GameShrink (in lossless mode for Phase I) determines that there are about a million strategically different hands • This is still too large to solve • We used GameShrink to compute an abstraction that considers 225 strategically different hands • Round 3 • There are 1,221,511,200 distinct possible hands • We consider 900 strategically different hands • This process took about 3 days running on 4 CPUs • LP solve took 7 days and 80 gigabytes using CPLEX’s barrier method (interior-point method for linear programming)
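
A quick arithmetic check of the hand counts quoted above, assuming the cards are dealt as two hole cards, then a three-card flop, then one turn card:

```python
# Verify the per-round hand counts from the slide.
from math import comb

round1 = comb(52, 2)                    # 1,326 two-card starting hands
round2 = comb(52, 2) * comb(50, 3)      # 25,989,600 (hole cards + flop)
round3 = round2 * 47                    # 1,221,511,200 (plus the turn card)
print(round1, round2, round3)
```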

  11. Mitigating effect of round-based abstraction (i.e., having 2 phases) • For leaves in the first phase, we could assume no betting in the later rounds • Ignores implied odds • Can do better by estimating the amount of betting that occurs in later rounds • Incorporate this information into the LP for the first phase • For each possible hand strength and in each possible betting situation, we store the probability of each possible action • Mine the betting history in the later rounds from hundreds of thousands of played hands
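
A sketch, with a hypothetical data layout, of how such per-situation action probabilities could be mined from played hands:

```python
# Sketch: estimate P(action | hand-strength bucket, betting situation) by
# counting actions in hand histories. The history format is hypothetical.
from collections import Counter, defaultdict

def action_model(hand_histories):
    """hand_histories: iterable of (strength_bucket, betting_situation, action)."""
    counts = defaultdict(Counter)
    for bucket, situation, action in hand_histories:
        counts[(bucket, situation)][action] += 1
    model = {}
    for key, ctr in counts.items():
        total = sum(ctr.values())
        model[key] = {action: n / total for action, n in ctr.items()}
    return model

histories = [
    ("strong", "facing_bet", "raise"),
    ("strong", "facing_bet", "call"),
    ("weak", "facing_bet", "fold"),
]
print(action_model(histories))
```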

  12. Example of betting in fourth round Player 1 has bet. Player 2 to fold, call, or raise

  13. Phase II (last two rounds) • Abstractions computed offline • Betting history doesn’t matter => C(52,4) = 270,725 situations • Simple suit isomorphisms at the root of Phase II halve this • For each such setting, we use GameShrink to generate an abstraction with 10 and 100 strategically different hands in the last two rounds, respectively • Real-time equilibrium computation (using LP) • So that our strategies are specific to the particular hand (too many to precompute) • Updated hand probabilities from Phase I equilibrium using betting histories and community card history (s_i is player i’s strategy, h is an information set) • Conditional choice of primal vs. dual simplex • Achieves anytime capability for our own player • Dealing with running off the equilibrium path
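
A sketch of the Bayes-style re-weighting that the "updated hand probabilities" bullet describes, assuming a hypothetical phase1_strategy(hand, info_set) that returns the Phase I action probabilities for that hand; when the observed betting has zero probability (off the equilibrium path), this sketch simply falls back to the prior:

```python
# Sketch: re-weight each possible opponent hand by the probability that the
# Phase I strategy would have produced the observed betting with that hand.
# `prior`, `observed_actions`, and `phase1_strategy` are hypothetical stand-ins.

def update_hand_probs(prior, observed_actions, phase1_strategy):
    """prior: {hand: prob}; observed_actions: [(info_set, action), ...]."""
    posterior = {}
    for hand, p in prior.items():
        likelihood = 1.0
        for info_set, action in observed_actions:
            likelihood *= phase1_strategy(hand, info_set).get(action, 0.0)
        posterior[hand] = p * likelihood
    total = sum(posterior.values())
    if total > 0:
        return {h: p / total for h, p in posterior.items()}
    return dict(prior)                 # off the equilibrium path: keep the prior
```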

  14. Precompute several databases • db5: possible wins and losses (for a single player) for every combination of two hole cards and three community cards (25,989,600 entries) • Used by GameShrink for quickly comparing the similarity of two hands • db223: possible wins and losses (for both players) for every combination of pairs of two hole cards and three community cards based on a roll-out of the remaining cards (14,047,378,800 entries) • Used for computing payoffs of the Phase I game to speed up the LP creation • handval: concise encoding of a 7-card hand rank used for fast comparisons of hands (133,784,560 entries) • Used in several places, including in the construction of db5 and db223 • Colexicographical ordering used to compute indices into the databases allowing for very fast lookups
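
A sketch of colexicographic indexing for card subsets, the kind of perfect-hash lookup the last bullet describes; the exact layouts of db5, db223, and handval are not reproduced here:

```python
# Sketch: colexicographic rank of a sorted card subset (cards numbered 0-51),
# giving a dense 0-based index suitable for array lookups.
from math import comb

def colex_index(cards):
    """Rank of a card subset in colexicographic order: sum over i of C(c_i, i)."""
    return sum(comb(c, i) for i, c in enumerate(sorted(cards), start=1))

# Two hole cards map to indices 0 .. C(52,2) - 1 = 1,325, i.e. 1,326 distinct hands.
print(colex_index([0, 1]), colex_index([50, 51]), comb(52, 2))
```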

  15. Experimental results • GS1: Game theory-based player, old version of manual abstraction, no strategy simulation in later rounds [GS 2006] • Sparbot: Game theory-based player, manual abstraction [Billings et al 2003] • Vexbot: Opponent modeling, miximax search with statistical sampling [Billings et al 2004]

  16. Summary • Competitive Texas Hold’em player automatically generated • First phase (rounds 1, 2 & 3): automated abstraction & LP solved offline, using statistical data to compute payoffs at end of round 3 • Second phase (rounds 3 & 4): abstraction precomputed automatically; LP solved in real-time using updated hand probabilities and an anytime LP • Techniques are applicable to many sequential games of imperfect information, not just poker!

  17. Where to from here? • The top poker-playing programs are fairly equal • Recent experimental results show our player is competitive with (but not better than) expert human players • Provable approximation, e.g., ex post • Other types of abstraction • More scalable equilibrium-finding algorithms • Tournament poker [e.g. Miltersen & Sørensen 06] • More than two players [e.g. Nash & Shapley 50] • Thank you
