
Learning in Games


  1. Learning in Games Scott E Page Russell Golman & Jenna Bednar University of Michigan-Ann Arbor

  2. Overview: Analytic and Computational Models; Nash Equilibrium (Stability, Basins of Attraction); Cultural Learning; Agent Models

  3. Domains of Interest Computable Analytic

  4. Learning/Adaptation Diversity Interactions/Epistasis Networks/Geography

  5. Equilibrium Science We can start by looking at the role that learning rules play in equilibrium systems. This will give us some insight into whether they’ll matter in complex systems.

  6. Players

  7. Actions Cooperate: C Defect: D

  8. Payoffs [2x2 payoff matrix with actions C and D for each player]

  9. Best Responses [2x2 payoff matrix with one player's best responses marked]

  10. Best Responses [2x2 payoff matrix with both players' best responses marked]

  11. Nash Equilibrium [2x2 payoff matrix with the equilibrium cell, paying 2,2, highlighted]

  12. “Equilibrium” Based Science Step 1: Set up game Step 2: Solve for equilibrium Step 3: Show how equilibrium depends on parameters of model Step 4: Provide empirical support

  13. Is Equilibrium Enough? Existence: an equilibrium exists. Stability: the equilibrium is stable. Attainability: the equilibrium is attained by a learning rule.

  14. Examples: best respond to the current state; better respond; mimic the best; mimic the better; incorporate portions of the best or better; random with death of the unfit

  15. Stability Stability can only be defined relative to a learning dynamic. In dynamical systems, we often take that dynamic to be a best response function, but with human actors we need not assume people best respond.

  16. Battle of the Sexes Game [2x2 payoff matrix with actions EF and CG for each player]

  17. Three Equilibria [two pure equilibria and a mixed equilibrium in which the players mix with weights 1/4, 3/4 and 3/4, 1/4]

  18. Unstable Mixed? [Perturb the mixed equilibrium: with the opponent's mix at 1/4 + ε and 3/4 - ε, EF earns 3/4 + 3ε while CG earns 3/4 - 3ε]

  19. Note the Implicit Assumption Our stability analysis assumed that Player 1 would best respond to Player 2’s tremble. However, the learning rule could instead be to move back to the mixed strategy equilibrium. If so, Player 1 would sit tight and Player 2 would return to the mixed strategy equilibrium.

  20. Empirical Foundations We need to have some understanding of how people learn and adapt to say anything about stability.

  21. Classes of Learning Rules Belief Based Learning Rules: People best respond given their beliefs about how other people play. Replicator Learning Rules: People replicate successful actions of others.

  22. Stability Results An extensive literature provides (fairly weak) conditions under which the two learning rules have identical stability properties. Synopsis: learning rules do not matter.

  23. Basins Question Do games exist in which best response dynamics and replicator dynamics produce different basins of attraction? Question: Does learning matter?

  24. Best Response Dynamics x = mixed strategy of Player 1 y = mixed strategy of Player 2 dx/dt = BR(y) - x dy/dt = BR(x) - y
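
A minimal numerical sketch of these best response dynamics, using Euler steps on dx/dt = BR(y) - x and dy/dt = BR(x) - y. The Battle of the Sexes payoffs (3 and 1) are an assumption, since the slide's payoff table did not survive; they do, however, reproduce the 1/4 and 3/4 mixing weights mentioned on slide 17.

```python
# Sketch: continuous-time best response dynamics for a 2x2 game (Euler steps).
# Payoff values are illustrative assumptions, not taken from the slides.
import numpy as np

A = np.array([[3.0, 0.0],
              [0.0, 1.0]])          # row player's payoffs (assumed)
B = np.array([[1.0, 0.0],
              [0.0, 3.0]])          # column player's payoffs (assumed)

def best_response(payoffs, opponent_mix):
    """Pure strategy (as a probability vector) maximizing expected payoff."""
    expected = payoffs @ opponent_mix
    br = np.zeros_like(expected)
    br[np.argmax(expected)] = 1.0
    return br

def simulate(x0, y0, dt=0.01, steps=2000):
    x, y = np.array(x0, float), np.array(y0, float)
    for _ in range(steps):
        x = x + dt * (best_response(A, y) - x)    # dx/dt = BR(y) - x
        y = y + dt * (best_response(B.T, x) - y)  # dy/dt = BR(x) - y
    return x, y

# From this starting point the dynamic settles on one of the pure equilibria.
print(simulate([0.6, 0.4], [0.3, 0.7]))
```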

  25. Replicator Dynamics dxi/dt = xi(πi - πave) dyi/dt = yi(πi - πave)
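
A matching sketch of single-population replicator dynamics, dxi/dt = xi(πi - πave), applied to a symmetric 3-strategy game. The payoff matrix is the one implied by the expected payoffs on slide 28; treat both it and the starting mix as assumptions for illustration.

```python
# Sketch: replicator dynamics dx_i/dt = x_i * (pi_i - pi_avg) for a symmetric
# 3-strategy game. Payoff rows reconstructed from slide 28 (an assumption).
import numpy as np

PAYOFFS = np.array([[60, 60, 30],
                    [30, 70, 20],
                    [50, 25, 25]], dtype=float)

def replicator_step(x, dt=0.001):
    fitness = PAYOFFS @ x          # pi_i: expected payoff of each strategy
    avg = x @ fitness              # pi_avg: population average payoff
    return x + dt * x * (fitness - avg)

x = np.array([0.4, 0.2, 0.4])      # the point .4A + .2B + .4C from slide 29
for _ in range(50_000):
    x = replicator_step(x)
print(x)    # the population converges toward one of the pure-strategy equilibria
```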

  26. Symmetric Matrix Game [3x3 payoff matrix over actions A, B, C; from the expected payoffs on slide 28, row A = 60, 60, 30; row B = 30, 70, 20; row C = 50, 25, 25]

  27. [Payoff matrix diagram labeled with actions A, B, C]

  28. Best Response Basins A > B iff 60pA + 60pB + 30pC > 30pA + 70pB + 20pC; A > C iff 60pA + 60pB + 30pC > 50pA + 25pB + 25pC
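
The same comparisons as a tiny helper that reports which action is a best response to a given population mix (pA, pB, pC). The payoff rows are the ones read off the inequalities above, and the test points are the barycentric examples listed on slide 29.

```python
# Sketch: which of A, B, C is a best response to the population mix (pA, pB, pC),
# using the expected payoffs written on slide 28.
def best_response_region(pA, pB, pC):
    expected = {
        "A": 60*pA + 60*pB + 30*pC,
        "B": 30*pA + 70*pB + 20*pC,
        "C": 50*pA + 25*pB + 25*pC,
    }
    return max(expected, key=expected.get)

# Test points taken from the barycentric examples on slide 29.
print(best_response_region(0.5, 0.5, 0.0))   # .5A + .5B
print(best_response_region(0.0, 0.3, 0.7))   # .7C + .3B
print(best_response_region(0.4, 0.2, 0.4))   # .4A + .2B + .4C
```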

  29. Barycentric Coordinates [Simplex with vertices A, B, C; example points marked at .7C + .3B, .4A + .2B + .4C, and .5A + .5B]

  30. Best Responses [Simplex diagram partitioned into the best-response regions for A, B, and C]

  31. Stable Equilibria [Simplex diagram marking the stable equilibria]

  32. Best Response Basins [Simplex diagram showing the basins of attraction under best response dynamics]

  33. Replicator Dynamics Basins [Simplex diagram showing the basins under replicator dynamics; one region is marked with a question mark]

  34. Replicator Dynamics [Simplex diagram showing the basins under replicator dynamics]

  35. Recall: Basins Question Do games exist in which best response dynamics and replicator dynamics produce very different basins of attraction? Question: Does learning matter?

  36. Conjecture For any ε > 0, there exists a symmetric matrix game such that the basins of attraction for distinct equilibria under continuous-time best response dynamics and replicator dynamics overlap by less than ε

  37. Collective Action Game [4x4 payoff matrix over actions SI, Coop, Pred, Naive]

  38. Naïve Goes Away [Simplex over Pred, Coop, SI once Naive is eliminated]

  39. Basins on Face [Basins of attraction on the Pred, Coop, SI face of the simplex]

  40. Lots of Predatory [Simplex showing a large basin of attraction for Pred]

  41. Best Response [Simplex showing the basins over Pred, SI, Coop under best response dynamics]

  42. Replicator [Simplex showing the basins over Pred, SI, Coop under replicator dynamics]

  43. The Math dxi/dt = xi(πi - πave); πave = 2xS + xC(1 + NxC); dxC/dt = xC[(1 + NxC) - 2xS - xC(1 + NxC)] = xC[(1 + NxC)(1 - xC) - 2xS]
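
A quick symbolic check of the algebraic step above, confirming that the two expressions for dxC/dt agree; this is an editorial aid using sympy, not part of the original slides.

```python
# Symbolic check of the algebra on slide 43.
import sympy as sp

xC, xS, N = sp.symbols('xC xS N')
lhs = xC * ((1 + N*xC) - 2*xS - xC*(1 + N*xC))
rhs = xC * ((1 + N*xC)*(1 - xC) - 2*xS)
print(sp.simplify(lhs - rhs))   # prints 0: the two forms of dxC/dt are identical
```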

  44. Choose N > 1/ε and assume xC > ε, so NxC > 1 and hence 1 + NxC > 2. Since xS ≤ 1 - xC, dxC/dt = xC[(1 + NxC)(1 - xC) - 2xS] > xC[2(1 - xC) - 2(1 - xC)] = 0. Therefore, xC always increases.
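
A numerical spot-check of this argument over a coarse grid of population states with xS + xC ≤ 1; the chosen ε and the grid are illustrative assumptions, and the derivative is the dxC/dt expression from slide 43.

```python
# Spot-check: with N > 1/eps and x_C > eps, dx_C/dt from slide 43 stays positive.
# eps and the state grid are illustrative assumptions.
import itertools

eps = 0.1
N = 1.0 / eps + 1.0                # any N > 1/eps works for the argument

def dxC_dt(xC, xS):
    return xC * ((1 + N * xC) * (1 - xC) - 2 * xS)

grid = [i / 100 for i in range(1, 100)]
violations = [(xC, xS) for xC, xS in itertools.product(grid, repeat=2)
              if xC > eps and xC + xS <= 1 and dxC_dt(xC, xS) <= 0]
print("violations:", violations)   # empty: the derivative is positive on the grid
```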

  45. Collective Action Game [4x4 payoff matrix over actions SI, Coop, Pred, Naive, revisited]

  46. Theorem: In any symmetric matrix game for which best response and replicator dynamics attain different equilibria with probability one, there exists an action A that is both an initial best response with probability one and not an equilibrium.

  47. Convex Combinations Suppose the population learns using the following rule: a (Best Response) + (1 - a) (Replicator)
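
A sketch of this convex-combination rule for a single population, mixing the best response drift and the replicator drift with weight a. The 3-strategy payoff matrix is again the reconstruction from slide 28, and the weight, step size, and starting point are illustrative assumptions.

```python
# Sketch: population dynamics under a * (best response) + (1 - a) * (replicator).
# Payoffs, weight a, step size, and starting point are illustrative assumptions.
import numpy as np

PAYOFFS = np.array([[60, 60, 30],
                    [30, 70, 20],
                    [50, 25, 25]], dtype=float)

def combined_step(x, a=0.5, dt=0.001):
    fitness = PAYOFFS @ x
    br = np.zeros_like(x)
    br[np.argmax(fitness)] = 1.0               # current pure best response
    br_drift = br - x                          # best response dynamics term
    rep_drift = x * (fitness - x @ fitness)    # replicator dynamics term
    x = x + dt * (a * br_drift + (1 - a) * rep_drift)
    return x / x.sum()

x = np.array([1/3, 1/3, 1/3])                  # start at the center of the simplex
for _ in range(50_000):
    x = combined_step(x)
print(x)   # the equilibrium selected by this particular mixture of the two rules
```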

  48. Claim: There exists a class of games in which Best Response, Replicator Dynamics, and any fixed convex combination select distinct equilibria with probability one. Best Response -> A Replicator -> B a BR + (1-a) Rep -> C

  49. Learning/Adaptation Diversity Interactions/Epistasis Networks/Geography
