
Derivative Action Learning in Games



Presentation Transcript


  1. Derivative Action Learning in Games Review of: J. Shamma and G. Arslan, “Dynamic Fictitious Play, Dynamic Gradient Play, and Distributed Convergence to Nash Equilibria,” IEEE Transactions on Automatic Control, Vol. 50, no. 3, pp. 312-327, March 2005

  2. Overview
  • The authors propose an extension of fictitious play (FP) and gradient play (GP) in which strategy adjustment is a function of both the estimated opponent strategy and its time derivative.
  • They demonstrate that, when the learning rules are well calibrated, convergence (or near-convergence) to Nash equilibria and asymptotic stability in the vicinity of equilibria can be achieved in games where static FP and GP fail to do so.

  3. Game Setup This paper addresses a class of two-player games in which each player selects an action ai from a finite set at each stage of the game according to his mixed strategy pi, and experiences utility U(p1, p2) equal to his expected payoff plus an additional entropy reward for playing a mixed strategy. The authors do not discuss the purpose of the entropy term; it makes the best response smooth and single-valued (the logit map of slide 6), and it may also help avoid convergence to inferior local maxima of the utility function. The actual payoff depends on the combined player actions a1 and a2, each drawn at random according to the mixed strategies p1 and p2.
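As a sketch of the setup (the symbols Mi and τ below are my notation and may differ from the paper's), player i's utility is his expected payoff plus an entropy bonus:

    U_i(p_i, p_{-i}) = p_i^\top M_i \, p_{-i} + \tau H(p_i), \qquad H(p_i) = -\sum_{j=1}^{m_i} p_i(j) \log p_i(j),

where Mi is player i's payoff matrix and τ > 0 weights the entropy reward.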

  4. Entropy Function [Figure: the entropy bonus H(·) rewards mixed strategies; H is plotted against the probability of selecting a1 in the two-dimensional strategy space.]

  5. Empirical Estimation and Best Response Player i’s strategy pi is, in general, mixed: it lies in the simplex in mi-dimensional space, where mi is the number of actions available to player i, whose vertices correspond to the pure actions. Player i adjusts his strategy by observing his opponent’s actions, forming an empirical estimate q-i of the opponent’s strategy, and computing the best mixed-strategy response to that estimate. The adjusted strategy then directs his next move.
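A minimal discrete-time sketch of this bookkeeping, assuming standard fictitious-play updates (the function names and the softmax best response are my own illustration, not the paper's code):

    import numpy as np

    def update_empirical(q, a_opp, k, m):
        """Running average of the opponent's observed actions.
        q: current empirical estimate (length-m vector); a_opp: action just observed;
        k: number of observations so far (k >= 1)."""
        e = np.zeros(m)
        e[a_opp] = 1.0              # one-hot encoding of the observed action
        return q + (e - q) / k      # incremental running-average update

    def best_response(M, q_opp, tau=0.1):
        """Entropy-smoothed (logit) best response to the estimate q_opp."""
        x = M @ q_opp / tau
        x = x - x.max()             # shift for numerical stability
        p = np.exp(x)
        return p / p.sum()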

  6. Best Response Function The best response is defined by the authors to be the mixed strategy that maximizes the player's utility (expected payoff plus the entropy term). The authors claim (without proof) that, for τ > 0, the utility-maximizing function is the logit function.
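For reference, the entropy-smoothed maximizer has the familiar logit/softmax form (a standard result; notation as above):

    \beta_i(q_{-i})(j) = \frac{\exp\big( (M_i q_{-i})_j / \tau \big)}{\sum_k \exp\big( (M_i q_{-i})_k / \tau \big)},

i.e., the j-th component of the best response is a softmax of the expected payoffs to the pure actions, with temperature τ.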

  7. FP in Continuous Time The remaining discussion of fictitious play is conducted in the continuous-time domain. This lets the authors describe the system dynamics with smooth differential equations, and a player's action at each instant is identified with his mixed strategy. The discrete-time dynamics are then interpreted as stochastic approximations of solutions to the continuous-time differential equations. This transformation is discussed in [Benaim, Hofbauer and Sorin 2003] and, presumably, in [Benaim and Hirsch 1996], though I have not seen the latter myself.
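In continuous time, fictitious play is typically written as the differential equation below, with each player's empirical frequency chasing the (smoothed) best response to the other's; I believe this matches the paper's setup up to notation:

    \dot{q}_i = \beta_i(q_{-i}) - q_i, \qquad i = 1, 2.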

  8. Achieving Nash Equilibrium Nash equilibria are reached at fixed points of the Best Response function. Convergence to fixed points occurs as the empirical frequency estimates converge to the actual strategies played.
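In the notation above, the rest points satisfy

    \dot{q}_i = 0 \iff q_i = \beta_i(q_{-i}), \qquad i = 1, 2,

i.e., each player's empirical frequency is a smoothed best response to the other's; with the entropy term these are entropy-perturbed equilibria, which approximate Nash equilibria for small τ.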

  9. Derivative Action FP (DAFP): Idealized Case – Exact DAFP Exact DAFP computes the best response to the observed empirical frequency plus a directly measured first-order forecast (the time derivative) of the opponent's strategy.
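Schematically, exact DAFP feeds the measured derivative directly into the best response; writing the derivative gain as γ (the paper's symbol may differ):

    p_i = \beta_i\big( q_{-i} + \gamma \, \dot{q}_{-i} \big).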

  10. Derivative Action FP (DAFP): Approximate DAFP Approximate DAFP replaces the directly measured derivative with an estimated first-order forecast of the opponent's strategy, again added to the observed empirical frequency before computing the best response.
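Since the true derivative is not directly available, a first-order "washout" filter can supply an estimate. The sketch below illustrates the idea only; it is not the paper's exact construction, and the names are mine:

    def filtered_derivative(q, z, lam, dt):
        """One Euler step of a first-order high-pass filter estimating dq/dt.
        q:   current empirical-frequency vector (e.g., a numpy array)
        z:   filter state (initialize to q at time 0)
        lam: filter bandwidth > 0 (larger tracks faster but amplifies noise)
        dt:  integration step
        Returns (derivative estimate, updated filter state)."""
        z = z + dt * lam * (q - z)   # z lags q with time constant 1/lam
        r = lam * (q - z)            # high-pass output: r -> dq/dt as lam grows
        return r, z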

  11. Exact DAFP, Special Case: Derivative Gain = 1 System inversion – each player seeks to play a best response against the opponent's current strategy.
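The "system inversion" reading follows from the continuous-time FP identity q̇_{-i} = p_{-i} − q_{-i}: with unit derivative gain,

    q_{-i} + \dot{q}_{-i} = q_{-i} + (p_{-i} - q_{-i}) = p_{-i}, \qquad \text{so} \qquad \beta_i(q_{-i} + \dot{q}_{-i}) = \beta_i(p_{-i}),

and each player responds to the opponent's current strategy rather than to its long-run average.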

  12. Convergence with Exact DAFP in the Special Case (Derivative Gain = 1)

  13. Convergence with Noisy Exact DAFP in the Special Case (Derivative Gain = 1) Suppose the derivative of the empirical frequencies can be measured only to within some error (e1, e2). The authors prove that for any arbitrarily small ε > 0 there exists a δ > 0 such that, if the measurement error (e1, e2) eventually remains within a δ-neighborhood of the origin, then the empirical frequencies (q1, q2) eventually remain within an ε-neighborhood of a Nash equilibrium. This suggests that, if a sufficiently accurate approximation of the derivative of the empirical frequency can be constructed, Approximate DAFP will converge to an arbitrarily small neighborhood of the Nash equilibria.

  14. Convergence with Approximate DAFP in the Special Case (Derivative Gain = 1)

  15. Convergence with Approximate DAFP in the Special Case (Derivative Gain = 1) (CONTINUED)

  16. Convergence with Approximate DAFP in the Special Case (Derivative Gain = 1) (CONTINUED)

  17. Simulation Demonstration: Shapley Game Consider the 2-player 3×3 game devised by Lloyd Shapley to show that fictitious play need not converge in general. [Figure: standard FP in discrete time (top) and continuous time (bottom).]
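For concreteness, here is a small discrete-time FP simulation on one common statement of Shapley's game (the payoff matrices below are the textbook version and may be a relabeling of the authors'); the empirical frequencies cycle rather than converge:

    import numpy as np

    # One common form of Shapley's 3x3 example (rows: player 1's action, cols: player 2's).
    A = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]])   # player 1 payoffs
    B = np.array([[0, 0, 1], [1, 0, 0], [0, 1, 0]])   # player 2 payoffs

    q1 = np.ones(3) / 3   # empirical frequency of player 1's actions
    q2 = np.ones(3) / 3   # empirical frequency of player 2's actions

    for k in range(1, 10001):
        a1 = int(np.argmax(A @ q2))     # player 1 best-responds to his estimate of player 2
        a2 = int(np.argmax(B.T @ q1))   # player 2 best-responds to his estimate of player 1
        e1, e2 = np.zeros(3), np.zeros(3)
        e1[a1], e2[a2] = 1.0, 1.0
        q1 += (e1 - q1) / k             # running averages of the actions actually played
        q2 += (e2 - q2) / k

    print(q1, q2)   # oscillates around (1/3, 1/3, 1/3) without settling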

  18. Simulation Demonstration: Shapley Game Shapley game with Approximate DAFP in continuous time, with increasing gain: 1 (top), 10 (middle), 100 (bottom). Another interesting observation is that the players enter a correlated equilibrium whose average payoff exceeds the expected Nash payoff. For the “modified” game, where the player utility matrices are not identical, the strategies converge to theoretically unsupported values, illustrating a violation of the weak continuity requirement on the best-response maps. This steady-state error can be corrected by setting the derivative gain according to the linearization/Routh-Hurwitz procedure noted earlier.

  19. GP Review for 2-player Games Gradient Play: player i adjusts his strategy by taking his own empirical action frequency and adding the gradient of his utility, evaluated at his opponent's empirical action frequency.
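In symbols (my notation, with Π denoting projection onto the strategy simplex):

    p_i = \Pi\big[\, q_i + \nabla_{p_i} U_i(q_i, q_{-i}) \,\big] = \Pi\big[\, q_i + M_i q_{-i} \,\big],

since the gradient of the expected-payoff term p_i^\top M_i q_{-i} with respect to p_i is M_i q_{-i} (an entropy term, if included, would add τ∇H(q_i)).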

  20. GP in Discrete Time

  21. GP in Continuous Time

  22. Achieving Nash Equilibrium Gradient Play:
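The condition that followed on the slide is presumably the fixed-point characterization; in my notation it would read

    q_i = \Pi\big[\, q_i + M_i q_{-i} \,\big], \qquad i = 1, 2,

i.e., no player can improve by moving within the simplex, which for payoffs linear in one's own strategy is exactly the Nash condition.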

  23. Derivative Action Gradient Play Standard GP cannot converge asymptotically to completely mixed Nash equilibria because the linearized dynamics are unstable at mixed equilibria. Exact DAGP always enables asymptotic stability at mixed equilibria with proper selection of derivative gain. Under some conditions, Approximate DAGP also enables asymptotic stability near mixed equilibria. Approximate DAGP always ensures asymptotic stability in the vicinity of strict equilibria.
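By analogy with DAFP, the derivative action enters the gradient step. A heavily hedged sketch (the exact form and gain used in the paper may differ):

    p_i = \Pi\big[\, q_i + M_i\big( q_{-i} + \gamma \, \dot{q}_{-i} \big) \big],

with q̇_{-i} replaced by a filtered estimate in the approximate version.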

  24. DAGP Simulation: Modified Shapley Game

  25. Multiplayer Games Consider the 3-player Jordan game, sketched below. The authors demonstrate that DAGP converges to the mixed Nash equilibrium.
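For context, one common statement of the game (from memory, so worth checking against the paper): each player chooses Heads or Tails, player 1 is paid for matching player 2, player 2 for matching player 3, and player 3 for mismatching player 1; the unique Nash equilibrium has every player mixing 50/50.

    def jordan_payoffs(a1, a2, a3):
        """Payoffs in one common statement of Jordan's 3-player game (0 = Heads, 1 = Tails)."""
        u1 = 1.0 if a1 == a2 else 0.0   # player 1 wants to match player 2
        u2 = 1.0 if a2 == a3 else 0.0   # player 2 wants to match player 3
        u3 = 1.0 if a3 != a1 else 0.0   # player 3 wants to mismatch player 1
        return u1, u2, u3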

  26. Jordan Game Demonstration
