
The Fundamental Theorem of Game Theory

Presentation Transcript


  1. The Fundamental Theorem of Game Theory

  2. 2.5 The Fundamental Theorem of Game Theory • For any 2-person zero-sum game there exists a pair (x*, y*) in S × T such that min{x*V.j : j = 1,...,n} = max{min{xV.j : j = 1,...,n} : x in S} = v1, max{Vi.y* : i = 1,...,m} = min{max{Vi.y : i = 1,...,m} : y in T} = v2, and v1 = v2.

  3. Remember the following example? • What can you tell about the values of v1 and v2?

  4. Proof. • It is convenient to assume that both v1 and v2 are strictly positive, namely that v1 > 0 and v2 > 0. This is a mere technicality, because we can always add a large constant to V without changing the nature of the game. (See later.) • The plan is to show that the problems faced by the players can be expressed as LP problems, and that one is the dual of the other.

  5. By definition of v1, v1 := max{s(x) : x in S} = max{min{xV.j : j = 1,...,n} : x in S} (Theorem 1.4.1). • This is equivalent to: v1 = max u subject to xV.j ≥ u, j = 1,...,n, x in S. • Observe that this is an LP problem, namely

  6. v1 = max u   (maximizing over u and x)
      s.t.  xV.1 − u ≥ 0
            xV.2 − u ≥ 0
            .................
            xV.n − u ≥ 0
            x1 + ... + xm = 1
            x1, ..., xm ≥ 0
      Observe that u is a decision variable!

  7. We can complete the proof using this form of the LP problem, but ...... it will not be “elegant”. Let us then beautify the formulation. • It is clear that if V is strictly positive, so is the optimal value of u. Thus, with no loss of generality, we can restrict the analysis to positive values of u, and divide the constraints by u. • This yields:
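(The formulation that “this yields” appears only as an image in the original; a plausible reconstruction, obtained by dividing each constraint of the LP above by u > 0, in LaTeX notation:)

\[
\begin{aligned}
v_1 = \max_{u,\,x}\;\; & u \\
\text{s.t.}\;\; & \tfrac{x}{u}\,V_{\cdot j} \ \ge\ 1, \qquad j = 1,\dots,n \\
& \tfrac{x_1 + \dots + x_m}{u} \ =\ \tfrac{1}{u} \\
& \tfrac{x_1}{u},\dots,\tfrac{x_m}{u} \ \ge\ 0
\end{aligned}
\]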

  8. Observe that maximizing u is equivalent to minimizing 1/u, thus the problem under consideration is equivalent to:
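(Again the slide's formulation is an image; a plausible reconstruction of the equivalent minimization problem:)

\[
\begin{aligned}
\min_{u,\,x}\;\; & \tfrac{1}{u} \\
\text{s.t.}\;\; & \tfrac{x}{u}\,V_{\cdot j} \ \ge\ 1, \qquad j = 1,\dots,n \\
& \tfrac{x_1 + \dots + x_m}{u} \ =\ \tfrac{1}{u} \\
& \tfrac{x_1}{u},\dots,\tfrac{x_m}{u} \ \ge\ 0
\end{aligned}
\]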

  9. If we then set x’:=x/u, we obtain
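(Reconstruction of the missing image, with x' := x/u:)

\[
\begin{aligned}
\min_{u,\,x'}\;\; & \tfrac{1}{u} \\
\text{s.t.}\;\; & x'\,V_{\cdot j} \ \ge\ 1, \qquad j = 1,\dots,n \\
& x'_1 + \dots + x'_m \ =\ \tfrac{1}{u} \\
& x'_1,\dots,x'_m \ \ge\ 0
\end{aligned}
\]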

  10. Substituting the equality constraint for 1/u in the objective function, we obtain the following equivalent problem:
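(Reconstruction of the missing image; since 1/u = x'_1 + ... + x'_m, the variable u disappears:)

\[
\begin{aligned}
\min_{x'}\;\; & x'_1 + \dots + x'_m \\
\text{s.t.}\;\; & x'\,V_{\cdot j} \ \ge\ 1, \qquad j = 1,\dots,n \\
& x'_1,\dots,x'_m \ \ge\ 0
\end{aligned}
\]

The optimal value of this problem is 1/v1.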

  11. If we thus let b = (1, 1, ... , 1) and c = (1, 1, ... , 1), we can rewrite the problem as follows:
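(Reconstruction of the missing image, reading b as the n-vector of ones in the constraints and c as the m-vector of ones in the objective:)

\[
\begin{aligned}
\min_{x'}\;\; & x'c \\
\text{s.t.}\;\; & x'V \ \ge\ b \\
& x' \ \ge\ 0
\end{aligned}
\]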

  12. If we repeat the process for Player II, we discover that her problem is equivalent to
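(Reconstruction of the missing image; in the same notation, Player II's problem is exactly the LP dual of Player I's problem:)

\[
\begin{aligned}
\max_{y'}\;\; & b\,y' \\
\text{s.t.}\;\; & V y' \ \le\ c \\
& y' \ \ge\ 0
\end{aligned}
\]

with y' playing the role that x' plays for Player I, so that (as on slide 17) v2 = 1/(optimal objective value) and y* = v2 y'*.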

  13. By “inspection” we conclude that both problems are feasible and have optimal solutions. Thus duality theory tells us that their optimal objective values are equal, and hence v1 = v2. To obtain the optimal strategies from the solutions to these LP problems we have to multiply them by the value of the game, v1 (equivalently v2), which is the reciprocal of the optimal value of the LP objective.

  14. Recipe: Player I / Player II [the two LP formulations from the previous slides, shown side by side]
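A minimal computational sketch of this recipe (ours, not from the slides), using scipy.optimize.linprog; the function name solve_matrix_game and the commented sample matrix are illustrative, and V is assumed to be strictly positive:

import numpy as np
from scipy.optimize import linprog

def solve_matrix_game(V):
    V = np.asarray(V, dtype=float)
    m, n = V.shape

    # Player I:  min x'1 + ... + x'm   s.t.  x'V >= (1,...,1),  x' >= 0.
    # linprog expects "A_ub @ z <= b_ub", so write x'V >= 1 as -V^T x' <= -1.
    res1 = linprog(c=np.ones(m), A_ub=-V.T, b_ub=-np.ones(n), bounds=(0, None))
    v1 = 1.0 / res1.x.sum()        # value of the game = 1 / (optimal LP objective)
    x_star = v1 * res1.x           # optimal mixed strategy for Player I

    # Player II:  max y'1 + ... + y'n   s.t.  V y' <= (1,...,1),  y' >= 0
    # (negate the objective, because linprog minimizes).
    res2 = linprog(c=-np.ones(n), A_ub=V, b_ub=np.ones(m), bounds=(0, None))
    v2 = 1.0 / res2.x.sum()
    y_star = v2 * res2.x           # optimal mixed strategy for Player II

    return x_star, y_star, v1      # v1 == v2 by the Fundamental Theorem

# Hypothetical usage on a small strictly positive payoff matrix:
# x_star, y_star, v = solve_matrix_game([[3, 1], [2, 4]])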

  15. 1.5.1 Example • Check: no saddle • Check whether v > 0 - IMPORTANT! • The two linear programming problems in this case are as follows:

  16. Player I Player II • We prefer Player II’s formulation. • Why?
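One likely answer (ours, not stated on the slide): when V > 0, Player II's constraints V y' ≤ (1,...,1) become equalities by adding slack variables, and those slacks provide an immediate feasible starting basis for the simplex method; Player I's constraints x'V ≥ (1,...,1) would first require surplus and artificial variables (two-phase or Big-M).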

  17. Final tableau: • y’* = (1/7, 1/7), v’2 = 2/7; v2 = 1/v’2 = 7/2 • y* = v2 y’* = (1/2, 1/2) • x’* = (5/28, 9/84) (why?) • x* = v2 x’* = (5/8, 3/8) • Check the results for consistency.
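A quick consistency check (our arithmetic): x'* = (5/28, 9/84) sums to 2/7 = v'2, so v'1 = v'2 as duality requires; x* = (7/2)(5/28, 9/84) = (5/8, 3/8) and y* = (7/2)(1/7, 1/7) = (1/2, 1/2) each sum to 1; and the value of the game is v1 = v2 = 7/2.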

  18. Mixed Strategies [diagram; labels: A1, A2, a1, a2, a3]

  19. We now know how to satisfy Principle I. • How about equilibrium? • Do we need to test equilibrium every time?

  20. Are the strategies yielding the optimal security levels in equilibrium? • YES!! • YIPEEE!!! • THANKS to COROLLARY 1.5.1 !!!! • 1.5.1 Corollary. Let (x*, y*) be any element of S × T such that v1 = s(x*) = σ(y*) = v2, where σ(y) := max{xVy : x in S} denotes Player II’s security level. Then this pair is in equilibrium.

  21. Proof: • Need to show that x*Vy ≥ x*Vy* ≥ xVy* for all x in S and y in T. • For (x*, y*) we have s(x*) = v1 and σ(y*) = v2. • Since v1 = v2, it follows that v1 = s(x*) = min{x*Vy : y in T} ≤ x*Vy* ≤ max{xVy* : x in S} = σ(y*) = v2 = v1. • So the ≤ must be =, and min{x*Vy : y in T} = x*Vy* = max{xVy* : x in S}. • Hence x*Vy ≥ min{x*Vy : y in T} = x*Vy* = max{xVy* : x in S} ≥ xVy* for all x in S and y in T, so (x*, y*) is in equilibrium.

  22. The converse is also true. 1.5.2 Theorem. Suppose that the strategy pair (x*, y*) is in equilibrium. Then this pair is optimal. • Proof: If (x*, y*) is in equilibrium, then xVy* ≤ x*Vy* ≤ x*Vy for all (x, y) in S × T (definition). • Now, v1 := max{min{xVy : y in T} : x in S}, • so by definition of max, v1 ≥ min{x*Vy : y in T} = x*Vy* (by definition of equilibrium). • Similarly, for v2 we obtain v2 ≤ x*Vy*. • So v2 ≤ x*Vy* ≤ v1.

  23. So v2 ≤ x*Vy* ≤ v1. • But by the Fundamental Theorem v1 = v2, • thus v2 = x*Vy* = v1, i.e. (x*, y*) is an optimal pair. So for zero-sum 2-person games we have: an optimal pair is also an equilibrium pair AND an equilibrium pair is an optimal pair.

  24. Summary • For ANY 2-person zero-sum game, there exists a strategy pair (x*, y*) such that x* is optimal for Player I, y* is optimal for Player II, their respective security levels are equal and the pair is in equilibrium. • More good news: such a pair can be computed by the simplex method.

  25. Equilibrium!
