
EE562 ARTIFICIAL INTELLIGENCE FOR ENGINEERS


Presentation Transcript


  1. EE562 ARTIFICIAL INTELLIGENCE FOR ENGINEERS Lecture 9, 5/4/2005 University of Washington, Department of Electrical Engineering Spring 2005 Instructor: Professor Jeff A. Bilmes

  2. Game Playing & Adversarial Search

  3. Material • Reading: Read chapter 6 • Read chapter 7 (for Monday). • Expect new homework on Monday (this will be a long one). • Today’s lecture covers chapter 6. • Announcements: Still working on makeup class times. Send me email if you haven’t already for Friday schedules.

  4. Outline • Optimal decisions • α-β pruning • Imperfect, real-time decisions

  5. Games vs. search problems • "Unpredictable" opponent → solution is a strategy specifying a move for every possible opponent reply • Time limits → unlikely to find goal, must approximate • History: • Computer considers possible lines of play (Babbage, 1846) • Algorithm for perfect play (Zermelo, 1912; Von Neumann, 1944) • Finite horizon, approximate evaluation (Zuse, 1945; Wiener, 1948; Shannon, 1950) • First chess program (Turing, 1951) • Machine learning to improve evaluation accuracy (Samuel, 1952–57) • Pruning to allow deeper search (McCarthy, 1956)

  6. Types of Games

  7. Game tree (2-player, deterministic, turns)

  8. Minimax • Perfect play for deterministic games • Idea: choose move to position with highest minimax value = best achievable payoff against best play • E.g., 2-ply game (ply == move by one player):

  9. Minimax algorithm
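
A minimal Python sketch of the minimax recursion described above; the `game` object, with actions(state), result(state, action), terminal(state), and utility(state) methods, is an assumed interface, not code from the lecture.

```python
# Minimax sketch over an assumed `game` interface (not the slide's exact pseudocode).

def minimax_decision(state, game):
    """Choose the move leading to the position with the highest minimax value."""
    return max(game.actions(state),
               key=lambda a: min_value(game.result(state, a), game))

def max_value(state, game):
    if game.terminal(state):
        return game.utility(state)          # payoff from MAX's point of view
    return max(min_value(game.result(state, a), game)
               for a in game.actions(state))

def min_value(state, game):
    if game.terminal(state):
        return game.utility(state)
    return min(max_value(game.result(state, a), game)
               for a in game.actions(state))
```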

  10. Properties of minimax • Complete? Yes, if the tree is finite. A finite strategy can exist even if the tree is infinite. • Optimal? Yes (against an optimal opponent). • For a sub-optimal opponent, better strategies might exist. • Time complexity? O(b^m) • Space complexity? O(bm) (depth-first exploration) • For chess, b ≈ 35, m ≈ 100 for "reasonable" games → exact solution completely infeasible. • But do we need to explore every path?

  11. α-β pruning example

  12. α-β pruning example

  13. α-β pruning example

  14. α-β pruning example

  15. α-β pruning example

  16. Why is it called α-β? • α is the value of the best (i.e., highest-value) choice found so far at any choice point along the path for max • If v is worse than α, max will avoid it → prune that branch • Define β similarly for min

  17. The α-β algorithm

  18. The α-β algorithm
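
A corresponding sketch of α-β search over the same assumed `game` interface; the two pruning tests implement the α and β bounds defined on slide 16.

```python
import math

# α-β search sketch: α is the best value found so far for MAX along the path,
# β the best found so far for MIN. The `game` interface is assumed, as above.

def alphabeta_decision(state, game):
    best_action, alpha = None, -math.inf
    for a in game.actions(state):
        v = ab_min_value(game.result(state, a), game, alpha, math.inf)
        if best_action is None or v > alpha:
            alpha, best_action = v, a
    return best_action

def ab_max_value(state, game, alpha, beta):
    if game.terminal(state):
        return game.utility(state)
    v = -math.inf
    for a in game.actions(state):
        v = max(v, ab_min_value(game.result(state, a), game, alpha, beta))
        if v >= beta:                  # MIN would never allow this line -> prune
            return v
        alpha = max(alpha, v)
    return v

def ab_min_value(state, game, alpha, beta):
    if game.terminal(state):
        return game.utility(state)
    v = math.inf
    for a in game.actions(state):
        v = min(v, ab_max_value(game.result(state, a), game, alpha, beta))
        if v <= alpha:                 # MAX already has something better -> prune
            return v
        beta = min(beta, v)
    return v
```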

  19. Properties of α-β • Pruning does not affect the final result • Good move ordering improves effectiveness of pruning • With "perfect ordering," time complexity = O(b^(m/2)) → doubles the solvable depth of search • A simple example of the value of reasoning about which computations are relevant (a form of metareasoning) • For chess, 35^50 is still impossible!

  20. Resource limits Standard approach: • Cutoff test: use a cutoff test rather than a terminal test, e.g., a depth limit (perhaps add quiescence search); quiescence == stop when nothing is going to make drastic changes (e.g., taking a queen) • Evaluation function: use Eval instead of Utility; Eval estimates the desirability of a position (but is a heuristic, since the exact true evaluation requires a full search) • Suppose we have 100 secs to move and can explore 10^4 nodes/sec → 10^6 nodes per move ≈ 35^(8/2) → reaches depth 8: a relatively good chess program

  21. Evaluation functions • For chess, typically a linear weighted sum of features Eval(s) = w1 f1(s) + w2 f2(s) + … + wn fn(s) e.g., w1 = 9 with f1(s) = (number of white queens) – (number of black queens), etc.
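
A small sketch of such a linear weighted evaluation; the piece weights and the state.count(color, piece) helper are hypothetical, chosen only to mirror the w1 = 9 queen example on the slide.

```python
# Illustrative linear weighted evaluation: Eval(s) = w1*f1(s) + ... + wn*fn(s).
# Weights and the state.count() helper are assumptions, not from the lecture.

MATERIAL_WEIGHTS = {"Q": 9, "R": 5, "B": 3, "N": 3, "P": 1}

def material_feature(state, piece):
    """f_i(s) = (number of white pieces of this type) - (number of black ones)."""
    return state.count("white", piece) - state.count("black", piece)

def eval_fn(state):
    """Heuristic estimate of how desirable the position is for white (MAX)."""
    return sum(w * material_feature(state, p)
               for p, w in MATERIAL_WEIGHTS.items())
```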

  22. Exact values don’t matter • Behavior is preserved under any monotonic transformation of the Eval function • Only the order matters: payoff in deterministic games acts as an ordinal utility function (in game theory)

  23. Cutting off search • MinimaxCutoff is identical to MinimaxValue except • Terminal? is replaced by Cutoff? • Utility is replaced by Eval • Does it work in practice? b^m = 10^6, b = 35 → m = 4 • 4-ply lookahead is a hopeless chess player! • 4-ply ≈ human novice • 8-ply ≈ typical PC, human master • 12-ply ≈ Deep Blue, Kasparov
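
A sketch of that cutoff variant, assuming the same `game` interface and an eval_fn like the one above, with Cutoff? modelled as a simple depth limit.

```python
# MinimaxCutoff sketch: Terminal? is replaced by a depth cutoff and
# Utility by the heuristic eval_fn. Interfaces are assumed, as above.

def minimax_cutoff(state, game, depth, depth_limit, maximizing=True):
    if game.terminal(state):
        return game.utility(state)
    if depth >= depth_limit:           # Cutoff? instead of Terminal?
        return eval_fn(state)          # Eval instead of Utility
    child_values = (minimax_cutoff(game.result(state, a), game,
                                   depth + 1, depth_limit, not maximizing)
                    for a in game.actions(state))
    return max(child_values) if maximizing else min(child_values)
```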

  24. Deterministic games in practice • Checkers: Chinook ended the 40-year reign of human world champion Marion Tinsley in 1994. Used a precomputed endgame database defining perfect play for all positions involving 8 or fewer pieces on the board, a total of 444 billion positions. • Chess: Deep Blue defeated human world champion Garry Kasparov in a six-game match in 1997. Deep Blue searches 200 million positions per second, uses a very sophisticated evaluation function, and undisclosed methods for extending some lines of search up to 40 ply. • Othello: human champions refuse to compete against computers, which are too good. • Go: human champions refuse to compete against computers, which are too bad. In Go, b > 300, so most programs use pattern knowledge bases to suggest plausible moves.

  25. Non-deterministic games: backgammon

  26. Non-deterministic games in general • In nondeterministic games, chance is introduced by dice, card-shuffling, or coin flipping. In these cases, we have search nodes for “chance,” which can be seen as a random player in the game. In the coin case:

  27. Algorithm for non-deterministic games • Expectiminimax gives perfect play • Just like minimax, except we also handle chance nodes.
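
A sketch of expectiminimax under the same assumed interface; chance_node(state), outcomes(state) (yielding (probability, successor) pairs), and to_move(state) are hypothetical helpers, not from the lecture.

```python
# Expectiminimax sketch: MAX and MIN nodes are handled exactly as in minimax,
# while chance nodes return the probability-weighted average of their outcomes.

def expectiminimax(state, game):
    if game.terminal(state):
        return game.utility(state)
    if game.chance_node(state):
        return sum(p * expectiminimax(s, game)
                   for p, s in game.outcomes(state))
    values = [expectiminimax(game.result(state, a), game)
              for a in game.actions(state)]
    return max(values) if game.to_move(state) == "MAX" else min(values)
```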

  28. Non-deterministic games in practice • Dice rolls increase b: 21 possible rolls with 2 dice • Backgammon ≈ 20 legal moves (can be 6000 with a 1-1 roll) • depth 4 = 20 × (21 × 20)^3 ≈ 1.2 × 10^9 • As depth increases, the probability of reaching a given node shrinks → value of lookahead is diminished • α-β pruning is much less effective • TD-Gammon: uses depth-2 search + a very good Eval (learned automatically), world-champion level

  29. Exact values DO matter • Behavior is preserved only by a positive linear transformation of Eval • Hence, Eval should be proportional to the expected payoff.

  30. Summary • Games are fun to work on! • They illustrate several important points about AI • Perfection is unattainable → must approximate • It is a good idea to think about what to think about
