
CS 8520: Artificial Intelligence

CS 8520: Artificial Intelligence. Solving Problems by Searching Paula Matuszek Spring, 2010. Slides based on Hwee Tou Ng, aima.eecs.berkeley.edu/slides-ppt, which are in turn based on Russell, aima.eecs.berkeley.edu/slides-pdf. Diagrams are based on AIMA.



  1. CS 8520: Artificial Intelligence Solving Problems by Searching Paula Matuszek Spring, 2010 Slides based on Hwee Tou Ng, aima.eecs.berkeley.edu/slides-ppt, which are in turn based on Russell, aima.eecs.berkeley.edu/slides-pdf. Diagrams are based on AIMA. CSC 8520 Spring 2010. Paula Matuszek

  2. Search • The basic concept of search views the state space as a search tree • Initial state is the root node • Each possible action leads to a new node defined by the transition model • Some nodes are identified as goals • Search is the process of expanding some portion of the tree in some order until we get to a goal node • The strategy we use to choose the order to expand nodes defines the type of search

  3. Search in an Adversarial Environment • Iterative deepening and A* are useful for single-agent search problems • What if there are TWO agents? • Goals in conflict: • Adversarial Search • Especially common in AI: • Goals in direct conflict • i.e.: games.

  4. Games vs. search problems • "Unpredictable" opponent ⇒ must specify a move for every possible opponent reply • Time limits ⇒ unlikely to find the goal, must approximate • Efficiency matters a lot • HARD. • In AI, typically "zero sum": one player wins exactly as much as the other player loses.

  5. Types of games

                      Deterministic                           Stochastic
      Perfect Info    Chess, Checkers, Othello, Tic-Tac-Toe   Monopoly, Backgammon
      Imperfect Info  Battleship                              Bridge, Poker, Scrabble

  6. Tic-Tac-Toe • Tic Tac Toe is one of the classic AI examples. Let's play some. • A Tic Tac Toe game: • http://ostermiller.org/calc/tictactoe.html • Try it, at various levels of difficulty. • What kind of strategy are you using? • What kind does the computer seem to be using? • Did you win? Lose?

  7. Problem Definition • Formally define a two-person game as: • Two players, called MAX and MIN. • Alternate moves • At end of game winner is rewarded and loser penalized. • Game has • Initial State: board position and player to go first • Successor Function: returns (move, state) pairs • All legal moves from the current state • Resulting state • Terminal Test • Utility function for terminal states. • Initial state plus legal moves define game tree.
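This formal definition can be sketched concretely for Tic-Tac-Toe. The function names and the tuple-based board representation below are illustrative assumptions, not from the slides; MAX plays 'X' and MIN plays 'O':

```python
# A hypothetical sketch of the formal game definition for Tic-Tac-Toe.
# A state is (board, player-to-move); board is a 9-tuple of 'X', 'O', or None.

LINES = [(0,1,2), (3,4,5), (6,7,8),      # rows
         (0,3,6), (1,4,7), (2,5,8),      # columns
         (0,4,8), (2,4,6)]               # diagonals

def initial_state():
    """Initial State: empty board, MAX ('X') moves first."""
    return (None,) * 9, 'X'

def successors(board, player):
    """Successor Function: yields all legal (move, state) pairs."""
    nxt = 'O' if player == 'X' else 'X'
    for i, cell in enumerate(board):
        if cell is None:
            child = board[:i] + (player,) + board[i+1:]
            yield i, (child, nxt)

def winner(board):
    for a, b, c in LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def terminal(board):
    """Terminal Test: someone has won, or the board is full."""
    return winner(board) is not None or all(c is not None for c in board)

def utility(board):
    """Utility for terminal states: +1 = MAX wins, -1 = MIN wins, 0 = draw."""
    return {None: 0, 'X': 1, 'O': -1}[winner(board)]
```

The initial state plus `successors` implicitly define the full game tree of the next slide.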

  8. Tic Tac Toe Game tree

  9. Optimal Strategies • An optimal strategy is a sequence of moves leading to a desired goal state. • MAX's strategy is affected by MIN's play. • So MAX needs a strategy that yields the best possible payoff, assuming optimal play on MIN's part. • Determined by looking at the MINIMAX value for each node in the game tree.

  10. Minimax • Perfect play for deterministic games • Idea: choose move to position with highest minimax value = best achievable payoff against best play • E.g., 2-ply game:

  11. Minimax algorithm

  12. Properties of minimax • Complete? Yes (if tree is finite) • Optimal? Yes (against an optimal opponent) • Time complexity? O(b^m) • Space complexity? O(bm) (depth-first exploration) • For chess, b ≈ 35, m ≈ 100 for "reasonable" games ⇒ exact solution completely infeasible • Even tic-tac-toe is much too complex to diagram here, although it's small enough to implement.

  13. Pruning the Search • “If you have an idea that is surely bad, don't take the time to see how truly awful it is.” -- Pat Winston • Minimax is exponential in the number of moves; not feasible for real games • But we can PRUNE some branches • Alpha-Beta pruning • If it is clear that a branch can't improve on the value we already have, stop analyzing it.

  14. α-β pruning example

  19. Properties of α-β • Pruning does not affect the final result • Good move ordering improves the effectiveness of pruning • With "perfect ordering," time complexity = O(b^(m/2)) ⇒ doubles the depth of search that can be carried out for a given level of resources • A simple example of the value of reasoning about which computations are relevant (a form of metareasoning)

  20. Why is it called α-β? • α is the value of the best (i.e., highest-value) choice found so far at any choice point along the path for MAX • If v is worse than α, MAX will avoid it ⇒ prune that branch • Define β similarly for MIN

  21. The α-β algorithm

  23. "Informed" Search • Alpha-Beta is still not feasible for large game spaces. • Can we improve performance with domain knowledge? • Yes -- if we have a useful heuristic for evaluating game states. • Conceptually analogous to A* for single-agent search.

  24. Resource limits Suppose we have 100 secs and can explore 10^4 nodes/sec ⇒ 10^6 nodes per move Standard approach: • cutoff test: e.g., depth limit (perhaps add quiescence search) • evaluation function = estimated desirability of position

  25. Evaluation function • An evaluation function or static evaluator is used to evaluate the “goodness” of a game position. • Contrast with heuristic search, where the evaluation function was a non-negative estimate of the cost from the start node to a goal passing through the given node. • The zero-sum assumption allows us to use a single evaluation function to describe the goodness of a board with respect to both players. • f(n) >> 0: position n good for me and bad for you • f(n) << 0: position n bad for me and good for you • f(n) near 0: position n is a neutral position • f(n) = +infinity: win for me • f(n) = -infinity: win for you DesJardins: www.cs.umbc.edu/671/fall03/slides/c8-9_games.ppt

  26. Evaluation function examples • Example of an evaluation function for Tic-Tac-Toe: f(n) = [# of 3-lengths open for me] - [# of 3-lengths open for you] where a 3-length is a complete row, column, or diagonal • Alan Turing’s function for chess • f(n) = w(n)/b(n) where w(n) = sum of the point value of white’s pieces and b(n) = sum of black’s • Most evaluation functions are specified as a weighted sum of position features: f(n) = w1*feat1(n) + w2*feat2(n) + ... + wk*featk(n) • Example features for chess are piece count, piece placement, squares controlled, etc. • Deep Blue (which beat Garry Kasparov in 1997) had over 8000 features in its evaluation function DesJardins: www.cs.umbc.edu/671/fall03/slides/c8-9_games.ppt
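The Tic-Tac-Toe evaluation function above can be sketched directly. The board representation (a 9-tuple with 'X' as "me" and 'O' as "you") is an illustrative assumption:

```python
# A sketch of the open-3-lengths evaluation function for Tic-Tac-Toe:
# f(n) = (# of 3-lengths open for me) - (# of 3-lengths open for you).

LINES = [(0,1,2), (3,4,5), (6,7,8),      # rows
         (0,3,6), (1,4,7), (2,5,8),      # columns
         (0,4,8), (2,4,6)]               # diagonals

def open_lines(board, player):
    """A 3-length is open for `player` if the opponent occupies none of it."""
    other = 'O' if player == 'X' else 'X'
    return sum(1 for line in LINES
               if all(board[i] != other for i in line))

def evaluate(board):
    return open_lines(board, 'X') - open_lines(board, 'O')

# With X alone in the center, all 8 lines stay open for X, while O is
# left only the 4 lines that avoid the center square.
board = (None, None, None, None, 'X', None, None, None, None)
print(evaluate(board))   # prints 4 (= 8 - 4)
```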

  27. Cutting off search MinimaxCutoff is identical to MinimaxValue except • Terminal? is replaced by Cutoff? • Utility is replaced by Eval Does it work in practice? For chess: b^m ≈ 10^6, b = 35 ⇒ m ≈ 4 4-ply lookahead is a hopeless chess player! • 4-ply ≈ human novice • 8-ply ≈ typical PC, human master • 12-ply ≈ Deep Blue, Kasparov

  28. Deterministic games in practice • Checkers: Chinook ended the 40-year reign of human world champion Marion Tinsley in 1994. Used a precomputed endgame database defining perfect play for all positions involving 8 or fewer pieces on the board, a total of 444 billion positions. Now plays perfectly, using a combination of alpha-beta search and a database of 39 trillion end positions. • Chess: Deep Blue defeated human world champion Garry Kasparov in a six-game match in 1997. Deep Blue searched 200 million positions per second, used very sophisticated evaluation, and undisclosed methods for extending some lines of search up to 40 ply. Newer programs (Hydra and Rybka) may actually be better than any human player. • Othello: human champions refuse to compete against computers, which are too good. • Go: Just beginning to be good enough to play human champions. In Go, b > 300, so most programs use pattern knowledge bases to suggest plausible moves, but MoGo also uses Monte Carlo Tree Search, one of the first demonstrations of its value.

  29. Non-Deterministic Games • Alpha-Beta search assumes that each player plays perfectly and we know what the possible moves are. • Suppose there is a chance element? • In Backgammon, for instance, each player rolls dice, which determine the possible moves to be made.

  30. Dealing with Chance • Add a “chance” layer of nodes between each player’s move that captures the possible rolls and their probability • Expand the minimax value to an “expected minimax value”: each node’s value is weighted by the probability of it occurring • Therefore the “best” move is not the one whose evaluation is highest, but the one with a high evaluation which is also likely to happen.
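The expected-minimax idea can be sketched by adding chance nodes to the tree. The tagged-tuple representation below is an illustrative assumption: nodes are ('max', children), ('min', children), ('chance', [(probability, child), ...]), or a bare number for a terminal utility:

```python
# A sketch of the expected minimax ("expectiminimax") value: chance nodes
# return the probability-weighted average of their children's values.

def expectiminimax(node):
    if isinstance(node, (int, float)):     # terminal utility
        return node
    kind, children = node
    if kind == 'max':
        return max(expectiminimax(c) for c in children)
    if kind == 'min':
        return min(expectiminimax(c) for c in children)
    # chance node: expectation over the possible rolls
    return sum(p * expectiminimax(c) for p, c in children)

# MAX chooses between a certain payoff of 3 and a 50/50 gamble between
# 6 and -2 (expected value 2): the likely-but-modest option wins.
tree = ('max', [('chance', [(1.0, 3)]),
                ('chance', [(0.5, 6), (0.5, -2)])])
print(expectiminimax(tree))   # prints 3.0
```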

  31. Evaluation Functions for Chance • With deterministic games, alpha-beta minimax is concerned only with which value of our evaluation function is bigger; it is sufficient for the evaluation function to order the possibilities correctly. • With an expectiminimax function, the size of the difference also matters; ideally, values indicate not only which move is better but how much better it is. • Ideally, the evaluation function is a positive linear transformation of the probability of winning from that position.

  32. Performance in Non-Deterministic Games • We have expanded our search tree to include layers containing all possible dice rolls -- what is the performance hit? • Basically, we now have O(b^m n^m), where n is the number of distinct dice rolls. • Ouch! • Can in some cases still find a cutoff; if the probability of a node is low enough we may not need to determine its value. • Requires that we know the bounds on the evaluation function. • Monte Carlo alternative

  33. Summary • Games are fun to work on! • They illustrate several important points about AI • perfection is unattainable ⇒ must approximate • it's a good idea to think about what to think about

  34. Search Summary • For uninformed search, there are tradeoffs between time and space complexity, with iterative deepening often the best choice. • For non-adversarial informed search, A* is usually the best choice; the better the heuristic, the better the performance. • For adversarial search, minimax with alpha-beta pruning is optimal where feasible. • Adding an evaluation-function-based cutoff increases the range of feasibility. • The better we can capture domain knowledge in the heuristic and evaluation functions, the better we can do.
