
### Notes 6: Two-Player Games and Search

ICS 171 Winter 2001

Outline

- Computer programs which play 2-player games
- game-playing as search
- with the complication of an opponent

- General principles of game-playing and search
- evaluation functions, minimax principle
- alpha-beta-pruning, heuristic techniques

- Status of Game-Playing Systems
- in chess, checkers, backgammon, Othello, etc, computers routinely defeat leading world players

- Applications?
- think of “nature” as an opponent: economics, medicine, etc

- Reading: Nilsson Ch.II-12 ; R.&N. Ch.5 (5.1-5.3)

Game-Playing and AI

- Game-playing is a good problem for AI research:
- all the information is available
- i.e., human and computer have equal information

- game-playing is non-trivial
- need to display “human-like” intelligence
- some games (such as chess) are very complex
- requires decision-making within a time-limit
- more realistic than other search problems

- games are played in a controlled environment
- can do experiments, repeat games, etc: good for evaluating research systems

- can compare humans and computers directly
- can evaluate percentage of wins/losses to quantify performance


Search and Game Playing

- Consider a board game
- e.g., chess, checkers, tic-tac-toe
- configuration of the board = unique arrangement of “pieces”
- each possible configuration = state in search space

- Statement of Game as a Search Problem
- States = board configurations
- Operators = legal moves
- Initial State = current configuration
- Terminal State (Goal) = winning configuration

Game Tree Representation

[Figure: a game tree rooted at the current state S. Levels alternate between computer moves and opponent moves; a possible goal state G (a winning situation for the computer) appears lower in the tree.]

- New aspect to search problem
- there’s an opponent we cannot control
- how can we handle this?

Complexity of Game Playing

- Imagine we could predict the opponent’s moves given each computer move
- How complex would search be in this case?
- worst case, it will be O(b^d)
- Chess:
- b ~ 35 (average branching factor)
- d ~ 100 (depth of game tree for typical game)
- b^d ~ 35^100 ~ 10^154 nodes!! (“only” about 10^40 legal states)

- Tic-Tac-Toe
- ~5 legal moves, total of 9 moves
- 5^9 = 1,953,125
- 9! = 362,880 (Computer goes first)
- 8! = 40,320 (Computer goes second)

- well-known games can produce enormous search trees
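The tic-tac-toe figures quoted above are easy to check directly; a few lines of Python (an illustration, not part of the original slides) reproduce them:

```python
from math import factorial

# Upper bounds on the tic-tac-toe search tree, as quoted above.
print(5 ** 9)          # assume ~5 legal moves per ply over 9 plies -> 1953125
print(factorial(9))    # move sequences when the computer goes first -> 362880
print(factorial(8))    # move sequences when the computer goes second -> 40320
```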

Utility Functions

- Utility Function:
- defined for each terminal state in a game
- assigns a numeric value for each terminal state
- these numbers represent how “valuable” the state is for the computer
- positive for winning
- negative for losing
- zero for a draw

- Typical values from -infinity (lost) to +infinity (won) or [-1, +1].
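As a concrete illustration, a terminal-state utility for tic-tac-toe fits in a few lines. This is a sketch; encoding the board as a 9-character string is our own assumption, not something from the slides:

```python
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def utility(board, player='X'):
    """Utility of a *terminal* tic-tac-toe position from `player`'s viewpoint:
    +1 for a win, -1 for a loss, 0 for a draw.
    `board` is a 9-character string, row by row, e.g. 'XOXXOOOXX'."""
    for a, b, c in WIN_LINES:
        if board[a] == board[b] == board[c] != ' ':
            return 1 if board[a] == player else -1
    return 0  # no completed line in a terminal state means a draw
```

For example, `utility('XXXOO OO ')` returns +1 (a top row of X’s), while a drawn full board returns 0.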

Greedy Search with Utilities

- A greedy search strategy using utility functions
- Expand the search tree as far as the terminal states on each branch
- Compute the utility of each resulting terminal board configuration
- Make the initial move that results in the board configuration with the maximum value
- but this ignores what the opponent might do!
- i.e. opponent’s moves are interleaved with the computer’s

Minimax Principle

- “Assume the worst”
- say we are two moves (two levels in the tree) away from the terminal states
- high numbers favor the computer
- so we want to choose moves which maximize utility

- low numbers favor the opponent
- so they will choose moves which minimize utility

- Minimax Principle
- you (the computer) assume that the opponent will choose the minimizing move next (after your move)
- so you now choose the best move under this assumption
- i.e., the maximum (highest-value) option considering both your move and the opponent’s optimal move.

- we can generalize this argument more than 2 moves ahead: we can search ahead as far as we can afford.

Minimax Search Example

- Explore the tree as far as the terminal states
- Calculate utilities for the resulting board configurations
- The computer will make the move such that when the opponent makes his best move, the board configuration will be in the best position for the computer

Propagating Minimax Values up the Game Tree

- Starting from the leaves
- Assign a value to the parent node as follows
- Children are Opponent’s moves: Minimum of all immediate children
- Children are Computer’s moves: Maximum of all immediate children


Deeper Game Trees

Minimax can be generalized to deeper game trees (than just 2 levels):

“propagate” utility values upwards in the tree

General Minimax Algorithm on a Game Tree

For each move by the computer

1. perform depth-first search as far as the terminal state

2. assign utilities at each terminal state

3. propagate upwards the minimax choices

if the parent is a minimizer (opponent)

propagate up the minimum value of the children

if the parent is a maximizer (computer)

propagate up the maximum value of the children

4. choose the move (the child of the current node) corresponding to the maximum of the minimax values of the children

Note:

- minimax values are gradually propagated upwards as depth-first search proceeds, i.e., minimax values propagate up the tree in a “left-to-right” fashion

- minimax values for sub-trees are propagated upwards “as we go”, so only O(bd) nodes need to be kept in memory at any time
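The four steps above translate almost directly into code. The sketch below is our own illustration; the game-specific callbacks `successors`, `is_terminal`, and `utility` are assumed to be supplied by the caller:

```python
def minimax(state, maximizing, successors, is_terminal, utility):
    """Steps 1-3: depth-first search to the terminal states, propagating
    up the max (computer to move) or min (opponent to move) of the children."""
    if is_terminal(state):
        return utility(state)
    values = [minimax(child, not maximizing, successors, is_terminal, utility)
              for child in successors(state)]
    return max(values) if maximizing else min(values)

def best_move(state, successors, is_terminal, utility):
    """Step 4: the computer picks the child with the maximum minimax value
    (the opponent, a minimizer, moves next in each child)."""
    return max(successors(state),
               key=lambda child: minimax(child, False, successors,
                                         is_terminal, utility))
```

On a small hand-built tree, e.g. `tree = ((3, 12, 8), (2, 4, 6), (14, 5, 2))` with tuples as internal nodes and integers as leaf utilities, `best_move` selects the first subtree: the root’s minimax value is max(min(3,12,8), min(2,4,6), min(14,5,2)) = 3.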

Complexity of Minimax Algorithm

- Assume that all terminal states are at depth d
- Space complexity
- depth-first search, so O(bd)

- Time complexity
- branching factor b, thus, O(b^d)

- The O(b^d) time complexity is a major problem since the computer typically has only a finite amount of time to make a move,
- e.g., in chess, 2 minute time-limit, but b=35, d=50 !

- So the direct minimax algorithm is impractical
- instead what about depth-limited search to depth m?
- but utilities are defined only at terminal states
- => need to know the “utility” of non-terminal states
- these are called heuristic/static evaluation functions

Idea of Static Evaluation Functions

- An Evaluation Function:
- provide an estimate of how good the current board configuration is for the computer.
- think of it as the expected utility averaged over all possible game outcomes from that position
- it reflects the computer’s chances of winning from that node
- but it must be easy to calculate, i.e., cannot involve searching the tree below the depth-limit node
- thus, we use various easily calculated heuristics, e.g.,
- Chess: Value of all white pieces - Value of all black pieces

- Typically, one figures how good it is for the computer, and how good it is for the opponent, and subtracts the opponent’s score from the computer’s

- Must agree with the utility function when calculated at terminal nodes
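The chess heuristic mentioned above (value of white pieces minus value of black pieces) might be sketched as follows. The piece-list encoding and the conventional material weights are our own assumptions for illustration:

```python
# Conventional material weights (the king is not scored).
PIECE_VALUES = {'P': 1, 'N': 3, 'B': 3, 'R': 5, 'Q': 9}

def material_eval(pieces):
    """`pieces` lists the pieces on the board: uppercase for the computer
    (white), lowercase for the opponent (black), e.g. ['P', 'N', 'p', 'q'].
    Returns the computer's material total minus the opponent's."""
    score = 0
    for p in pieces:
        value = PIECE_VALUES.get(p.upper(), 0)
        score += value if p.isupper() else -value
    return score
```

For example, `material_eval(['P', 'P', 'N', 'p', 'q'])` gives 1 + 1 + 3 - 1 - 9 = -5: the computer is behind on material.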

Minimax with Evaluation Functions

For each move by the computer

1. perform depth-first search with depth-limit m

2. assign evaluation functions at each state at depth m

3. propagate upwards the minimax choices

if the parent is a minimizer (opponent)

propagate up the minimum value of the children

if the parent is a maximizer (computer)

propagate up the maximum value of the children

4. choose the move (the child of the current node) corresponding to the maximum of the minimax values of the children

- The same as general minimax, except
- only goes to depth m
- uses evaluation functions (estimates of utility)

- How would this algorithm perform at chess?
- could look ahead about 4 “ply” (a ply = a single move by one player)
- would be consistently beaten even by average chess players!

The Alpha Beta Principle: Marry Search Tree Generation with Position Evaluations

[Figure: a partially evaluated game tree with root value 1. The computer’s moves lead to opponent levels whose backed-up values (-8, 0, 1) are the minimum of their children’s evaluations (leaves 3, 4, 1, 2, 0, -8); the root takes the maximum. Leaves marked X are ‘not evaluated’: alpha-beta skips them because they cannot change the move choice.]

Alpha Beta Procedure

- Depth first search of game tree, keeping track of:
- Alpha: Highest value seen so far on maximizing level
- Beta: Lowest value seen so far on minimizing level

- Pruning
- At a minimizing node,
- stop expanding the remaining children once a child has been seen whose value is less than or equal to Alpha (the maximizer already has at least Alpha available elsewhere)

- At a maximizing node,
- stop expanding the remaining children once a child has been seen whose value is greater than or equal to Beta (the minimizer already has at most Beta available elsewhere)
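Putting this bookkeeping together, a depth-limited alpha-beta search might look as follows. This is our own sketch, not the course’s code: `successors` and `evaluate` are assumed game-supplied callbacks, and the evaluation function is applied both at the depth limit and at terminal positions:

```python
def alphabeta(state, depth, alpha, beta, maximizing, successors, evaluate):
    """Depth-first minimax with alpha-beta pruning, to a depth limit."""
    children = successors(state)
    if depth == 0 or not children:          # depth limit or terminal state
        return evaluate(state)
    if maximizing:
        value = float('-inf')
        for child in children:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, successors, evaluate))
            alpha = max(alpha, value)
            if value >= beta:               # minimizer above won't allow this
                break                       # prune remaining siblings
        return value
    else:
        value = float('inf')
        for child in children:
            value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                         True, successors, evaluate))
            beta = min(beta, value)
            if value <= alpha:              # maximizer above has better options
                break                       # prune remaining siblings
        return value
```

Called at the root with alpha = -infinity and beta = +infinity, this returns the same value as plain minimax while skipping provably irrelevant subtrees.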

Effectiveness of Alpha-Beta Search

- Worst-Case
- branches are ordered so that no pruning takes place
- alpha-beta gives no improvement over exhaustive search

- Best-Case
- each player’s best move is the left-most alternative (i.e., evaluated first)
- in practice, performance is closer to best rather than worst-case

- In practice we often get O(b^(d/2)) rather than O(b^d)
- this is the same as having a branching factor of sqrt(b),
- since (sqrt(b))^d = b^(d/2)
- i.e., we have effectively gone from b to the square root of b

- e.g., in chess go from b ~ 35 to b ~ 6
- this permits much deeper search in the same amount of time
- makes computer chess competitive with humans!

Iterative (Progressive) Deepening

- In real games, there is usually a time limit T on making a move
- How do we take this into account?
- using alpha-beta we cannot use “partial” results with any confidence unless the full breadth of the tree has been searched
- So we could set a conservative depth-limit which guarantees that we will find a move in time < T
- disadvantage is that we may finish early, could do more search

- In practice, iterative deepening search (IDS) is used
- IDS runs alpha-beta search with an increasing depth-limit
- note: the “inner” loop is a full alpha-beta search with a specified depth limit m

- when the clock runs out we use the solution found at the previous depth limit

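A minimal iterative-deepening driver, under our own assumptions about the interface: `search(move, depth)` scores one candidate move with a full depth-limited (e.g. alpha-beta) search, and the clock is checked only between iterations (a real engine would also abort mid-search):

```python
import time

def ids_move(state, legal_moves, search, time_limit):
    """Rerun a depth-limited search with increasing depth limits and
    return the best move found by the last iteration that finished."""
    deadline = time.monotonic() + time_limit
    best = None
    depth = 1
    while True:
        best = max(legal_moves(state), key=lambda m: search(m, depth))
        if time.monotonic() >= deadline:
            break          # out of time: keep the last completed answer
        depth += 1
    return best
```

Because each iteration redoes the previous one’s work, the extra cost is small when the tree grows by a factor of b per level: the final iteration dominates.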

Heuristics and Game Tree Search

- The Horizon Effect
- sometimes there’s a major “effect” (such as a piece being captured) which is just “below” the depth to which the tree has been expanded
- the computer cannot see that this major event could happen
- it has a “limited horizon”
- there are heuristics to try to follow certain branches more deeply to detect such important events, i.e., “focusing attention”
- this helps to avoid catastrophic losses due to “short-sightedness”

- Heuristics for Tree Exploration
- it may be better to explore some branches more deeply in the allotted time
- various heuristics exist to identify “promising” branches

More on Evaluation Functions

- Features are numeric characteristics of a board position, e.g.,
- feature 1 = f1 = number of white pieces
- feature 2 = f2 = number of black pieces
- feature 3 = f3 = f1/f2
- feature 4 = f4 = number of white bishops
- feature 5 = f5 = estimate of “threat” to white king

- An evaluation function is an estimate of how useful a board position is
- it is a function of the pieces on the board
- it is like the heuristic function in general search
- generally
- evaluation function = function(f1, f2, f3, ......) i.e., it is a function of individual features

Linear Evaluation Functions

- A linear function of f1, f2, f3 is a weighted sum of f1, f2, f3....
- A linear evaluation function = w1*f1 + w2*f2 + w3*f3 + ... + wn*fn
- where f1, f2, f3, are the features
- and w1, w2, w3, are the weights
- Idea:
- more important features get more weight

- The quality of play will depend directly on the quality of the evaluation function
- to build an evaluation function we have to
- 1. construct good features (using expert knowledge, heuristics)
- 2. pick good weights (can be learned)

- to build an evaluation function we have to
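The weighted sum itself is one line of code; the example features and weights below are hypothetical, chosen only to illustrate the idea:

```python
def linear_eval(features, weights):
    """Linear evaluation function: w1*f1 + w2*f2 + ... + wn*fn."""
    return sum(w * f for w, f in zip(weights, features))

# Hypothetical example: f1 = number of white pieces, f2 = number of
# black pieces, f3 = number of white bishops, with raw piece count
# weighted most heavily:
# linear_eval([8, 6, 2], [1.0, -1.0, 0.5]) evaluates to 8 - 6 + 1 = 3.0
```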

Learning the Weights in a Linear Evaluation Function

- A linear evaluation function = w1*f1 + w2*f2 + w3*f3 + ... + wn*fn
- How could we “learn” these weights?
- basic idea:
- play lots of games against an opponent
- for every move (or game) look at the error = true outcome - evaluation function
- if error is positive (underestimating)
- adjust weights to increase the evaluation function

- if error is zero do nothing
- if error is negative (overestimating)
- adjust weights to decrease the evaluation function

- details on learning the weights will be taught later in the quarter when we discuss learning algorithms
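The adjustment rule described above amounts to a simple delta-rule update. The sketch below is one concrete instance of that idea, not necessarily the method covered in the course’s later lectures:

```python
def update_weights(weights, features, outcome, rate=0.01):
    """One learning step: error = true outcome - current evaluation.
    A positive error (underestimate) increases the evaluation, a
    negative error (overestimate) decreases it, and zero error leaves
    the weights unchanged."""
    prediction = sum(w * f for w, f in zip(weights, features))
    error = outcome - prediction
    return [w + rate * error * f for w, f in zip(weights, features)]
```

Repeated over many games, these small corrections pull the linear evaluation toward the observed outcomes.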

Examples of Algorithms which Learn to Play Well

- Checkers
- A. L. Samuel, “Some studies in machine learning using the game of checkers,” IBM Journal of Research and Development, 11(6):601-617, 1959.
- Learned by playing a copy of itself thousands of times
- using only an IBM 704 with 10,000 words of RAM, magnetic tape, and a clock speed of 1 kHz
- successful enough to compete well at human tournaments
- a computer capable of full, flawless search of checkers was expected soon

- Backgammon
- G. Tesauro and T. J. Sejnowski, “A parallel network that learns to play backgammon,” Artificial Intelligence, 39(3), 357-390, 1989.
- Also learns by playing copies of itself
- uses a non-linear evaluation function (a neural network)
- rates in the top 3 players in the world.

Computers can play GrandMaster Chess

- “Deep Blue” (IBM)
- parallel processor, 32 nodes
- each node has 8 dedicated VLSI “chess chips”
- each chip can search 200 million configurations/second
- uses minimax, alpha-beta, heuristics: can search to depth 14
- memorizes openings and end-games
- power based on speed and memory: no common sense

- Kasparov v. Deep Blue, May 1997
- 6 game full-regulation chess match (sponsored by ACM)
- Kasparov lost the match (2.5 to 3.5)
- a historic achievement for computer chess: the first time a computer is the best chess-player on the planet

- Note that Deep Blue plays by “brute-force”: there is relatively little which is similar to human intuition and cleverness

Status of Computers in Other Games

- Checkers/Draughts
- current world champion is Chinook, can beat any human
- uses alpha-beta search

- Othello
- computers can easily beat the world experts

- Backgammon
- system which learns is ranked in the top 3 in the world
- uses neural networks to learn from playing many many games against itself

- Go
- branching factor b ~ 360: very large!
- $2 million prize for any system which can beat a world expert

Web site of the Day

- http://www.chess.ibm.com
- Describes Kasparov v. Deep Blue match in May 1997
- descriptions of the games
- quotes, background info on people involved
- expert commentary and analysis
- detailed information about Deep Blue
- essays about computer chess in general

Summary

- Game playing is best modeled as a search problem
- Game trees represent alternate computer/opponent moves
- Evaluation functions estimate the quality of a given board configuration for each player
- Minimax is a procedure which chooses moves by assuming that the opponent will always choose the move which is best for them
- Alpha-Beta is a procedure which can prune large parts of the search tree and allow search to go deeper
- For many well-known games, computer algorithms based on heuristic search match or out-perform human world experts.
- Reading: Nilsson Ch.II-12 ; R.&N. Ch.5 (5.1-5.3)
- Do alpha-beta pruning exercise on page 206 Nilsson
