
Complete Search


Presentation Transcript


  1. General Game Playing Lecture 3 Complete Search Michael Genesereth Spring 2010

  2. Virtual Game Graph Our code receives a GDL description from the Gameserver. It provides you with (1) a subroutine for computing the initial state of the game, (2) subroutines for computing legal actions and the terminal and goal properties of given states, and (3) a subroutine for computing the state resulting from executing a given move in a given state. Your mission is to write a program that plays an arbitrary game, using these subroutines to access the game graph. A sketch of such an interface follows.
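As a concrete sketch, the interface might look like the JavaScript stubs below. The subroutine names (findactions, terminalp, payoff, next) match the pseudocode used later in these slides, but the exact signatures and the trivial one-move game are assumptions for illustration only.

  // Hypothetical game-access interface compiled from the GDL description.
  function findinit() {return "start"}                       // (1) initial state
  function findactions(role, state)                          // (2) legal actions
   {return (state === "start") ? ["a", "b"] : []}
  function terminalp(state) {return state !== "start"}       // (2) terminal test
  function payoff(state) {return (state === "a") ? 100 : 0}  // (2) goal value
  function next(move, state) {return move}                   // (3) successor state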

  3. Single Player Games

  4. Game Graph [diagram: single-player game graph with states s1–s11 and actions a–d]

  5. Depth First Search [diagram: tree with nodes a–j, visited in order a, b, e, f, c, g, h, d, i, j] Advantage: Small intermediate storage. Disadvantage: Susceptible to garden paths. Disadvantage: Susceptible to infinite loops.
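A minimal depth-first sketch over the hypothetical interface above: recurse on the first successor before trying siblings. The flaws named on this slide are visible here: there is no duplicate detection, so a cyclic game graph sends it into an infinite loop.

  // Depth-first search: returns the first terminal state reached, or null.
  function dfs(role, state)
   {if (terminalp(state)) {return state};
    var actions = findactions(role, state);
    for (var i = 0; i < actions.length; i++)
        {var result = dfs(role, next(actions[i], state));
         if (result !== null) {return result}};
    return null}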

  6. Breadth First Search [diagram: tree with nodes a–j, visited in level order a, b, c, d, e, f, g, h, i, j] Advantage: Finds shortest path. Disadvantage: Consumes a large amount of space.
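A matching breadth-first sketch under the same assumptions: a FIFO queue expands states level by level, so the first terminal state found is at minimal depth, at the cost of holding the whole frontier in memory.

  // Breadth-first search: returns a terminal state at minimal depth, or null.
  function bfs(role, state)
   {var queue = [state];
    while (queue.length > 0)
        {var current = queue.shift();              // FIFO: oldest state first
         if (terminalp(current)) {return current};
         var actions = findactions(role, current);
         for (var i = 0; i < actions.length; i++)
             {queue.push(next(actions[i], current))}};
    return null}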

  7. Time Comparison [table not recovered: node counts for branching factor 2, total depth d, and solution at depth k]

  8. Time Comparison [table not recovered] Analysis for branching factor b, total depth d, and solution at depth k.

  9. Space Comparison [table not recovered] Total depth d and solution depth k.
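The comparison tables on slides 7–9 were images and are not recoverable here. For reference, the standard worst-case bounds they illustrate (a reconstruction, not taken from the slides), for branching factor b, total depth d, and solution at depth k:

  Time:   depth-first O(b^d)    breadth-first O(b^k)
  Space:  depth-first O(b·d)    breadth-first O(b^k)

Depth-first search may explore the whole tree to depth d before reaching a solution at depth k, but stores only the current path and its siblings; breadth-first search stops at level k but must hold an entire level of the tree in memory.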

  10. Iterative Deepening Run depth-limited search repeatedly, starting with a small initial depth and incrementing the depth on each iteration, until the search succeeds or runs out of alternatives.
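A sketch under the same assumptions as the earlier ones: depth-limited depth-first search wrapped in a loop over increasing bounds. The maxdepth parameter is a hypothetical stand-in for "runs out of alternatives".

  // Depth-limited DFS: fails (returns null) when the bound is exhausted.
  function dls(role, state, limit)
   {if (terminalp(state)) {return state};
    if (limit === 0) {return null};
    var actions = findactions(role, state);
    for (var i = 0; i < actions.length; i++)
        {var result = dls(role, next(actions[i], state), limit - 1);
         if (result !== null) {return result}};
    return null}

  // Iterative deepening: repeat with bounds 0, 1, 2, ... until success.
  function iterativedeepening(role, state, maxdepth)
   {for (var depth = 0; depth <= maxdepth; depth++)
        {var result = dls(role, state, depth);
         if (result !== null) {return result}};
    return null}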

  11. Example [diagram: the tree with nodes a–j searched in successive depth-limited passes: a; then a, b, c, d; then a, b, e, f, c, g, h, d, i, j] Advantage: Small intermediate storage. Advantage: Finds shortest path. Advantage: Not susceptible to garden paths. Advantage: Not susceptible to infinite loops.

  12. Time Comparison [table not recovered]

  13. General Results Theorem [Korf]: The cost of iterative deepening search is b/(b-1) times the cost of depth-first search (where b is the branching factor). Theorem: The space cost of iterative deepening is no greater than the space cost for depth-first search.
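A sketch of the counting argument behind Korf's theorem (a reconstruction; the slide gives only the statement). Depth-first search to depth d expands

  N_dfs(d) = 1 + b + b^2 + … + b^d = (b^(d+1) - 1) / (b - 1)

nodes, and iterative deepening repeats a depth-limited search at every bound up to d, so

  N_id(d) = N_dfs(0) + N_dfs(1) + … + N_dfs(d) ≈ (b / (b - 1)) · N_dfs(d),

since the sum of the N_dfs(i) is itself dominated by a geometric series with ratio b. The space claim follows because each pass of iterative deepening is itself a depth-first search to a depth no greater than d.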

  14. Game Graph I [diagram: game graph with states a–n]

  15. Game Graph II [diagram: the same states unfolded as a tree; b and c appear twice]

  16. Bidirectional Search [diagram: searching forward from a and backward from m]

  17. Multiple Player Games

  18. Game Graph [diagram: two-player game graph with states s1–s11 and joint actions aa, ab, ba, bb]

  19. Minimax Intuition - Select a move that is guaranteed to produce the highest possible return no matter what the opponents do. In the case of a one-move game, a player should choose an action such that the value of the resulting state for any opponent action is greater than or equal to the value of the resulting state for any other move and opponent action. In the case of a multi-move game, minimax goes to the end of the game and “backs up” values.

  20. Bipartite Game Graph/Tree [diagram legend: max nodes and min nodes alternate]

  21. State Value The value of a max node for player p is the utility of that state if it is terminal, or else the maximum of the values of the min nodes that result from its legal actions.

  value_p(x) = utility_p(x)                                     if x is a terminal state
  value_p(x) = max({value_p(minnode_p(a,x)) | legal_p(a,x)})    otherwise

  The value of a min node is the minimum value that results from any legal opponent action.

  value_p(n) = min({value_p(maxnode_{P-p}(b,n)) | legal_{P-p}(b,n)})

  22. Bipartite Game Tree [diagram: min/max tree over leaf values 1–8; the value backed up to the root is 3]

  23. Best Action A player should choose an action a that leads to a min node with maximal value, i.e. the player should prefer action a to action a’ if and only if the following holds.

  value_p(minnode_p(a,x)) ≥ value_p(minnode_p(a’,x))

  24. Best Action

  function bestaction(role, state)
   {var actions = findactions(role, state);
    var best = actions[0];
    var score = minscore(role, best, state);
    for (var i = 1; i < actions.length; i++)
        {var result = minscore(role, actions[i], state);
         if (result > score) {score = result; best = actions[i]}};
    return best}

  25. Scores

  function maxscore(role, state)
   {if (terminalp(state)) {return payoff(state)};
    var actions = findactions(role, state);
    var score = 0;
    for (var i = 0; i < actions.length; i++)
        {var result = minscore(role, actions[i], state);
         if (result > score) {score = result}};
    return score}

  function minscore(role, action, state)
   {var moves = findmoves(role, action, state);
    var score = 100;
    for (var i = 0; i < moves.length; i++)
        {var result = maxscore(role, next(moves[i], state));
         if (result < score) {score = result}};
    return score}

  26. Improvements If the minvalue for an action is determined to be 100, there is no need to consider other actions. In computing the minvalue for a state, if the value to the first player of an opponent’s action is 0, there is no need to consider other possibilities. Alpha-Beta Search - same as Minimax except with pruning at both min and max nodes, as sketched below. See Russell and Norvig for details.
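A sketch of how the pruning generalizes, applied to the minscore/maxscore pair above. This is a reconstruction, not code from the lecture; it assumes GDL payoffs range over 0–100, so those are the initial bounds, and the 100/0 shortcuts described on this slide are the special cases where the window closes immediately.

  function maxscoreab(role, state, alpha, beta)
   {if (terminalp(state)) {return payoff(state)};
    var actions = findactions(role, state);
    for (var i = 0; i < actions.length; i++)
        {var result = minscoreab(role, actions[i], state, alpha, beta);
         // the minimizer already has a better option elsewhere: prune
         if (result >= beta) {return beta};
         if (result > alpha) {alpha = result}};
    return alpha}

  function minscoreab(role, action, state, alpha, beta)
   {var moves = findmoves(role, action, state);
    for (var i = 0; i < moves.length; i++)
        {var result = maxscoreab(role, next(moves[i], state), alpha, beta);
         // the maximizer already has a better option elsewhere: prune
         if (result <= alpha) {return alpha};
         if (result < beta) {beta = result}};
    return beta}

  Initial call: maxscoreab(role, state, 0, 100).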

  27. Implementational Details

  28. Matches Our code provides methods for computing a hash code and an equality test for states based on their data. You might consider maintaining a list of states or, better, building a hashtable to find duplicate states efficiently.

  class match ()
   {var name;
    var role;
    var roles;
    var state;
    var states;
    var startclock;
    var playclock}

  29. States Our code provides you with subroutines for computing status, rewards, and next states. You might consider storing these values in your state data structures so that you can retrieve them without recomputing, as sketched after the example.

  Definition:
  class state ()
   {var data;
    var theory;
    var status;
    var rewards;
    var nexts;
    var score}

  Example (state13):
   data: [cell(1,1,b), …]
   theory: […]
   status: terminal
   rewards: [25, 75]
   nexts: [[a, <state14>], [b, <state15>], …]
   score: whatever
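A sketch of the caching idea from this slide combined with the hashtable idea from the previous one, in the same JavaScript style. The cache object and the hashstate helper are hypothetical, standing in for the hash-code support slide 28 says our code provides.

  // Hypothetical transposition table: look duplicate states up instead of
  // re-searching them, and keep computed scores for reuse.
  var cache = {};                                 // keyed by state hash code

  function cachedmaxscore(role, state)
   {var key = hashstate(state);                   // hypothetical hash helper
    if (cache[key] !== undefined) {return cache[key]};
    var score = maxscore(role, state);
    cache[key] = score;                           // never recompute this state
    return score}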

  30. Managing Search: graph representation, search direction, search strategy.

  31. Game Description Language

  init(cell(1,1,b)) init(cell(1,2,b)) init(cell(1,3,b))
  init(cell(2,1,b)) init(cell(2,2,b)) init(cell(2,3,b))
  init(cell(3,1,b)) init(cell(3,2,b)) init(cell(3,3,b))
  init(control(x))

  legal(P,mark(X,Y)) :- true(cell(X,Y,b)) & true(control(P))
  legal(x,noop) :- true(control(o))
  legal(o,noop) :- true(control(x))

  next(cell(M,N,P)) :- does(P,mark(M,N))
  next(cell(M,N,Z)) :- does(P,mark(M,N)) & true(cell(M,N,Z)) & Z#b
  next(cell(M,N,b)) :- does(P,mark(J,K)) & true(cell(M,N,b)) & (M#J | N#K)
  next(control(x)) :- true(control(o))
  next(control(o)) :- true(control(x))

  terminal :- line(P)
  terminal :- ~open

  goal(x,100) :- line(x)
  goal(x,50) :- draw
  goal(x,0) :- line(o)
  goal(o,100) :- line(o)
  goal(o,50) :- draw
  goal(o,0) :- line(x)

  row(M,P) :- true(cell(M,1,P)) & true(cell(M,2,P)) & true(cell(M,3,P))
  column(N,P) :- true(cell(1,N,P)) & true(cell(2,N,P)) & true(cell(3,N,P))
  diagonal(P) :- true(cell(1,1,P)) & true(cell(2,2,P)) & true(cell(3,3,P))
  diagonal(P) :- true(cell(1,3,P)) & true(cell(2,2,P)) & true(cell(3,1,P))
  line(P) :- row(M,P)
  line(P) :- column(N,P)
  line(P) :- diagonal(P)

  open :- true(cell(M,N,b))
  draw :- ~line(x) & ~line(o)

  32. State Representation

  Board (x at (1,1), o at (2,2), x at (3,3)):

  cell(1,1,x) cell(1,2,b) cell(1,3,b)
  cell(2,1,b) cell(2,2,o) cell(2,3,b)
  cell(3,1,b) cell(3,2,b) cell(3,3,x)
  control(black)

  33. Initial State

  init(cell(1,1,b))  →  cell(1,1,b)
  init(cell(1,2,b))  →  cell(1,2,b)
  init(cell(1,3,b))  →  cell(1,3,b)
  init(cell(2,1,b))  →  cell(2,1,b)
  init(cell(2,2,b))  →  cell(2,2,b)
  init(cell(2,3,b))  →  cell(2,3,b)
  init(cell(3,1,b))  →  cell(3,1,b)
  init(cell(3,2,b))  →  cell(3,2,b)
  init(cell(3,3,b))  →  cell(3,3,b)
  init(control(x))   →  control(x)

  34. Legality

  Rules:
  legal(W,mark(X,Y)) :- true(cell(X,Y,b)) & true(control(W))
  legal(white,noop) :- true(cell(X,Y,b)) & true(control(black))
  legal(black,noop) :- true(cell(X,Y,b)) & true(control(white))

  State:
  cell(1,1,x) cell(1,2,b) cell(1,3,b)
  cell(2,1,b) cell(2,2,o) cell(2,3,b)
  cell(3,1,b) cell(3,2,b) cell(3,3,x)
  control(black)

  Conclusions:
  legal(white,noop)
  legal(black,mark(1,2))
  …
  legal(black,mark(3,2))

  35. Update Computation

  Before:              After:
  cell(1,1,x)          cell(1,1,x)
  cell(1,2,b)          cell(1,2,b)
  cell(1,3,b)          cell(1,3,o)
  cell(2,1,b)          cell(2,1,b)
  cell(2,2,o)          cell(2,2,o)
  cell(2,3,b)          cell(2,3,b)
  cell(3,1,b)          cell(3,1,b)
  cell(3,2,b)          cell(3,2,b)
  cell(3,3,x)          cell(3,3,x)
  control(black)       control(white)

  Action: black plays mark(1,3)

  next(cell(M,N,o)) :- does(black,mark(M,N)) & true(cell(M,N,b))

  36. State [board diagram: x at (1,1), o at (2,2), x at (3,3)]

  37. Game Tree [diagram: game tree of tic-tac-toe positions]

  38. Game Graph [diagram: the same positions as a graph, with duplicate states merged]

  39. Game Tree Search [diagram: tic-tac-toe search tree with values of 100 backed up from terminal positions]

  40. Game Playing as Graph Search Our code receives a GDL description from the Gameserver. It provides you with a subroutine for computing the initial state of the game, subroutines for computing legal actions and the terminal and goal properties of given states, and a subroutine for computing the state resulting from executing a given action vector in a given state. Our code does not store any state information; it recomputes everything each time. Good - it does not exhaust memory. Bad - it recomputes. You should consider storing the results.

  41. Game Tree [diagram: game tree of tic-tac-toe positions]
