
## CS 236501 Introduction to AI


**Tutorial 5: Adversary Search**

Intro. to AI – Tutorial 5 – By Nela Gurevich

## Agenda

- Introduction: why games? assumptions
- Minimax algorithm: general idea; minimax with limited depth
- Alpha-beta search: pruning; the search routine; an example
- Enhancements to the algorithm

## Why Games?

- Games are fun
- Easy to measure results
- Simple moves
- Big search spaces

Examples:

- Chess: Deep Junior
- Checkers: Chinook, Nemesis
- Othello: Bill
- Backgammon: TD-Gammon

## Assumptions

- Two-player game
- Perfect information: the knowledge available to each player is the same
- Zero sum: a move that is good for one player is bad for the adversary

## Game Trees

(Figure: a game tree with alternating MAX and MIN levels; the leaves are labeled Win, Loss, or Draw.)

## The Minimax Algorithm

    MM(v) = e(v)                            if v is a terminal node
            max{ MM(u) : u ∈ succ(v) }      if v is a MAX node
            min{ MM(u) : u ∈ succ(v) }      if v is a MIN node

where succ(v) is the set of successors of v, and

    e(v) =  1    if v is a WIN node
            0    if v is a DRAW node
           -1    if v is a LOSS node

A problem: big branching factor, deep trees.

## Minimax Search to Limited Depth

- Search the game tree to some search frontier at depth d.
- Compute a static evaluation function f to assess the strength of the nodes at that frontier.
- Use the minimax rule to compute approximations of the strength of the shallower nodes.

    MM(v, d) = f(v)                                if d = 0 or v is terminal
               max{ MM(u, d-1) : u ∈ succ(v) }     if v is a MAX node
               min{ MM(u, d-1) : u ∈ succ(v) }     if v is a MIN node

## αβ Search

Two kinds of pruning:

- Shallow pruning: the node will not contribute to the max value of its father.
- Deep pruning: the node will not contribute to the max value of a more distant ancestor.

(Figure: two subtrees illustrating the cases; in each, a node whose value is bounded by MM ≤ 5 is pruned because the father or ancestor is already guaranteed a value of 10.)
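The limited-depth minimax recurrence above can be sketched in Python. This is a minimal sketch, not the tutorial's code: the encoding is hypothetical, with each node a pair `(value, children)` where `value` plays the role of the static evaluation f(v) and `children` is a (possibly empty) list of nodes.

```python
# Limited-depth minimax, a sketch of MM(v, d) above.
# A node is (value, children); value acts as the static evaluation
# f(v), used at the frontier (d = 0) or at a terminal node.

def minimax(node, d, is_max):
    value, children = node
    if d == 0 or not children:      # frontier or terminal: return f(v)
        return value
    vals = [minimax(child, d - 1, not is_max) for child in children]
    return max(vals) if is_max else min(vals)
```

Searching the same tree to a shallower depth returns the static evaluations of the frontier nodes instead of the backed-up leaf values, which is exactly the approximation the slide describes.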
## αβ Procedure

- α: the highest max among the ancestors of a node
- β: the lowest min among the ancestors of a node

First call: αβ(v, d, MAX or MIN, -∞, +∞), according to the type of the root.

    // The αβ procedure
    αβ(v, d, node-type, α, β) {
      if v is terminal, or d = 0
        return f(v)

      if node-type = MAX {
        curr-max ← -∞
        for each vi ∈ Succ(v) {
          board-val ← αβ(vi, d-1, MIN, α, β)
          curr-max ← max(board-val, curr-max)
          α ← max(curr-max, α)
          if curr-max ≥ β       // bigger than the lowest min: prune
            end loop
        }
        return curr-max
      }

      if node-type = MIN {
        curr-min ← ∞
        for each vi ∈ Succ(v) {
          board-val ← αβ(vi, d-1, MAX, α, β)
          curr-min ← min(board-val, curr-min)
          β ← min(curr-min, β)
          if curr-min ≤ α       // smaller than the highest max: prune
            end loop
        }
        return curr-min
      }
    }    // end of αβ

## Game Tree Example

(Figure sequence: an αβ trace, stage by stage, on a MAX/MIN game tree with leaf values 10, 11, 9, 12, 14, 15, 13, 14, 5, 2, 4, 1, 3, 22, 20, 21. At each stage the α and β values at the inner nodes are updated, and subtrees that cannot affect an ancestor's value, e.g. MIN nodes whose β drops to 9, 5, or 4 once α = 10, are cut off by shallow pruning. The minimax value of the root is 10.)
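The αβ procedure above can be transcribed into Python as a sketch (not the tutorial's official code). The node encoding is an assumption for illustration: a pair `(value, children)`, where `value` is the static evaluation f(v) used at the frontier or at a terminal node.

```python
import math

# αβ search, a transcription of the pseudocode above.
# A node is (value, children); value is f(v), children a list of nodes.

def alphabeta(node, d, is_max, alpha=-math.inf, beta=math.inf):
    value, children = node
    if d == 0 or not children:      # frontier or terminal: return f(v)
        return value
    if is_max:
        curr = -math.inf
        for child in children:
            curr = max(curr, alphabeta(child, d - 1, False, alpha, beta))
            alpha = max(alpha, curr)
            if curr >= beta:        # β-cutoff: bigger than the lowest min
                break
        return curr
    curr = math.inf
    for child in children:
        curr = min(curr, alphabeta(child, d - 1, True, alpha, beta))
        beta = min(beta, curr)
        if curr <= alpha:           # α-cutoff: smaller than the highest max
            break
    return curr
```

On a small MAX root with MIN children holding leaves (10, 11, 9), (14, 15, 13), (5, 2), the third child is abandoned after its first leaf, since 5 ≤ α = 13 means it cannot contribute to the max value of the father.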
## αβ Features

- Correctness: Minimax(v, d) = αβ(v, d, -∞, ∞)
- Pruning: the values of the tree leaves and the search ordering determine the amount of pruning.
  - For any given tree and any search ordering, there exists a sequence of leaf values such that αβ prunes no leaf.
  - For any given tree, there exists an ordering under which αβ prunes the maximal possible number of leaves.
  - For randomly ordered trees, αβ expands Θ(b^(3d/4)) leaves.
- Pruning decreases the effective branching factor, and thus allows us to search the game tree to a greater depth.

## Iterative αβ

Perform αβ search to increasing depths, beginning from some initial depth.

- Useful when time is bounded: when the time is over, the value computed during the previous iteration can be returned.
- The values computed during previous iterations may be used to order the nodes of the tree heuristically, increasing the pruning rate.

## The End
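The iterative-deepening loop from the Iterative αβ slide can be sketched as follows. It is a minimal sketch under stated assumptions: `iterative_alphabeta` is a hypothetical name, and the `alphabeta` routine is passed in as a callback with the signature used in this tutorial's pseudocode.

```python
import math
import time

# Iterative αβ: search to depths 1, 2, ..., max_depth, keeping the
# result of the deepest iteration that completed in time. A full
# implementation would also abort a search mid-iteration; this sketch
# only checks the clock between iterations.

def iterative_alphabeta(root, alphabeta, max_depth, time_limit):
    best = None
    deadline = time.monotonic() + time_limit
    for d in range(1, max_depth + 1):
        if time.monotonic() >= deadline:
            break                   # out of time: keep the previous value
        best = alphabeta(root, d, True, -math.inf, math.inf)
    return best
```

A natural extension, mentioned on the slide, is to reuse the values from the previous iteration to order successors before the next, deeper search, which increases the pruning rate.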
