
IAIP Week 2
Search I: Uninformed and Adversarial Search


Last Week

  • AI overview

  • Agent Architectures

Timeline of the field:

  • 60s–70s: heyday – many “cognitive” tasks “solved”; AI close to cognitive science

  • 70s: subfields founded: Planning, Vision, Constraint Satisfaction

  • 80s: first industrial applications: expert systems, neural nets

  • 90s: machine learning; agents

  • 00s: widespread application; maturing field

RMJ VeCoS ITU


This Week: Search I

  • Uninformed search [9:00-9:50, 10-10:50, 11-?]

    • State spaces

    • Search trees

    • General tree search

    • BFS, Uniform cost search, DFS, DLS, IDS, bidirectional search

    • General graph search

  • Adversarial search [?-11:50]

    • Game trees

    • Minimax

    • Alpha-beta pruning

Uninformed Search (RN Chapter 3, except 3.6)

Search as a problem solving technique

  • Search is a last resort: if we have an analytical solution, we should apply it. Examples:

    • find minima of f(x,y) = x² + y² − x for −1 ≤ x ≤ 1 and −1 ≤ y ≤ 1

    • Solve an instance of Rubik’s Cube

  • What can we learn from humans?

Problem-space theory [Newell & Simon 72]

  • People’s problem solving behavior can be viewed as the production of knowledge states by the application of mental operators, moving from an initial state to a goal state

  • Operators encode legal moves

  • For a given problem there are a large number of alternative paths from initial state to goal state; the total set of such states is called a basic problem space

Missionaries and Cannibals Problem

  • Task: Transfer 3 missionaries and 3 cannibals across a river in a boat

    • Max 2 people in the boat at any time

    • Someone must always accompany the boat to the other side

    • At no time may the cannibals outnumber the missionaries on a bank on which any missionaries are present

[Figure: 3 missionaries (M) and 3 cannibals (C) on one bank of the river]


Hard: many options

Hard: moving away from goal state

Source: Eysenck & Keane 90

Domain Independent Heuristics

  • Means-end analysis

    • note the difference between the current state and the goal state;

    • create a sub-goal to reduce this difference;

    • select an operator that will solve this sub-goal (continue recursively)

  • Anti-looping

AI exploits such heuristics in automated problem solving

Informed and Uninformed search

  • Uninformed search algorithms (the topic of this week):

    • Are only given the problem definition as input

  • Informed search algorithms (the topic of next week):

    • Are given information about the problem in addition to its definition (typically an estimate of the distance to a goal state)

Search Problem Definition

  • A search problem consists of:

    • An initial state s0

    • A successor function SUCCESSOR-FN(s) that for a state s returns a set of action-state pairs <a,s’> where a is applicable in s and s’ is reached from s by applying a

    • A goal test G(s) that returns true iff s is a goal state

    • A step cost function c(s,a,s’) that returns the step cost of taking action a to go from state s to state s’ (must be additive to define path cost)

  • A solution is a path from the initial state s0 to a goal state g

  • An optimal solution is a solution with minimum path cost

The initial state and the successor function (items 1 and 2) together define the state space
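As a sketch, the four components above can be expressed as a small Python interface (class and method names here are illustrative, not from the book):

```python
from typing import Any, Iterable, Tuple

class SearchProblem:
    """Implicit state-space definition: initial state, successor
    function, goal test, and step cost (names are illustrative)."""
    def initial_state(self) -> Any: ...
    def successors(self, s) -> Iterable[Tuple[Any, Any]]:
        """Yield (action, next_state) pairs, like SUCCESSOR-FN(s)."""
        ...
    def is_goal(self, s) -> bool: ...
    def step_cost(self, s, a, s2) -> float:
        return 1.0  # default: unit step cost

# Tiny concrete instance: walk from 0 to n on the number line.
class LineWalk(SearchProblem):
    def __init__(self, n): self.n = n
    def initial_state(self): return 0
    def successors(self, s):
        return [("+1", s + 1), ("-1", s - 1)]
    def is_goal(self, s): return s == self.n
```

Note that the state space is never built explicitly; `successors` generates it on demand, which is what makes exponentially large spaces workable.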

Search Problem Definition

  • How can we define the state space as a graph G = (V,E)?

  • Why not use an explicitly defined search graph as input to the algorithms?

Search Problem Definition

  • Notice

    • For most problems, the size of the state space grows exponentially with the size of the problem (e.g., with the number of objects that can be manipulated)

    • For these problems, it is intractable to compute an explicit graph representation of the search space

    • For that reason search-spaces are defined implicitly in AI.

    • Further, the complexity of an AI algorithm is normally given in terms of the size of the problem description rather than the size of the search space

Selecting a state space

1) Choose abstraction

  • Real world is absurdly complex

    • state space must be abstracted for problem solving

  • (Abstract) state = set of real states

  • (Abstract) action = complex combination of real actions

    • e.g., "Arad → Zerind" represents a complex set of possible routes, detours, rest stops, etc.

  • For guaranteed realizability of actions, any real state "in Arad" must get to some real state "in Zerind"

  • (Abstract) solution =

    • set of real paths that are solutions in the real world

  • Each abstract action should be "easier" than the original problem

    2) Define the elements of the search problem

Ex: Romania Route Planning

  • Initial state: e.g., s0 = "at Arad"

  • Goal test function: G(s): s = "at Bucharest"

  • Successor function: SUCCESSOR-FN(Arad) = {<Arad → Zerind, Zerind>, <Arad → Timisoara, Timisoara>, … }

  • Size of state space: 20

  • Edge cost function:

    • E.g., c(s,a,s’) = dist(s,s’)

Ex: Missionaries and Cannibals

  • Initial state?

  • Goal test function?

  • Successor function definition?

  • Size of state space?

  • Cost function?


Ex: Missionaries and Cannibals

[Figure: the five actions Sail(M,M), Sail(C,C), Sail(M,C), Sail(M), and Sail(C) applied to the initial state, each producing a successor state]

Ex: Missionaries and Cannibals

  • Initial state: <[{M,M,M,C,C,C,B},{}]> (everyone and the boat B on the first bank)

  • Goal test function: G(s): s = <[{},{M,M,M,C,C,C,B}]>

  • Successor function:

    • Actions: Sail(M,M), Sail(M,C), Sail(C,C), Sail(M), Sail(C)

    • SUCCESSOR-FN(S) is defined by the rules:

    • Max 2 people in the boat at any time

    • Someone must always accompany the boat to the other side

    • At no time may the cannibals outnumber the missionaries on a bank on which any missionaries are present

  • Size of state space: 4·4·2 = 32 (but only some of these are reachable)

  • Cost function: we want solutions with minimum # of steps, so c(s,a,s’) = 1

Ex: 8-puzzle

  • Initial state?

  • Goal test function?

  • Successor function definition?

  • Size of state space?

  • Cost function?

[Figure: an example 8-puzzle configuration]

Ex: 8-puzzle

[Figure: the four actions Up, Down, Left, Right applied to an 8-puzzle state, each sliding a tile into the blank position *]

Ex: 8-puzzle

  • Initial state: Any reachable state

  • Goal test function: s = <1,2,3,4,5,6,7,8,*>

  • Actions set: {Up, Down, Left, Right}

  • Successor function:

    • Given by the rules:

      1: Up (Down): applicable if some tile t is above (below) *

      2: Left (Right): applicable if some tile t is to the left (right) of *

      3: The effect of an action is to swap t and *

  • Size of state space: 9!/2

  • Cost function: c(s,a,s’) = 1

Ex: 8-puzzle

Q: why is the size of the state-space 9!/2 and not 9! ?

A: Only half of the possible configurations can reach the goal state

  • If the tiles are read from top to bottom, left to right, they form a permutation

  • e.g. the permutation of the state below is <1,2,3,4,5,7,8,6>

1 2 3
4 5 7
8 6 *

Ex: 8-puzzle

  • Inversion: a pair of numbers contained in the permutation, for which the bigger one is before the smaller one

  • Number of inversions in <1,2,3,4,5,7,8,6>: 2 (the pairs (7,6) and (8,6))

  • A permutation with an even (odd) number of inversions is called an even permutation (odd permutation) [or is said to have even (odd) parity]
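The inversion count, and hence the parity, is easy to compute directly; a small sketch (function names are mine):

```python
def inversions(perm):
    """Count pairs (i, j) with i < j and perm[i] > perm[j]."""
    return sum(1
               for i in range(len(perm))
               for j in range(i + 1, len(perm))
               if perm[i] > perm[j])

def same_parity(p, q):
    """Two 8-puzzle states are mutually reachable only if their
    permutations (blank omitted) have the same parity."""
    return inversions(p) % 2 == inversions(q) % 2
```

This gives a constant-time reachability filter before any search is started.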

Ex: 8-puzzle

  • The actions in the 8-puzzle preserve the parity of the permutation

    • Left, right: obvious, no changes

    • Down (Up): the moved tile shifts 2 positions to the right (left) in the permutation, past 2 other tiles:

      • both passed tiles smaller, or both larger: # of inversions increases or decreases by 2

      • one smaller and one larger: # of inversions is unchanged

        (think of the permutation as <A,B,C,D,E,F,G,H>)

Ex: 4-queens

  • Initial state?

  • Goal test function?

  • Successor function definition?

  • Size of state space?

  • Cost function?

[Figure: a 4×4 board with 4 queens]

Ex: 4-queens

  • Initial state: no queens on the board

  • Goal test function: All queens on the board, none can attack each other

  • Successor function: add a queen to an empty square

  • Size of state space: 16*15*14*13 / 4! = 1820

  • Cost function: irrelevant!

    Can we do better?

Tree Search Algorithms

  • Basic idea:

    • Assumes the state-space forms a tree (otherwise already-explored states may be regenerated)

    • Builds a search tree from the initial state and the successor function

Tree search ex.: 4-queens

  • A state is given by the assignment of 4 variables r1, r2, r3, and r4 denoting the row number of 4 queens placed in column 1 to 4

  • ri = 0 indicates that no row number has been assigned to queen i (i.e., queen i has not been placed on the board yet)

  • The queens are assigned in order r1 to r4

Tree search ex.: 4-queens

Initial state s0: root of the search tree

[0,0,0,0]

Search fringe (also called frontier or open list)

Tree search ex.: 4-queens

Expansion of [0,0,0,0]

The root [0,0,0,0] is expanded into its children [1,0,0,0], [2,0,0,0], [3,0,0,0], and [4,0,0,0]

The given search strategy chooses a leaf node on the fringe to expand next

Tree search ex.: 4-queens

Expansion of [2,0,0,0]

[2,0,0,0] is expanded into its children [2,1,0,0], [2,2,0,0], [2,3,0,0], and [2,4,0,0], etc.

Tree search ex.: 4-queens

Expansion of [2,0,0,0] with forward checking

[Figure: squares attacked by the queen already placed (marked X) are excluded, pruning the corresponding children among [2,1,0,0] … [2,4,0,0]]

States versus Nodes

  • A state is a (representation of) a physical configuration

  • A node is a data structure constituting part of a search tree; it includes state, parent node, action, path cost g(s), and depth

  • The Expand function creates new nodes, filling in the various fields and using the SuccessorFn of the problem to create the corresponding states
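A sketch of such a node structure and of Expand in Python (field and function names mirror the description above; the `problem` object is assumed to provide `successors` and `step_cost` as in the earlier problem definition):

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    """A search-tree node: state plus bookkeeping."""
    state: Any
    parent: Optional["Node"] = None
    action: Any = None
    path_cost: float = 0.0   # g(s): cost of the path from the root
    depth: int = 0

def expand(node, problem):
    """Create child nodes using the problem's successor function."""
    return [Node(state=s2,
                 parent=node,
                 action=a,
                 path_cost=node.path_cost + problem.step_cost(node.state, a, s2),
                 depth=node.depth + 1)
            for a, s2 in problem.successors(node.state)]
```

Following the `parent` links from a goal node back to the root recovers the solution path.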

Search strategies

  • A search strategy is defined by picking the order of node expansion (= the sorting criterion of the fringe queue)

  • Strategies are evaluated along the following dimensions:

    • completeness: does it always find a solution if one exists?

    • time complexity: number of nodes generated

    • space complexity: maximum number of nodes in memory

    • optimality: does it always find a least-cost solution?

      What about soundness of a strategy?

  • Time and space complexity are measured in terms of

    • b: maximum branching factor of the search tree

    • d: depth of the least-cost solution

    • m: maximum depth of the state space (may be infinite)

Uninformed search strategies

  • Uninformed search strategies use only the information available in the problem definition

  • Breadth-first search (BFS)

  • Uniform-cost search

  • Depth-first search (DFS)

  • Backtracking search

  • Depth-limited search (DLS)

  • Iterative deepening search (IDS)

  • Bidirectional search

Breadth-first search

  • Expand shallowest unexpanded node

  • Implementation:

    • fringe is a FIFO queue, i.e., new successors go at end
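A minimal tree-search BFS sketch, assuming `successors(s)` returns the successor states of s (names are mine):

```python
from collections import deque

def breadth_first_search(initial, successors, is_goal):
    """Tree-search BFS: the fringe is a FIFO queue, so the shallowest
    unexpanded node is always expanded next.
    Returns a list of states from initial to a goal, or None."""
    fringe = deque([[initial]])          # queue of paths
    while fringe:
        path = fringe.popleft()          # shallowest first
        s = path[-1]
        if is_goal(s):
            return path
        for s2 in successors(s):
            fringe.append(path + [s2])   # new successors go at the end
    return None
```

Storing whole paths on the fringe keeps the sketch short; a real implementation would store nodes with parent links instead.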

Properties of breadth-first search

  • Complete?

  • Time?

  • Space?

  • Optimal?


Properties of breadth-first search

[Figure: the search tree has b^0 nodes at depth 0, b^1 at depth 1, …, b^d at depth d, plus up to b^(d+1) − b generated nodes at depth d+1]

Properties of breadth-first search

  • Complete?Yes (if b is finite)

  • Time? 1 + b + b^2 + b^3 + … + b^d + b(b^d − 1) =

    (1 − b^(d+2))/(1 − b) − b = O(b^(d+1))

  • Space? O(b^(d+1)) (keeps every node in memory)

  • Optimal? Yes (if cost = 1 per step), all nodes at depth i will be expanded before nodes at depth i+1, so an optimal solution will not be overlooked.

  • Space is the bigger problem (more than time)

Uniform-cost search

  • Expand least-cost unexpanded node

  • Implementation:

    • fringe = queue ordered by increasing path cost

  • Equivalent to breadth-first if step costs all equal
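A sketch with a heap-based priority queue; `successors(s)` is assumed here to yield (step_cost, next_state) pairs:

```python
import heapq
import itertools

def uniform_cost_search(initial, successors, is_goal):
    """Tree-search UCS: the fringe is a priority queue ordered by
    path cost g.  Returns (g, path) for a cheapest goal, or None."""
    counter = itertools.count()              # tie-breaker for the heap
    fringe = [(0.0, next(counter), [initial])]
    while fringe:
        g, _, path = heapq.heappop(fringe)   # cheapest path first
        s = path[-1]
        if is_goal(s):                       # goal test on expansion => optimal
            return g, path
        for cost, s2 in successors(s):
            heapq.heappush(fringe, (g + cost, next(counter), path + [s2]))
    return None
```

Note the goal test happens when a node is *expanded*, not when it is generated; testing on generation could return a more expensive path to the goal.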

Uniform-cost search

[Figure: uniform-cost search on the Romania map: the root has g = 0 and successors with g = 75, 118, and 140; expanding the cheapest node (g = 75) generates nodes with g = 146 and g = 150]

Uniform-cost search

  • Complete?

  • Time?

  • Space?

  • Optimal?


Uniform-cost search

[Figures: step-by-step uniform-cost search on a small example graph with edge costs 1–4; G marks the path cost g of each generated node, and nodes are expanded in order of increasing g]

Uniform-cost search

  • Complete? Yes, if step cost ≥ ε

  • Time? O(# of nodes with g ≤ C*), where C* is the cost of the optimal solution

  • Space? O(# of nodes with g ≤ C*)

  • Optimal? Yes – expanded node is the head of an optimal path from the initial state [thus when a goal node is expanded, the corresponding path is an optimal path from the initial state]

  • Close to Dijkstra’s algorithm, but

    • The search is stopped when a goal node is found

    • We may have several nodes of the same state instead of several cost updates of a state

Depth-first search

  • Expand deepest unexpanded node

  • Implementation:

    • fringe = LIFO queue, i.e., put successors at front
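The same tree-search skeleton as for BFS, with the FIFO queue replaced by a LIFO stack; a sketch:

```python
def depth_first_search(initial, successors, is_goal):
    """Tree-search DFS: the fringe is a LIFO stack, so the deepest
    unexpanded node is always expanded next.  No cycle detection:
    this can run forever on infinite or cyclic state spaces."""
    fringe = [[initial]]                 # stack of paths
    while fringe:
        path = fringe.pop()              # deepest first
        s = path[-1]
        if is_goal(s):
            return path
        # reversed() so the first successor ends up on top of the stack
        for s2 in reversed(list(successors(s))):
            fringe.append(path + [s2])
    return None
```

The one-character difference (pop from the same end you push) is exactly what changes the expansion order from shallowest-first to deepest-first.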

Properties of depth-first search

  • Complete?

  • Time?

  • Space?

  • Optimal?

Properties of depth-first search

  • Complete? No: fails in infinite-depth spaces and in spaces with loops

    • Modify to avoid repeated states along the path

      → complete in finite spaces

  • Time? O(b^m): terrible if m is much larger than d

    • but if solutions are dense, may be much faster than breadth-first

  • Space? O(b·m), i.e., linear space!

  • Optimal? No

  • Backtracking search: only one successor is generated at a time rather than all successors, giving O(m) space

Depth-limited search

= depth-first search with depth limit l,

i.e., nodes at depth l have no successors

  • Recursive implementation:
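A recursive sketch (signatures are my own), distinguishing failure (no solution anywhere below) from cutoff (the depth limit was hit somewhere):

```python
CUTOFF = "cutoff"

def depth_limited_search(s, successors, is_goal, limit):
    """Depth-first search that treats nodes at depth `limit` as if
    they had no successors.  Returns a path (list of states),
    None for failure, or CUTOFF if the limit was reached."""
    if is_goal(s):
        return [s]
    if limit == 0:
        return CUTOFF
    cutoff_occurred = False
    for s2 in successors(s):
        result = depth_limited_search(s2, successors, is_goal, limit - 1)
        if result == CUTOFF:
            cutoff_occurred = True
        elif result is not None:
            return [s] + result        # solution found below s2
    return CUTOFF if cutoff_occurred else None
```

The cutoff/failure distinction matters for iterative deepening: only a cutoff justifies retrying with a larger limit.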

Properties of depth-limited search

  • Complete?

  • Time?

  • Space?

  • Optimal?

Properties of depth-limited search

  • Complete? No: the limit may be too small

  • Time? O(b^l)

  • Space? O(b·l)

  • Optimal? No

Iterative deepening search, l = 0 to 3

[Figures: the first four iterations of iterative deepening search, i.e., depth-limited search with limits l = 0, 1, 2, 3]


Iterative deepening search

  • Number of nodes generated in a depth-limited search to depth d with branching factor b:

    NDLS = b^0 + b^1 + b^2 + … + b^(d−2) + b^(d−1) + b^d

  • Number of nodes generated in an iterative deepening search to depth d with branching factor b:

    NIDS = (d+1)b^0 + d·b^1 + (d−1)b^2 + … + 3b^(d−2) + 2b^(d−1) + 1·b^d

  • For b = 10, d = 5,

    • NDLS = 1 + 10 + 100 + 1,000 + 10,000 + 100,000 = 111,111

    • NIDS = 6 + 50 + 400 + 3,000 + 20,000 + 100,000 = 123,456

    • NBFS = 1 + 10 + 100 + 1,000 + 10,000 + 100,000 + 999,990 = 1,111,101 (BFS has to generate the layer below the solution depth; DLS and DFS don’t!)

  • Overhead of IDS compared with DLS = (123,456 - 111,111)/111,111 = 11%
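The iterative deepening loop itself is tiny; a self-contained sketch (for brevity the inner dls omits the failure/cutoff distinction, so `max_limit` bounds the loop):

```python
def iterative_deepening_search(initial, successors, is_goal, max_limit=50):
    """Run depth-limited search with limits l = 0, 1, 2, …
    Shallow levels are re-generated on every round, but as the sums
    above show, the total overhead is small for b > 1."""
    def dls(s, limit):
        if is_goal(s):
            return [s]
        if limit == 0:
            return None
        for s2 in successors(s):
            result = dls(s2, limit - 1)
            if result is not None:
                return [s] + result
        return None

    for limit in range(max_limit + 1):
        result = dls(initial, limit)
        if result is not None:
            return result
    return None
```

Because each round is a plain depth-first search, the space needed is only O(b·d) even though the expansion order matches BFS on the solution depth.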

Properties of iterative deepening search

  • Complete?Yes

  • Time? (d+1)b^0 + d·b^1 + (d−1)b^2 + … + b^d = O(b^d)

  • Space? O(b·d)

  • Optimal?Yes, if step cost = 1

Bidirectional Search

  • Complete? Yes, (if BFS used in both directions)

  • Time? O(b^(d/2)) (if the fringe membership check is O(1))

  • Space? O(b^(d/2))

  • Optimal? Yes, if step cost = 1, and BFS in both directions

[Figure: a forward search from s0 and a backward search from g meet in the middle]

Summary of algorithms

[Table: the above strategies compared on completeness, time, space, and optimality]

a: complete if b is finite
b: complete if step cost ≥ ε
c: optimal if unit cost
d: if both directions use BFS

Repeated states

  • Failure to detect repeated states can turn a linear problem into an exponential one!

  • How?

Graph search

  • BFS and uniform cost search still complete/optimal?

  • How can DFS, DLS, and IDS benefit from remembering previously visited states?

Summary

  • Problem formulation usually requires abstracting away real-world details to define a state space that can feasibly be explored

  • Variety of uninformed search strategies

  • Iterative deepening search uses only linear space and not much more time than other uninformed algorithms

Adversarial Search (RN Chapter 6, only 6.1 and 6.2)

Games vs. search problems

  • "Unpredictable" opponent → a solution is a strategy specifying a move for every possible opponent reply

  • Time limits → unlikely to find the goal, must approximate

Minimax

  • Perfect play for deterministic games

  • Idea: choose move to position with highest minimax value = best achievable payoff against best play

  • E.g., 2-ply game:

Minimax algorithm
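A recursive sketch of minimax for deterministic two-player games (function and parameter names are mine; `utility` scores terminal states from MAX's point of view):

```python
def minimax_decision(state, successors, is_terminal, utility):
    """Choose MAX's move with the highest minimax value, i.e. the best
    achievable payoff against best play.  successors(state) yields
    (action, next_state) pairs."""
    def max_value(s):
        if is_terminal(s):
            return utility(s)
        return max(min_value(s2) for _, s2 in successors(s))

    def min_value(s):
        if is_terminal(s):
            return utility(s)
        return min(max_value(s2) for _, s2 in successors(s))

    # MAX picks the action whose resulting state has the best MIN value
    return max(successors(state), key=lambda pair: min_value(pair[1]))[0]
```

The mutual recursion between `max_value` and `min_value` is the alternation of the two players down the game tree.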

Properties of minimax

  • Complete? Yes (if tree is finite)

  • Optimal? Yes (against an optimal opponent)

  • Time complexity? O(b^m)

  • Space complexity? O(b·m) (depth-first exploration)

  • For chess, b ≈ 35, m ≈ 100 for "reasonable" games → exact solution completely infeasible

  • Minimax achieves at least as much if MIN plays suboptimally, but even better strategies may exist against such an opponent

α-β pruning example

Properties of α-β

  • Pruning does not affect final result

  • Good move ordering improves effectiveness of pruning

  • With "perfect ordering," time complexity = O(b^(m/2))

    → effectively cuts the branching factor from b to √b


Why is it called α-β?

α is the value of the best (i.e., highest-value) choice found so far at any choice point along the path for MAX

If v is worse than α, MAX will avoid it

→ prune that branch

Define β similarly for MIN

The α-β algorithm
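A sketch of minimax with α-β pruning (parameter names are mine); it returns the same value as plain minimax:

```python
import math

def alphabeta(state, successors, is_terminal, utility,
              alpha=-math.inf, beta=math.inf, maximizing=True):
    """α = best value found so far for MAX along the path,
    β = best value found so far for MIN.  Pruning a branch never
    changes the final result."""
    if is_terminal(state):
        return utility(state)
    if maximizing:
        v = -math.inf
        for _, s2 in successors(state):
            v = max(v, alphabeta(s2, successors, is_terminal, utility,
                                 alpha, beta, False))
            if v >= beta:        # MIN will avoid this branch: prune
                return v
            alpha = max(alpha, v)
        return v
    else:
        v = math.inf
        for _, s2 in successors(state):
            v = min(v, alphabeta(s2, successors, is_terminal, utility,
                                 alpha, beta, True))
            if v <= alpha:       # MAX will avoid this branch: prune
                return v
            beta = min(beta, v)
        return v
```

Good move ordering makes the cutoffs fire earlier, which is where the O(b^(m/2)) best case comes from.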

Resource limits

Suppose we have 100 secs and explore 10^4 nodes/sec → 10^6 nodes per move

Standard approach:

  • cutoff test:

    e.g., depth limit

  • evaluation function

    = estimated desirability of position
