
# States and Search






### States and Search

Core of intelligent behaviour

Restricted form of general agent: Figure 3.1, p.61

```
function Simple-Problem-Solving-Agent(percept) returns an action
    persistent: seq, an action sequence, initially empty
                state, some description of the current world state
                goal, a goal, initially null
                problem, a problem formulation

    state = Update-State(state, percept)
    if seq is empty then                       // do search first time only
        goal = Formulate-Goal(state)
        if state == goal then return nil
        problem = Formulate-Problem(state, goal)   // (performance)
        seq = Search(problem)
    action = First(seq)
    seq = Rest(seq)
    return action
```
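As a concrete illustration, the same loop can be sketched in runnable Python. The integer world, the single `inc` action, the fixed goal, and the trivial `search` are toy assumptions standing in for a real environment and search procedure.

```python
# Minimal sketch of the Simple-Problem-Solving-Agent loop (toy assumptions).

def update_state(state, percept):
    return percept                # toy: the percept reveals the state directly

def formulate_goal(state):
    return 3                      # toy goal: reach state 3

def formulate_problem(state, goal):
    return (state, goal)          # toy problem formulation

def search(problem):
    start, goal = problem
    return ["inc"] * (goal - start)   # toy plan: increment until the goal

seq = []                          # action sequence, initially empty
state = None                      # description of the current world state

def agent(percept):
    """One decision step of the simple problem-solving agent."""
    global seq, state
    state = update_state(state, percept)
    if not seq:                   # do search first time only
        goal = formulate_goal(state)
        if state == goal:
            return None
        problem = formulate_problem(state, goal)
        seq = search(problem)
    action, seq = seq[0], seq[1:]
    return action
```

Note that the plan is computed once and then replayed action by action, exactly as in the pseudocode above.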

Creating a solution sequence by graph search

D Goforth - COSC 4117, fall 2006

• works by simulating the problem in internal representation and trying plans till a good one is discovered

• works in deterministic, static, single agent environments

• plan is made once and never changed

• works if the plan is perfect – actions do what the plan assumes, so no corrections to the path are required

• works efficiently if space is not too large


• states and state space – only relevant information in the state representation

• actions – successor function

• costs – step costs and path cost (e.g. touring problem, TSP)

• start state

• goal – a state or a criterion function of state(s)
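These components can be collected into a small Python structure; the field names and the notion of a successor returning (action, next state, step cost) triples are assumptions for illustration.

```python
# Sketch of a problem formulation; field names are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Problem:
    start: str                                    # start state
    goal_test: Callable[[str], bool]              # goal: state or criterion function
    # successor function: state -> [(action, next state, step cost), ...]
    successors: Callable[[str], List[Tuple[str, str, float]]]

def path_cost(steps: List[Tuple[str, str, float]]) -> float:
    """Path cost = sum of step costs along the path (e.g. tour length in a TSP)."""
    return sum(cost for _, _, cost in steps)
```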


Faster processing

• minimization of number of states

• minimization of degree of branching of successor function (actions)

Smaller memory allocation

• large state spaces are generated/explored, not stored/traversed


The fundamental method for creating a plan is SEARCH.


• expand the start node by all possible actions, then pick another node to expand, and so on...:


Node in search space:

• current state

• reference to parent node on path

• action from parent to node

• path cost from start node (may be just path length)
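A node with these four fields, plus plan recovery by following parent links back to the start, might look like this minimal Python sketch (class and function names assumed):

```python
# Sketch of a search-tree node with the four fields listed above.
class Node:
    def __init__(self, state, parent=None, action=None, path_cost=0):
        self.state = state          # current state
        self.parent = parent        # reference to parent node on path
        self.action = action        # action from parent to this node
        self.path_cost = path_cost  # path cost from start (may be path length)

def make_sequence(node):
    """Recover the plan by following parent links back to the start node."""
    actions = []
    while node.parent is not None:
        actions.append(node.action)
        node = node.parent
    return list(reversed(actions))
```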


*(Figure: problem-solving agent – example. A search-space diagram of L/R/F moves; each node of the search space records its state, the action from its parent, and its path length.)*

*(Figure: problem-solving agent – example, showing the SPANNING TREE generated by search over the same state space. Note why some state-space edges are not traversed in the spanning tree.)*

*(Figure: EXAMPLE – breadth first search in a binary tree, showing the start state (visitedList), the fringe (openList), and the current state.)*


• startState – initial state of the environment; nodes of the search tree contain state, parent, action, path cost

• openList – collection of Nodes generated but not tested yet (the fringe)

• visitedList – collection of Nodes already tested and not the goal

• action[n] – list of actions that can be taken by the agent

• goalStateFound(state) returns boolean – evaluate a state as goal

• precondition(state, action) returns boolean – test a state for an action

• apply(node, action) returns node – apply the action to get the next state's node

• makeSequence(node) returns sequence of actions – generate the plan as a sequence of actions

```
algorithm search(startState, goalStateFound()) returns action sequence

    openList = new NodeCollection()      // stack or queue or ...
    visitedList = new NodeCollection()
    node = new Node(startState, null, null, 0)
    openList.insert(node)
    while notEmpty(openList)
        node = openList.get()
        if goalStateFound(node.state)    // successful search
            return makeSequence(node)
        for k = 0..n-1
            if precondition(node.state, action[k]) == TRUE
                adjacentNode = apply(node, action[k])
                if NOT (adjacentNode in openList OR visitedList)
                    openList.insert(adjacentNode)
        visitedList.insert(node)
    return null                          // no goal state found
```
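Assuming successors come from an adjacency table, the generic algorithm can be sketched in runnable Python; switching how the openList is popped turns breadth-first into depth-first search:

```python
# Runnable sketch of the generic search algorithm. The adjacency-table
# representation and function names are assumptions for illustration.
from collections import deque

def search(start, goal_found, neighbors, depth_first=False):
    open_list = deque([(start, [])])     # fringe: (state, actions so far)
    visited = set()                      # states already tested
    while open_list:
        # queue behaviour gives bfs; stack behaviour gives dfs
        state, actions = open_list.pop() if depth_first else open_list.popleft()
        if goal_found(state):
            return actions               # successful search
        visited.add(state)
        for action, nxt in neighbors(state):
            # skip successors already on the openList or visitedList
            if nxt not in visited and all(nxt != s for s, _ in open_list):
                open_list.append((nxt, actions + [action]))
    return None                          # no goal state found
```

On a small binary tree with L/R actions, the bfs form returns the plan with fewest actions, as claimed later in the slides.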

• depth first

• iterative deepening search

• uniform cost search



variations on search algorithm

• breadth first search – openList is a queue

• depth first search – openList is a stack (recursive depth first is equivalent)

• tradeoff: shortest path vs resources required


• nodes on openList while search is at level k:

  • bfs: O(n^k), where n is the branching factor

  • dfs: O(nk)

  • recursive dfs: O(k)

• quality of solution path:

  • bfs always finds the path with fewest actions

  • dfs may find a longer path before a shorter one


depth-limited dfs

• use depth first search with limited path length, e.g. dfs(startNode, goalStateFound(), 3) uses dfs but only goes to level 3


iterative deepening search

• variation on dfs to get the best of both:

  • small openList of dfs

  • finds path with fewest actions, like bfs

• repeated searching is not a big problem – most of the work is at the deepest level


• search algorithm puts depth-limited dfs in a loop:

```
algorithm search(startState, goalStateFound())
    node = null
    depth = 0
    while node == null
        depth++
        node = dfs(startState, goalStateFound(), depth)
    return node
```
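A runnable Python sketch of this loop, with a recursive depth-limited dfs; the `max_depth` cap is an assumption added here so that a search with no solution still terminates (the loop above runs until success):

```python
# Sketch of iterative deepening: depth-limited dfs inside a loop.

def dls(state, goal_found, neighbors, limit, actions=()):
    """Depth-limited depth-first search, recursive form."""
    if goal_found(state):
        return list(actions)
    if limit == 0:
        return None                      # depth limit reached
    for action, nxt in neighbors(state):
        result = dls(nxt, goal_found, neighbors, limit - 1, actions + (action,))
        if result is not None:
            return result
    return None

def iddfs(start, goal_found, neighbors, max_depth=50):
    """Repeat dls with limits 0, 1, 2, ... until a plan is found."""
    for depth in range(max_depth + 1):
        result = dls(start, goal_found, neighbors, depth)
        if result is not None:
            return result
    return None
```

Because each iteration re-expands the shallow levels, the first solution found has the fewest actions, like bfs, while memory use stays that of dfs.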


uniform cost search

• find best path when there is an action cost for each edge:

  • a path of more edges may be better than a path of fewer edges: 12+8+9+4+10 (5 edges, cost 43) is preferred to 35+18 (2 edges, cost 53)

• variation on bfs

• openList is a priority queue ordered on path cost from start state
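A runnable Python sketch of uniform cost search with `heapq` as the priority queue; the example graph and its edge costs are assumptions for illustration:

```python
# Sketch of uniform cost search: the openList is a priority queue
# ordered on path cost from the start state.
import heapq

def uniform_cost_search(start, goal, edges):
    open_list = [(0, start, [])]          # (path cost, state, path so far)
    visited = set()
    while open_list:
        cost, state, path = heapq.heappop(open_list)  # cheapest path first
        if state == goal:
            return cost, path + [state]
        if state in visited:
            continue                      # a cheaper path got here already
        visited.add(state)
        for nxt, step in edges.get(state, []):
            if nxt not in visited:
                heapq.heappush(open_list, (cost + step, nxt, path + [state]))
    return None                           # goal unreachable

# Assumed example graph: state -> [(successor, edge cost), ...]
edges = {'A': [('C', 2), ('B', 4), ('D', 8)],
         'C': [('E', 3)],
         'B': [('F', 3), ('G', 5)],
         'E': [('G', 1), ('H', 5)]}
```

The three-edge path A, C, E, G (cost 2+3+1 = 6) beats the two-edge path A, B, G (cost 4+5 = 9), illustrating the bullet above.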


openList is a priority queue ordered on path cost from start state. Trace of the example graph (edge costs: A–C 2, A–B 4, A–D 8, C–E 3, B–F 3, B–G 5, E–G 1, E–H 5):

| visited | current | open |
|---|---|---|
| – | A(0) | C(2), B(4), D(8) |
| A(0) | C(2) | B(4), E(5), D(8) |
| A(0), C(2) | B(4) | E(5), F(7), D(8), G(9) |
| A(0), C(2), B(4) | E(5) | G(6), F(7), D(8), H(10) |

Variations on the search algorithm differ in:

• openList structure

• time of testing for goal state


• perceptions are an incomplete representation of state

• dynamic environment – the path of actions is not the only cause of state change (e.g. games)

D Goforth - COSC 4117, fall 2006

• fully / partly observable – is the state known?

• deterministic / stochastic – is the effect of an action uncertain?

• sequential / episodic – is a plan required/useful?

• static / dynamic – does the state change between action & perception and/or between perception & action?

• discrete / continuous – concurrent or sequential actions on state

• single- / multi-agent – dynamic environment; possible communication, distributed AI
