Presentation Transcript
PEAS: Medical diagnosis system
  • Performance measure
    • Patient health, cost, reputation
  • Environment
    • Patients, medical staff, insurers, courts
  • Actuators
    • Screen display, email
  • Sensors
    • Keyboard/mouse
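
As a purely illustrative aside (not part of the slides), a PEAS description is just structured data and can be written down as such; the class and field names below are our own:

from dataclasses import dataclass
from typing import List

@dataclass
class PEAS:
    # Illustrative container for a PEAS task-environment description
    performance_measure: List[str]
    environment: List[str]
    actuators: List[str]
    sensors: List[str]

medical_diagnosis = PEAS(
    performance_measure=["patient health", "cost", "reputation"],
    environment=["patients", "medical staff", "insurers", "courts"],
    actuators=["screen display", "email"],
    sensors=["keyboard/mouse"],
)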
Agent design
  • The environment type largely determines the agent design
    • Partially observable => agent requires memory (internal state), takes actions to obtain information
    • Stochastic => agent may have to prepare for contingencies, must pay attention while executing plans
    • Multi-agent => agent may need to behave randomly
    • Static => agent has enough time to compute a rational decision
    • Continuous time => continuously operating controller
Agent types
  • In order of increasing generality and complexity
    • Simple reflex agents
    • Reflex agents with state
    • Goal-based agents
    • Utility-based agents
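
As a rough sketch (our own skeletons, not code from the course), the first two types differ only in whether the agent program keeps internal state between percepts:

class SimpleReflexAgent:
    # Chooses an action from the current percept alone
    def __init__(self, rules):
        self.rules = rules                      # list of (condition, action) pairs

    def get_action(self, percept):
        for condition, action in self.rules:
            if condition(percept):
                return action
        return None

class ReflexAgentWithState:
    # Additionally maintains internal state (memory), updated from each percept
    def __init__(self, rules, update_state):
        self.rules = rules
        self.update_state = update_state
        self.state = None

    def get_action(self, percept):
        self.state = self.update_state(self.state, percept)
        for condition, action in self.rules:
            if condition(self.state):
                return action
        return None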
A reflex Pacman agent in Python

from game import Agent, Directions   # framework classes from the Pacman project's game.py

class TurnLeftAgent(Agent):

    def getAction(self, percept):
        legal = percept.getLegalPacmanActions()
        current = percept.getPacmanState().configuration.direction
        if current == Directions.STOP: current = Directions.NORTH
        left = Directions.LEFT[current]
        # Prefer turning left; otherwise go straight, turn right, or reverse.
        if left in legal: return left
        if current in legal: return current
        if Directions.RIGHT[current] in legal: return Directions.RIGHT[current]
        if Directions.LEFT[left] in legal: return Directions.LEFT[left]
        return Directions.STOP

Pacman agent contd.
  • Can we (in principle) extend this reflex agent to behave well in all standard Pacman environments?
Handling complexity
  • Writing behavioral rules or environment models becomes more difficult as environments grow more complex
  • E.g., rules of chess (32 pieces, 64 squares, ~100 moves)
      • ~100 000 000 000 000 000 000 000 000 000 000 000 000 pages as a state-to-state transition matrix (cf HMMs, automata)

R.B.KB.RPPP..PPP..N..N…..PP….q.pp..Q..n..n..ppp..pppr.b.kb.r

      • ~100 000 pages in propositional logic (cf circuits, graphical models)

WhiteKingOnC4@Move12  …

      • 1 page in first-order logic

∀x,y,t,color,piece On(color,piece,x,y,t) ⇔ …

Summary
  • An agent interacts with an environment through sensors and actuators
  • The agent function, implemented by an agent program running on a machine, describes what the agent does in all circumstances
  • PEAS descriptions define task environments; precise PEAS specifications are essential
  • More difficult environments require more complex agent designs and more sophisticated representations

CS 188: Artificial Intelligence

Search

Instructor: Stuart Russell


Today
  • Agents that Plan Ahead
  • Search Problems
  • Uninformed Search Methods
    • Depth-First Search
    • Breadth-First Search
    • Uniform-Cost Search
Agents that plan ahead
  • Planning agents:
    • Decisions based on predicted consequences of actions
    • Must have a transition model: how the world evolves in response to actions
    • Must formulate a goal
  • Spectrum of deliberativeness:
    • Generate complete, optimal plan offline, then execute
    • Generate a simple, greedy plan, start executing, replan when something goes wrong
Search Problems
  • A search problem consists of:
    • A state space
    • For each state, a set Actions(s) of allowable actions

    • A transition model Result(s,a)
    • A step cost function c(s,a,s’)
    • A start state and a goal test
  • A solution is a sequence of actions (a plan) which transforms the start state to a goal state

[Figure: a Pacman state with allowable actions {N, E}, each with step cost 1]
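
These ingredients can be captured as a small interface; the sketch below is our own and only loosely echoes the Pacman projects' SearchProblem (method names are illustrative assumptions):

class SearchProblem:
    # A search problem: state space, actions, transition model, step costs,
    # start state, and goal test (method names are illustrative)
    def get_start_state(self):
        raise NotImplementedError

    def is_goal_state(self, state):
        raise NotImplementedError

    def get_successors(self, state):
        # Returns a list of (next_state, action, step_cost) triples,
        # i.e., Actions(s), Result(s, a), and c(s, a, s') bundled together
        raise NotImplementedError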

Example: Travelling in Romania
  • State space:
    • Cities
  • Actions:
    • Go to adjacent city
  • Transition model
    • Result(A, Go(B)) = B
  • Step cost
    • Distance along road link
  • Start state:
    • Arad
  • Goal test:
    • Is state == Bucharest?
  • Solution?
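
For concreteness, a fragment of this problem can be written out directly (distances follow the standard AIMA Romania map; only a few road links are shown, and the helper names are ours):

# Partial, symmetric road map with distances in km
ROADS = {
    ("Arad", "Zerind"): 75, ("Arad", "Sibiu"): 140, ("Arad", "Timisoara"): 118,
    ("Sibiu", "Fagaras"): 99, ("Sibiu", "Rimnicu Vilcea"): 80,
    ("Rimnicu Vilcea", "Pitesti"): 97,
    ("Fagaras", "Bucharest"): 211, ("Pitesti", "Bucharest"): 101,
}

def successors(city):
    # Actions(s) and Result(s, a): go to any adjacent city; step cost = road distance
    for (a, b), dist in ROADS.items():
        if a == city:
            yield b, "Go(%s)" % b, dist
        elif b == city:
            yield a, "Go(%s)" % a, dist

start_state = "Arad"
goal_test = lambda city: city == "Bucharest"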
What’s in a State Space?

The real world state includes every last detail of the environment

  • Problem: Pathing
    • States: (x,y) location
    • Actions: NSEW
    • Transition model: update location
    • Goal test: is (x,y)=END
  • Problem: Eat-All-Dots
    • States: {(x,y), dot booleans}
    • Actions: NSEW
    • Transition model: update location and possibly a dot boolean
    • Goal test: dots all false

A search state abstracts away details not needed to solve the problem

Pathing: MN states (positions in an M-by-N maze)

Eat-All-Dots: MN · 2^MN states (position plus one dot boolean per square)
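
To see how quickly the second space grows, here is the arithmetic for a hypothetical 10 x 12 maze with a dot on every square (dimensions chosen only for illustration):

M, N = 10, 12                                # hypothetical maze dimensions
pathing_states = M * N                       # (x, y) positions only
eat_all_dots_states = M * N * 2 ** (M * N)   # position plus one boolean per square

print(pathing_states)                        # 120
print(eat_all_dots_states)                   # 120 * 2**120, about 1.6e38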

Quiz: Safe Passage
  • Problem: eat all dots while keeping the ghosts perma-scared
  • What does the state space have to specify?
    • (agent position, dot booleans, power pellet booleans, remaining scared time)
State Space Graphs
  • State space graph: A mathematical representation of a search problem
    • Nodes are (abstracted) world configurations
    • Arcs represent transitions resulting from actions
    • The goal test is a set of goal nodes (maybe only one)
  • In a state space graph, each state occurs only once!
  • We can rarely build this full graph in memory (it’s too big), but it’s a useful idea
Search Trees
  • A search tree:
    • A “what if” tree of plans and their outcomes
    • The start state is the root node
    • Children correspond to possible action outcomes
    • Nodes show states, but correspond to PLANS that achieve those states
    • For most problems, we can never actually build the whole tree

[Figure: a small search tree; the root is the current state ("This is now"), with children labeled "N", 1.0 and "E", 1.0 leading to possible futures]

Quiz: State Space Graphs vs. Search Trees

Consider this 4-state graph: how big is its search tree (from S)?

[Figure: a 4-state graph over S, a, b, and G, shown next to its search tree rooted at S]

Important: Lots of repeated structure in the search tree!

Searching with a Search Tree
  • Search:
    • Expand out potential plans (tree nodes)
    • Maintain a frontier of partial plans under consideration
    • Try to expand as few tree nodes as possible
General Tree Search

function TREE-SEARCH(problem) returns a solution, or failure
    initialize the frontier using the initial state of problem
    loop do
        if the frontier is empty then return failure
        choose a leaf node and remove it from the frontier
        if the node contains a goal state then return the corresponding solution
        expand the chosen node, adding the resulting nodes to the frontier

  • Important ideas:
    • Frontier
    • Expansion
    • Exploration strategy
  • Main question: which frontier nodes to explore?
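
A minimal Python rendering of the TREE-SEARCH loop above might look like this; the tiny problem class and the choose parameter (which frontier entry to expand next) are our own illustrative scaffolding:

class TinyProblem:
    # Toy 4-state problem used only to exercise the search loop
    graph = {"S": [("A", "go-A", 1), ("B", "go-B", 1)],
             "A": [("G", "go-G", 1)], "B": [("G", "go-G", 1)], "G": []}
    def get_start_state(self): return "S"
    def is_goal_state(self, state): return state == "G"
    def get_successors(self, state): return self.graph[state]

def tree_search(problem, choose):
    # Generic tree search: `choose` picks which frontier index to expand next;
    # that choice is exactly what distinguishes DFS, BFS, and UCS
    frontier = [(problem.get_start_state(), [])]      # (state, plan-so-far) pairs
    while frontier:
        state, plan = frontier.pop(choose(frontier))  # choose and remove a leaf node
        if problem.is_goal_state(state):
            return plan                               # corresponding solution
        for next_state, action, _cost in problem.get_successors(state):
            frontier.append((next_state, plan + [action]))   # expand
    return None                                       # failure

print(tree_search(TinyProblem(), choose=lambda f: -1))   # LIFO choice: ['go-B', 'go-G']
print(tree_search(TinyProblem(), choose=lambda f: 0))    # FIFO choice: ['go-A', 'go-G']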
Depth-First Search

Strategy: expand a deepest node first

Implementation: Frontier is a LIFO stack

[Figure: DFS on the example graph, expanding down one branch of the search tree at a time]
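
A self-contained DFS sketch with an explicit LIFO stack (the adjacency dict is a made-up toy graph, not the one in the figure):

def depth_first_search(graph, start, goal):
    # Tree-style DFS: the frontier is a LIFO stack of (state, plan) pairs
    stack = [(start, [])]
    while stack:
        state, plan = stack.pop()                 # deepest (most recent) node first
        if state == goal:
            return plan
        for nxt in graph.get(state, []):
            stack.append((nxt, plan + [nxt]))
    return None                                   # failure

graph = {"S": ["A", "B"], "A": ["C", "G"], "B": ["C"], "C": ["G"], "G": []}
print(depth_first_search(graph, "S", "G"))        # ['B', 'C', 'G']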

Search Algorithm Properties
  • Complete: Guaranteed to find a solution if one exists?
  • Optimal: Guaranteed to find the least cost path?
  • Time complexity?
  • Space complexity?
  • Cartoon of search tree:
    • b is the branching factor
    • m is the maximum depth
    • solutions at various depths
  • Number of nodes in entire tree?
    • 1 + b + b^2 + … + b^m = O(b^m)

[Cartoon search tree: 1 node at the root, b nodes at depth 1, b^2 nodes at depth 2, …, b^m nodes at the deepest of the m tiers]

Depth-First Search (DFS) Properties
  • What nodes does DFS expand?
    • Some left prefix of the tree.
    • Could process the whole tree!
    • If m is finite, takes time O(b^m)
  • How much space does the frontier take?
    • Only has siblings on path to root, so O(b·m)
  • Is it complete?
    • m could be infinite, so only if we prevent cycles (more later)
  • Is it optimal?
    • No, it finds the “leftmost” solution, regardless of depth or cost


Breadth-First Search

Strategy: expand a shallowest node first

Implementation: Frontier is a FIFO queue

[Figure: BFS on the example graph, expanding the search tree tier by tier]
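
The matching BFS sketch just swaps the stack for a FIFO queue (same toy graph, purely illustrative):

from collections import deque

def breadth_first_search(graph, start, goal):
    # Tree-style BFS: the frontier is a FIFO queue, so shallowest nodes expand first
    frontier = deque([(start, [])])
    while frontier:
        state, plan = frontier.popleft()          # shallowest (oldest) node first
        if state == goal:
            return plan
        for nxt in graph.get(state, []):
            frontier.append((nxt, plan + [nxt]))
    return None

graph = {"S": ["A", "B"], "A": ["C", "G"], "B": ["C"], "C": ["G"], "G": []}
print(breadth_first_search(graph, "S", "G"))      # ['A', 'G'], the fewest actions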

Breadth-First Search (BFS) Properties
  • What nodes does BFS expand?
    • Processes all nodes above shallowest solution
    • Let depth of shallowest solution be s
    • Search takes time O(b^s)
  • How much space does the frontier take?
    • Has roughly the last tier, so O(b^s)
  • Is it complete?
    • s must be finite if a solution exists, so yes!
  • Is it optimal?
    • Only if costs are all 1 (more on costs later)

[Cartoon search tree: 1 node, b nodes, b^2 nodes, …, b^s nodes across the s tiers down to the shallowest solution, with b^m nodes at the bottom]

Quiz: DFS vs BFS
  • When will BFS outperform DFS?
  • When will DFS outperform BFS?

[Demo: dfs/bfs maze water (L2D6)]

Iterative Deepening
  • Idea: get DFS’s space advantage with BFS’s time / shallow-solution advantages
    • Run a DFS with depth limit 1. If no solution…
    • Run a DFS with depth limit 2. If no solution…
    • Run a DFS with depth limit 3. …..
  • Isn’t that wastefully redundant?
    • Generally most work happens in the lowest level searched, so not so bad!

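A sketch of the idea, using a depth-limited variant of the DFS above (our own illustration, not the course code):

def depth_limited_search(graph, start, goal, limit):
    # DFS that refuses to expand nodes deeper than `limit`
    stack = [(start, [])]
    while stack:
        state, plan = stack.pop()
        if state == goal:
            return plan
        if len(plan) < limit:
            for nxt in graph.get(state, []):
                stack.append((nxt, plan + [nxt]))
    return None

def iterative_deepening_search(graph, start, goal, max_depth=50):
    # Run DFS with depth limits 1, 2, 3, ... until a solution appears
    for limit in range(1, max_depth + 1):
        plan = depth_limited_search(graph, start, goal, limit)
        if plan is not None:
            return plan
    return None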

Finding a least-cost path

BFS finds the shortest path in terms of number of actions. It does not find the least-cost path. We will now cover a similar algorithm which does find the least-cost path.

[Figure: the example graph from START to GOAL, with a step cost on each road link]

Uniform Cost Search

Strategy: expand a cheapest node first

Implementation: Frontier is a priority queue (priority: cumulative cost)

[Figure: UCS on the example weighted graph; frontier nodes are labeled with cumulative path costs and are expanded in increasing cost contours]
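
A minimal UCS sketch with Python's heapq as the priority queue (the weighted toy graph and its costs are made up for illustration):

import heapq

def uniform_cost_search(graph, start, goal):
    # Expand the cheapest frontier node first (priority = cumulative path cost)
    frontier = [(0, start, [])]                    # (cumulative cost, state, plan)
    while frontier:
        cost, state, plan = heapq.heappop(frontier)
        if state == goal:
            return plan, cost
        for nxt, step_cost in graph.get(state, []):
            heapq.heappush(frontier, (cost + step_cost, nxt, plan + [nxt]))
    return None, float("inf")

# Edge lists of (successor, step cost)
graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 6)],
         "B": [("G", 1)], "G": []}
print(uniform_cost_search(graph, "S", "G"))        # (['A', 'B', 'G'], 4)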

Uniform Cost Search (UCS) Properties
  • What nodes does UCS expand?
    • Processes all nodes with cost less than cheapest solution!
    • If that solution costs C* and arcs cost at least ε, then the “effective depth” is roughly C*/ε
    • Takes time O(b^(C*/ε)) (exponential in effective depth)
  • How much space does the frontier take?
    • Has roughly the last tier, so O(b^(C*/ε))
  • Is it complete?
    • Assuming best solution has a finite cost and minimum arc cost is positive, yes!
  • Is it optimal?
    • Yes! (Proof next lecture via A*)

[Cartoon search tree: cost contours c ≤ 1, c ≤ 2, c ≤ 3, …, spanning roughly C*/ε “tiers”]

Uniform Cost Issues
  • Remember: UCS explores increasing cost contours
  • The good: UCS is complete and optimal!
  • The bad:
    • Explores options in every “direction”
    • No information about goal location
  • We’ll fix that soon!

c  1

c  2

c  3

Start

Goal