CS 8520: Artificial Intelligence

Intelligent Agents and Search

Paula Matuszek

Fall, 2005

Slides based on Hwee Tou Ng, aima.eecs.berkeley.edu/slides-ppt, which are in turn based on Russell, aima.eecs.berkeley.edu/slides-pdf.



Outline

  • Agents and environments

  • Rationality

  • PEAS (Performance measure, Environment, Actuators, Sensors)

  • Environment types

  • Agent types


Agents

  • An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.

  • Human agent: eyes, ears, and other organs for sensors; hands, legs, mouth, and other body parts for actuators.

  • Robotic agent: cameras and infrared range finders for sensors; various motors for actuators.


Agents and environments

  • The agent function maps from percept histories to actions:

    [f: P* → A]

  • The agent program runs on the physical architecture to produce f

  • agent = architecture + program
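
As a concrete illustration (not from the original slides; all names below are mine), the mapping f: P* → A and the surrounding architecture can be sketched in a few lines of Python:

  from typing import Callable, List

  Percept = str
  Action = str
  AgentFn = Callable[[List[Percept]], Action]   # f: P* -> A

  def run(agent_f: AgentFn, percept_stream: List[Percept]) -> List[Action]:
      """The 'architecture': feed each percept to the program, collect actions."""
      history: List[Percept] = []
      actions: List[Action] = []
      for p in percept_stream:
          history.append(p)                     # the percept sequence grows
          actions.append(agent_f(history))      # the program picks an action
      return actions

  # A toy agent function that only looks at the latest percept:
  print(run(lambda hist: "act-on-" + hist[-1], ["p1", "p2", "p3"]))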


Vacuum-cleaner world

  • Percepts: location and contents, e.g., [A,Dirty]

  • Actions: Left, Right, Suck, NoOp


A vacuum-cleaner agent


Rational agents

  • An agent should strive to "do the right thing", based on what it can perceive and the actions it can perform. The right action is the one that will cause the agent to be most successful

  • Performance measure: An objective criterion for success of an agent's behavior

  • E.g., performance measure of a vacuum-cleaner agent could be amount of dirt cleaned up, amount of time taken, amount of electricity consumed, amount of noise generated, etc.


Rational agents

  • Rational Agent: For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.


Rational agents

  • Rationality is distinct from omniscience (all-knowing with infinite knowledge)

  • Agents can perform actions in order to modify future percepts so as to obtain useful information (information gathering, exploration)

  • An agent is autonomous if its behavior is determined by its own experience (with ability to learn and adapt)


PEAS: Description of an Agent's World

  • Performance measure: How do we assess whether we are doing the right thing?

  • Environment: What is the world we are in?

  • Actuators: How do we affect the world we are in?

  • Sensors: How do we perceive the world we are in?

  • Together these specify the setting for intelligent agent design


PEAS: Taxi Driver

  • Consider, e.g., the task of designing an automated taxi driver:

    • Performance measure: Safe, fast, legal, comfortable trip, maximize profits

    • Environment: Roads, other traffic, pedestrians, customers

    • Actuators: Steering wheel, accelerator, brake, signal, horn

    • Sensors: Cameras, sonar, speedometer, GPS, odometer, engine sensors, keyboard


PEAS

  • Agent: Medical diagnosis system

  • Performance measure: Healthy patient, minimize costs, lawsuits

  • Environment: Patient, hospital, staff

  • Actuators: Screen display (questions, tests, diagnoses, treatments, referrals)

  • Sensors: Keyboard (entry of symptoms, findings, patient's answers)


PEAS

  • Agent: Part-picking robot

  • Performance measure: Percentage of parts in correct bins

  • Environment: Conveyor belt with parts, bins

  • Actuators: Jointed arm and hand

  • Sensors: Camera, joint angle sensors


PEAS

  • Agent: Interactive English tutor

  • Performance measure: Maximize student's score on test

  • Environment: Set of students

  • Actuators: Screen display (exercises, suggestions, corrections)

  • Sensors: Keyboard


Environment types

  • Fully observable (vs. partially observable): An agent's sensors give it access to the complete state of the environment at each point in time.

  • Deterministic (vs. stochastic): The next state of the environment is completely determined by the current state and the action executed by the agent. (If the environment is deterministic except for the actions of other agents, then the environment is strategic)

  • Episodic (vs. sequential): The agent's experience is divided into atomic "episodes" (each episode consists of the agent perceiving and then performing a single action), and the choice of action in each episode depends only on the episode itself.


Environment types

  • Static (vs. dynamic): The environment is unchanged while an agent is deliberating. (The environment is semidynamic if the environment itself does not change with the passage of time but the agent's performance score does.)

  • Discrete (vs. continuous): A limited number of distinct, clearly defined percepts and actions.

  • Single agent (vs. multiagent): An agent operating by itself in an environment.


Environment types

                       Chess with    Chess without    Taxi
                       a clock       a clock          driving
  Fully observable
  Deterministic
  Episodic
  Static
  Discrete
  Single agent


Environment types

                       Chess with    Chess without    Taxi
                       a clock       a clock          driving
  Fully observable     Yes           Yes              No
  Deterministic        Strategic     Strategic        No
  Episodic             No            No               No
  Static               Semi          Yes              No
  Discrete             Yes           Yes              No
  Single agent         No            No               No

  • The environment type largely determines the agent design

  • The simplest environment is fully observable, deterministic, episodic, static, discrete and single-agent.

  • The real world is (of course) partially observable, stochastic, sequential, dynamic, continuous, multi-agent


Agent functions and programs

  • An agent is completely specified by the agent function mapping percept sequences to actions

  • One agent function (or a small equivalence class) is rational

  • Aim: find a way to implement the rational agent function concisely


Table-lookup agent

function TABLE-DRIVEN-AGENT(percept) returns an action
  append percept to the end of percepts
  action ← LOOKUP(percepts, table)
  return action

  • Drawbacks:

    • Huge table

    • Takes a long time to build the table

    • No autonomy

    • Even with learning, it takes a long time to learn the table entries
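
A minimal runnable sketch of the table-driven idea (the table entries and names below are illustrative, not the book's). The table must map every possible percept *sequence* to an action, which is exactly why it explodes:

  from typing import Dict, List, Tuple

  percepts: List[str] = []
  table: Dict[Tuple[str, ...], str] = {
      ("A-Dirty",): "Suck",
      ("A-Dirty", "A-Clean"): "Right",
      ("A-Dirty", "A-Clean", "B-Dirty"): "Suck",
  }

  def table_driven_agent(percept: str) -> str:
      percepts.append(percept)                    # append percept to percepts
      return table.get(tuple(percepts), "NoOp")   # action <- LOOKUP(percepts, table)

  for p in ["A-Dirty", "A-Clean", "B-Dirty"]:
      print(p, "->", table_driven_agent(p))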


Agent types

  • Four basic types in order of increasing generality:

  • Simple reflex agents

  • Model-based reflex agents

  • Goal-based agents

  • Utility-based agents


Simple reflex agents


Simple Reflex Vacuum Agent

function REFLEX-VACUUM-AGENT([location, status]) returns an action
  if status == Dirty then return Suck
  else if location == A then return Right
  else if location == B then return Left

  • Observe the world, choose an action, implement action, done.

  • Problems if environment is not fully-observable.

  • Depending on performance metric, may be inefficient.
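
The pseudocode above translates almost line for line into Python; this sketch (names mine) also shows the shuttling behavior that makes the agent inefficient once everything is clean:

  def reflex_vacuum_agent(location: str, status: str) -> str:
      # Decision depends only on the current percept, never on history.
      if status == "Dirty":
          return "Suck"
      return "Right" if location == "A" else "Left"

  for percept in [("A", "Dirty"), ("A", "Clean"), ("B", "Clean")]:
      print(percept, "->", reflex_vacuum_agent(*percept))
  # ('A', 'Clean') -> Right and ('B', 'Clean') -> Left: it shuttles forever.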


Model-Based Agents

  • Suppose moving has a cost.

  • If a square stays clean once it is clean, then this algorithm will be extremely inefficient.

  • A very simple improvement would be

    • Record when we have cleaned a square

    • Don’t go back once we have cleaned both.

  • We have built a very simple model.


Reflex Agents with State

  • More complex agent with model: a square can get dirty again.

    function REFLEX-VACUUM-AGENT-WITH-STATE([location, status]) returns an action
      persistent: last-cleaned-A and last-cleaned-B, both initially 100
      increment last-cleaned-A and last-cleaned-B
      if status == Dirty then return Suck
      if location == A then
        set last-cleaned-A to 0
        if last-cleaned-B > 3 then return Right else return NoOp
      else
        set last-cleaned-B to 0
        if last-cleaned-A > 3 then return Left else return NoOp

  • The threshold we check last-cleaned against could be tuned.

  • We could track how often we actually find dirt and use that to compute the threshold.
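
A sketch of the stateful variant (class shape, attribute names, and the threshold default are mine; the logic mirrors the pseudocode above):

  class StatefulVacuumAgent:
      def __init__(self, threshold: int = 3):
          self.threshold = threshold
          self.last_cleaned = {"A": 100, "B": 100}   # "cleaned long ago"

      def __call__(self, location: str, status: str) -> str:
          self.last_cleaned["A"] += 1
          self.last_cleaned["B"] += 1
          if status == "Dirty":
              return "Suck"
          self.last_cleaned[location] = 0            # this square is clean now
          other = "B" if location == "A" else "A"
          if self.last_cleaned[other] > self.threshold:
              return "Right" if location == "A" else "Left"
          return "NoOp"

  agent = StatefulVacuumAgent()
  print(agent("A", "Dirty"))   # Suck
  print(agent("A", "Clean"))   # Right (B has not been cleaned recently)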


Model-Based = Reflex Plus State

  • Maintain an internal model of the state of the environment

  • Over time update state using world knowledge

    • How the world changes

    • How actions affect the world

  • Agent can operate more efficiently

  • More effective than a simple reflex agent for partially observable environments


Goal-Based Agent

  • Agent has some information about desirable situations

  • Needed when a single action cannot reach desired outcome

  • Therefore performance measure needs to take into account "the future".

  • Typical model for search and planning.


Utility-Based Agents

  • Possibly more than one goal, or more than one way to reach it

  • Some are better, more desirable than others

  • There is a utility function which captures this notion of "better".

  • Utility function maps a state or sequence of states onto a metric.
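
A one-function illustration (the weights and state encoding are invented for the example): a utility function turns states into numbers, so "better" becomes "higher-scoring".

  def utility(state: dict) -> float:
      # Made-up trade-off: cleanliness is worth more than the energy spent.
      return 10 * state["squares_clean"] - state["energy_used"]

  candidates = [
      {"squares_clean": 2, "energy_used": 15},
      {"squares_clean": 1, "energy_used": 0},
  ]
  print(max(candidates, key=utility))   # the agent prefers the second outcome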


Learning Agents

  • All agents have methods for selecting actions.

  • Learning agents can modify these methods.

  • Performance element: any of the previously described agents

  • Learning element: makes changes to actions

  • Critic: evaluates actions, gives feedback to learning element

  • Problem generator: suggests actions


Solving problems by searching

Chapter 3



Outline

  • Problem-solving agents

  • Problem formulation

  • Example problems


Problem-solving agents


Example: Romania

  • On holiday in Romania; currently in Arad.

  • Flight leaves tomorrow from Bucharest

  • Formulate goal:

    • be in Bucharest

  • Formulate problem:

    • states: various cities

    • actions: drive between cities

  • Find solution:

    • sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest


Problem types

  • Deterministic, fully observable ⇒ single-state problem

    • Agent knows exactly which state it will be in; solution is a sequence

  • Non-observable ⇒ sensorless problem (conformant problem)

    • Agent may have no idea where it is; solution is a sequence

  • Nondeterministic and/or partially observable ⇒ contingency problem

    • Percepts provide new information about the current state

    • Often interleave search and execution

  • Unknown state space ⇒ exploration problem


Example: vacuum world

  • Single-state, start in #5. Solution?


Example: vacuum world

  • Single-state, start in #5. Solution? [Right, Suck]

  • Sensorless, start in {1,2,3,4,5,6,7,8}; e.g., Right goes to {2,4,6,8}. Solution?


Example: vacuum world

  • Sensorless, start in {1,2,3,4,5,6,7,8}; e.g., Right goes to {2,4,6,8}. Solution? [Right, Suck, Left, Suck]

  • Contingency

    • Nondeterministic: Suck may dirty a clean carpet

    • Partially observable: location, dirt at current location.

    • Percept: [L, Clean], i.e., start in #5 or #7. Solution?


Example: vacuum world

  • Sensorless, start in {1,2,3,4,5,6,7,8}; e.g., Right goes to {2,4,6,8}. Solution? [Right, Suck, Left, Suck]

  • Contingency

    • Nondeterministic: Suck may dirty a clean carpet

    • Partially observable: location, dirt at current location.

    • Percept: [L, Clean], i.e., start in #5 or #7. Solution? [Right, if dirt then Suck]


Single-state problem formulation

A problem is defined by four items:

  • initial state, e.g., "at Arad"

  • actions or successor function S(x) = set of action–state pairs

    • e.g., S(Arad) = {<Arad → Zerind, Zerind>, … }

  • goal test, which can be:

    • explicit, e.g., x = "at Bucharest"

    • implicit, e.g., Checkmate(x)

  • path cost (additive)

    • e.g., sum of distances, number of actions executed, etc.

    • c(x,a,y) is the step cost, assumed to be ≥ 0

  • A solution is a sequence of actions leading from the initial state to a goal state
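
The four items can be packaged directly into a data structure; this sketch (the class shape is mine; the distances are from the AIMA Romania map) makes the definitions concrete:

  class Problem:
      def __init__(self, initial, successors, goal_test, step_cost):
          self.initial = initial            # initial state
          self.successors = successors      # S(x): state -> [(action, state), ...]
          self.goal_test = goal_test        # explicit or implicit test
          self.step_cost = step_cost        # c(x, a, y), assumed >= 0

  roads = {
      "Arad": [("Sibiu", 140), ("Zerind", 75), ("Timisoara", 118)],
      "Sibiu": [("Fagaras", 99), ("Rimnicu Vilcea", 80)],
      "Fagaras": [("Bucharest", 211)],
  }

  romania = Problem(
      initial="Arad",
      successors=lambda x: [("go " + y, y) for y, _ in roads.get(x, [])],
      goal_test=lambda x: x == "Bucharest",
      step_cost=lambda x, a, y: dict(roads.get(x, []))[y],
  )

  # Path cost of the solution Arad -> Sibiu -> Fagaras -> Bucharest:
  path = ["Arad", "Sibiu", "Fagaras", "Bucharest"]
  print(sum(romania.step_cost(x, None, y) for x, y in zip(path, path[1:])))  # 450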


Selecting a state space

  • Real world is absurdly complex ⇒ the state space must be abstracted for problem solving

  • (Abstract) state = set of real states

  • (Abstract) action = complex combination of real actions

    • e.g., "Arad → Zerind" represents a complex set of possible routes, detours, rest stops, etc.

  • For guaranteed realizability, any real state "in Arad" must get to some real state "in Zerind"

  • (Abstract) solution = set of real paths that are solutions in the real world

  • Each abstract action should be "easier" than the original problem


Vacuum world state space graph

  • States? Actions? Goal test? Path cost?


Vacuum world state space graph

  • States? Integer dirt and robot locations

  • Actions? Left, Right, Suck

  • Goal test? No dirt at any location

  • Path cost? 1 per action


Example: The 8-puzzle

  • States?

  • Actions?

  • Goal test?

  • Path cost?


Example: The 8-puzzle

  • States? Locations of tiles

  • Actions? Move blank left, right, up, down

  • Goal test? = goal state (given)

  • Path cost? 1 per move

  [Note: optimal solution of the n-puzzle family is NP-hard]
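
As a sketch of this formulation (the state encoding is mine): represent a state as a 9-tuple with 0 for the blank; the successor function then moves the blank within the 3x3 grid.

  def successors(state):
      i = state.index(0)                    # where the blank is
      row, col = divmod(i, 3)
      moves = {"Up": -3, "Down": 3, "Left": -1, "Right": 1}
      for action, delta in moves.items():
          if (action == "Up" and row == 0) or (action == "Down" and row == 2) \
             or (action == "Left" and col == 0) or (action == "Right" and col == 2):
              continue                      # blank would leave the board
          s = list(state)
          j = i + delta
          s[i], s[j] = s[j], s[i]           # slide a tile into the blank spot
          yield action, tuple(s)

  goal = (0, 1, 2, 3, 4, 5, 6, 7, 8)
  for action, nxt in successors((1, 0, 2, 3, 4, 5, 6, 7, 8)):
      print(action, nxt, nxt == goal)       # "Left" reaches the goal state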


Example: robotic assembly

  • States?

  • Actions?

  • Goal test?

  • Path cost?


Example: robotic assembly

  • States? Real-valued coordinates of robot joint angles and of the parts of the object to be assembled

  • Actions? Continuous motions of robot joints

  • Goal test? Complete assembly

  • Path cost? Time to execute


Tree search algorithms

  • Basic idea:

    • Offline, simulated exploration of the state space by generating successors of already-explored states (a.k.a. expanding states)


Tree search example


Implementation: general tree search
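
The figure on this slide is AIMA's TREE-SEARCH pseudocode; the following Python rendering (names mine, reusing the `romania` problem sketched earlier) shows the node data structure and the expand/choose loop:

  from collections import deque

  class Node:
      def __init__(self, state, parent=None, action=None, path_cost=0.0):
          self.state, self.parent = state, parent
          self.action, self.path_cost = action, path_cost
          self.depth = 0 if parent is None else parent.depth + 1

  def expand(problem, node):
      # Uses the problem's successor function to build child nodes.
      return [Node(s, node, a, node.path_cost + problem.step_cost(node.state, a, s))
              for a, s in problem.successors(node.state)]

  def tree_search(problem, pop):
      frontier = deque([Node(problem.initial)])
      while frontier:
          node = pop(frontier)              # the strategy: which node to expand next
          if problem.goal_test(node.state):
              return node
          frontier.extend(expand(problem, node))
      return None                           # no solution found

  goal = tree_search(romania, pop=lambda f: f.popleft())   # breadth-first order
  path = []
  while goal:
      path.append(goal.state)
      goal = goal.parent
  print(path[::-1])   # ['Arad', 'Sibiu', 'Fagaras', 'Bucharest']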


Implementation: states vs. nodes

  • A state is a (representation of) a physical configuration

  • A node is a data structure constituting part of a search tree; it includes state, parent node, action, path cost g(x), and depth

  • The Expand function creates new nodes, filling in the various fields and using the SuccessorFn of the problem to create the corresponding states


Search strategies

  • A search strategy is defined by picking the order of node expansion (e.g., breadth-first, depth-first)

  • Strategies are evaluated along the following dimensions:

    • Completeness: does it always find a solution if one exists?

    • Time complexity: number of nodes generated

    • Space complexity: maximum number of nodes in memory

    • Optimality: does it always find a least-cost solution?

  • Time and space complexity are measured in terms of:

    • b: maximum branching factor of the search tree

    • d: depth of the least-cost solution

    • m: maximum depth of the state space (may be infinite)
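
With the tree_search sketch on the previous slide, the strategy is literally just the pop order on the frontier (again assuming the earlier illustrative code):

  bfs_pop = lambda frontier: frontier.popleft()   # FIFO queue: breadth-first
  dfs_pop = lambda frontier: frontier.pop()       # LIFO stack: depth-first

  # Both find Bucharest on the finite Romania fragment, but they generate
  # nodes in a different order, which drives the four evaluation criteria.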


Summary

  • We will view our systems as agents.

  • An agent operates in a world which can be described by its Performance measure, Environment, Actuators, and Sensors.

  • A rational agent chooses actions which maximize its performance measure, given the information it has.

  • Agents range in complexity from simple reflex agents to complex utility-based agents.

  • Problem-solving agents search through a problem or state space for an acceptable solution.

  • The formalization of a good state space is hard, and critical to success. It must abstract the essence of the problem so that:

    • It is easier than the real-world problem.

    • A solution can be found.

    • The solution maps back to the real-world problem and solves it.
