Artificial Intelligence Chapter 02

Presentation Transcript


  1. Artificial Intelligence Chapter 02 Intelligent Agents

  2. Outline • Agents and environments • Rationality • PEAS (Performance measure, Environment, Actuators, Sensors) • Environment types • Agent types

  3. Agents • An agent is a system (entity) that can • perceive (observe) its environment through sensors and • act on the environment through actuators/effectors. • The agent’s behavior is described by its agent function, which maps percept sequences to actions. • A rational agent is an agent that acts so as to maximize some performance measure.

  4. Agents: Examples • Human agent: • sensors are eyes, ears, and other organs; • actuators (effectors) are hands, legs, mouth, and other body parts • Robotic agent: • sensors are cameras, sonar, lasers, radar, and infrared range finders; • actuators are grippers, manipulators, motors • Other agents include softbots, thermostats, etc.

  5. Components of an AI System [diagram: an AI system connected to its environment through sensors and actuators]

  6. Agents and environments [diagram: the agent receives percepts from the environment through its sensors and acts on the environment through its actuators; the “?” marks the agent program that maps percepts to actions]

  7. Agents as Mappings • An agent can be seen as a mapping between percept sequences and actions. • The agent function maps from percept histories to actions: f: P* → A • An agent program is an implementation of an agent function: it maps every possible percept sequence to an action the agent can perform. • The agent program runs on the physical system/architecture (such as a computer with physical sensors and actuators) to produce f: agent = architecture + program
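
As a concrete illustration, here is a minimal Python sketch of a table-driven agent, the most literal realization of f: P* → A. The lookup table below is an assumed toy mapping for the two-square vacuum world discussed later, not something given on the slide:

    # Table-driven agent sketch: the table is keyed by whole percept
    # histories, which is why this approach does not scale.
    percepts = []  # the percept history P*

    table = {
        (("A", "Dirty"),): "Suck",
        (("A", "Clean"),): "Right",
        (("B", "Dirty"),): "Suck",
        (("B", "Clean"),): "Left",
        # Longer percept sequences would each need their own entry;
        # missing entries fall back to NoOp below.
    }

    def table_driven_agent(percept):
        """Implements f: P* -> A by table lookup on the full percept history."""
        percepts.append(percept)
        return table.get(tuple(percepts), "NoOp")

    print(table_driven_agent(("A", "Dirty")))  # -> Suck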

  8. Agents as Mappings • An agent can be seen as a mapping between percept sequences and actions: f: P* → A • The agent program runs on a physical architecture to produce f. • The less an agent relies on its built-in knowledge, as opposed to its current percept sequence, the more autonomous it is.

  9. Summary: Definition of an Intelligent Agent • What is an intelligent agent? • An (intelligent) agent is an autonomous entity which: • perceives (observes) its environment through sensors and • acts rationally upon that environment using its actuators/effectors. • An agent gets percepts one at a time and maps this percept sequence to actions. • It is rational (in the economic sense) in that it directs its activity towards achieving goals. • Intelligent agents may also learn or use knowledge to achieve their goals. • Another definition: An agent is a computer software system whose main characteristics are situatedness, autonomy, adaptivity, and sociability.

  10. Vacuum-cleaner world: An Agent • The vacuum agent: • Percepts: location and contents, e.g., [A, Dirty] • e.g., two locations: square A and square B • e.g., contents: there is dirt in the square • Actions: move Left or Right, Suck up the dirt, NoOp (do nothing)

  11. A Vacuum-Cleaner Agent

  12. A Vacuum-Cleaner Agent function REFLEX_VACUUM_AGENT([location, status]) returns an action if status = Dirty then return Suck else if location = A then return Right else if location = B then return Left

  13. A vacuum-cleaner agent Its simple agent function is: • If the current square is dirty, then suck; otherwise, move to the other square. It is a simple reflex agent because, in the two-state vacuum environment (each percept is (Location, Dirty) or (Location, Clean)): • it selects actions on the basis of the current percept, ignoring the rest of the percept history; • its decision is based only on the current location and whether that location contains dirt.

  14. A vacuum-cleaner agent An agent program for this simple reflex vacuum agent is: function REFLEX_VACUUM_AGENT(location, status) returns an action if status = Dirty then return Suck else if location = A then return Right else if location = B then return Left
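
A direct Python transcription of this pseudocode, as a sketch (the string encodings of locations, statuses, and actions are assumptions for illustration):

    def reflex_vacuum_agent(location, status):
        """Simple reflex vacuum agent: acts on the current percept only."""
        if status == "Dirty":
            return "Suck"
        elif location == "A":
            return "Right"
        elif location == "B":
            return "Left"

    # The agent ignores all history; identical percepts give identical actions.
    print(reflex_vacuum_agent("A", "Dirty"))  # -> Suck
    print(reflex_vacuum_agent("A", "Clean"))  # -> Right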

  15. A vacuum-cleaner agent Problem: Suppose that a simple reflex vacuum agent is deprived of its location sensor and has only a dirt sensor. This agent has only two percepts: [Dirty] and [Clean]. It can Suck in response to [Dirty], but always moving Left fails (forever) if it happens to start in square A, and always moving Right fails (forever) if it happens to start in square B. Infinite loops are often unavoidable for simple reflex agents operating in partially observable environments.
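
Escape from such infinite loops is possible if the agent can randomize its actions. A minimal sketch, reusing the action names from the vacuum world above:

    import random

    def randomized_reflex_agent(status):
        """Location-less vacuum agent: sucks when dirty, otherwise moves
        in a random direction, so it eventually reaches the other square
        instead of pushing against the same wall forever."""
        if status == "Dirty":
            return "Suck"
        return random.choice(["Left", "Right"])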

  16. Consider the driver of an automated car C. If the car in front brakes and its brake lights come on, then C should notice this and initiate braking. Based on the visual input, the agent program establishes the condition “The car in front is braking.” This condition then triggers an established connection in the agent program to the action “initiate braking”. We call such a connection a condition-action rule, written as: if car_in_front_is_braking then initiate_braking. The previous simple reflex agent program is specific to one particular vacuum environment. We can instead write a general-purpose interpreter in schematic form for condition-action rules and then create rule sets for specific task environments.

  17. Schematic diagram of a simple reflex agent, using simple “if-then” rules. [diagram: sensors report “what the world is like now”; condition-action rules determine “what action I should do now”; actuators act on the environment]

  18. This is a simple reflex agent. It acts according to a rule whose condition matches the current state, as defined by the percept. • function SIMPLE_REFLEX_AGENT(percept) returns an action • persistent: rules, a set of condition-action rules • state ← INTERPRET_INPUT(percept) • rule ← RULE_MATCH(state, rules) • action ← rule.ACTION • return action • The INTERPRET_INPUT(percept) function generates an abstracted description of the current state from the percept. • The RULE_MATCH(state, rules) function returns the first rule in the set of rules that matches the given state description.

  19. function SIMPLE_REFLEX_AGENT(percept) returns an action • persistent: rules, a set of condition-action rules • state ← INTERPRET_INPUT(percept) • rule ← RULE_MATCH(state, rules) • action ← rule.ACTION • return action • This will work • only if the correct decision can be made on the basis of only the current percept – that is, • only if the environment is fully observable. • Even a little bit of unobservability can cause serious trouble.
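
A Python sketch of this general interpreter. The percept encoding (a dict of observed conditions) and the braking rule set are illustrative assumptions; only the overall structure follows the pseudocode:

    def interpret_input(percept):
        # Generate an abstracted state description from the percept.
        # In this sketch the state is assumed to be the percept itself.
        return percept

    def rule_match(state, rules):
        # Return the first rule whose condition matches the state description.
        for rule in rules:
            condition, action = rule
            if condition(state):
                return rule
        return (lambda s: True, "NoOp")  # fallback when no rule matches

    # Illustrative rule set encoding the braking example from slide 16.
    rules = [
        (lambda s: s.get("car_in_front_is_braking"), "initiate_braking"),
    ]

    def simple_reflex_agent(percept):
        state = interpret_input(percept)
        rule = rule_match(state, rules)
        action = rule[1]  # rule.ACTION
        return action

    print(simple_reflex_agent({"car_in_front_is_braking": True}))  # -> initiate_braking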

  20. More Examples of Artificial Agents

  21. Rational Agents The rationality of an agent depends on • the performance measure defining the agent’s degree of success (Performance) • the percept sequence, the sequence of all the things perceived by the agent (Sensors) • the agent’s knowledge of the environment (Environment) • the actions that the agent can perform (Actuators) For each possible percept sequence, an ideal rational agent does whatever it can to maximize its performance measure, based on the percept sequence and its built-in knowledge.

  22. Rational agents Definition of Rational Agent: (with respect to PEAS) • For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has. f: P* → A

  23. Rationality • A rational agent should strive to “do the right thing”, based on • what it can perceive and • the actions it can perform. • What function f: P* → A will cause the agent to be most successful? • Can f be implemented in a small agent program? • Performance measure: • an objective criterion for success of an agent’s behavior. • A fixed performance measure evaluates the sequence of observed action effects on the environment.

  24. Rationality • E.g., performance measure of a vacuum-cleaner agent could be: • amount of dirt cleaned up, • amount of time taken, • amount of electricity consumed, • amount of noise generated, etc. • Fixed performance measure evaluates the environment sequence • one point per square cleaned up in time T? • one point per clean square per time step, minus one point per move? • penalize for > k dirty squares?
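
As an illustration, here is a sketch of the second option, “one point per clean square per time step, minus one point per move”. The history encoding (a list of (square-status, action) pairs) is an assumption made for this example:

    def performance(history):
        """One point per clean square per time step, minus one point per move."""
        score = 0
        for squares, action in history:
            score += sum(1 for status in squares.values() if status == "Clean")
            if action in ("Left", "Right"):
                score -= 1
        return score

    history = [({"A": "Dirty", "B": "Clean"}, "Suck"),
               ({"A": "Clean", "B": "Clean"}, "Right")]
    print(performance(history))  # 1 + 2 - 1 = 2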

  25. Rational Agents An agent acts rationally when it “does the right thing”. • What is rational at any given time depends on the following (PEAS): • Performance measure – defines the criterion of success (goals?) • Environment – the agent’s prior knowledge of the environment • Actuators – the actions that the agent can perform • Sensors – the agent’s percept sequence to date • We must first specify this setting for intelligent agent design.

  26. Rational agents • Rational ≠ omniscient: rationality is distinct from omniscience (knowing everything, with infinite knowledge) • The environment may be partially observable. • An omniscient agent knows the actual outcome of its actions and can act accordingly; this is impossible in reality. • Rational ≠ clairvoyant: a rational agent is not clairvoyant (perceiving beyond the ordinary senses) • The environment may be stochastic (randomly determined; it may not be predicted precisely). • Rational ≠ successful: thus a rational agent is not always successful.

  27. Rational agents • Rational ⇒ exploration, learning, autonomy • Agents can perform actions in order • to modify future percepts so as • to obtain useful information (information gathering, exploration) • An agent is autonomous if • its behavior is determined by its own experience (with ability to learn and adapt)

  28. PEAS : Applications To design a rational agent, we must specify: • Performance measure • Environment • Actuators • Sensors

  29. PEAS : Applications Example: Taxi Driver Consider the task of designing an automated taxi driver. We need to define the PEAS from various “sources”: • Agent: Automated Taxi Driver • Performance measure? Safe, fast, legal, comfortable trip, maximize profits, … • Environment? Roads (streets/freeways), traffic, pedestrians, weather, … • Actuators? Steering wheel, accelerator, brake, signal, horn, display, speaker, … • Sensors? Cameras, speedometer, GPS, odometer, gauges, sonar, engine sensors, accelerometer
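
A PEAS description can be written down as a simple record. This dataclass is an illustrative sketch, not a standard API; the field values restate the slide’s taxi example:

    from dataclasses import dataclass

    @dataclass
    class PEAS:
        performance: list  # criteria of success
        environment: list  # what the agent operates in
        actuators: list    # what the agent can do
        sensors: list      # what the agent can perceive

    taxi = PEAS(
        performance=["safe", "fast", "legal", "comfortable trip", "maximize profits"],
        environment=["roads", "traffic", "pedestrians", "weather"],
        actuators=["steering wheel", "accelerator", "brake", "signal", "horn", "display"],
        sensors=["cameras", "speedometer", "GPS", "odometer", "gauges", "sonar"],
    )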

  30. PEAS : Applications • Agent: Medical diagnosis system • Performance measure? Healthy patient, minimize costs, lawsuits • Environment? Patient, hospital, staff • Actuators? Screen display (questions, tests, diagnoses, treatments, referrals) • Sensors? Keyboard (entry of symptoms, findings, patient's answers)

  31. PEAS : Applications • Agent: Part-picking robot • Performance measure? Percentage of parts in correct bins • Environment? Conveyor belt with parts, bins • Actuators? Jointed arm and hand • Sensors? Camera, joint angle sensors

  32. PEAS : Applications • Agent: Interactive English tutor • Performance measure? Maximize student's score on test • Environment? Set of students • Actuators? Screen display (exercises, suggestions, corrections) • Sensors? Keyboard

  33. PEAS : Applications Example: Internet shopping agent • Performance measure? price, quality, appropriateness, efficiency • Environment? current and future WWW sites, vendors, shippers • Actuators? display to user, follow URL, fill in form • Sensors? HTML pages and data (text, graphics, scripts)

  34. Environment types With respect to an agent, an environment may or may not be: • observable: the agent’s sensors detect all aspects of the environment relevant to the choice of action • deterministic: the next state is completely determined by the current state and the actions selected by the agent • episodic: the agent’s experience is divided into “episodes”; the quality of the agent’s actions does not depend on previous episodes • static: it does not change while the agent is deliberating • discrete: there are a limited number of distinct, clearly defined percepts and actions • single-agent: there are no other agents in the environment …

  35. Environment types - Examples

  36. Environment types Fully observable (vs. partially observable): • The task environment is fully observable: • if an agent's sensors give it access to the complete state of the environment. • The task environment is unobservable: • if the agent has no sensors at all. Deterministic (vs. stochastic): • …

  37. Environment types Fully observable (vs. partially observable): Deterministic (vs. stochastic): • The task environment is deterministic: • if the next state of the environment is completely determined by the current state and the action executed by the agent. • The task environment is stochastic: • otherwise (i.e., the next state is randomly determined). • The task environment is strategic: • if the environment is deterministic except for the actions of other agents. • …

  38. Environment types Fully observable (vs. partially observable, unobservable): Deterministic (vs. stochastic, strategic): Uncertain (vs. nondeterministic): • An environment is uncertain: • if it is not fully observable or not deterministic. • A nondeterministic environment is: • one in which actions are characterized by their possible outcomes, without any attached probabilities. • …

  39. Environment types Episodic (vs. sequential): • The task environment is episodic: • if the agent's experience is divided into atomic "episodes" • each episode consists of the agent • perceiving (receives a percept) and then • performing a single action, and • the choice of action in each episode depends only on the episode itself. • Example: To spot defective parts on an assembly line, an agent • makes a decision on the current part, regardless of previous decisions. • The current decision doesn’t affect whether the next part is defective. • …

  40. Environment types Episodic (vs. sequential): • The task environment is episodic: • if the agent's experience is divided into atomic "episodes" • … • The task environment is sequential: • if the current decision could affect all future decisions. • Example: Chess and taxi driving are sequential. • Short-term actions can have long-term consequences. • Episodic environments are much simpler than sequential environments because the agent does not need to think ahead.

  41. Environment types Static (vs. dynamic): • The environment is static for an agent: • if the environment is unchanged while the agent is deliberating. • The environment is dynamic: • otherwise. • The environment is semidynamic: • if the environment itself does not change with the passage of time but the agent's performance score does. Example: • Taxi driving is dynamic: the other cars and the taxi itself keep moving while the driving algorithm dithers about what to do next. • Chess, when played with a clock, is semidynamic. • Crossword puzzles are static.

  42. Environment types Discrete (vs. continuous): • A limited number of distinct, clearly defined percepts and actions. • The discrete/continuous distinction applies • to the state of the environment, • to the way time is handled, and • to the percepts and actions of the agent. • Example:

  43. Environment types Discrete (vs. continuous): • Example: • The chess environment has a finite number of distinct states (excluding the clock). • Chess has a discrete set of percepts and actions. • Taxi driving is a continuous-state and continuous-time problem: • the speed and location of the taxi and of the other vehicles sweep through a range of continuous values and do so smoothly over time. • Taxi-driving actions are also continuous (steering angles, etc.). • Input from digital cameras is discrete, but it can be treated as representing continuously varying intensities and locations.

  44. Environment types Single agent (vs. multiagent): • An agent operating by itself in an environment. Example: • An agent solving a crossword puzzle by itself is clearly in a single-agent environment, whereas • an agent playing chess is in a two-agent environment. • Does an agent A (the taxi driver, for example) have to treat an object B (another vehicle) as an agent, or can B be treated merely as an object behaving according to the laws of physics, analogous to waves at the beach or leaves blowing in the wind? • The key distinction is whether B’s behavior is best described as maximizing a performance measure whose value depends on agent A’s behavior.

  45. Environment types Single agent (vs. multiagent): Competitive (vs. cooperative): • Example • In chess, the opponent entity B is trying to maximize its performance measure, which, by the rules of chess, minimizes agent A’s performance measure. Thus, chess is a competitive multiagent environment. • In the taxi-driving environment, avoiding collisions maximizes the performance measure of all agents, so it is a partially cooperative multiagent environment. It is also partially competitive because, for example, only one car can occupy a parking space.

  46. Environment types Known (vs. unknown): • The distinction refers not to the environment itself but to the agent’s (or designer’s) state of knowledge about the “laws of physics” of the environment. • In a known environment, • the outcomes (or outcome probabilities, if the environment is stochastic) for all actions are given. • If the environment is unknown, • the agent will have to learn how it works in order to make good decisions. Distinguishing known and unknown environments from fully and partially observable environments: …

  47. Environment types Known (vs. unknown): • Distinguishing known and unknown environments from fully and partially observable environments: • It is quite possible for a known environment to be partially observable: • for example, in solitaire card games, I know the rules but am still unable to see the cards that have not yet been turned over. • An unknown environment can be fully observable: • for example, in a new video game, the screen may show the entire game state, but I still don’t know what the buttons do until I try them.

  48. Agent functions and programs - Summary • Task environments vary along several significant dimensions. They can be: • fully or partially observable, or unobservable • Whether the agent has access to the complete state of the environment with respect to its choice of actions. • single-agent or multiagent • One or more agents in the environment. • deterministic, stochastic, or strategic • The current state and the executed action determine the next state; strategic if other agents' actions are also involved. • episodic or sequential • The agent's experience is divided into "episodes"; in each episode the agent perceives and then performs a single action, and the choice of action depends only on the episode itself. It is sequential if the current decision (short-term actions) could affect all future decisions (long-term consequences). • static, dynamic, or semidynamic • Whether the environment is unchanged while the agent is deliberating. • discrete or continuous • Whether there is a finite number of distinct, clearly defined percepts and actions. • known or unknown • Whether all actions' outcomes are given, or the agent has to learn how the environment works.
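
As a closing illustration, these dimensions can be captured in a small record type. This is a sketch: the field names and string encodings are assumptions, and the taxi classification follows the lecture's examples:

    from dataclasses import dataclass

    @dataclass
    class TaskEnvironment:
        """The dimensions along which task environments vary, per this summary."""
        observable: str   # "fully", "partially", or "unobservable"
        agents: str       # "single" or "multi"
        determinism: str  # "deterministic", "stochastic", or "strategic"
        episodic: bool    # True = episodic, False = sequential
        dynamics: str     # "static", "dynamic", or "semidynamic"
        discrete: bool    # True = discrete, False = continuous
        known: bool       # True = known, False = unknown

    # Taxi driving, classified as in the lecture's examples:
    taxi_env = TaskEnvironment(
        observable="partially", agents="multi", determinism="stochastic",
        episodic=False, dynamics="dynamic", discrete=False,
        known=True,  # assumed: the designers know the traffic "laws of physics"
    )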
