Understanding Agents and Environments

Learn about agents and their environments, including human agents, robotic agents, and software agents. Explore the structure of intelligent agents and the concept of rationality. Discover different types of agent environments and their properties.

Presentation Transcript


  1. What's an agent? An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators. • A human agent has eyes, ears, and other organs for sensors, and hands, legs, and a vocal tract for actuators. • A robotic agent might have cameras and infrared range finders for sensors and various motors for actuators. • A software agent receives keystrokes, file contents, and network packets as sensory inputs and acts on the environment by displaying on the screen, writing files, and sending network packets.
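
To make the percept/action cycle concrete, here is a minimal Python sketch of the idea on this slide; the Agent class and the environment methods percept() and execute_action() are illustrative names, not part of the slides.

```python
# Minimal sketch of the agent abstraction: sensors deliver percepts,
# the agent program chooses an action, actuators carry it out.
# (Class and method names are assumptions for illustration.)

class Agent:
    def program(self, percept):
        """Map the latest percept to an action (to be overridden)."""
        raise NotImplementedError


def run(agent, environment, steps=10):
    """Couple an agent to its environment: perceive, decide, act, repeat."""
    for _ in range(steps):
        percept = environment.percept()        # read the sensors
        action = agent.program(percept)        # the agent program decides
        environment.execute_action(action)     # drive the actuators
```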

  2. Environment task • We first specify the setting. Let's design an automated taxi: • Performance measure: • Environment: • Actuators: • Sensors:

  3. AGENTS AND ENVIRONMENTS • Percept: the agent's perceptual inputs at an instant. • The agent function maps from percept sequences to actions: [f: P* → A]. • The agent program runs on the physical architecture to produce f: agent = architecture + program.

  4. Percept and percept sequence • Percept refers to the agent's perceptual inputs at any given instant. • Percept sequence is the complete history of everything the agent has ever perceived. • An agent's choice of action at any given instant can depend on the entire percept sequence observed to date, but not on anything it hasn't perceived.

  5. AGENT FUNCTION & AGENT PROGRAM • Mathematically speaking, we say that an agent's behavior is described by the agent function that maps any given percept sequence to an action. • Internally, the agent function for an artificial agent will be implemented by an agent program. • The agent function is an abstract mathematical description; the agent program is a concrete implementation, running within some physical system.

  6. The Structure of Intelligent Agents An agent's structure can be viewed as: Agent = Architecture + Agent Program, where Architecture = the machinery that the agent program executes on, and Agent Program = an implementation of an agent function.

  7. Simple Example: the vacuum-cleaner world •This particular world has just two locations: squares A and B •The vacuum agent perceives which square it is in and whether there is dirt in the square •It can choose to move left, move right, suck up the dirt, or do nothing •One very simple agent function is the following: if the current square is dirty, then suck; otherwise, move to the other square
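
A minimal sketch of that agent function in Python, assuming the percept is a (location, status) pair as described on the slide:

```python
def vacuum_agent(percept):
    """Agent function for the two-square vacuum world described above."""
    location, status = percept                 # e.g. ('A', 'Dirty')
    if status == 'Dirty':
        return 'Suck'
    elif location == 'A':
        return 'Right'                         # move to the other square
    else:
        return 'Left'


print(vacuum_agent(('A', 'Dirty')))            # -> Suck
print(vacuum_agent(('B', 'Clean')))            # -> Left
```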

  8. PEAS • We first specify the setting. Let's design an automated taxi: • Performance measure: Be safe, reach destination, maximize profits, obey laws, ... • Environment: Urban streets, freeways, traffic, pedestrians, weather, customers, ... • Actuators: Steering wheel, accelerator, brake, horn • Sensors: Video, accelerometers, gauges, engine sensors, keyboard, GPS, ...
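
One way to keep a PEAS description explicit is to record it as data; the sketch below mirrors the taxi example above (the PEAS class is an illustration, not a standard API).

```python
from dataclasses import dataclass


@dataclass
class PEAS:
    """A PEAS task description (illustrative record type)."""
    performance: list
    environment: list
    actuators: list
    sensors: list


automated_taxi = PEAS(
    performance=["be safe", "reach destination", "maximize profits", "obey laws"],
    environment=["urban streets", "freeways", "traffic", "pedestrians", "weather", "customers"],
    actuators=["steering wheel", "accelerator", "brake", "horn"],
    sensors=["video", "accelerometers", "gauges", "engine sensors", "keyboard", "GPS"],
)
```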

  9. An aside on actuators and sensors • Some agents can modify their own actuators and sensors with the use of tools: • Wasps • Ravens • Dolphins • Gorillas • Human beings

  10. PEAS • A medical diagnosis system? • Performance measure: Healthy patient, minimal costs, no lawsuits, ... • Environment: Patient, hospital, pharmacy, doctors, nurses, equipment, ... • Actuators: Screen display (questions, tests, diagnoses, treatments, referrals, ...) • Sensors: Keyboard (entry of symptoms, findings, patient's answers, ...)

  11. PEAS • How about an Internet shopping agent? • Performance measure: Price, quality, appropriateness, efficiency, ... • Environment: Current and future Web sites, vendors, shippers, ... • Actuators: Display to user, follow URL, fill in form • Sensors: Web pages (text, graphics, scripts, ...)

  12. Rational agents • A rational agent is one that does the right thing. • What is rational at any given time depends on four things: • The performance measure that defines the criterion of success. • The agent's prior knowledge of the environment. • The actions that the agent can perform. • The agent's percept sequence to date.

  13. Rational agents • Definition of a rational agent: for each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.

  14. Agent Environments The critical decision an agent faces is determining which action to perform to best satisfy its design objectives. Agent environments are classified based on different properties that can affect the complexity of the agent’s decision-making process

  15. Question: Is a knowledge-based agent also rational?

  16. Agent Environments Accessible vs. inaccessible o An accessible environment is one in which the agent can obtain complete, timely and accurate information about the state of the environment. The more accessible an environment, the less complicated it is to build agents to operate within it. Most moderately complex environments are inaccessible.

  17. Agent Environments Deterministic vs. non-deterministic o Most reasonably complex systems are non-deterministic – the state that will result from an action is not guaranteed, even when the system is in a similar state before the action is applied. This uncertainty presents a greater challenge to the agent designer.

  18. Agent Environments Episodic vs. non-episodic o In an episodic environment, the agent's experience is divided into a number of discrete episodes, with no link between the agent's performance in different episodes. Such an environment is simpler to design agents for, since there is no need to reason about interactions between the current and future episodes; only the current episode needs to be considered.

  19. Agent Environments Static vs. dynamic o Static environments remain unchanged except for the results produced by the actions of the agent. A dynamic environment has other processes operating on it thereby changing the environment outside the control of the agent. A dynamic environment obviously requires a more complex agent design.

  20. Agent Environments Discrete vs. continuous o If there are a fixed and finite number of actions and percepts, then the environment is discrete. A chess game is a discrete environment while driving a taxi is an example of a continuous one.

  21. Agent examples A simple example of an agent in a physical environment is a thermostat for a heater. The thermostat receives input from a sensor, which is embedded in the environment, to detect the temperature. Two states are possible: (1) temperature too cold and (2) temperature OK. Each state has an associated action: (1) too cold → turn the heating on and (2) temperature OK → turn the heating off. The first action has the effect of raising the room temperature, but this is not guaranteed. If cold air continuously comes into the room, the added heat may not have

  22. Agent examples the desired effect of raising the room temperature. Background software processes which monitor a software environment and perform actions to modify it can be viewed as agents. A software daemon that continually monitors a user’s incoming e-mail and indicates via a GUI icon whether there are unread messages can also be viewed as a simple agent.
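
A sketch of the thermostat agent just described, with two states and one action per state; the setpoint value is an assumption for the example.

```python
SETPOINT = 20.0                      # target temperature in °C (assumed value)


def thermostat_agent(temperature):
    """Two states: too cold -> heating on; OK -> heating off."""
    if temperature < SETPOINT:
        return "turn heating on"     # state (1): temperature too cold
    return "turn heating off"        # state (2): temperature OK


print(thermostat_agent(17.0))        # -> turn heating on
print(thermostat_agent(21.5))        # -> turn heating off
```

As the slide notes, the action's effect on the room is not guaranteed; the environment (incoming cold air) may undo it.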

  23. Environment example Crossword puzzle: • Observable: Fully • Agents: Single • Deterministic: Deterministic • Episodic: Sequential • Static: Static • Discrete: Discrete
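
These property dimensions can be captured in a small record; the crossword values follow the slide above, and the taxi-driving values follow the usual textbook classification (both instances are illustrative).

```python
from dataclasses import dataclass


@dataclass
class EnvProperties:
    """Environment classification along the dimensions discussed above."""
    observable: str        # 'fully' or 'partially' (accessible vs. inaccessible)
    agents: str            # 'single' or 'multi'
    deterministic: bool
    episodic: bool
    static: bool
    discrete: bool


crossword = EnvProperties("fully", "single", True, False, True, True)
taxi_driving = EnvProperties("partially", "multi", False, False, False, False)
```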

  24.–28. Characteristics of environments

  29. Characteristics of environments → Lots of real-world domains fall into the hardest case!

  30. Agents and Objects • Main differences: • Agents are autonomous: agents embody a stronger notion of autonomy than objects; in particular, they decide for themselves whether or not to perform an action on request from another agent. • Agents are smart: capable of flexible (reactive, pro-active, social) behavior, whereas the standard object model has nothing to say about such types of behavior.

  31. Agents are active: a multi-agent system is inherently multi-threaded, in that each agent is assumed to have at least one thread of active control.

  32. Agents and Expert Systems • Aren't agents just expert systems by another name? • Expert systems are typically disembodied 'expertise' about some (abstract) domain of discourse (e.g., blood diseases). • Example: MYCIN knows about blood diseases in humans.

  33. Agents and Expert Systems • It has a wealth of knowledge about blood diseases, in the form of rules • A doctor can obtain expert advice about blood diseases by giving MYCIN facts, answering questions, and posing queries

  34. Agents and Expert Systems • Main differences: • Agents are situated in an environment: MYCIN is not aware of the world; the only information it obtains is by asking the user questions. • Agents act: MYCIN does not operate on patients. • Some real-time (typically process control) expert systems are agents.

  35. Intelligent Agents and AI • Aren't agents just the AI project? Isn't building an agent what AI is all about? • AI aims to build systems that can (ultimately) understand natural language, recognize and understand scenes, use common sense, think creatively, etc., all of which are very hard. • So, don't we need to solve all of AI to build an agent…?

  36. Intelligent Agents and AI • When building an agent, we simply want a system that can choose the right action to perform, typically in a limited domain. • We do not have to solve all the problems of AI to build a useful agent: a little intelligence goes a long way! • Oren Etzioni, speaking about the commercial experience of NETBOT, Inc.: "We made our agents dumber and dumber and dumber… until finally they made money."

  37. Agent types Five basic types in order of increasing generality: • Table-driven agents • Simple reflex agents • Model-based reflex agents • Goal-based agents • Utility-based agents

  38. (0/1) Table-driven/reflex agent architecture
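
A sketch of the table-driven design named on this slide: the agent appends each percept to the percept sequence and looks the action up in a fully specified table (the table contents below are hypothetical and reuse the vacuum world).

```python
percept_sequence = []                          # full percept history

ACTION_TABLE = {                               # one entry per possible sequence
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
    # ... the table grows without bound as sequences get longer
}


def table_driven_agent(percept):
    percept_sequence.append(percept)
    return ACTION_TABLE.get(tuple(percept_sequence), "NoOp")
```

The table must enumerate every possible percept sequence, which is why the reflex designs that follow avoid storing one.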

  39. Simple reflex agents Simple reflex agents select actions on the basis of the current percept, ignoring the rest of the percept history.
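
A sketch of the simple reflex design: a fixed list of condition-action rules matched against the current percept only (the rule set here is illustrative).

```python
RULES = [                                       # (condition, action) pairs
    (lambda p: p["status"] == "Dirty", "Suck"),
    (lambda p: p["location"] == "A", "Right"),
    (lambda p: True, "Left"),                   # default rule
]


def simple_reflex_agent(percept):
    """Pick the action of the first rule whose condition matches the percept."""
    for condition, action in RULES:
        if condition(percept):
            return action


print(simple_reflex_agent({"location": "B", "status": "Clean"}))   # -> Left
```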

  40. (2) Architecture for an agent with memory
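
A sketch of the "agent with memory" (model-based reflex) architecture: the agent maintains an internal state, updated from the last action and the new percept, and applies its rules to that state rather than to the raw percept (the class name and update hook are illustrative).

```python
class ModelBasedReflexAgent:
    """Reflex agent that keeps internal state about the unobserved world."""

    def __init__(self, rules, update_state):
        self.state = {}                   # internal model of the world
        self.rules = rules                # (condition, action) pairs over the state
        self.update_state = update_state  # world model: folds percepts into state
        self.last_action = None

    def program(self, percept):
        # Fold the new percept (and the last action) into the internal state.
        self.state = self.update_state(self.state, self.last_action, percept)
        for condition, action in self.rules:
            if condition(self.state):
                self.last_action = action
                return action
        return "NoOp"                     # no rule matched
```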

  41. (3) Architecture for goal-based agent

  42. Goal-based agent The agent has some goal information that describes situations that are desirable. The agent may need to consider long sequences of actions to achieve the goal. Search and planning are the subfields of AI concerned with finding such action sequences.
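
As a sketch of how a goal-based agent might find such an action sequence, here is a plain breadth-first search over a caller-supplied transition model (function names are illustrative; real planners are far more sophisticated).

```python
from collections import deque


def plan(start, goal_test, successors):
    """Return a list of actions from start to a goal state, or None.

    successors(state) must yield (action, next_state) pairs;
    states are assumed to be hashable.
    """
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, actions = frontier.popleft()
        if goal_test(state):
            return actions
        for action, next_state in successors(state):
            if next_state not in visited:
                visited.add(next_state)
                frontier.append((next_state, actions + [action]))
    return None
```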

  43. (4) Architecture for a complete utility-based agent

  44. Utility-based agent Goals alone do not guarantee high-quality behavior in most environments. A utility function is an internalization of the performance measure. The utility function allows the specification of an appropriate trade-off when goals conflict. Because outcomes are uncertain, the agent maximizes the expected utility.

  45. Utility-based agent • For example, many action sequences will get the taxi to its destination (thereby achieving the goal), but some are quicker, safer, more reliable, or cheaper than others. • Goals just provide a crude binary distinction between "happy" and "unhappy" states. • A more general performance measure should allow a comparison of different world states according to exactly how happy they would make the agent. • Because "happy" does not sound very scientific, economists and computer scientists use the term utility instead.
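
A sketch of the utility-based choice rule: pick the action whose possible outcomes have the highest expected utility (the outcome model and the utility function are supplied by the designer; all names here are illustrative).

```python
def choose_action(actions, outcomes, utility):
    """Return the action that maximizes expected utility.

    outcomes(action) must return (probability, resulting_state) pairs,
    and utility(state) maps a state to a number.
    """
    def expected_utility(action):
        return sum(p * utility(state) for p, state in outcomes(action))

    return max(actions, key=expected_utility)
```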

  46. Question: What is the difference between goal-based agents and utility-based agents?
