
Intelligent Agent


Presentation Transcript


  1. Intelligent Agent

  2. Agent interacting with Environment • Agents include humans, robots, softbots, thermostats, etc. • The agent function maps from percept histories to actions
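The "map from percept histories to actions" can be sketched directly. The class and table below are illustrative assumptions in the style of a table-driven agent, not a definitive implementation:

```python
# Sketch of the agent-function abstraction: a map from percept
# histories to actions. The names and the thermostat-style table
# are illustrative, not from any particular library.

class TableDrivenAgent:
    """Chooses actions by looking up the full percept history in a table."""

    def __init__(self, table):
        self.table = table      # maps percept-history tuples to actions
        self.percepts = []      # everything perceived so far

    def act(self, percept):
        self.percepts.append(percept)
        # The agent function: percept history -> action
        return self.table.get(tuple(self.percepts), "NoOp")

# A tiny thermostat-style table over one- and two-percept histories.
table = {
    ("cold",): "heat_on",
    ("cold", "warm"): "heat_off",
}
agent = TableDrivenAgent(table)
print(agent.act("cold"))   # heat_on
print(agent.act("warm"))   # heat_off
```

Note the lookup keys are whole histories, not single percepts, which is why a literal table is impractical for real agents and why the later slides move to more compact agent programs.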

  3. What Is An Intelligent Agent An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through effectors. A human agent has eyes, ears, and other organs for sensors, and hands, legs, mouth, and other body parts for effectors. A robotic agent substitutes cameras and infrared range finders for the sensors and various motors for the effectors. A software agent has encoded bit strings as its percepts and actions.

  4. Deciding How and When to Evaluate the Agent's Success • A performance measure determines how successful an agent is. Consider an agent that is supposed to vacuum a dirty floor: a plausible performance measure is the amount of dirt cleaned up in a single eight-hour shift; a more sophisticated measure would also factor in the amount of electricity consumed and the amount of noise generated. • In summary, what is rational at any given time depends on four things: • The performance measure that defines degree of success. • Everything that the agent has perceived so far. We call this complete perceptual history the percept sequence. • What the agent knows about the environment. • The actions that the agent can perform. • Note: we need to be careful to distinguish between rationality and omniscience. An omniscient agent knows the actual outcome of its actions; a rational agent can only act on the evidence it has perceived so far.
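The vacuum-agent performance measure described above can be sketched as a simple scoring function. The specific weights for dirt, electricity, and noise are invented assumptions for illustration:

```python
# Illustrative performance measure for the vacuum example: reward dirt
# cleaned during an eight-hour shift, penalize electricity consumed and
# noise generated. The weights below are made-up assumptions.

def performance(dirt_cleaned, kwh_used, noise_db_hours):
    return 10 * dirt_cleaned - 2 * kwh_used - 0.5 * noise_db_hours

# 40 units of dirt, 5 kWh, 8 noisy hours:
print(performance(dirt_cleaned=40, kwh_used=5, noise_db_hours=8))  # 386.0
```

The point of the slide survives the toy numbers: a measure that only counts dirt rewards a loud, wasteful cleaner, while the weighted version trades cleanliness off against cost.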

  5. HOW SHOULD AGENTS ACT? • A rational agent is one that does the right thing. • The right action is the one that will cause the agent to be most successful.

  6. Ideal Rational Agent For each possible percept sequence, an ideal rational agent should do whatever action is expected to maximize its performance measure, on the basis of the evidence provided by the percept sequence and whatever built-in knowledge the agent has.

  7. Autonomy If the agent's actions are based completely on built-in knowledge, such that it need pay no attention to its percepts, then we say that the agent lacks autonomy. For example, if the clock manufacturer were prescient enough to know that the clock's owner would be going to Australia on some particular date, then a mechanism could be built in to adjust the hands automatically by six hours at just the right time. This would certainly be successful behavior, but the intelligence seems to belong to the clock's designer rather than to the clock itself. An agent's behavior can be based on both its own experience and the built-in knowledge used in constructing the agent for the particular environment in which it operates. A system is autonomous to the extent that its behavior is determined by its own experience.

  8. STRUCTURE OF INTELLIGENT AGENTS The job of AI is to design the agent program: a function that implements the agent mapping from percepts to actions. We assume this program will run on some sort of computing device, which we will call the architecture. agent = architecture + program

  9. Performance Measure, Environment, Actuators, Sensors • To design a rational agent, we must specify the task environment, which consists of PEAS (Performance measure, Environment, Actuators, Sensors) • Taxi Driver • Performance measure: safe, fast, legal, comfortable trip, maximize profits • Environment: roads, other traffic, pedestrians, customers • Actuators: steering, accelerator, brake, signal, horn, display • Sensors: cameras, sonar, speedometer, GPS, odometer, accelerometer, engine sensors, keyboard or microphone to accept the destination
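A PEAS description is just structured data, so it can be written down as a small record type. The `PEAS` class name is an illustrative choice; the field contents follow the taxi-driver example on this slide:

```python
# A PEAS task-environment description as a plain data structure,
# populated with the taxi-driver example from the slide. The class
# itself is a sketch, not a standard API.

from dataclasses import dataclass

@dataclass
class PEAS:
    performance: list
    environment: list
    actuators: list
    sensors: list

taxi = PEAS(
    performance=["safe", "fast", "legal", "comfortable trip", "maximize profits"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering", "accelerator", "brake", "signal", "horn", "display"],
    sensors=["cameras", "sonar", "speedometer", "GPS", "odometer",
             "accelerometer", "engine sensors", "keyboard/microphone"],
)
print(taxi.performance[0])  # safe
```

Filling in a second `PEAS(...)` instance is exactly the exercise the later "Internet Shopping Agent" slide asks for.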

  10. Fig 2.3

  11. Example

  12. Internet Shopping Agent • Performance measure: • Environment: • Actuators: • Sensors:

  13. Properties of Task Environments • Fully observable vs. Partially observable • Deterministic vs. Stochastic • Strategic: deterministic except for the actions of other agents • Episodic vs. Sequential • Static vs. Dynamic • Discrete vs. Continuous • Single agent vs. Multi-agent • Competitive, cooperative

  14. Task Environment Types • Real world is …

  15. 2-3. Structure of Intelligent Agents [diagram: perception in, action out, agent program ("?") in between] • Agent = architecture + program

  16. Types of Agents • Four basic types, in order of increasing generality: • Simple reflex agent • Model-based reflex agent: a reflex agent with state that keeps track of the world • Goal-based agent • Utility-based agent • All of these can be turned into learning agents

  17. (1) Simple Reflex Agent • Characteristics • No plan, no goal • Does not know what it wants to achieve • Does not know what it is doing • Condition-action rule • If condition then action • Architecture: [fig. 2.9]; program: [fig. 2.10]

  18. Fig 2.9 Simple reflex agent

  19. Fig 2.10 Simple Reflex Agent
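A condition-action agent program can be sketched in a few lines. This is in the spirit of the figure referenced above, using the vacuum world as an assumed example; the rule contents are illustrative:

```python
# Simple reflex agent sketch: the program matches the *current* percept
# against condition-action rules. It keeps no percept history and no
# model of the world. Vacuum-world percepts/rules are illustrative.

def interpret_input(percept):
    location, status = percept
    return status            # here the matched condition is just the status

RULES = {
    "Dirty": "Suck",         # if dirty then suck
    "Clean": "MoveRight",    # if clean then move on
}

def simple_reflex_agent(percept):
    condition = interpret_input(percept)
    return RULES[condition]  # rule match: if condition then action

print(simple_reflex_agent(("A", "Dirty")))   # Suck
print(simple_reflex_agent(("A", "Clean")))   # MoveRight
```

Because the agent sees only the current percept, it cannot remember which squares it has already cleaned; fixing that limitation is exactly what motivates the model-based agent on the next slide.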

  20. (2) Model-based Reflex Agents • Characteristics • A reflex agent with internal state • The sensors do not provide the complete state of the world • Updating the internal state requires two kinds of knowledge, together called the model: • How the world evolves • How the agent's actions affect the world • Architecture: [fig 2.11]; program: [fig 2.12]

  21. Fig 2.11 A model-based agent
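The internal-state idea can be sketched as follows. The toy two-square vacuum world and its update rule are assumptions for illustration, not the book's exact program:

```python
# Model-based reflex agent sketch: the agent maintains internal state
# about parts of the world it cannot currently sense, updating it from
# each new percept. The two-square vacuum world is an invented example.

class ModelBasedReflexAgent:
    def __init__(self):
        # Internal model of the world: status of both squares.
        self.state = {"A": "Unknown", "B": "Unknown"}
        self.last_action = None

    def update_state(self, percept):
        # Fold the new percept into the internal model.
        location, status = percept
        self.state[location] = status

    def act(self, percept):
        self.update_state(percept)
        location, status = percept
        if status == "Dirty":
            action = "Suck"
        else:
            action = "Right" if location == "A" else "Left"
        self.last_action = action
        return action

agent = ModelBasedReflexAgent()
print(agent.act(("A", "Dirty")))  # Suck
print(agent.state)                # {'A': 'Dirty', 'B': 'Unknown'}
```

Note the two kinds of knowledge from the slide: `update_state` encodes what the percept says about how the world looks now, while a fuller model would also predict how `last_action` changed it (e.g., marking a square clean after a Suck).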

  22. (3) Goal-based Agents • Characteristics • Action depends on the GOAL (consideration of the future) • A goal is a desirable situation • Choose an action sequence to achieve the goal • Needs decision making, fundamentally different from the condition-action rule • Search and planning • Appears less efficient, but is more flexible, because knowledge can be provided explicitly and modified • Architecture: [fig 2.13]

  23. Fig 2.13 A model-based, Goal-based Agent
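"Choose an action sequence to achieve the goal" is a search problem, as the slide says. Here is a minimal sketch using breadth-first search over an invented one-dimensional corridor of rooms; the world and goal are illustrative assumptions:

```python
# Goal-based agent sketch: instead of matching condition-action rules,
# search for a sequence of states leading to a goal. Breadth-first
# search over a toy 1-D corridor of rooms 0..4 (an invented example).

from collections import deque

def plan(start, goal, neighbors):
    """Return a shortest path of states from start to goal, or None."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in neighbors(path[-1]):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

# The agent can step one room left or right within 0..4.
neighbors = lambda s: [n for n in (s - 1, s + 1) if 0 <= n <= 4]
print(plan(0, 3, neighbors))   # [0, 1, 2, 3]
```

This shows the flexibility the slide mentions: changing the goal, or the `neighbors` model of the world, changes the behavior without rewriting any rules.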

  24. (4) Utility-based Agents • Utility function • Degree of happiness or usefulness • Maps the internal states to a real number (e.g., game playing) • Characteristics • Generates high-quality behavior • Rational decisions are made by looking for the higher utility value • An expected utility maximizer • Can trade off among several goals • Structure: [fig 2.14]

  25. Fig 2.14 A Model-based, Utility-based Agent
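The "expected utility maximizer" from the slide reduces to a small computation: weight each possible outcome's utility by its probability and pick the best action. The taxi-route actions and numbers below are illustrative assumptions:

```python
# Utility-based agent sketch: choose the action with the highest
# expected utility. Outcome probabilities and utilities are invented
# for a toy route-choice example.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

actions = {
    "highway":  [(0.8, 10), (0.2, -5)],   # usually fast, sometimes jammed
    "backroad": [(1.0, 6)],               # reliably mediocre
}

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)   # highway  (expected utility 7.0 vs 6.0)
```

This is where utilities beat plain goals: both routes "achieve the goal" of arriving, but only a real-valued utility lets the agent trade speed against the risk of a jam.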

  26. Learning Agents • Improve performance based on percepts • Four components: • Learning element: makes improvements • Performance element: selects external actions • Critic: tells how well the agent is doing, based on a fixed performance standard • Problem generator: suggests exploratory actions

  27. General Model of Learning Agents
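The four components of the learning-agent model can be wired together in a toy loop. Everything here is an invented illustration: the threshold, the 0.6 standard, and the update rule are assumptions, chosen only to show each component's role:

```python
# Sketch of the learning-agent loop: the performance element selects
# actions, the critic scores them against a fixed standard, the
# learning element adjusts the performance element, and the problem
# generator proposes exploratory situations. All numbers are toy
# assumptions.

import random

class LearningAgent:
    def __init__(self):
        self.threshold = 0.5        # parameter of the performance element

    def performance_element(self, percept):
        # Select an external action from the current percept.
        return "act" if percept > self.threshold else "wait"

    def critic(self, percept, action):
        # Fixed performance standard: "act" is right exactly when percept > 0.6.
        return 1.0 if (action == "act") == (percept > 0.6) else -1.0

    def learning_element(self, action, feedback):
        # On a wrongly chosen "act", nudge the threshold toward the standard.
        if feedback < 0 and action == "act":
            self.threshold += 0.01

    def problem_generator(self):
        # Suggest an exploratory situation to learn from.
        return random.random()

agent = LearningAgent()
for _ in range(200):
    percept = agent.problem_generator()
    action = agent.performance_element(percept)
    agent.learning_element(action, agent.critic(percept, action))
# agent.threshold has drifted from 0.5 toward the 0.6 standard.
```

The division of labor matches the slide: only the critic knows the performance standard, only the learning element modifies the agent, and the problem generator keeps supplying situations the agent would not otherwise try.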
