Artificial Intelligence Lecture No. 5

  1. Artificial Intelligence Lecture No. 5. Dr. Asad Safi, Assistant Professor, Department of Computer Science, COMSATS Institute of Information Technology (CIIT), Islamabad, Pakistan.

  2. Summary of Previous Lecture • What is an Intelligent agent? • Agents & Environments • Performance measure • Environment • Actuators • Sensors • Features of intelligent agents

  3. Today’s Lecture • Different types of environments • Intelligent agent (IA) examples based on environment • Agent types

  4. Environments • Actions are performed by the agent on the environment; the environment provides percepts to the agent. • The environment determines, to a large degree, the interaction between the “outside world” and the agent. • The “outside world” is not necessarily the “real world” as we perceive it; it may be a real or a virtual environment the agent lives in. • In many cases, environments are implemented within computers; they may or may not correspond closely to the “real world”.

  5. Properties of environments • Fully observable vs. partially observable (also called accessible vs. inaccessible). • If an agent’s sensory equipment gives it access to the complete state of the environment, we say the environment is fully observable to the agent. • An environment is effectively fully observable if the sensors detect all aspects that are relevant to the choice of action. • A fully observable environment is convenient because the agent need not maintain any internal state to keep track of the world.

  6. Properties of environments • Deterministic vs. nondeterministic. • If the next state of the environment is completely determined by the current state and the actions selected by the agent, then we say the environment is deterministic. • If the environment is inaccessible, it may appear to be nondeterministic (it presents many uncertainties).

  7. Properties of task environments • Episodic vs. sequential. • The agent’s experience is divided into “episodes”; each episode consists of the agent perceiving and then acting. • Subsequent episodes do not depend on the actions taken in previous episodes. • In sequential environments, the current action can affect all succeeding actions.

  8. Properties of task environments • Static vs. dynamic. • If the environment can change while the agent is deliberating or acting, we say the environment is dynamic; otherwise it is static. • Static environments are easier to deal with, because the agent need not keep watching the environment while it is deciding on an action. • Semidynamic: the environment itself does not change with the passage of time, but the agent’s performance score does.

  9. Properties of environments • Discrete vs. continuous. • If there is a limited number of distinct, clearly defined percepts and actions, we say the environment is discrete. • Chess is discrete, since there is a fixed number of possible moves on each turn. • Taxi driving is continuous.

  10. Properties of environments • Single agent vs. multiagent • In a single-agent environment there is only one agent • e.g., a program playing a crossword puzzle • In multiagent systems, there is more than one active agent • e.g., video games

  11.–25. Environment Examples • [Slides 11–25 present a table classifying example environments along the six dimensions just introduced: fully observable vs. partially observable, deterministic vs. stochastic/strategic, episodic vs. sequential, static vs. dynamic, discrete vs. continuous, single agent vs. multiagent.]
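
The table cells did not survive transcription, but the kind of classification these slides present can be sketched in code. A minimal Python sketch; the three example environments and their labels follow the common textbook classification (Russell & Norvig) and are illustrative, not taken from the original slides:

    # Classify example environments along the six dimensions above.
    # Entries follow the common textbook classification; illustrative only.
    ENVIRONMENTS = {
        # name:               (observable, determinism,     episodic?,    static?,       discrete?,    agents)
        "crossword puzzle":   ("fully",    "deterministic", "sequential", "static",      "discrete",   "single"),
        "chess with clock":   ("fully",    "strategic",     "sequential", "semidynamic", "discrete",   "multi"),
        "taxi driving":       ("partially","stochastic",    "sequential", "dynamic",     "continuous", "multi"),
    }

    def describe(name):
        obs, det, epi, sta, dis, agents = ENVIRONMENTS[name]
        return f"{name}: {obs} observable, {det}, {epi}, {sta}, {dis}, {agents}-agent"

    print(describe("taxi driving"))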

  26. Agent types • Four basic types, in order of increasing generality: • Simple reflex agents • Reflex agents with state/model • Goal-based agents • Utility-based agents

  27. Simple Reflex Agent • Instead of specifying individual mappings in an explicit table, common input-output associations are recorded. • Requires processing of percepts to achieve some abstraction. • A frequent method of specification is through condition-action rules: • if percept then action • If car-in-front-is-braking then initiate-braking • Similar to innate reflexes or learned responses in humans. • Efficient implementation, but limited power: • The environment must be fully observable • Easily runs into infinite loops

  28. Simple reflex agents

  29. Simple Reflex Agent

     function SIMPLE-REFLEX-AGENT(percept) returns action
       static: rules, a set of condition-action rules
       state ← INTERPRET-INPUT(percept)
       rule ← RULE-MATCH(state, rules)
       action ← RULE-ACTION[rule]
       return action

  30. A simple reflex agent works by finding a rule whose condition matches the current situation and then doing the action associated with that rule.
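
A runnable Python sketch of this control loop, using the braking rule from slide 27. Representing rules as (predicate, action) pairs and making interpret_input trivial are illustrative choices, not part of the original pseudocode:

    # Condition-action rules as (predicate, action) pairs; the first
    # rule whose condition matches the current state fires.
    RULES = [
        (lambda state: state.get("car_in_front_is_braking"), "initiate-braking"),
        (lambda state: True, "do-nothing"),   # fallback rule
    ]

    def interpret_input(percept):
        # Abstraction step; in this toy version the percept is the state.
        return percept

    def simple_reflex_agent(percept):
        state = interpret_input(percept)
        for condition, action in RULES:
            if condition(state):
                return action

    print(simple_reflex_agent({"car_in_front_is_braking": True}))  # initiate-braking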

  31. Reflex agents with state/model • Even a little bit of unobservability can cause serious trouble. • The braking rule given earlier assumes that the condition car-in-front-is-braking can be determined from the current percept (the current video image). • More advanced techniques require the maintenance of some kind of internal state to choose an action.

  32. Reflex agents with state/model • An internal state maintains important information from previous percepts. • Sensors provide only a partial picture of the environment, so this helps with some partially observable environments. • The internal state reflects the agent’s knowledge about the world; this knowledge is called a model. • It may contain information about changes in the world.

  33. Model-based reflex agents • Required information: how the world evolves independently of the agent. • For example, an overtaking car generally will be closer behind than it was a moment ago. • The current percept is combined with the old internal state to generate the updated description of the current state.

  34. Model-based reflex agents

  35. Model-based reflex agents

     function REFLEX-AGENT-WITH-STATE(percept) returns an action
       static: state, a description of the current world state
               rules, a set of condition-action rules
               action, the most recent action, initially none
       state ← UPDATE-STATE(state, action, percept)
       rule ← RULE-MATCH(state, rules)
       action ← RULE-ACTION[rule]
       return action
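
A Python sketch of the same control flow. The dictionary-based state and the body of update_state are placeholder assumptions; a real world model would be far richer:

    # Model-based reflex agent: keeps internal state across percepts,
    # mirroring REFLEX-AGENT-WITH-STATE above.
    class ModelBasedReflexAgent:
        def __init__(self, rules):
            self.rules = rules      # condition-action rules
            self.state = {}         # description of the current world state
            self.action = None      # most recent action, initially none

        def update_state(self, percept):
            # Fold the new percept into the old state. A real agent would
            # also model how the world evolves independently of the agent.
            self.state.update(percept)

        def __call__(self, percept):
            self.update_state(percept)
            for condition, action in self.rules:
                if condition(self.state):
                    self.action = action
                    return action

    agent = ModelBasedReflexAgent([(lambda s: s.get("overtaking_car_close"), "slow-down"),
                                   (lambda s: True, "do-nothing")])
    print(agent({"overtaking_car_close": True}))  # slow-down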

  36. Goal-based agent • Merely knowing the current state of the environment is not always enough to decide what to do next. • The right decision depends on where the taxi is trying to get to. • So goal information is also needed.

  37. Goal-based agent • Goal-based agents are far more flexible. • If it starts to rain, the agent can adjust to the changed circumstances, since it also considers how its actions would affect its goals (remember: doing the right thing). • For a reflex agent, we would instead have to rewrite a large number of condition-action rules.
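
A minimal sketch of how this differs from a reflex agent, assuming a hypothetical one-step world model predict(state, action) and a goal test; neither appears in the original slides:

    # Goal-based action selection: simulate each action against a world
    # model and pick one whose predicted outcome satisfies the goal.
    def goal_based_agent(state, actions, predict, is_goal):
        for action in actions:
            if is_goal(predict(state, action)):
                return action
        return None  # no single action suffices; real agents would search/plan

    # Toy demo: move along a line toward position 3.
    print(goal_based_agent(2, [-1, +1], lambda s, a: s + a, lambda s: s == 3))  # 1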

  38. Goal-based agents

  39. Utility-based agents • Goals alone are not enough to generate high-quality behavior. • There are many ways to reach the destination, but some are qualitatively better than others: • safer • shorter • less expensive

  40. Utility-based agent • If one world state is preferred to another, we say it has higher utility for the agent. • Utility is a function that maps a state onto a real number: state → R • Any rational agent can be described as possessing a utility function.
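
In code, the mapping state → R is just a function. A sketch with a hypothetical utility over candidate taxi routes; the weights encode the safer/shorter/cheaper preferences from slide 39 and are made up for illustration:

    # Utility as a function from states onto real numbers. The candidate
    # "states" here are taxi routes; the weights are illustrative only.
    def utility(route):
        return 5.0 * route["safety"] - 1.0 * route["length_km"] - 0.5 * route["cost"]

    routes = [
        {"safety": 0.9, "length_km": 12, "cost": 8},
        {"safety": 0.7, "length_km": 9,  "cost": 5},
    ]
    best = max(routes, key=utility)   # a rational agent picks the highest-utility state
    print(best)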

  41. Utility-based agents

  42. Summary of Today’s Lecture • Different types of environments • Intelligent agent (IA) examples based on environment • Agent types • Simple reflex agents • Reflex agents with state/model • Goal-based agents • Utility-based agents
