
Intelligent Agents



Presentation Transcript


  1. Intelligent Agents Franco GUIDI POLANCO Politecnico di Torino / CIM Group http://www.cim.polito.it franco.guidi@polito.it 09-APR-2003

  2. Agenda • Introduction • Abstract Architectures for Autonomous Agents • Concrete Architectures for Intelligent Agents • Multi-Agent Systems • Summary

  3. Introduction

  4. What agents are • “One who is authorised to act for or in place of another as a: a representative, emissary, or official of a government <crown agent> <federal agent> b: one engaged in undercover activities (as espionage): SPY <secret agent> c: a business representative (as of an athlete or entertainer) <a theatrical agent>”

  5. What agents are • "An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through effectors." Russell & Norvig

  6. What agents are • "Autonomous agents are computational systems that inhabit some complex dynamic environment, sense and act autonomously in this environment, and by doing so realize a set of goals or tasks for which they are designed." Pattie Maes

  7. What agents are • “Intelligent agents continuously perform three functions: perception of dynamic conditions in the environment; action to affect conditions in the environment; and reasoning to interpret perceptions, solve problems, draw inferences, and determine actions.” Barbara Hayes-Roth

  8. What agents are • "Intelligent agents are software entities that carry out some set of operations on behalf of a user or another program with some degree of independence or autonomy, and in so doing, employ some knowledge or representation of the user's goals or desires." IBM's Intelligent Agent Strategy white paper

  9. What agents are • Definition that refers to “agents” (and not “intelligent agents”): “An agent is a computer system that is situated in some environment, and that is capable of autonomous action in this environment in order to meet its design objectives.” Wooldridge & Jennings

  10. What agents are

  11. Agents & Environments [Diagram: the agent receives sensor input from the environment and produces action output back to it] • The agent takes sensory input from its environment, and produces as output actions that affect it.

  12. Agents & Environments (cont.) • In complex environments: • An agent does not have complete control over its environment, only partial control • Partial control means that an agent can influence the environment with its actions • An action performed by an agent may fail to have the desired effect • Conclusion: environments are non-deterministic, and agents must be prepared for the possibility of failure.

  13. Agents & Environments (cont.) • Effectoric capability: the agent’s ability to modify its environment. • Actions have pre-conditions • Key problem for an agent: deciding which of its actions it should perform in order to best satisfy its design objectives.

  14. Examples of agents • Control systems • e.g. thermostat • Software daemons • e.g. mail client But… are they known as Intelligent Agents?

  15. What is “intelligence”?

  16. What intelligent agents are • “An intelligent agent is one that is capable of flexible autonomous action in order to meet its design objectives, where by flexible I mean three things: • reactivity: agents are able to perceive their environment, and respond in a timely fashion to changes that occur in it in order to satisfy their design objectives; • pro-activeness: intelligent agents are able to exhibit goal-directed behaviour by taking the initiative in order to satisfy their design objectives; • social ability: intelligent agents are capable of interacting with other agents (and possibly humans) in order to satisfy their design objectives.” Wooldridge & Jennings

  17. Agent characteristics Weak notion of agent • Autonomy • Proactiveness (goal oriented) • Reactivity • Socially able (a.k.a. communicative) Strong notion of agent • Weak notion + • Mobility • Veracity • Benevolence • Rationality An agent has the weak agent characteristics. It may have the strong agent characteristics. (Amund Tveit)

  18. Objects & Agents [Diagram: calling sayHelloToThePeople() on an object vs. asking an agent to “say Hello to the people”, to which it replies “Hello People!”] • Objects: their classes control their state • Agents: control both their state and their behaviour • “Objects do it for free; agents do it for money”

  19. Objects & Agents (cont.) • Distinctions: • Agents embody a stronger notion of autonomy than objects • Agents are capable of flexible (reactive, pro-active, social) behaviour • A multi-agent system is inherently multi-threaded

  20. Abstract Architectures for Autonomous Agents

  21. Formalization • Agents • Standard agents • Purely reactive agents • Agents with state • Environments • History • Perception

  22. Agents & Environments [Diagram: the agent produces action output to, and receives sensor input from, the environment] • The agent’s environment states are characterised by a set S = { s1, s2, … } • The effectoric capability of the agent is characterised by a set of actions A = { a1, a2, … }

  23. Standard agents • A standard agent decides what action to perform on the basis of its history (experiences). • A standard agent can be viewed as a function action : S* → A, where S* is the set of sequences of elements of S.

  24. Environments • Environments can be modelled as a function env : S × A → P(S), where P(S) is the powerset of S. This function takes the current state of the environment s ∈ S and an action a ∈ A (performed by the agent), and maps them to a set of environment states env(s, a). • Deterministic environment: all the sets in the range of env are singletons. • Non-deterministic environment: otherwise.
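
This abstract model can be rendered as a small Python sketch. The type aliases and the is_deterministic helper below are illustrative assumptions, not part of the original slides: a standard agent is any function from a state history to an action, and an environment is any function from a (state, action) pair to a set of possible successor states.

    # Illustrative Python rendering of the abstract model (all names are assumptions).
    from typing import Callable, Sequence, Set

    State = str    # an element of S
    Action = str   # an element of A

    StandardAgent = Callable[[Sequence[State]], Action]  # action : S* -> A
    Environment = Callable[[State, Action], Set[State]]  # env : S x A -> P(S)

    def is_deterministic(env: Environment, states: Set[State], actions: Set[Action]) -> bool:
        """True when every set env(s, a) is a singleton, i.e. the environment is deterministic."""
        return all(len(env(s, a)) == 1 for s in states for a in actions)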

  25. History • A history represents the interaction between an agent and its environment. A history is a sequence h : s0 --a0--> s1 --a1--> s2 --a2--> … --a(u-1)--> su, where: s0 is the initial state of the environment, au is the u’th action that the agent chose to perform, and su is the u’th environment state.
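
Continuing the sketch above (same assumed type aliases), a history can be generated by letting the agent choose an action from the sequence of states so far and drawing one successor state from env(s, a); the random choice below simply resolves the non-determinism for illustration.

    # Minimal sketch: generating a history s0 --a0--> s1 --a1--> ... (names are assumptions).
    import random
    from typing import List, Tuple

    def run(agent: StandardAgent, env: Environment, s0: State, steps: int) -> List[Tuple[State, Action]]:
        states: List[State] = [s0]
        history: List[Tuple[State, Action]] = []
        for _ in range(steps):
            a = agent(states)                                   # a_u chosen from s0 ... s_u
            s_next = random.choice(sorted(env(states[-1], a)))  # one state out of env(s_u, a_u)
            history.append((states[-1], a))
            states.append(s_next)
        return history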

  26. Purely reactive agents • A purely reactive agent decides what to do without reference to its history (no references to the past). • It can be represented by a function action : S → A • Example: thermostat. Environment states: “temperature OK”, “too cold”. action(s) = heater off if s = temperature OK; heater on otherwise.
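
As a quick illustration, the thermostat above fits in a few lines of Python; the state and action strings are the ones used on the slide.

    # Purely reactive agent: the thermostat maps the current state directly to an action.
    def thermostat(s: str) -> str:
        return "heater off" if s == "temperature OK" else "heater on"

    assert thermostat("temperature OK") == "heater off"
    assert thermostat("too cold") == "heater on"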

  27. Perception • see and action functions [Diagram: within the agent, see takes perceptual input from the environment and action produces output to it]

  28. Perception (cont.) • Perception is the result of the function see : S → P, where P is a (non-empty) set of percepts (perceptual inputs). • Then, the action function becomes action : P* → A, which maps sequences of percepts to actions.

  29. Perception ability [Scale from MIN to MAX: non-existent perceptual ability (| E | = 1) up to omniscient (| E | = | S |)], where E is the set of different perceived states. • Two different states s1 ∈ S and s2 ∈ S (with s1 ≠ s2) are indistinguishable if see(s1) = see(s2)

  30. Perception ability (cont.) • Example: x = “The room temperature is OK”, y = “There is no war at this moment”. Then S = { (x, y), (x, ¬y), (¬x, y), (¬x, ¬y) } = { s1, s2, s3, s4 }, but for the thermostat: see(s) = p1 if s = s1 or s = s2; p2 if s = s3 or s = s4
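
A small sketch of this example, assuming a tuple encoding of the four states: the thermostat's see function only looks at the temperature component, so states that differ only in the "war" component are indistinguishable.

    # The four environment states from the slide (truth values of x and y).
    s1, s2, s3, s4 = (True, True), (True, False), (False, True), (False, False)

    def see(s):
        """Thermostat perception: only the temperature component (x) is observed."""
        return "p1" if s[0] else "p2"

    assert see(s1) == see(s2)   # indistinguishable: see(s1) = see(s2)
    assert see(s3) == see(s4)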

  31. Agents with state • see, next and action functions [Diagram: within the agent, see feeds next, which updates the internal state used by action; action outputs to the environment]

  32. Agents with state (cont.) • The same perception function: see : S → P • The action-selection function is now action : I → A, where I is the set of all internal states of the agent • An additional function is introduced: next : I × P → I

  33. Agents with state (cont.) • Behaviour: • The agent starts in some internal initial state i0 • Then it observes its environment state s • The internal state of the agent is updated with next(i0, see(s)) • The action selected by the agent becomes action(next(i0, see(s))), and it is performed • The agent repeats the cycle, observing the environment
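
A minimal sketch of this control loop, with see, next and action passed in as plain callables; the observe and execute hooks are assumptions standing in for the agent's sensors and effectors.

    from typing import Any, Callable

    def agent_loop(i0: Any,
                   see: Callable[[Any], Any],           # see : S -> P
                   next_fn: Callable[[Any, Any], Any],  # next : I x P -> I
                   action: Callable[[Any], Any],        # action : I -> A
                   observe: Callable[[], Any],          # reads the current environment state
                   execute: Callable[[Any], None],      # performs the chosen action
                   steps: int) -> None:
        i = i0                            # start in the internal initial state i0
        for _ in range(steps):
            s = observe()                 # observe the environment state s
            i = next_fn(i, see(s))        # update internal state with next(i, see(s))
            execute(action(i))            # perform action(next(i, see(s)))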

  34. Concrete Architectures for Intelligent Agents

  35. Classes of agents • Logic-based agents • Reactive agents • Belief-desire-intention agents • Layered architectures

  36. Logic-based architectures • The “traditional” approach to building artificially intelligent systems: • Logical formulas: symbolic representation of the agent’s environment and desired behaviour. • Logical deduction or theorem proving: syntactical manipulation of this representation. • Example formulas: grasp(x), Kill(Marco, Caesar), Pressure(tank1, 220)

  37. Logic-based architectures: example • A cleaning robot • In(x,y): the agent is at (x,y) • Dirt(x,y): there is dirt at (x,y) • Facing(d): the agent is facing direction d

  38. Logic-based architectures: abstraction • Let L be the set of sentences of classical first-order logic • Let D = P(L) be the set of L databases (the internal state of the agent is an element of D), with Δ1, Δ2, … members of D • The agent’s decision-making rules are modelled through a set of deduction rules ρ • Δ ⊢ρ φ means that formula φ can be proved from database Δ using only the deduction rules ρ

  39. Logic-based architectures: abstraction (cont.) • The perception function remains unchanged: see : S → P • The next function is now: next : D × P → D • The action function becomes: action : D → A

  40. Logic-based architectures: abstraction (cont.) • Pseudo-code of the function action: • begin function action • for each a ∈ A do • if Δ ⊢ρ Do(a) then return a • for each a ∈ A do • if not ( Δ ⊢ρ ¬Do(a) ) then return a • return null • end function action
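
A hedged Python rendering of this pseudo-code; prove(db, formula) is a placeholder standing in for the theorem prover (Δ ⊢ρ …), not a real library call.

    from typing import Callable, Iterable, Optional, Set

    def action(delta: Set[str], actions: Iterable[str],
               prove: Callable[[Set[str], str], bool]) -> Optional[str]:
        # First, look for an action that is explicitly prescribed: delta |- Do(a)
        for a in actions:
            if prove(delta, f"Do({a})"):
                return a
        # Otherwise, settle for any action that is at least not forbidden,
        # i.e. not( delta |- not Do(a) )
        for a in actions:
            if not prove(delta, f"not Do({a})"):
                return a
        return None   # "null": no acceptable action found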

  41. Reactive architectures • Forces: • Rejection of symbolic representations • Rational behaviour is seen as innately linked to the environment • Intelligent behaviour emerges from the interaction of various simpler behaviours (situation → action)

  42. Reactive architectures: example • A mobile robot that avoids obstacles • Action GoTo(x,y): moves to position (x,y) • Action AvoidFront(z): turns left or right if there is an obstacle at a distance of less than z units
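
One way to combine these two behaviours reactively is to let the avoidance behaviour take precedence whenever an obstacle is within z units; the sketch below is an assumption about how the slide's GoTo and AvoidFront actions could be arbitrated, not part of the original example.

    def choose_action(front_distance: float, goal: tuple, z: float = 1.0) -> str:
        """Situation -> action: avoid if an obstacle is closer than z, otherwise head to the goal."""
        if front_distance < z:
            return "AvoidFront: turn left or right"
        return f"GoTo{goal}"

    print(choose_action(0.4, (3, 5)))  # obstacle ahead -> AvoidFront
    print(choose_action(2.0, (3, 5)))  # path clear     -> GoTo(3, 5)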

  43. Belief-Desire-Intention (BDI) architectures • BDI architectures have their roots in understanding practical reasoning, which involves two processes: • Deliberation: deciding what goals we want to achieve. • Means-ends reasoning: deciding how we are going to achieve these goals.

  44. BDI architectures (cont.) • First: try to understand what options are available. • Then: choose between them, and commit to some. These chosen options become intentions, which then determine the agent’s actions.

  45. BDI architectures (cont.) • Intentions are important in practical reasoning: • Intentions drive means-ends reasoning • Intentions constrain future deliberation • Intentions persist • Intentions influence the beliefs upon which future reasoning is based

  46. BDI architectures: reconsideration of intentions • Example (taken from Cisneros et al.), time t = 0 [Diagram: the alien at point P]: • Desire: kill the alien • Intention: reach point P • Belief: the alien is at P

  47. BDI architectures: reconsideration of intentions • Time t = 1 [Diagram: the alien has moved from P to another point Q]: • Desire: kill the alien • Intention: reach point P • Belief: the alien is at P (Wrong!)

  48. BDI architectures: reconsideration of intentions • Dilemma: • If intentions are not reconsidered sufficiently often, the agent may keep pursuing an unreachable or no longer valid goal (bold agents) • If intentions are constantly reconsidered, the agent may fail to dedicate sufficient work to achieving any goal (cautious agents) • Some experiments: • Environments with a low rate of change: bold agents do better than cautious ones. • Environments with a high rate of change: the opposite.
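
The bold/cautious trade-off can be made concrete with a reconsideration period: the sketch below (all function names are assumptions) deliberates again only every reconsider_every steps, so a large value gives a bold agent and a small value a cautious one.

    def bdi_loop(beliefs, deliberate, plan_step, update_beliefs, steps, reconsider_every):
        """Toy BDI loop illustrating how often intentions are reconsidered."""
        intention = deliberate(beliefs)              # initial commitment
        for t in range(1, steps + 1):
            beliefs = update_beliefs(beliefs)        # perceive the (possibly changed) world
            if t % reconsider_every == 0:            # bold: large period; cautious: small period
                intention = deliberate(beliefs)      # drop the intention if it is now stale
            plan_step(intention, beliefs)            # means-ends reasoning toward the intention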

  49. Layered architectures • They aim to satisfy the requirement of integrating reactive and proactive behaviour. • Two types of control flow: • Horizontal layering: the software layers are each directly connected to the sensory input and the action output. • Vertical layering: sensory input and action output are each dealt with by at most one layer.

  50. Layered architectures: horizontal layering [Diagram: layers 1 … n each receive the perceptual input and each produce action output] • Advantage: conceptual simplicity (to implement n behaviours we implement n layers) • Problem: a mediator function is required to ensure the coherence of the overall behaviour
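
A sketch of horizontal layering, assuming each layer proposes an action (or nothing) for the same percept and a mediator resolves the proposals; the fixed priority order used here is only one possible mediation policy.

    from typing import Callable, List, Optional

    Layer = Callable[[str], Optional[str]]   # percept -> proposed action (or None)

    def horizontal_agent(layers: List[Layer], percept: str) -> Optional[str]:
        """Every layer sees the percept; the mediator picks the first (highest-priority) proposal."""
        for layer in layers:
            proposal = layer(percept)
            if proposal is not None:
                return proposal
        return None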
