
Topic 4: Agent architectures



  1. Topic 4: Agent architectures • general agent architectures • deductive reasoning agents • practical reasoning agents • reactive agents • hybrid agents

  2. Agents: definition • M. Wooldridge: “An agent is a computer system that is situated in some environment, and that is capable of autonomous action in this environment in order to meet its design objectives.” • (diagram: the agent observes the environment and acts on it)

  3. Agent properties • reactivity: reacts to stimuli (changes in the environment, communication, …) • autonomy: does not require user interaction • pro-activeness: aims to achieve its own goals, and therefore initiates appropriate actions • social ability: cooperates / coordinates / communicates / … • embodied: situated in the environment • mobile: moves around network sites • learning: learns from past experiences • … (grouped on the slide into essential and extra properties)

  4. Agents versus Objects • Objects (Java / C++ / C# / Smalltalk / Eiffel / …) • encapsulate state (“attributes” / “data members” / …) and behaviour (“operations” / “methods” / …) • represent a real-world entity • have their own responsibility

  5. Agents versus Objects (cont.) • (diagram: agent 1 requests an action from agent 2) • differences • autonomy: who decides to execute a particular “action”? • objects have control over their state (through operations), but not over their behaviour • any object can call any public operation of the object • an object cannot decide when to execute its behaviour • agents request an action from another agent […] • control lies entirely within the receiving agent • cf. humans • “objects do it for free; agents do it because they want to (or for money)”

  6. Agents versus Objects (cont.) • differences (cont.) • … • behaviour architecture: integration of flexible autonomous behaviour • objects: operations to offer behaviour • agents: integrate reactive behaviour, social behaviour, proactive behaviour, … • cf. humans

  7. Agents versus Objects (cont.) • differences (cont.) • … • inherently multi-threaded • objects: no separate thread of control (apart from so-called active objects) • agents: conceptually have their own threads of control

  8. Agents versus Expert Systems • Expert systems, e.g. MYCIN • act as a computerized consultant for physicians • MYCIN knows about blood diseases in humans • a wealth of knowledge about blood diseases, in the form of rules • a doctor can obtain expert advice about blood diseases by giving MYCIN facts, answering questions, and posing queries • differences • inherently disembodied • do not operate in an environment • no reactive/proactive behaviour • user-controlled • no social behaviour • as in cooperation / coordination / negotiation / …

  9. Agent architecture: how to do the right thing? • Pattie Maes [1991]: ‘[A] particular methodology for building [agents]. It specifies how . . . the agent can be decomposed into the construction of a set of component modules and how these modules should be made to interact. The total set of modules and their interactions has to provide an answer to the question of how the sensor data and the current internal state of the agent determine the actions . . . and future internal state of the agent.’ → a model of the agent’s machinery • abstract architecture • elements • E: set of environment states • Ac: set of actions • Ag: E* → Ac (the agent maps sequences of environment states to actions)
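
A minimal Python sketch of this abstract view (a purely illustrative thermostat domain, not an example from the slides): the agent Ag : E* → Ac is simply a function from the sequence of environment states seen so far to an action.

from typing import Sequence

def agent(history: Sequence[float]) -> str:
    # Ag : E* -> Ac for a toy thermostat: E = temperatures, Ac = {'heat', 'idle'}.
    # The decision may depend on the whole run so far, not only the last state.
    return "heat" if history and history[-1] < 20.0 else "idle"

print(agent([21.5, 19.0]))   # -> 'heat'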

  10. Agents with state • I: set of internal agent states • see: E → Per • action: I → Ac • next: I × Per → I • (diagram: the agent observes the environment through see, updates its internal state with next, and chooses an action with action)
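
A hedged sketch of the same state-based architecture in Python (the thermostat domain is again an assumption made for illustration):

def see(env_state):                      # see : E -> Per
    return env_state["temperature"]      # the agent only perceives the temperature

def next_state(internal, percept):       # next : I x Per -> I
    return {"last_temp": percept}        # internal state records the latest percept

def action(internal):                    # action : I -> Ac
    return "heat" if internal["last_temp"] < 20.0 else "idle"

# observe -> update internal state -> act
internal = {"last_temp": 20.0}
for env_state in [{"temperature": 21.5}, {"temperature": 19.0}]:
    internal = next_state(internal, see(env_state))
    print(action(internal))              # 'idle', then 'heat'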

  11. Concrete agent architectures • Deductive reasoning agents • 1956 – present • “Agents make decisions about what to do via symbol manipulation. Its purest expression proposes that agents use explicit logical reasoning in order to decide what to do.” • Practical reasoning agents • 1990 – present • “Agents use practical reasoning (directed towards actions, not towards beliefs) – beliefs / desires / intentions.” • Reactive agents • 1985 – present • “Problems with symbolic reasoning led to a reaction against this — and to the reactive agents movement.” • Hybrid agents • 1989 – present • “Hybrid architectures attempt to combine the best of reasoning and reactive architectures.”

  12. 1. Deductive reasoning agents • architectures based on the ideas of “symbolic AI” • symbolic representation of • environment • behaviour • goals • … • representation: logical formulae • syntactic manipulation: logical deduction / theorem proving → “deliberative agent”

  13. Deductive reasoning agents: Agents as theorem provers • deliberative agents • databases of beliefs are specified using formulae of first-order predicate logic • e.g. open(door1), closed(door2), visible(wall32), … • but: • just beliefs … not certified knowledge … • semantics not specified … open(door) may mean something exotic to the agent designer • set of deduction rules

  14. Deductive reasoning agents: Agents as theorem provers (cont.) • agent’s action selection function: • for each action a: if the precondition of a can be inferred from the current beliefs, return a • for each action a: if the precondition of a is not excluded by the current beliefs, return a • otherwise return null
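
A rough Python rendering of this two-pass selection function, with a trivial stand-in for the deduction step (a real deliberative agent would call a theorem prover here; prove() and the string encoding of beliefs are assumptions for illustration):

def prove(beliefs, formula):
    # Stand-in for deduction: a formula is "provable" iff it is literally a belief.
    return formula in beliefs

def select_action(beliefs, actions, precondition):
    # Pass 1: an action whose precondition can be inferred from the beliefs.
    for a in actions:
        if prove(beliefs, precondition[a]):
            return a
    # Pass 2: an action whose precondition is at least not excluded by the beliefs.
    for a in actions:
        if not prove(beliefs, "not " + precondition[a]):
            return a
    return None   # no acceptable action found

beliefs = {"Dirt(0,0)"}
precondition = {"suck": "Dirt(0,0)", "forward": "Free(0,1)"}
print(select_action(beliefs, ["suck", "forward"], precondition))   # -> 'suck'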

  15. Deductive reasoning agents: Agents as theorem provers (cont.) • vacuum example [Russell & Norvig, 1995] • (figure: 3 × 3 grid of cells (0,0)–(2,2) with the robot) • percept: dirt or null • actions: forward, suck, or turn • goal: traverse the room and remove the dirt

  16. domain predicates • In(x,y): agent is at (x,y) • Dirt(x,y): there is dirt at (x,y) • Facing(d): agent is facing direction d • Wall(x,y): there is a wall at (x,y) • deduction rules • ∀x ∀y ¬Wall(x,y) → Free(x,y) • cleaning action rule (takes priority) • In(x,y) ∧ Dirt(x,y) → Do(suck) • if the agent is at location (x,y) and perceives dirt, remove the dirt; otherwise traverse the world • For example… • In(0,0) ∧ Facing(north) ∧ Free(0,1) ∧ ¬Dirt(0,0) → Do(forward) • In(0,1) ∧ Facing(north) ∧ Free(0,2) ∧ ¬Dirt(0,1) → Do(forward) • In(0,2) ∧ Facing(north) ∧ ¬Dirt(0,2) → Do(turn) • In(0,2) ∧ Facing(east) ∧ Free(1,2) → Do(forward) • …
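
A rough Python sketch of these rules (a deliberate simplification: beliefs are a small record rather than first-order formulae, and Free(x,y) is derived from the absence of a wall, mirroring ∀x ∀y ¬Wall(x,y) → Free(x,y)):

def free(beliefs, x, y):
    # Free(x,y) holds inside the 3x3 grid wherever there is no wall.
    return 0 <= x <= 2 and 0 <= y <= 2 and (x, y) not in beliefs["walls"]

def decide(beliefs):
    x, y = beliefs["in"]
    if (x, y) in beliefs["dirt"]:        # In(x,y) & Dirt(x,y) -> Do(suck), takes priority
        return "suck"
    if beliefs["facing"] == "north" and free(beliefs, x, y + 1):
        return "forward"                 # e.g. In(0,0) & Facing(north) & Free(0,1) -> Do(forward)
    if beliefs["facing"] == "north":
        return "turn"                    # top of a column reached: turn towards the east
    if beliefs["facing"] == "east" and free(beliefs, x + 1, y):
        return "forward"
    return None

print(decide({"in": (0, 2), "facing": "north", "dirt": set(), "walls": set()}))   # -> 'turn'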

  17. Deductive reasoning agents: Agents as theorem provers (cont.) • “calculative rationality” • the selected action is the result of decision making based on the state of the world at the beginning of the decision-making process • not acceptable in environments that change faster than the agent can decide

  18. Deductive reasoning agents: Agents as theorem provers (cont.) • Advantages • clean logical semantics • expressive • well-researched domain of logic • Problems • how to build the internal representation from percepts • e.g. image → logical formulae • the inherent computational complexity of theorem proving makes timely decisions problematic • many (most) search-based symbol-manipulation algorithms of interest are highly intractable

  19. Concrete agent architectures • Deductive reasoning agents • 1956 – present • “Agents make decisions about what to do via symbol manipulation. Its purest expression proposes that agents use explicit logical reasoning in order to decide what to do.” • Practical reasoning agents • 1990 – present • “Agents use practical reasoning (directed towards actions, not towards beliefs) – beliefs / desires / intentions.” • Reactive agents • 1985 – present • “Problems with symbolic reasoning led to a reaction against this — and to the reactive agents movement.” • Hybrid agents • 1989 – present • “Hybrid architectures attempt to combine the best of reasoning and reactive architectures.”

  20. 2. Practical Reasoning Agents • what is practical reasoning? • “reasoning directed towards actions” • “practical reasoning is a matter of weighing conflicting considerations for and against competing options, where the relevant considerations are provided by what the agent desires/values/cares about and what the agent believes.” [Bratman] • distinguish practical reasoning from theoretical reasoning: • theoretical reasoning is directed towards beliefs • practical reasoning is directed towards actions

  21. BDI architectures • BDI - a theory of practical reasoning - Bratman, 1988 • for “resource-bounded agents” • includes • means-ends analysis • weighing of competing alternatives • interactions between these two forms of reasoning • Core concepts • Beliefs = information the agent has about the world • Desires = states of affairs that the agent would wish to bring about • Intentions = desires (or actions) that the agent has committed to achieve

  22. BDI is particularly compelling because: • philosophical component - based on a theory of rational action in humans • software architecture - it has been implemented and successfully used in a number of complex fielded applications • IRMA - Intelligent Resource-bounded Machine Architecture • PRS - Procedural Reasoning System • logical component - the model has been rigorously formalized in a family of BDI logics • Rao & Georgeff, Wooldridge • e.g. (Int Ai φ) → ¬(Bel Ai ¬φ)

  23. Practical Reasoning Agents (cont.) • human practical reasoning: practical reasoning = deliberation + means-ends reasoning • deliberation • deciding what state of affairs you want to achieve; the outputs of deliberation are intentions • means-ends reasoning • deciding how to achieve these states of affairs; the outputs of means-ends reasoning are plans

  24. Practical Reasoning Agents (cont.) • deliberation • intentions • means-ends reasoning • planning • architecture

  25. Practical Reasoning Agents: 1. Deliberation: Intentions and Desires • intentions are stronger than desires • “My desire to play basketball this afternoon is merely a potential influencer of my conduct this afternoon. It must vie with my other relevant desires [. . . ] before it is settled what I will do. In contrast, once I intend to play basketball this afternoon, the matter is settled: I normally need not continue to weigh the pros and cons. When the afternoon arrives, I will normally just proceed to execute my intentions.” [Bratman, 1990]

  26. Practical Reasoning Agents: Intentions • agents are expected to determine ways of achieving intentions • If I have an intention to Φ, you would expect me to devote resources to deciding how to bring about Φ • agents cannot adopt intentions which conflict • If I have an intention to Φ, you would not expect me to adopt an intention Ψ that was incompatible with Φ • agents are inclined to try again if their attempts to achieve their intention fail • If an agent’s first attempt to achieve Φ fails, then all other things being equal, it will try an alternative plan to achieve Φ • agents believe their intentions are possible • That is, they believe there is at least some way that the intentions could be brought about. • agents do not believe they will not bring about their intentions • It would not be rational of me to adopt an intention to Φ if I believed that I would fail with Φ • under certain circumstances, agents believe they will bring about their intentions • If I intend Φ, then I believe that under “normal circumstances” I will succeed with Φ • agents need not intend all the expected side effects of their intentions • I may believe that going to the dentist involves pain, and I may also intend to go to the dentist — but this does not imply that I intend to suffer pain!

  27. Practical Reasoning Agents: 2. Means-ends Reasoning • (diagram: the intention (goal / task), the beliefs (state of the environment), and the possible actions are fed into a planner; means-ends reasoning produces a plan to achieve the goal)

  28. The Blocks World • illustrate means-ends reasoning with reference to the blocks world • Contains a robot arm, 3 blocks (A, B, and C) of equal size, and a table-top • (figure: the initial configuration of blocks A, B, C)

  29. The Blocks World • Here is a representation of the blocks world described above: Clear(A) On(A, B) OnTable(B) OnTable(C) • Use the closed world assumption: anything not stated is assumed to be false

  30. The Blocks World • A goal is represented as a set of formulae • Here is a goal: OnTable(A) ∧ OnTable(B) ∧ OnTable(C) • (figure: all three blocks on the table)

  31. The Blocks World • Actions are represented using a technique that was developed in the STRIPS planner • Each action has: • a name, which may have arguments • a pre-condition list: the facts which must be true for the action to be executed • a delete list: the facts that are no longer true after the action is performed • an add list: the facts made true by executing the action • Each of these may contain variables

  32. The Blocks World Operators • Example 1: The stack action occurs when the robot arm places the object x it is holding on top of object y. • Stack(x, y) • pre: Clear(y) ∧ Holding(x) • del: Clear(y) ∧ Holding(x) • add: ArmEmpty ∧ On(x, y)

  33. The Blocks World Operators • Example 2: The unstack action occurs when the robot arm picks an object x up from on top of another object y. • UnStack(x, y) • pre: On(x, y) ∧ Clear(x) ∧ ArmEmpty • del: On(x, y) ∧ ArmEmpty • add: Holding(x) ∧ Clear(y) • Stack and UnStack are inverses of one another.

  34. The Blocks World Operators • Example 3: The pickup action occurs when the arm picks up an object x from the table. • Pickup(x) • pre: Clear(x) ∧ OnTable(x) ∧ ArmEmpty • del: OnTable(x) ∧ ArmEmpty • add: Holding(x) • Example 4: The putdown action occurs when the arm places the object x onto the table. • Putdown(x) • pre: Holding(x) • del: Holding(x) • add: Clear(x) ∧ OnTable(x) ∧ ArmEmpty
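
A compact Python sketch of this STRIPS-style representation, using sets of ground atoms for states (an illustrative encoding, not the planner used in the course):

from typing import NamedTuple, FrozenSet

class Action(NamedTuple):
    name: str
    pre: FrozenSet[str]     # facts that must hold before the action
    delete: FrozenSet[str]  # facts removed by the action
    add: FrozenSet[str]     # facts added by the action

def stack(x, y):
    return Action(f"Stack({x},{y})",
                  pre=frozenset({f"Clear({y})", f"Holding({x})"}),
                  delete=frozenset({f"Clear({y})", f"Holding({x})"}),
                  add=frozenset({"ArmEmpty", f"On({x},{y})"}))

def apply(state, a):
    assert a.pre <= state, f"precondition of {a.name} does not hold"
    return (state - a.delete) | a.add

state = {"Clear(B)", "Clear(C)", "Holding(A)", "OnTable(B)", "OnTable(C)"}
print(sorted(apply(state, stack("A", "B"))))
# ['ArmEmpty', 'Clear(C)', 'On(A,B)', 'OnTable(B)', 'OnTable(C)']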

  35. A Plan • What is a plan? A sequence (list) of actions, with variables replaced by constants. • (diagram: a plan is a path of actions a1, a17, …, a142 leading from the initial state I to the goal state G)

  36. BDI Architecture • (diagram) • percepts → belief revision: B = brf(B, p) → beliefs / knowledge • opportunity analyzer + deliberation process → desires: D = options(B, D, I) • filter → intentions: I = filter(B, D, I) • means-ends reasoner (drawing on a library of plans) → intentions structured in partial plans: π = plan(B, I) • executor → actions

  37. Practical Reasoning Agents (cont.) • agent control loop • while true: observe the world; update the internal world model; deliberate about what intention to achieve next; use means-ends reasoning to get a plan for the intention; execute the plan; end while • open questions: what are the options (desires)? how to choose an option (incl. filter)? chosen option → intention … when to reconsider intentions!?

  38. Implementing Practical Reasoning Agents • Let’s make the algorithm more formal:
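
The formal pseudocode shown on this slide is not in the transcript; below is a minimal Python rendering of the usual first version of the loop, with brf, options, filter and plan kept abstract (the function names follow the BDI architecture slide above; passing them in as parameters is an assumption made so the sketch is self-contained):

def practical_reasoning_agent(B, I, get_percept, brf, options, filter_, plan, execute):
    while True:
        p = get_percept()       # observe the world
        B = brf(B, p)           # belief revision
        D = options(B, I)       # deliberation, step 1: generate desires
        I = filter_(B, D, I)    # deliberation, step 2: commit to intentions
        pi = plan(B, I)         # means-ends reasoning
        execute(pi)             # execute the whole plan, then start over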

  39. Implementing Practical Reasoning Agents • this version: optimal behaviour if • deliberation and means-ends reasoning take a vanishingly small amount of time; or • the world is guaranteed to remain static while the agent is deliberating and performing means-ends reasoning; or • an intention that is optimal when achieved at time t0 (the time at which the world is observed) is guaranteed to remain optimal until time t2 (the time at which the agent has found a course of action to achieve the intention).

  40. Deliberation • The deliberate function can be decomposed into two distinct functional components: • option generation, in which the agent generates a set of possible alternatives; option generation is represented by a function, options, which takes the agent’s current beliefs and current intentions, and from them determines a set of options (= desires) • filtering, in which the agent chooses between competing alternatives and commits to achieving them; in order to select between competing options, the agent uses a filter function.

  41. Deliberation

  42. Practical Reasoning Agents (cont.) • If an option has successfully passed through the filter function and is chosen by the agent as an intention, we say that the agent has made a commitment to that option • Commitment implies temporal persistence of intentions; once an intention is adopted, it should not be immediately dropped. Question: how committed should an agent be to its intentions? • degrees of commitment • blind commitment • ≈ fanatical commitment: continue until achieved • single-minded commitment • continue until achieved or no longer possible • open-minded commitment • continue until no longer believed possible

  43. Commitment Strategies • An agent has commitment both • to ends (i.e., the state of affairs it wishes to bring about) • and to means (i.e., the mechanism via which the agent wishes to achieve that state of affairs) • the current version of the agent control loop is overcommitted, both to means and to ends → modification: replan if ever a plan goes wrong

  44. “Blind commitment” • (figure: the control loop modified for reactivity and replanning)
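
A hedged sketch of what this refinement looks like in code: the plan is now executed one action at a time, and the agent replans whenever the current plan is no longer sound with respect to its intentions and beliefs (sound() and execute_action() are assumed helpers in the style of the standard BDI control loop, not code from the slide):

def blind_commitment_agent(B, I, get_percept, brf, options, filter_, plan,
                           sound, execute_action):
    while True:
        B = brf(B, get_percept())
        D = options(B, I)
        I = filter_(B, D, I)
        pi = plan(B, I)
        while pi:                           # execute the plan step by step
            alpha, pi = pi[0], pi[1:]
            execute_action(alpha)
            B = brf(B, get_percept())       # reactivity: observe after every action
            if not sound(pi, I, B):         # replan if the plan has gone wrong
                pi = plan(B, I)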

  45. Commitment Strategies • this version is still overcommitted to intentions: • it never stops to consider whether or not its intentions are appropriate → modification: stop to determine whether intentions have succeeded or whether they are impossible: “Single-minded commitment”

  46. “Single-minded commitment” • (figure: the control loop with reactivity and replanning, now also dropping intentions that are impossible or have succeeded)
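
Sketch of the single-minded variant: compared with the previous sketch, only the inner loop changes, so that execution also stops once the intentions have succeeded or become impossible (succeeded() and impossible() are again assumed helpers):

def single_minded_agent(B, I, get_percept, brf, options, filter_, plan,
                        sound, succeeded, impossible, execute_action):
    while True:
        B = brf(B, get_percept())
        D = options(B, I)
        I = filter_(B, D, I)
        pi = plan(B, I)
        # stop executing once the intentions have succeeded or become impossible
        while pi and not succeeded(I, B) and not impossible(I, B):
            alpha, pi = pi[0], pi[1:]
            execute_action(alpha)
            B = brf(B, get_percept())
            if not sound(pi, I, B):
                pi = plan(B, I)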

  47. Intention Reconsideration • Our agent gets to reconsider its intentions when: • it has completely executed a plan to achieve its current intentions; or • it believes it has achieved its current intentions; or • it believes its current intentions are no longer possible. → This is limited in the way that it permits an agent to reconsider its intentions → modification: reconsider intentions after executing every action: “Open-minded commitment”

  48. Open-minded Commitment
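
Sketch of the open-minded variant: deliberation (options and filter) is now repeated after every action, so new intentions can be adopted mid-plan:

def open_minded_agent(B, I, get_percept, brf, options, filter_, plan,
                      sound, succeeded, impossible, execute_action):
    while True:
        B = brf(B, get_percept())
        D = options(B, I)
        I = filter_(B, D, I)
        pi = plan(B, I)
        while pi and not succeeded(I, B) and not impossible(I, B):
            alpha, pi = pi[0], pi[1:]
            execute_action(alpha)
            B = brf(B, get_percept())
            D = options(B, I)           # reconsider intentions after every action ...
            I = filter_(B, D, I)        # ... possibly committing to different ones
            if not sound(pi, I, B):
                pi = plan(B, I)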

  49. Intention Reconsideration • But intention reconsideration is costly! A dilemma: • an agent that does not stop to reconsider its intentions sufficiently often will continue attempting to achieve its intentions even after it is clear that they cannot be achieved, or that there is no longer any reason for achieving them • an agent that constantly reconsiders its intentions may spend insufficient time actually working to achieve them, and hence runs the risk of never actually achieving them • Solution: incorporate an explicit meta-level control component that decides whether or not to reconsider

  50. meta-level control
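
A minimal, purely illustrative sketch of such a meta-level control component. The point is that reconsider() must be much cheaper than deliberation itself; the concrete policy below (reconsider only when a belief relevant to the current intentions has changed) is an assumption for illustration, not the mechanism from the slide. Inside the open-minded loop above, the two deliberation lines would be guarded by "if reconsider(I, B):".

def make_reconsider(relevant_atoms):
    """Reconsider only when a belief relevant to the current intentions has changed."""
    last_seen = {}
    def reconsider(I, B):
        snapshot = {a: (a in B) for a in relevant_atoms}
        changed = snapshot != last_seen
        last_seen.clear()
        last_seen.update(snapshot)
        return changed
    return reconsider

reconsider = make_reconsider(["Dirt(0,0)", "Wall(1,1)"])
print(reconsider(set(), {"Dirt(0,0)"}))   # True  -- relevant beliefs changed (first call)
print(reconsider(set(), {"Dirt(0,0)"}))   # False -- nothing relevant has changed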
