
Concrete architectures (Section 1.4) Part II: Shabbir Ssyed

This text describes four classes of agents and various concrete architectures, including logic-based agents, reactive agents, and belief-desire-intention agents.



Presentation Transcript


  1. Concrete architectures (Section 1.4) Part II: Shabbir Ssyed We will describe four classes of agents: • Logic based agents • Reactive agents • Belief-desire-intention agents • Layered architectures

  2. Reactive architectures (Section 1.4) Subsumption architecture: Rodney Brooks • Task-accomplishing behaviours: situation → action rules. • Many behaviours can fire simultaneously. • Subsumption hierarchy: lower layers have higher priority than higher layers.

  3. Background • Emergent behavior • Ant colony • Artificial life • Intelligence without reason • Intelligence without representation

  4. Simple algorithm • function action(p : P) : A • var fired : ℘(R) • var selected : A • begin • fired := {(c, a) | (c, a) ∈ R and p ∈ c} • for each (c, a) ∈ fired do • if ¬(∃(c′, a′) ∈ fired such that (c′, a′) < (c, a)) then • return a • end-if • end-for • return null • end function action
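As a rough illustration, here is a minimal Python sketch of the action-selection function above. It is not the original pseudocode: conditions are modelled here as boolean predicates over the percept (rather than as sets of percepts), and the rule list is assumed to be written in priority order (earlier in the list = lower layer = higher priority).

```python
# Minimal sketch of subsumption-style action selection (assumptions noted above).
from typing import Callable, List, Optional, Tuple

Percept = dict                                   # e.g. {"obstacle": True}
Rule = Tuple[Callable[[Percept], bool], str]     # (condition, action)

def action(p: Percept, rules: List[Rule]) -> Optional[str]:
    """Return the action of the highest-priority rule that fires, else None."""
    fired = [r for r in rules if r[0](p)]        # all rules whose condition holds for p
    for rule in fired:
        # select `rule` only if no other fired rule precedes it in the priority order
        if not any(rules.index(other) < rules.index(rule)
                   for other in fired if other is not rule):
            return rule[1]
    return None                                  # no rule fired
```

Because `rules` is already sorted by priority, this is equivalent to returning the first element of `fired`; the inner check simply mirrors the pseudocode's explicit ∃-test.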

  5. Robot scenario • If detect an obstacle then change direction (1.6) • If carrying samples and at the base then drop samples (1.7) • If carrying samples and not at base then travel up gradient (1.8) • If detect a sample then pick sample up (1.9) • If true then move randomly (1.10) (1.6) < (1.7) < (1.8) < (1.9) < (1.10)
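As a hedged illustration, rules (1.6)-(1.10) could be encoded as below; the percept field names are invented for this sketch, and because the list is written in priority order, selection reduces to taking the first rule whose condition holds.

```python
# Hypothetical encoding of the Mars-explorer rules (1.6)-(1.10); the percept
# field names are assumptions, not from the slides.
mars_rules = [
    (lambda p: p["obstacle"],                      "change direction"),     # (1.6)
    (lambda p: p["carrying"] and p["at_base"],     "drop samples"),         # (1.7)
    (lambda p: p["carrying"] and not p["at_base"], "travel up gradient"),   # (1.8)
    (lambda p: p["sample_detected"],               "pick up sample"),       # (1.9)
    (lambda p: True,                               "move randomly"),        # (1.10)
]

percept = {"obstacle": False, "carrying": True, "at_base": False, "sample_detected": False}
chosen = next(a for cond, a in mars_rules if cond(percept))   # first rule that fires
print(chosen)   # -> travel up gradient
```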

  6. Modified sequence • If carrying samples and at the base then drop samples (1.11) • If carrying samples and not at the base then drop 2 crumbs and travel up gradient (1.12) • If sense crumbs then pick up 1 crumb and travel down gradient (1.13) (1.6) < (1.11) < (1.12) < (1.9) < (1.13) < (1.10)

  7. Advantages & disadvantages Advantages: simplicity, economy, computational tractability, robustness against failure. Disadvantages: • It is unclear how decision making based on non-local information can be achieved. • It is unclear how purely reactive agents can be designed that learn from experience. • The relationships between individual behaviours, the environment, and overall behaviour are not well understood. • It is much harder to build agents that contain many layers.

  8. Concrete architectures (Section 1.4) We will describe four classes of agents: • Logic based agents • Reactive agents • Belief-desire-intention agents • Layered architectures

  9. Belief-Desire-Intention architecture • Deliberation: deciding what goals we want to achieve. • Means-ends reasoning/analysis: deciding how we are going to achieve these goals. if (conditions) then {statements} else {statements};

  10. Roles of Intentions • Intentions drive means-ends reasoning • Intentions constrain future deliberation. • Intentions persist. • Intentions influence beliefs upon which future practical reasoning is based.

  11. Tradeoff between degree of commitment and reconsideration Rate of change of the world: γ. • If γ is low, bold agents outperform cautious agents. • If γ is high, cautious agents outperform bold agents. Different environments require different types of decision strategies.

  12. BDI Architecture

  13. Functions • options : ℘(Bel) × ℘(Int) → ℘(Des) • filter : ℘(Bel) × ℘(Int) × ℘(Des) → ℘(Int) • execute : ℘(Int) → A • action : P → A Current intentions are either previously held intentions or newly adopted options, i.e. filter(B, I, D) ⊆ I ∪ D.
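A schematic control loop built from these signatures might look like the sketch below. This is only one possible reading of the cycle (sense, revise beliefs, deliberate, filter, act); the `brf` belief-revision helper, the `perceive` and `act` callbacks, and all concrete types are assumptions for illustration, not part of the slide.

```python
from typing import Any, Callable, Set

Belief = Any; Desire = Any; Intention = Any; Percept = Any; Action = Any

def bdi_loop(beliefs: Set[Belief],
             intentions: Set[Intention],
             perceive: Callable[[], Percept],
             brf: Callable[[Set[Belief], Percept], Set[Belief]],
             options: Callable[[Set[Belief], Set[Intention]], Set[Desire]],
             filter_: Callable[[Set[Belief], Set[Intention], Set[Desire]], Set[Intention]],
             execute: Callable[[Set[Intention]], Action],
             act: Callable[[Action], None]) -> None:
    """Sketch of a BDI cycle: sense, revise beliefs, deliberate, commit, act."""
    while True:
        p = perceive()                                       # next percept p : P
        beliefs = brf(beliefs, p)                            # belief revision (assumed helper)
        desires = options(beliefs, intentions)               # deliberation: generate options
        intentions = filter_(beliefs, intentions, desires)   # commit: filter(B, I, D) ⊆ I ∪ D
        act(execute(intentions))                             # means-ends reasoning -> action
```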

  14. Concrete architectures (Section 1.4) We will describe four classes of agents: • Logic based agents • Reactive agents • Belief-desire-intention agents • Layered architectures

  15. Layered architectures • Horizontal layering • Vertical layering: one-pass control or two-pass control • Examples: TouringMachines (horizontal architecture), InteRRaP (vertically layered, two-pass architecture)
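The difference in information flow between the two schemes can be caricatured as follows; the layer types and the trivial mediation rule are assumptions made for illustration, not part of TouringMachines or InteRRaP.

```python
from typing import Callable, List, Optional

Percept = dict
HLayer = Callable[[Percept], Optional[str]]                  # percept -> proposed action (or None)
VLayer = Callable[[Percept, Optional[str]], Optional[str]]   # (percept, decision from layer below) -> decision

def horizontal(layers: List[HLayer], percept: Percept) -> Optional[str]:
    """Every layer sees the percept and may propose an action; a mediator picks one."""
    proposals = [layer(percept) for layer in layers]
    return next((a for a in proposals if a is not None), None)   # toy mediation rule

def vertical_one_pass(layers: List[VLayer], percept: Percept) -> Optional[str]:
    """Control flows once through the layers, bottom to top; the top layer's decision is acted on."""
    decision = None
    for layer in layers:                 # layers listed bottom-first
        decision = layer(percept, decision)
    return decision
```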

  16. TouringMachines (Innes Ferguson)

  17. Layers Reactive layer: • Provides a more or less immediate response to changes that occur in the environment. • Implemented as a set of situation-action rules, like the subsumption architecture. • These rules map sensor input directly to effector output. • The rules can only refer to the agent's current state; they cannot do explicit reasoning about the world. Planning layer: • Does not generate plans from scratch; it employs a library of plans called skeletons. Modelling layer: • Represents the various entities in the world. • Predicts conflicts between agents and generates new goals to resolve the conflicts.

  18. InteRRaP (Joerg Mueller)

  19. Properties of layers • Situation recognition and goal activation: maps the layer's knowledge base (KB) and current goals to a new set of goals. • Planning and scheduling: selects which plans to execute, based on the current plans, goals, and KB of that layer. • Bottom-up activation. • Top-down execution.

  20. Differences between TouringMachines & InteRRaP • InteRRaP has a knowledge base associated with each layer; TouringMachines does not. • In TouringMachines each layer is directly coupled to sensor input and effector output, so a control subsystem is necessary to mediate between them; in InteRRaP the layers interact with each other.

  21. Layered vs. unlayered architectures • Layered architectures lack the conceptual and semantic clarity of unlayered architectures (e.g., logic-based ones). • Nevertheless, layering remains the most popular approach, because it represents a natural decomposition of functionality.

  22. Agent Programming Languages (Section 1.5) • Agent0: agent-oriented programming [Yoav Shoham, 1990] • Concurrent METATEM: executable temporal logic formulae [Michael Fisher, 1994]

  23. Agent0: language components • a set of initial capabilities • a set of initial beliefs • a set of initial commitments (intentions) • a set of commitment rules

  24. Agent0: commitment rules A commitment rule has: • a message condition • a mental condition • an action The rule fires when: • the message condition matches against the messages the agent has received, and • the mental condition matches against the agent's beliefs. Actions can be private or communicative.
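A rough sketch of how such a rule could be represented is below. Agent0 itself is a LISP-based language, so this Python rendering and its field names are purely illustrative.

```python
from dataclasses import dataclass
from typing import Callable, List, Set

@dataclass
class Message:
    sender: str
    kind: str          # e.g. "REQUEST" or "INFORM"
    content: str

@dataclass
class CommitmentRule:
    msg_condition: Callable[[Message], bool]       # matched against received messages
    mental_condition: Callable[[Set[str]], bool]   # matched against current beliefs
    action: str                                    # private or communicative action

def fired_actions(rules: List[CommitmentRule],
                  inbox: List[Message],
                  beliefs: Set[str]) -> List[str]:
    """Return the actions of every rule whose message and mental conditions both hold."""
    return [r.action for r in rules
            if any(r.msg_condition(m) for m in inbox) and r.mental_condition(beliefs)]
```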

  25. Flow control in Agent0

  26. Concurrent METATEM • Each agent is programmed by giving it a temporal logic specification. • The agent's specification is executed directly to generate its behaviour. • Rules have the form Pi ⇒ Fi ("past implies future"). Each rule's past-time antecedent is continuously matched against an internally recorded history; if it matches, the rule fires. • When a rule fires, the future-time part of the rule is added to the agent's commitments, which it subsequently tries to satisfy. • Example: agent X asks the resource controller (RC) for a resource, and RC gives X the resource, after mutual exclusion is enforced.
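The "past implies future" execution idea can be caricatured as follows; this is not Concurrent METATEM syntax, and the string-based rules and single-step history check are simplifications made for illustration only.

```python
from typing import List, Set, Tuple

Rule = Tuple[str, str]   # (past-time antecedent, future-time consequent)

def step(history: List[Set[str]], rules: List[Rule], commitments: Set[str]) -> Set[str]:
    """One execution cycle: a rule whose antecedent held at the previous moment
    adds its future-time consequent to the outstanding commitments."""
    previous = history[-1] if history else set()
    for antecedent, consequent in rules:
        if antecedent in previous:            # antecedent matched against recorded history
            commitments.add(consequent)       # commit to eventually making consequent true
    return commitments

# Example mirroring the slide: once X has asked RC for the resource,
# RC becomes committed to giving X the resource.
rules = [("ask(X)", "give(X)")]
history = [{"ask(X)"}]
print(step(history, rules, set()))   # -> {'give(X)'}
```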

  27. Conclusions Goals of the introduction: • What is an agent? • Why agents are an important area for building flexible, autonomous systems. Goals of research activity: • The theory, design, construction, and implementation of intelligent agents.

  28. THANK YOU for your attention. Let's start the discussion!
