
Introduction Agent Programming




  1. Introduction Agent Programming. Koen Hindriks, Delft University of Technology, The Netherlands. "Learning to program teaches you how to think. Computer science is a liberal art." (Steve Jobs)

  2. Outline • Previous Lecture, last lecture on Prolog: • “Input & Output” • Negation as failure • Search • Coming lectures: • Agents that use Prolog • This lecture: • Agent Introduction • “Hello World” example in the GOAL agent programming language

  3. Agents: Act in environments [Diagram: the agent receives percepts from the environment, chooses an action, and performs that action in the environment.]

  4. Agents: Act to achieve goals [Diagram: percepts from the environment arrive as events; the agent's goals drive the choice of actions performed in the environment.]

  5. Agents: Represent environment [Diagram: as before, but the agent now also maintains beliefs that represent the environment and plans for achieving its goals.]

  6. Agent Oriented Programming • Agents provide a very effective way of building applications for dynamic and complex environments. • Develop agents based on the Belief-Desire-Intention (BDI) agent metaphor, i.e. develop software components as if they have beliefs and goals, act to achieve these goals, and are able to interact with their environment and other agents.

  7. A Brief History of AOP • 1990: AGENT-0 (Shoham) • 1993: PLACA (Thomas; AGENT-0 extension with plans) • 1996: AgentSpeak(L) (Rao; inspired by PRS) • 1996: Golog (Reiter, Levesque, Lesperance) • 1997: 3APL (Hindriks et al.) • 1998: ConGolog (Giacomo, Levesque, Lesperance) • 2000: JACK (Busetta, Howden, Ronnquist, Hodgson) • 2000: GOAL (Hindriks et al.) • 2000: CLAIM (Amal El Fallah Seghrouchni) • 2002: Jason (Bordini, Hubner; implementation of AgentSpeak) • 2003: Jadex (Braubach, Pokahr, Lamersdorf) • 2008: 2APL (successor of 3APL) This overview is far from complete!

  8. A Brief History of AOP • AGENT-0 Speech acts • PLACA Plans • AgentSpeak(L) Events/Intentions • Golog Action theories, logical specification • 3APL Practical reasoning rules • JACK Capabilities, Java-based • GOAL Declarative goals • CLAIM Mobile agents (within agent community) • Jason AgentSpeak + Communication • Jadex JADE + BDI • 2APL Modules, PG-rules, …

  9. Outline • Some of the more actively developed APLs • 2APL (Utrecht, Netherlands) • Agent Factory (Dublin, Ireland) • Goal (Delft, Netherlands) • Jason (Porto Alegre, Brazil) • Jadex (Hamburg, Germany) • JACK (Melbourne, Australia) • JIAC (Berlin, Germany) • References

  10. 2APL – Features 2APL is a rule-based language for programming BDI agents: • actions: belief updates, send, adopt, drop, external actions • beliefs: represent the agent’s beliefs • goals: represent what the agent wants • plans: sequence, while, if-then • PG-rules: goal handling rules • PC-rules: event handling rules • PR-rules: plan repair rules

  11. 2APL – Code Snippet (annotations from the slide shown in parentheses)
  Beliefs: worker(w1), worker(w2), worker(w3)
  Goals: findGold() and haveGold()
  Plans = { send( w3, play(explorer) ); }
  Rules = {
    …
    G( findGold() ) <- B( -gold(_) && worker(A) && -assigned(_, A) ) |
      { send( A, play(explorer) ); ModOwnBel( assigned(_, A) ); },   (goal handling rule)
    E( receive( A, gold(POS) ) ) | B( worker(A) ) ->
      { ModOwnBel( gold(POS) ); },                                   (event handling rule)
    E( receive( A, done(POS) ) ) | B( worker(A) ) ->                 (explicit operator E for events)
      { ModOwnBel( -assigned(POS, A), -gold(POS) ); },
    …
  }
  (Modules can be used to combine and structure rules.)

  12. JACK – Features The JACK agent language is built on top of and extends Java, and provides the following features: • agents: used to define the overall behavior of a multi-agent system • beliefset: represents an agent’s beliefs • view: allows queries over belief sets • capability: reusable functional component made up of plans, events, belief sets and other capabilities • plan: instructions the agent follows to try to achieve its goals and handle events • event: occurrence to which the agent should respond

  13. JACK – Agent Template
  agent AgentType extends Agent {
    // Knowledge bases used by the agent are declared here.
    #private data BeliefType belief_name(arg_list);
    // Events handled, posted and sent by the agent are declared here.
    #handles event EventType;
    #posts event EventType reference;   // used to create internal events
    #sends event EventType reference;   // used to send messages to other agents
    // Plans used by the agent are declared here. Order is important.
    #uses plan PlanType;
    // Capabilities that the agent has are declared here.
    #has capability CapabilityType reference;
    // other Data Member and Method definitions
  }

  14. Jason – Features • beliefs: weak and strong negation, to support both the closed-world and the open-world assumption • belief annotations: label the information source, e.g. self, percept • events: internal, messages, percepts • a library of “internal actions”, e.g. send • user-defined internal actions: programmed in Java • automatic handling of plan failures • annotations on plan labels: used to select a plan • speech-act based inter-agent communication • Java-based customization: (plan) selection functions, trust functions, perception, belief revision, agent communication

  15. Jason – Plans [Diagram: an annotated Jason plan showing its three parts: a triggering event, a context (test on beliefs), and a plan body.]

  16. Summary Key language elements of APLs: • beliefs and goals to represent environment • events received from environment (& internal) • actions to update beliefs, adopt goals, send messages, act in environment • plans, capabilities & modules to structure action • rules to select actions/plans/modules/capabilities • support for multi-agent systems

  17. How are these APLs related? A comparison from a high-level, conceptual point of view, not taking into account practical aspects (IDE, available docs, speed, applications, etc). • Multi-Agent Systems: all of these languages (except AGENT-0, PLACA, JACK) have versions implemented “on top of” JADE. • Family of languages sharing the basic concepts (beliefs, action, plans, goals-to-do): AGENT-0¹ (PLACA), AgentSpeak(L)¹, Jason². • Prolog-based: Golog ≈ 3APL³. • Main addition of declarative goals: Goal; 2APL ≈ 3APL + Goal. • Java-based BDI languages: Agent Factory, JACK (commercial), Jadex, JIAC. • Mobile agents: CLAIM, AgentScape. ¹ mainly interesting from a historical point of view ² from a conceptual point of view, we identify AgentSpeak(L) and Jason ³ without practical reasoning rules

  18. References Websites • 2APL: http://www.cs.uu.nl/2apl/ • Agent Factory: http://www.agentfactory.com • Goal: http://mmi.tudelft.nl/trac/goal • JACK: http://www.agent-software.com.au/products/jack/ • Jadex: http://jadex.informatik.uni-hamburg.de/ • Jason: http://jason.sourceforge.net/ • JIAC: http://www.jiac.de/ Books • Bordini, R.H.; Dastani, M.; Dix, J.; El Fallah Seghrouchni, A. (Eds.), 2005, Multi-Agent Programming: Languages, Platforms and Applications. Presents 3APL, CLAIM, Jadex, Jason. • Bordini, R.H.; Dastani, M.; Dix, J.; El Fallah Seghrouchni, A. (Eds.), 2009, Multi-Agent Programming: Languages, Tools and Applications. Presents, among others: Brahms, CArtAgO, Goal, JIAC Agent Platform.

  19. The Goal Agent Programming Language

  20. The Hello World example of Agent Programming The Blocks World

  21. The Blocks World A classic AI planning problem. Objective: Move blocks in the initial state such that the result is the goal state. • Positioning of blocks on the table is not relevant. • A block can be moved only if there is no other block on top of it.

  22. Representing the Blocks World Prolog is the knowledge representation language used in Goal. Basic predicate: • on(X,Y). Defined predicates: • tower([X]) :- on(X,table). tower([X,Y|T]) :- on(X,Y), tower([Y|T]). • clear(X) :- block(X), not(on(Y,X)). • clear(table). • block(X) :- on(X, _).

  23. Representing the Initial State Using the on(X,Y) predicate we can represent the initial state. beliefs{ on(a,b), on(b,c), on(c,table), on(d,e), on(e,table), on(f,g), on(g,table). } Initial belief base of agent

  24. Representing the Blocks World • What about the rules we defined before? • Add clauses that do not change into the knowledge base. • tower([X]) :- on(X,table). • tower([X,Y|T]) :- on(X,Y),tower([Y|T]). • clear(X) :- block(X), not(on(Y,X)). clear(table). • block(X) :- on(X, _). knowledge{ block(X) :- on(X, _). clear(X) :- block(X), not(on(Y,X)). clear(table). tower([X]) :- on(X,table). tower([X,Y|T]) :- on(X,Y), tower([Y|T]). } Static knowledge base of agent
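The clauses in the knowledge base are just recursive definitions over the on/2 facts in the belief base. A rough Python sketch of the same definitions (illustrative only; GOAL itself uses Prolog, and these names are not part of GOAL):

```python
# Illustrative Python rendering of the Prolog clauses above; the on/2
# relation is modelled as a set of (block, support) pairs, using the
# initial belief base from the slides.
on = {("a", "b"), ("b", "c"), ("c", "table"),
      ("d", "e"), ("e", "table"), ("f", "g"), ("g", "table")}

def block(x):
    # block(X) :- on(X, _).
    return any(top == x for (top, _) in on)

def clear(x):
    # clear(table).  clear(X) :- block(X), not(on(Y,X)).
    if x == "table":
        return True
    return block(x) and not any(below == x for (_, below) in on)

def tower(blocks):
    # tower([X]) :- on(X,table).
    # tower([X,Y|T]) :- on(X,Y), tower([Y|T]).
    if len(blocks) == 1:
        return (blocks[0], "table") in on
    return (blocks[0], blocks[1]) in on and tower(blocks[1:])
```

For instance, clear("a") holds (nothing is on a) while clear("b") fails, and tower(["a", "b", "c"]) holds in this initial state.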

  25. Why a Separate Knowledge Base? • Concepts defined in the knowledge base can be used in combination with both the belief and goal base. • Example • Since agent believes on(e,table), on(d,e), then infer: agent believes tower([d,e]). • If agent wants on(a,table), on(b,a), then infer: agent wants tower([b,a]). • Knowledge base introduced to avoid duplicating clauses in belief and goal base.

  26. Representing the Goal State Using the on(X,Y) predicate we can represent the goal state. goals{ on(a,e), on(b,table), on(c,table), on(d,c), on(e,b), on(f,d), on(g,table). } Initial goal base of agent

  27. One or Many Goals In the goal base, using the comma- or period-separator makes a difference! goals{ on(a,table). on(b,a). on(c,b). } goals{ on(a,table), on(b,a), on(c,b). } • Left goal base has three goals, right goal base has a single goal. • Moving c on top of b (3rd goal), then c to the table, a to the table (1st goal), and b on top of a (2nd goal) achieves all three goals but not the single goal of the right goal base. • The reason is that the goal base on the left does not require block c to be on b, b to be on a, and a to be on the table at the same time.

  28. Mental State of Goal Agent The knowledge, belief, and goal sections together constitute the specification of the Mental State of a Goal Agent. knowledge{ block(X) :- on(X, _). clear(X) :- block(X), not(on(Y,X)). clear(table). tower([X]) :- on(X,table). tower([X,Y|T]) :- on(X,Y), tower([Y|T]). } beliefs{ on(a,b), on(b,c), on(c,table), on(d,e), on(e,table), on(f,g), on(g,table). } goals{ on(a,e), on(b,table), on(c,table), on(d,c), on(e,b), on(f,d), on(g,table). } Initial mental state of agent

  29. Inspecting the Belief & Goal base • Operator bel(φ) to inspect the belief base. • Operator goal(φ) to inspect the goal base. • Where φ is a Prolog conjunction of literals. • Examples: • bel(clear(a), not(on(a,c))). • goal(tower([a,b])).

  30. Inspecting the Belief Base • bel(φ) succeeds if φ follows from the belief base in combination with the knowledge base. • Example: • bel(clear(a), not(on(a,c))) succeeds • Condition φ is evaluated as a Prolog query. knowledge{ block(X) :- on(X, _). clear(X) :- block(X), not(on(Y,X)). clear(table). tower([X]) :- on(X,table). tower([X,Y|T]) :- on(X,Y), tower([Y|T]). } beliefs{ on(a,b), on(b,c), on(c,table), on(d,e), on(e,table), on(f,g), on(g,table). }

  31. Inspecting the Belief Base EXERCISE: Which of the following succeed? • bel(on(b,c), not(on(a,c))). • bel(on(X,table), on(Y,X), not(clear(Y))). • bel(tower([X,b,d])). [X=c;Y=b] knowledge{ block(X) :- on(X, _). clear(X) :- block(X), not(on(Y,X)). clear(table). tower([X]) :- on(X,table). tower([X,Y|T]) :- on(X,Y), tower([Y|T]). } beliefs{ on(a,b), on(b,c), on(c,table), on(d,e), on(e,table), on(f,g), on(g,table). }

  32. Inspecting the Goal Base Use the goal(…) operator to inspect the goal base. • goal(φ) succeeds if φ follows from one of the goals in the goal base in combination with the knowledge base. • Example: • goal(clear(a)) succeeds • but not goal(clear(a), clear(c)). knowledge{ block(X) :- on(X, _). clear(X) :- block(X), not(on(Y,X)). clear(table). tower([X]) :- on(X,table). tower([X,Y|T]) :- on(X,Y), tower([Y|T]). } goals{ on(a,e), on(b,table), on(c,table), on(d,c), on(e,b), on(f,d), on(g,table). }

  33. Inspecting the Goal Base EXERCISE: Which of the following succeed? • goal(on(b,table), not(on(d,c))). • goal(on(X,table), on(Y,X), clear(Y)). • goal(tower([d,X])). knowledge{ block(X) :- on(X, _). clear(X) :- block(X), not(on(Y,X)). clear(table). tower([X]) :- on(X,table). tower([X,Y|T]) :- on(X,Y), tower([Y|T]). } goals{ on(a,e), on(b,table), on(c,table), on(d,c), on(e,b), on(f,d), on(g,table). }

  34. Negation and Beliefs EXERCISE: not(bel(on(a,c))) = bel(not(on(a,c)))? • Answer: Yes. • Because Prolog implements negation as failure. • If φ cannot be derived, then not(φ) can be derived. • We always have: not(bel(φ)) = bel(not(φ)) knowledge{ block(X) :- on(X, _). clear(X) :- block(X), not(on(Y,X)). clear(table). tower([X]) :- on(X,table). tower([X,Y|T]) :- on(X,Y), tower([Y|T]). } beliefs{ on(a,b), on(b,c), on(c,table), on(d,e), on(e,table), on(f,g), on(g,table). }
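Because negation as failure gives the belief base a closed-world reading, the two forms coincide. A minimal sketch (Python, illustrative; a fact is "believed" exactly when it is in the belief set):

```python
# Closed-world reading of the belief base: a fact is believed iff it is
# derivable; not(phi) is derived exactly when phi is not derivable.
beliefs = {("a", "b"), ("b", "c"), ("c", "table")}

def bel(fact):
    # bel(phi): phi follows from the belief base
    return fact in beliefs

def bel_not(fact):
    # bel(not(phi)): by negation as failure, phi does not follow
    return fact not in beliefs

# not(bel(phi)) and bel(not(phi)) coincide for every query:
queries = [("a", "b"), ("a", "c"), ("b", "table")]
results = [(not bel(q)) == bel_not(q) for q in queries]
```

For the goal base this equivalence fails, as the next slide shows: distinct goals may contain contradictory facts, so goal(φ) and goal(not(φ)) can both succeed.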

  35. Negation and Goals EXERCISE: not(goal(φ)) = goal(not(φ))? • Answer: No. • We have, for example: goal(on(a,b)) and goal(not(on(a,b))). knowledge{ block(X) :- on(X, _). clear(X) :- block(X), not(on(Y,X)). clear(table). tower([X]) :- on(X,table). tower([X,Y|T]) :- on(X,Y), tower([Y|T]). } goals{ on(a,b), on(b,table). on(a,c), on(c,table). }

  36. Combining Beliefs and Goals It is useful to combine the bel(…) and goal(…) operators. • Achievement goals: • a-goal(φ) = goal(φ), not(bel(φ)) • Agent only has an achievement goal if it does not believe the goal has been reached already. • Goal achieved: • goal-a(φ) = goal(φ), bel(φ) • A (sub)goal φ has been achieved if the agent believes φ holds.
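The two combined operators can be sketched directly from their definitions (Python, illustrative; the facts below are a toy example, not from the slides):

```python
# a-goal and goal-a as simple combinations of the basic operators.
beliefs = {("c", "table")}                  # facts the agent believes
goals   = {("a", "table"), ("c", "table")}  # facts the agent wants

def bel(fact):
    return fact in beliefs

def goal(fact):
    return fact in goals

def a_goal(fact):
    # a-goal(phi) = goal(phi), not(bel(phi)): still to be achieved
    return goal(fact) and not bel(fact)

def goal_a(fact):
    # goal-a(phi) = goal(phi), bel(phi): (sub)goal already achieved
    return goal(fact) and bel(fact)
```

Here on(a,table) is an achievement goal, while on(c,table) is a goal that has already been achieved.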

  37. Expressing BW Concepts in Mental States It is possible to express key Blocks World concepts by means of the basic operators. EXERCISE: • Define: block X is misplaced. • Solution: goal(tower([X|T])), not(bel(tower([X|T]))). • But this means that saying that a block is misplaced is saying that you have an achievement goal: a-goal(tower([X|T])).

  38. Changing Blocks World Configurations: Action Specifications

  39. Actions Change the Environment… move(a,d)

  40. and Require Updating Mental States. • To ensure adequate beliefs after performing an action, the belief base needs to be updated (and possibly the goal base). • Add effects to belief base: insert on(a,d) after move(a,d). • Delete old beliefs: delete on(a,b) after move(a,d).

  41. and Require Updating Mental States. • If a goal has been (believed to be) completely achieved, the goal is removed from the goal base. • It is not rational to have a goal you believe to be achieved. • Default update implements a blind commitment strategy. Example: move(a,b). Before: beliefs{ on(a,table), on(b,table). } goals{ on(a,b), on(b,table). } After: beliefs{ on(a,b), on(b,table). } goals{ }
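The blind commitment strategy can be sketched as follows (Python, illustrative; goals are modelled as sets of facts, dropped only once every fact in them is believed):

```python
# Blind commitment: a goal stays in the goal base until it is believed
# to be completely achieved, and only then is it removed.
def update_goal_base(goal_base, beliefs):
    return [g for g in goal_base if not g.issubset(beliefs)]

# The move(a,b) example from the slide: the single goal on(a,b), on(b,table).
goal_base = [{("a", "b"), ("b", "table")}]
before = update_goal_base(goal_base, {("a", "table"), ("b", "table")})
after  = update_goal_base(goal_base, {("a", "b"), ("b", "table")})
```

Before the move the goal is kept (on(a,b) is not yet believed); after the move every fact in the goal is believed, so the goal base becomes empty.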

  42. Action Specifications • Actions in GOAL have preconditions and postconditions. • Executing an action in GOAL means: • Preconditions are conditions that need to be true: • Check preconditions on the belief base. • Postconditions (effects) are add/delete lists (STRIPS): • Add positive literals in the postcondition • Delete negative literals in the postcondition • STRIPS-style specification move(X,Y){ pre { clear(X), clear(Y), on(X,Z), not( on(X,Y) ) } post { not(on(X,Z)), on(X,Y) } }

  43. Action Specifications move(X,Y){ pre { clear(X), clear(Y), on(X,Z), not( on(X,Y) ) } post { not(on(X,Z)), on(X,Y) } } Example: move(a,b) • Check: clear(a), clear(b), on(a,Z), not( on(a,b) ) • Remove: on(a,Z) • Add: on(a,b) Note: first remove, then add.
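The check/remove/add cycle above can be sketched in Python (illustrative only; the set-of-pairs belief base and this move function are a sketch of STRIPS-style execution, not GOAL's actual implementation):

```python
# STRIPS-style execution of move(X,Y) against a belief base of on/2 facts.
def clear(b, beliefs):
    # clear(table); a block is clear if nothing is on it
    return b == "table" or not any(below == b for (_, below) in beliefs)

def move(beliefs, x, y):
    # find Z with on(X,Z) in the belief base, if any
    z = next((below for (top, below) in beliefs if top == x), None)
    # pre: clear(X), clear(Y), on(X,Z), not(on(X,Y))
    if not (clear(x, beliefs) and clear(y, beliefs)
            and z is not None and (x, y) not in beliefs):
        return None                      # precondition fails: action not enabled
    updated = set(beliefs)
    updated.discard((x, z))              # first delete on(X,Z) ...
    updated.add((x, y))                  # ... then add on(X,Y)
    return updated
```

With beliefs {on(a,table), on(b,table)}, move(a,b) yields {on(a,b), on(b,table)}, matching the update on slide 44; when a block is not clear, the precondition check fails and no update happens.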

  44. Action Specifications move(X,Y){ pre { clear(X), clear(Y), on(X,Z) } post { not(on(X,Z)), on(X,Y) } } Example: move(a,b) Before: beliefs{ on(a,table), on(b,table). } After: beliefs{ on(b,table), on(a,b). }

  45. Action Specifications EXERCISE: move(X,Y){ pre { clear(X), clear(Y), on(X,Z) } post { not(on(X,Z)), on(X,Y) } } • Is it possible to perform move(a,b)? • Is it possible to perform move(a,d)? knowledge{ block(a), block(b), block(c), block(d), block(e), block(f), block(g), block(h), block(i). clear(X) :- block(X), not(on(Y,X)). clear(table). tower([X]) :- on(X,table). tower([X,Y|T]) :- on(X,Y), tower([Y|T]). } beliefs{ on(a,b), on(b,c), on(c,table), on(d,e), on(e,table), on(f,g), on(g,table). } No: a is already on b, so clear(b) fails. Yes.

  46. Selecting actions to perform Action Rules

  47. Agent-Oriented Programming • How do humans choose and/or explain actions? • Examples: • I believe it rains; so, I will take an umbrella with me. • I go to the video store because I want to rent I-robot. • I don’t believe busses run today so I take the train. • Use intuitive common sense concepts: beliefs + goals => action See Chapter 1 of the Programming Guide

  48. Selecting Actions: Action Rules • Action rules are used to define a strategy for action selection. • Defining a strategy for the blocks world: • If a constructive move can be made, make it. • If a block is misplaced, move it to the table. • What happens: • Check condition, e.g. can a-goal(tower([X|T])) be derived given the current mental state of the agent? • Yes, then (potentially) select move(X,table). program{ if bel(tower([Y|T])), a-goal(tower([X,Y|T])) then move(X,Y). if a-goal(tower([X|T])) then move(X,table). }

  49. Order of Action Rules • Action rules are executed by default in linear order. • The first rule that fires is executed. • Default order can be changed to random. • Arbitrary rule that is able to fire may be selected. program{ if bel(tower([Y|T])), a-goal(tower([X,Y|T])) then move(X,Y). if a-goal(tower([X|T])) then move(X,table). } program[order=random]{ if bel(tower([Y|T])), a-goal(tower([X,Y|T])) then move(X,Y). if a-goal(tower([X|T])) then move(X,table). }
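The two evaluation orders can be sketched as follows (Python, illustrative; rules are (condition, action) pairs over a toy mental state, not GOAL syntax):

```python
import random

# Linear order fires the first rule whose condition holds on the mental
# state; random order picks an arbitrary applicable rule instead.
def select_action(rules, state, order="linear"):
    applicable = [action for (condition, action) in rules if condition(state)]
    if not applicable:
        return None
    return applicable[0] if order == "linear" else random.choice(applicable)

# Toy mental state: no constructive move is possible, block "a" is misplaced.
state = {"constructive": None, "misplaced": "a"}
rules = [
    (lambda s: s["constructive"] is not None, ("move", "constructive")),
    (lambda s: s["misplaced"] is not None, ("move", "to-table")),
]
```

Here only the second rule is applicable, so both orders select the move to the table; when several rules are applicable, the linear order always commits to the first while the random order may pick any of them.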

  50. Example Program: Action Rules An agent program may allow for multiple action choices. program[order=random]{ if bel(tower([Y|T])), a-goal(tower([X,Y|T])) then move(X,Y). if a-goal(tower([X|T])) then move(X,table). } [Diagram: random, arbitrary choice between a constructive move of d and moving a block to the table.]
