
  1. CMSC 691M Agent Architectures & Multi-Agent Systems Spring 2002 – February 26 Class #9 – Formal Methods for MAS Prof. Marie desJardins

  2. Today’s overview • Reading: Weiss Chap. 8, “Formal Methods in DAI” (Munindar P. Singh, Anand S. Rao, and Michael P. Georgeff) • Why use formal methods? • Classes of logics • Beliefs, desires, and intentions • Implementing BDI models • Coordinating BDI agents • Communicating BDI agents • Societies of BDI agents

  3. Why use formal methods? • Specify properties of agents declaratively • Provide reasoning mechanisms for agents • Benefits: specify complex behavior at an abstract level; validate agent behaviors • Disadvantages: intractable in the general case; limiting precisely because of the formalism and abstract representation

  4. What’s to be modeled? • To design and implement intelligent agents, we may need to be able to reason about the truth of propositions and relations between objects in the world, to reason about what may or must be true, to reason about how the agent’s actions affect the state of the world, and to reason about how other agents and external agents change the world over time • Logics covered: propositional logic, first-order logic, modal logic, dynamic logic, temporal logic

  5. Classes of logics

  6. Propositional logic • L = the set of true atomic propositions • P is entailed iff P ∈ L • P ∧ Q is entailed iff P and Q are entailed • ¬P is entailed iff P is not entailed
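
  A minimal Python sketch of these entailment conditions, assuming L is a set of true atomic propositions and formulas are nested tuples; the representation is illustrative, not from the reading.

    def entails(L, formula):
        """Return True iff the set of true atoms L entails the formula."""
        if isinstance(formula, str):                  # atomic proposition P
            return formula in L                       # entailed iff P is in L
        op = formula[0]
        if op == "and":                               # P and Q must both be entailed
            return entails(L, formula[1]) and entails(L, formula[2])
        if op == "not":                               # not-P entailed iff P is not
            return not entails(L, formula[1])
        raise ValueError("unknown connective: " + op)

    L = {"p", "q"}
    print(entails(L, ("and", "p", "q")))   # True
    print(entails(L, ("not", "r")))        # True, since r is not in L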

  7. Predicate (first-order) logic • ∀x (Q(x)) is entailed iff Q(l) holds for every object l • ∃x (Q(x)) is entailed iff Q(l) holds for some object l
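
  Over a finite domain, the quantifier semantics can be evaluated directly; a small sketch, where the names domain and Q are illustrative.

    def forall(domain, Q):
        """For all x, Q(x): Q(l) holds for every object l in the domain."""
        return all(Q(l) for l in domain)

    def exists(domain, Q):
        """There exists x, Q(x): Q(l) holds for some object l in the domain."""
        return any(Q(l) for l in domain)

    domain = [1, 2, 3, 4]
    print(forall(domain, lambda x: x > 0))   # True
    print(exists(domain, lambda x: x > 3))   # True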

  8. Modal logic • Possible worlds semantics • Accessibility relation R(W1, W2) • Possibility: ◊P is entailed in world w iff P is true in some possible world (∃w’: R(w,w’) ∧ P is entailed in w’) • Necessity: □P is entailed in w iff P is true in every possible world (∀w’: R(w,w’) → P is entailed in w’)
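
  The possibility and necessity conditions translate almost directly into code; a sketch assuming worlds are strings, R is a set of (w, w’) pairs, and truth maps each world to the propositions true there (all names illustrative).

    def accessible(R, w):
        return [w2 for (w1, w2) in R if w1 == w]

    def possibly(R, truth, w, p):
        """Diamond-p at w: p holds in some world accessible from w."""
        return any(p in truth[w2] for w2 in accessible(R, w))

    def necessarily(R, truth, w, p):
        """Box-p at w: p holds in every world accessible from w."""
        return all(p in truth[w2] for w2 in accessible(R, w))

    R = {("w0", "w1"), ("w0", "w2")}
    truth = {"w0": {"p"}, "w1": {"p"}, "w2": {"p", "q"}}
    print(possibly(R, truth, "w0", "q"))      # True: q holds in w2
    print(necessarily(R, truth, "w0", "q"))   # False: q fails in w1
    print(necessarily(R, truth, "w0", "p"))   # True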

  9. Dynamic logic (“modal logic of action”) • Sequencing: a;b – do a, then do b • Choice: a+b – do either a or b nondeterministically • Testing: p? – TRUE if p, FALSE if ¬p • ((q?;a) + (¬q?;b)) ≡ if q then a else b • Accessibility relation R_A – reachability of worlds via (composite) action A

  10. Dynamic logic – modeling outcomes • Possible outcomes • <A>P is entailed in w iff P is entailed in some world reachable by applying action A • <A>P ≡ ∃w’: R_A(w,w’) ∧ P entailed in w’ • Necessary outcomes • [A]P is entailed in w iff P is entailed in all worlds reachable by applying action A • [A]P ≡ ∀w’: R_A(w,w’) → P entailed in w’
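
  The outcome operators follow the same box/diamond pattern, but with one accessibility relation per (composite) action; a sketch assuming R_A is a dict from a world to the worlds reachable by performing A (names and data illustrative).

    def possible_outcome(R_A, truth, w, p):
        """<A>p at w: p holds in some world reachable from w by action A."""
        return any(p in truth[w2] for w2 in R_A.get(w, []))

    def necessary_outcome(R_A, truth, w, p):
        """[A]p at w: p holds in every world reachable from w by action A."""
        return all(p in truth[w2] for w2 in R_A.get(w, []))

    R_toss = {"w0": ["heads", "tails"]}            # a nondeterministic action
    truth = {"heads": {"won"}, "tails": set()}
    print(possible_outcome(R_toss, truth, "w0", "won"))    # True
    print(necessary_outcome(R_toss, truth, "w0", "won"))   # False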

  11. Temporal logic – variations • Linear vs. branching: modeling a single sequence of events/outcomes vs. modeling a branching series of alternative possible worlds • Discrete vs. dense (continuous): time treated as discrete intervals vs. continuously flowing • Moment-based (point) vs. period-based (interval): units of time treated as points or intervals

  12. Discrete moment-based branching temporal logic • Moments in time are partially ordered • Each moment is associated with a possible world • The actions of multiple agents can influence which moment (possible world) occurs next

  13. Linear temporal logic • p U q at moment t means that p holds from t until some moment t’ at which q holds • X p at moment t means that p holds in the moment immediately following t • P p at moment t means that p was true at some moment t’ before t • F p at moment t means that p is true at some moment t’ after t • G p at moment t means that p is true at every moment t’ after t
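
  A sketch of these operators evaluated over a finite linear trace, where trace[t] is the set of propositions true at moment t; restricting to finite traces is an assumption made only for illustration.

    def X(trace, t, p):            # p holds at the next moment
        return t + 1 < len(trace) and p in trace[t + 1]

    def P(trace, t, p):            # p held at some earlier moment
        return any(p in trace[s] for s in range(t))

    def F(trace, t, p):            # p holds at some later moment
        return any(p in trace[s] for s in range(t + 1, len(trace)))

    def G(trace, t, p):            # p holds at every later moment
        return all(p in trace[s] for s in range(t + 1, len(trace)))

    def U(trace, t, p, q):         # p holds from t until some moment where q holds
        for t2 in range(t, len(trace)):
            if q in trace[t2]:
                return all(p in trace[s] for s in range(t, t2))
            if p not in trace[t2]:
                return False
        return False

    trace = [{"p"}, {"p"}, {"q"}, set()]
    print(U(trace, 0, "p", "q"))   # True: p holds until q does at moment 2
    print(F(trace, 0, "q"))        # True
    print(G(trace, 2, "q"))        # False: q fails at moment 3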

  14. Branching temporal logic • “The present moment” • “Reality” • A p means that p is true in all paths at the present moment (i.e., no matter what may have gone before or will happen in the future, p is true now) – temporal equivalent of the necessity operator of modal logic • E p means that p is true in some path at the present moment – temporal equivalent of possibility operator

  15. Branching temporal logic II • x<a>p is true (at a particular moment, on a particular path) iff p is a possible outcome of agent x performing action a • x[a]p is true (at a particular moment, on a particular path) iff p is a necessary outcome of agent x performing action a • ∃a : p is true (at a moment and on a path) iff there is some action a under which p becomes true
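
  A hedged sketch of the path quantifiers, representing the branching structure simply as the set of paths leaving the present moment (each path a list of sets of true propositions); combining A / E with a path formula such as “eventually p” gives CTL-style checks. All names are illustrative.

    def eventually(path, p):
        return any(p in moment for moment in path)

    def A(paths, path_formula):
        """path_formula holds on all paths from the present moment."""
        return all(path_formula(path) for path in paths)

    def E(paths, path_formula):
        """path_formula holds on some path from the present moment."""
        return any(path_formula(path) for path in paths)

    paths = [
        [{"p"}, {"p", "q"}],    # one possible future
        [{"p"}, set()],         # another possible future
    ]
    print(E(paths, lambda path: eventually(path, "q")))   # True
    print(A(paths, lambda path: eventually(path, "q")))   # False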

  16. Brief commentary on logics • Branching temporal logic is very powerful, and has been used to develop planners and other agent architectures • Many researchers use ideas from some or all of these logics in their agent designs and representations • Very few researchers use the full formal specification of these logics in building systems (though it isn’t uncommon to see them in conference and journal papers)

  17. Beliefs, desires, and intentions • Use modal logic to model agent’s cognitive attitudes: beliefs, desires, goals, know-how, and intentions

  18. Beliefs • x Bel p iff p is entailed in every possible world the agent believes it can be in (modeled by the B accessibility relation) • Interestingly, although a proposition q may be believed by this definition, an agent may not believe that it believes q • Limited rationality / limited computational resources means that the agent can’t derive everything that it “believes”

  19. Desires • x Des p iff p holds in all possible worlds reachable by the D accessibility relation • The agent might not know how to reach the states it desires to be in • An agent can desire to be in conflicting states • Goals are the subset of the agent’s desires that are achievable and consistent

  20. Intentions • x Int p iff p is true along all paths that are reachable by the I accessibility relation • According to this definition, an agent can “intend” something it doesn’t desire • An agent can also have an unsatisfiable intention (if the set of reachable paths is empty) • An agent can intend something, and yet fail to make it come true (if it proceeds along a path that isn’t in its set of intended paths) • Know-how models when an agent can guarantee the success of its actions • More useful might be to model when an agent might be able to guarantee the success of its actions
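
  Bel, Des, and Int all follow the same pattern as the necessity operator, each over its own accessibility relation (B, D, I); a sketch with invented data, which also shows how an empty set of intention-accessible paths makes any intention hold vacuously (the unsatisfiable case above).

    def holds_in_all(relation, truth, w, p):
        """Attitude operator: p holds in every world accessible via the relation."""
        return all(p in truth[w2] for w2 in relation.get(w, []))

    B = {"w0": ["b1", "b2"]}     # belief-accessible worlds
    D = {"w0": ["d1"]}           # desire-accessible worlds
    I = {"w0": []}               # no intention-accessible paths
    truth = {"b1": {"raining"}, "b2": {"raining"}, "d1": {"rich"}}

    print(holds_in_all(B, truth, "w0", "raining"))   # Bel raining: True
    print(holds_in_all(D, truth, "w0", "rich"))      # Des rich: True
    print(holds_in_all(I, truth, "w0", "famous"))    # True only vacuously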

  21. Commitments • Agents that persist with their intentions (as long as they are satisfiable) are said to be committed to those intentions • The concept of a commitment is very useful in modeling societies of agents

  22. Basic interpreter

    basic-interpreter
      initialize-state();
      do
        options := option-generator (event-queue, S);
        selected-options := deliberate (options, S);
        update-state (selected-options, S);
        execute (S);
        event-queue := get-new-events();
      until quit.

    (On the slide, S is labeled the internal state, the event-queue holds percepts, and the selected options play the role of “intentions”.)

  23. BDI interpreter

    Beliefs, desires (goals), and intentions

    BDI-interpreter
      initialize-state();
      do
        options := option-gen (event-queue, B, G, I);
        selected-options := deliberate (options, B, G, I);
        update-intentions (selected-options, I);
        execute (I);
        event-queue := get-new-events();
        drop-successful-attitudes (B, G, I);
        drop-impossible-attitudes (B, G, I);
      until quit.

    (The drop steps remove satisfied or unrealizable beliefs, goals, and intentions.)
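
  A runnable Python sketch of this BDI loop, with plans reduced to single-step intentions; every name here (Intention, option_gen, deliberate, ...) is an illustrative assumption, not the PRS API.

    class Intention:
        """A committed goal; executing a step is assumed to achieve it."""
        def __init__(self, goal):
            self.goal = goal
        def step(self, beliefs):
            beliefs.add(self.goal)               # execute: achieve the goal
        def satisfied(self, beliefs):
            return self.goal in beliefs

    def option_gen(events, B, G, I):
        # adopt an option for any event that matches one of the agent's goals
        return [Intention(e) for e in events if e in G]

    def deliberate(options, B, G, I):
        return options[:1]                       # naively commit to one option

    def bdi_interpreter(B, G, event_stream):
        I = []
        for events in event_stream:              # stand-in for "until quit"
            I += deliberate(option_gen(events, B, G, I), B, G, I)
            for intent in I:
                intent.step(B)                   # execute current intentions
            I = [i for i in I if not i.satisfied(B)]   # drop successful attitudes
        return B

    print(bdi_interpreter(set(), {"make-tea"}, [["make-tea"], []]))   # {'make-tea'}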

  24. Issues in implementation • Updating the BDI structures is intractable in the general case • Use only explicit beliefs and goals • Represent beliefs, goals and intentions as plan structures that are followed by the agent • Support means-ends reasoning • Hierarchically structured

  25. Coordinating BDI agents • Model actions of the agents in terms of how they can be affected by other agents’ preferences • Flexible actions can be delayed or omitted • Inevitable actions can be delayed but not omitted • Immediate actions can be neither delayed nor omitted • Triggerable actions can be performed at the request of another agent • Use a finite state automaton (skeleton) to model the state transitions of the agent
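
  A sketch of a skeleton whose transitions are tagged with these action categories, so a coordinator can tell which events may be delayed, omitted, or triggered by another agent; the states, events, and helper functions are invented for illustration.

    FLEXIBLE, INEVITABLE, IMMEDIATE, TRIGGERABLE = range(4)

    skeleton = {
        # state: [(event, next_state, category), ...]
        "idle":    [("start-task", "working", TRIGGERABLE)],
        "working": [("report",     "working", FLEXIBLE),
                    ("finish",     "done",    INEVITABLE)],
        "done":    [],
    }

    def can_delay(category):
        return category in (FLEXIBLE, INEVITABLE)

    def can_omit(category):
        return category == FLEXIBLE

    for state, transitions in skeleton.items():
        for event, nxt, cat in transitions:
            print(state, "--", event, "->", nxt,
                  "| delay ok:", can_delay(cat), "| omit ok:", can_omit(cat))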

  26. Coordination relationships • Model the relationships between two agents’ events • Is-required-by • Disables • Enables • Conditionally enables • (guaranteeing enablement) • Initiates • Jointly-required-by • Compensates-for-failure

  27. Communicating BDI agents • Performative: speech act that is itself an action • Informing • Requesting • Promising • Permitting • Prohibiting • Declaring • Expressing
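
  A small sketch of a speech-act message whose performative names the action the utterance itself performs; the Message class and its fields are an illustrative assumption, not a standard agent communication language.

    from dataclasses import dataclass

    PERFORMATIVES = {"inform", "request", "promise", "permit",
                     "prohibit", "declare", "express"}

    @dataclass
    class Message:
        performative: str        # which speech act the utterance performs
        sender: str
        receiver: str
        content: str             # a proposition or an action description

        def __post_init__(self):
            if self.performative not in PERFORMATIVES:
                raise ValueError("unknown performative: " + self.performative)

    msg = Message("request", "agent-a", "agent-b", "close(valve-3)")
    print(msg)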

  28. Communicating: Ontologies • Ontology – representation of objects and relationships in the world • Not quite the same as a knowledge base • An ontology is typically the “representational” part of a knowledge base… • …but sometimes axioms and rules are in an ontology

  29. Societies of BDI agents • Groups of agents interact in some way • Agents may have different roles within the group • The agents may be heterogeneous or homogeneous • Teams of agents share (some) common goals

  30. Mutual BDI • Mutual beliefs • Everyone believes p, believes that the others believe p, believes that the others believe … • Impossible to achieve perfect mutual information in environments where communication can fail: “We attack at dawn” • Joint intentions • Everyone intends p; everyone will persist with p until achieved or impossible • Shared plans: intending-to and intending-that • Social commitments: promises and persistence

  31. A few notes on grammar • “Punctuation always goes inside a quote.” • “That” is used to define; “which” is used to clarify or extend • The system that Weiss describes is … (“that” tells which system I’m talking about) • The PRS system, which Georgeff et al. developed, … (“which” tells more about the only system in question) • Useful references: • Strunk and White, The Elements of Style • Dupré, BUGS in Writing • Chicago Manual of Style
