
Design of Multi-Agent Systems


Presentation Transcript


  1. Design of Multi-Agent Systems • Teacher • Bart Verheij • Student assistants • Albert Hankel • Elske van der Vaart • Web site • http://www.ai.rug.nl/~verheij/teaching/dmas/ • (Nestor contains a link)

  2. Overview • Agents, multi-agent systems • This course • Views of the field • Objections • Agents & objects • Intentional systems • Abstract architecture

  3. Russell & Norvig • An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through effectors.

  4. Michael Wooldridge • An agent is a computer system that is situated in some environment and that is capable of autonomous actions in order to meet its design objectives.

  5. Some agents • Mars Pathfinder • Air traffic control • Personal digital assistant • P2P file sharing • Game agents

  6. A natural kinds taxonomy of agents • (Franklin and Graesser)

  7. Multi-agent systems • A multi-agent system is one that consists of a number of agents, which interact with one another • This requires that agents be able to cooperate, coordinate, and negotiate with each other

  8. Multi-agent systems • How can cooperation emerge in societies of self-interested agents? • What kinds of languages can agents use to communicate? • How can self-interested agents recognize conflict, and how can they (nevertheless) reach agreement? • How can autonomous agents coordinate their activities so as to cooperatively achieve goals?

  9. Reactivity • A reactive system is one that maintains an ongoing interaction with its environment, and responds to changes that occur in it (in time for the response to be useful)

  10. Proactiveness • A proactive system is one that generates goals and attempts to achieve them by taking initiatives and recognizing opportunities

  11. Balancing reactive and goal-oriented behavior • Timely response • to changing conditions • Systematically working • towards long-term goals

  12. Influences and inspiration • Economics • Philosophy • Game Theory • Logic • Ecology • Social Sciences

  13. Overview • Agents, multi-agent systems • This course • Views of the field • Objections • Agents & objects • Intentional systems • Abstract architecture

  14. This course: examination • 50% Exam about the course book • Wooldridge’s An Introduction to Multiagent Systems • chs 1-4, 6-11 • 25% Programming exercises • To be submitted to dmas@gmail.com using certain naming conventions • 25% A presentation

  15. This course: schedule

  16. This course: time investment • 5 ECTS = 140 hours • 140 hours / 10 weeks = 14 hours per week • 6 contact hours (2 hours lecture, 2 hours presentations, 2 hours computer lab) • 4 hours self study • 2 hours presentation (10 hours of study plus 4 hours of slide design, spread over 7 weeks) • 2 hours programming (so 4 programming hours per week when the lab session is included)

  17. Overview • Agents, multi-agent systems • This course • Views of the field • Objections • Agents & objects • Intentional systems • Abstract architecture

  18. Some views of the field • Multi-agent systems as a paradigm for software engineering • Interaction is probably the most important single characteristic of complex software • Multi-agent systems as a tool for understanding human societies • Social simulation, “theories of the mind” • Multi-agent systems as a search for appropriate theoretical foundations • “Neat” vs “scruffy”; theory vs engineering

  19. Overview • Agents, multi-agent systems • This course • Views of the field • Objections • Agents & objects • Intentional systems • Abstract architecture

  20. Objections to MAS • Isn’t it all just distributed/concurrent systems? • Agents can be self-interested, so their interactions are “economic” encounters. There is no global goal. • Isn’t it all just AI? • Agents may not need much intelligence • Classical AI ignored social aspects of agency.

  21. Objections to MAS • Isn’t it all just economics/game theory? • These fields ignored computational constraints and resource-bounded decision making • Isn’t it all just social science? • Actual societies may not be optimal

  22. Overview • Agents, multi-agent systems • This course • Views of the field • Objections • Agents & objects • Intentional systems • Abstract architecture

  23. Agents and objects • Are agents just objects by another name? • Object: • encapsulates some state • communicates via message passing • has methods, corresponding to operations that may be performed on this state

  24. Agents and objects • Main differences: • Agents are autonomous: they decide for themselves whether or not to perform an action on request from another agent • Agents are smart: they are capable of flexible (reactive, pro-active, social) behavior, and the standard object model has nothing to say about such types of behavior • Agents are active: a multi-agent system is inherently multi-threaded, in that each agent is assumed to have at least one thread of active control

  25. Objects do it for free… • Agents do it because they want to • Agents do it for money
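The slogan on slide 25 can be made concrete in a short Python sketch of autonomy: an object's method always executes when invoked, while an agent weighs a request against its own goals. All class and method names here are invented for illustration, not taken from the course material.

```python
# Illustrative contrast between an object and an agent (hypothetical names).
class PrinterObject:
    def print_job(self, job):
        # An object's method runs unconditionally when called.
        return f"printed {job}"

class PrinterAgent:
    def __init__(self, goal="save toner"):
        self.goal = goal   # the agent's own objective

    def request(self, job, urgent=False):
        # The agent decides for itself whether to honour the request.
        if self.goal == "save toner" and not urgent:
            return "refused"
        return f"printed {job}"

print(PrinterObject().print_job("report"))            # printed report
print(PrinterAgent().request("report"))               # refused
print(PrinterAgent().request("report", urgent=True))  # printed report
```

The point is not the printing logic but where control lies: the caller controls the object, while the agent retains control over its own behavior.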

  26. Overview • Agents, multi-agent systems • This course • Views of the field • Objections • Agents & objects • Intentional systems • Abstract architecture

  27. Agents as intentional systems • Dennett: an intentional system is an entity ‘whose behavior can be predicted by the method of attributing beliefs, desires and rational acumen’ • McCarthy: ‘Ascription of mental qualities is most straightforward for machines of known structure such as thermostats and computer operating systems, but is most useful when applied to entities whose structure is incompletely known’

  28. Agents as intentional systems • For more complex systems, we need more powerful abstractions and metaphors to explain their operation — low level explanations become impractical. • The intentional stance is such an abstraction.

  29. Overview • Agents, multi-agent systems • This course • Views of the field • Objections • Agents & objects • Intentional systems • Abstract architecture

  30. Abstract architecture for agents • Environment states: E = {e, e′, …} • Actions of agents: Ac = {α, α′, …} • A run: r : e0 –α0→ e1 –α1→ e2 –α2→ ⋯ (an interleaved sequence of environment states and actions)

  31. Agents, environments, systems • An environment is a triple Env = ⟨E, e0, τ⟩, where E is a set of environment states, e0 ∈ E is the initial state, and τ : R^Ac → ℘(E) is a state transformer function (mapping each run ending in an action to the set of possible next states) • An agent is a function which maps runs ending in an environment state to actions: Ag : R^E → Ac • A system is a pair ⟨Env, Ag⟩.

  32. Some variations • A deterministic environment: |τ(r)| ≤ 1 for every run r • A reactive agent, responding only to the current state: Ag : E → Ac • A state transformer that is independent of history: τ : E × Ac → ℘(E)
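The abstract architecture of slides 30–32 can be sketched in Python. The choices below are simplifying assumptions for illustration: strings stand in for states and actions, a run is a plain list, and the transformer is the deterministic, history-free variant τ : E × Ac → E from slide 32.

```python
# Minimal sketch of the abstract agent architecture (illustrative names).
from typing import Callable, List

State = str          # environment states e ∈ E
Action = str         # actions α ∈ Ac
Run = List[object]   # interleaved sequence e0, α0, e1, α1, ...

def run_system(e0: State,
               tau: Callable[[State, Action], State],  # history-free transformer
               agent: Callable[[Run], Action],         # Ag maps runs to actions
               steps: int) -> Run:
    """Generate a run by alternating agent actions and state transitions."""
    run: Run = [e0]
    state = e0
    for _ in range(steps):
        action = agent(run)
        state = tau(state, action)
        run += [action, state]
    return run

# A reactive agent Ag : E -> Ac depends only on the current (last) state.
reactive = lambda run: "heat" if run[-1] == "cold" else "idle"

# A deterministic, history-free state transformer tau : E x Ac -> E.
tau = lambda e, a: "warm" if a == "heat" else "cold"

print(run_system("cold", tau, reactive, 2))
# ['cold', 'heat', 'warm', 'idle', 'cold']
```

A general agent would inspect the whole `run` argument rather than just `run[-1]`, which is exactly the difference between Ag : R^E → Ac and the reactive special case Ag : E → Ac.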

  33. Agents with perception • [Diagram: agent with “see” and “action” subsystems interacting with the environment]

  34. Agents with internal states • [Diagram: agent with “see”, “next” (state update), internal “state”, and “action” components interacting with the environment]
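The see / next / action decomposition of slide 34 can be sketched as a small class. The particular percept (a boolean), the counter-valued internal state, and the two action names are illustrative assumptions, not part of the slides.

```python
# Sketch of a state-based agent: perception (see), internal-state
# update (next), and action selection (action).
class StatefulAgent:
    def __init__(self):
        self.state = 0   # internal state, here a simple running counter

    def see(self, env_state):
        """Map an environment state to a percept."""
        return env_state > 0          # percept: "is the reading positive?"

    def next(self, percept):
        """Update the internal state from the current percept."""
        self.state += 1 if percept else -1

    def action(self):
        """Choose an action based on the internal state alone."""
        return "approach" if self.state > 0 else "avoid"

    def step(self, env_state):
        # One cycle of the see -> next -> action loop.
        self.next(self.see(env_state))
        return self.action()

agent = StatefulAgent()
print([agent.step(s) for s in (1, 1, -1)])
# ['approach', 'approach', 'approach']
```

Unlike the purely reactive agent, this one can respond differently to the same environment state, because its choice also depends on what it has perceived before.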

  35. Agents with tasks • Utility functions can be used to tell an agent what to do without specifying how to do it • The task of the agent is to bring about states that maximize utility

  36. Utility in the Tileworld

  37. Optimal agents • P(r | Ag, Env) denotes the probability of run r for agent Ag and environment Env • An agent is optimal when it maximizes expected utility: EU(Ag, Env) = Σ over r ∈ R(Ag, Env) of u(r) · P(r | Ag, Env)
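The expected-utility criterion is just a probability-weighted sum over an agent's possible runs. A sketch with made-up run names, probabilities, and utilities:

```python
# Expected utility: EU(Ag, Env) = sum over runs r of u(r) * P(r | Ag, Env).
# The runs, probabilities, and utilities below are invented for illustration.
runs = {"r1": 0.5, "r2": 0.3, "r3": 0.2}   # P(r | Ag, Env), sums to 1
utility = {"r1": 10, "r2": 4, "r3": 0}     # u(r)

eu = sum(p * utility[r] for r, p in runs.items())
print(eu)   # 6.2
```

An optimal agent is then one whose behavior induces the run distribution with the highest such sum, over all agents that could be placed in the environment.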

  38. Task specification using predicates • Predicates Ψ : R → {0, 1} can be used for task specification: Ψ(r) = 1 expresses that an agent has succeeded on run r, Ψ(r) = 0 that it has not • An agent succeeds in a task environment ⟨Env, Ψ⟩ when all of its possible runs succeed: Ψ(r) = 1 for every r ∈ R(Ag, Env)

  39. Types of tasks • Achievement tasks • Achieve a state of affairs: reach a state in a set of goal states G, i.e. Ψ(r) = 1 if and only if r contains a state in G • Maintenance tasks • Maintain a state of affairs: avoid a set of failure states B, i.e. Ψ(r) = 1 if and only if r does not contain a state in B
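Both task types reduce to a predicate over the states visited in a run. A small sketch, with invented state names and runs abbreviated to their state sequences:

```python
# Achievement vs. maintenance tasks as predicates Psi over runs.
GOOD = {"goal"}    # goal states G for the achievement task (illustrative)
BAD = {"crash"}    # failure states B for the maintenance task (illustrative)

achieve = lambda run: any(s in GOOD for s in run)      # Psi(r)=1 iff r reaches G
maintain = lambda run: all(s not in BAD for s in run)  # Psi(r)=1 iff r avoids B

run = ["start", "search", "goal"]
print(achieve(run), maintain(run))   # True True
```

Note the asymmetry: an achievement task can succeed at any point in the run, while a maintenance task must hold at every point.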

  40. Overview • Agents, multi-agent systems • This course • Views of the field • Objections • Agents & objects • Intentional systems • Abstract architecture
