
Multi-Agent Systems: Overview and Research Directions

CMSC 471

December 1 and 3, 2009

Prof. Marie desJardins



Outline

  • What’s an Agent?

  • Multi-Agent Systems

    • Cooperative multi-agent systems

    • Competitive multi-agent systems

  • MAS Research Directions

    • Organizational structures

    • Communication limitations

    • Learning in multi-agent systems



What’s an Agent?



What’s an agent?

  • Weiss, p. 29 [after Wooldridge and Jennings]:

    • “An agent is a computer system that is situated in some environment, and that is capable of autonomous action in this environment in order to meet its design objectives.”

  • Russell and Norvig, p. 7:

    • “An agent is just something that perceives and acts.”

  • Rosenschein and Zlotkin, p. 4:

    • “The more complex the considerations that [a] machine takes into account, the more justified we are in considering our computer an ‘agent,’ who acts as our surrogate in an automated encounter.”



What’s an agent? II

  • Ferber, p. 9:

    • “An agent is a physical or virtual entity

      • Which is capable of acting in an environment,

      • Which can communicate directly with other agents,

      • Which is driven by a set of tendencies…,

      • Which possesses resources of its own,

      • Which is capable of perceiving its environment…,

      • Which has only a partial representation of this environment…,

      • Which possesses skills and can offer services,

      • Which may be able to reproduce itself,

      • Whose behavior tends towards satisfying its objectives, taking account of the resources and skills available to it and depending on its perception, its representations and the communications it receives.”



OK, so what’s an environment?

  • Isn’t any system that has inputs and outputs situated in an environment of sorts?



What’s autonomy, anyway?

  • Jennings and Wooldridge, p. 4:

    • “[In contrast with objects, we] think of agents as encapsulating behavior, in addition to state. An object does not encapsulate behavior: it has no control over the execution of methods – if an object x invokes a method m on an object y, then y has no control over whether m is executed or not – it just is. In this sense, object y is not autonomous, as it has no control over its own actions…. Because of this distinction, we do not think of agents as invoking methods (actions) on agents – rather, we tend to think of them requesting actions to be performed. The decision about whether to act upon the request lies with the recipient.”

  • Is an if-then-else statement sufficient to create autonomy?



So now what?

  • If those definitions aren’t useful, is there a useful definition? Should we bother trying to create “agents” at all?



Multi-Agent Systems



Multi-agent systems

  • Jennings et al.’s key properties:

    • Situated

    • Autonomous

    • Flexible:

      • Responsive to dynamic environment

      • Pro-active / goal-directed

      • Social interactions with other agents and humans

  • Research questions: How do we design agents to interact effectively to solve a wide range of problems in many different environments?



Aspects of multi-agent systems

  • Cooperative vs. competitive

  • Homogeneous vs. heterogeneous

  • Macro vs. micro

  • Interaction protocols and languages

  • Organizational structure

  • Mechanism design / market economics

  • Learning



Topics in multi-agent systems

  • Cooperative MAS:

    • Distributed problem solving: Less autonomy

    • Distributed planning: Models for cooperation and teamwork

  • Competitive or self-interested MAS:

    • Distributed rationality: Voting, auctions

    • Negotiation: Contract nets



Typical (cooperative) MAS domains

  • Distributed sensor network establishment

  • Distributed vehicle monitoring

  • Distributed delivery



Distributed sensing

  • Track vehicle movements using multiple sensors

  • Distributed sensor network establishment:

    • Locate sensors to provide the best coverage

    • Centralized vs. distributed solutions

  • Distributed vehicle monitoring:

    • Control sensors and integrate results to track vehicles as they move from one sensor’s “region” to another’s

    • Centralized vs. distributed solutions



Distributed delivery

  • Logistics problem: move goods from original locations to destination locations using multiple delivery resources (agents)

  • Dynamic, partially accessible, nondeterministic environment (goals, situation, agent status)

  • Centralized vs. distributed solution



Cooperative Multi-Agent Systems



Distributed problem solving/planning

  • Cooperative agents, working together to solve complex problems with local information

  • Partial Global Planning (PGP): A planning-centric distributed architecture

  • SharedPlans: A formal model for joint activity

  • Joint Intentions: Another formal model for joint activity

  • STEAM: Distributed teamwork; influenced by joint intentions and SharedPlans



Distributed problem solving

  • Problem solving in the classical AI sense, distributed among multiple agents

    • That is, formulating a solution/answer to some complex question

    • Agents may be heterogeneous or homogeneous

    • DPS implies that agents must be cooperative (or, if self-interested, then rewarded for working together)



Requirements for cooperative activity

  • (Grosz) -- “Bratman (1992) describes three properties that must be met to have ‘shared cooperative activity’:

    • Mutual responsiveness

    • Commitment to the joint activity

    • Commitment to mutual support”



Joint intentions

  • Theoretical framework for joint commitments and communication

  • Intention: Commitment to perform an action while in a specified mental state

  • Joint intention: Shared commitment to perform an action while in a specified group mental state

  • Communication: Required/entailed to establish and maintain mutual beliefs and joint intentions



SharedPlans

  • A SharedPlan for a group action specifies beliefs about how to perform the action and its subactions

  • Formal model captures intentions and commitments towards the performance of individual and group actions

  • Components of a collaborative plan (p. 5):

    • Mutual belief of a (partial) recipe

    • Individual intentions-to perform the actions

    • Individual intentions-that collaborators succeed in their subactions

    • Individual or collaborative plans for subactions

  • Very similar to joint intentions



STEAM: Now we’re getting somewhere!

  • Implementation of joint intentions theory

    • Built in Soar framework

    • Applied to three “real” domains

    • Many parallels with SharedPlans

  • General approach:

    • Build up a partial hierarchy of joint intentions

    • Monitor team and individual performance

    • Communicate when need is implied by changing mental state & joint intentions

  • Key extension: Decision-theoretic model of communication selection



Competitive Multi-Agent Systems



Distributed rationality

  • Techniques to encourage/coax/force self-interested agents to play fairly in the sandbox

  • Voting: Everybody’s opinion counts (but how much?)

  • Auctions: Everybody gets a chance to earn value (but how to do it fairly?)

  • Contract nets: Work goes to the highest bidder

  • Issues:

    • Global utility

    • Fairness

    • Stability

    • Cheating and lying



Pareto optimality

  • S is a Pareto-optimal solution iff

    • S’ (x Ux(S’) > Ux(S) → y Uy(S’) < Uy(S))

    • i.e., if X is better off in S’, then some Y must be worse off

  • Social welfare, or global utility, is the sum of all agents’ utility

    • If S maximizes social welfare, it is also Pareto-optimal (but not vice versa)

[Figure: candidate solutions plotted with X’s utility on the horizontal axis and Y’s utility on the vertical axis. Which solutions are Pareto-optimal? Which solutions maximize global utility (social welfare)?]
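
Not from the slides: a minimal Python sketch of the two definitions above, over a finite set of candidate solutions, each represented as a tuple of agents' utilities (the numbers are made up).

```python
def pareto_optimal(s, solutions):
    """s is Pareto-optimal iff no other candidate makes some agent
    strictly better off without making any agent worse off."""
    return not any(
        all(v >= u for u, v in zip(s, s2)) and
        any(v > u for u, v in zip(s, s2))
        for s2 in solutions
    )

def social_welfare(s):
    """Global utility: the sum of all agents' utilities."""
    return sum(s)

# Candidate solutions as (X's utility, Y's utility) pairs (made up).
solutions = [(3, 3), (5, 0), (0, 5), (1, 1)]

print([s for s in solutions if pareto_optimal(s, solutions)])
# -> [(3, 3), (5, 0), (0, 5)]; (1, 1) is dominated by (3, 3)

best = max(solutions, key=social_welfare)  # (3, 3)
assert pareto_optimal(best, solutions)     # welfare maximizer is Pareto-optimal
# ...but (5, 0) is Pareto-optimal without maximizing social welfare
```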



Stability

  • If an agent can always maximize its utility with a particular strategy (regardless of other agents’ behavior) then that strategy is dominant

  • A set of agent strategies is in Nash equilibrium if each agent’s strategy Si is locally optimal, given the other agents’ strategies

    • No agent has an incentive to change strategies

    • Hence this set of strategies is locally stable



Prisoner’s Dilemma

Let's play!

[Payoff matrix for players A and B]



Prisoner’s Dilemma: Analysis

  • Pareto-optimal and social welfare maximizing solution: Both agents cooperate

  • Dominant strategy and Nash equilibrium: Both agents defect

[Payoff matrix for players A and B]

  • Why?
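
The payoff numbers from the slide's matrix are not in the transcript, so the sketch below assumes the textbook values (C/C: 3, 3; C/D: 0, 5; D/C: 5, 0; D/D: 1, 1). Under that assumption it checks both claims above: defection is a dominant strategy for each player and (defect, defect) is the only Nash equilibrium, while (cooperate, cooperate) maximizes social welfare.

```python
from itertools import product

# Assumed textbook payoffs (A's payoff, B's payoff); the slide's
# actual matrix did not survive the transcript.
C, D = "cooperate", "defect"
payoff = {(C, C): (3, 3), (C, D): (0, 5),
          (D, C): (5, 0), (D, D): (1, 1)}
moves = [C, D]

def best_responses(player, other_move):
    """Moves that maximize `player`'s payoff against `other_move`."""
    util = {m: (payoff[(m, other_move)][0] if player == "A"
                else payoff[(other_move, m)][1]) for m in moves}
    top = max(util.values())
    return {m for m in moves if util[m] == top}

# Dominant strategy: a best response no matter what the opponent does.
assert all(best_responses("A", m) == {D} for m in moves)
assert all(best_responses("B", m) == {D} for m in moves)

# Nash equilibria: each player's move is a best response to the other's.
nash = [(a, b) for a, b in product(moves, repeat=2)
        if a in best_responses("A", b) and b in best_responses("B", a)]
print(nash)                                       # [('defect', 'defect')]
print(max(payoff, key=lambda s: sum(payoff[s])))  # ('cooperate', 'cooperate')
```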



Voting

  • How should we rank the possible outcomes, given individual agents’ preferences (votes)?

  • Six desirable properties (which can’t all simultaneously be satisfied):

    • Every combination of votes should lead to a ranking

    • Every pair of outcomes should have a relative ranking

    • The ranking should be asymmetric and transitive

    • The ranking should be Pareto-optimal

    • Irrelevant alternatives shouldn’t influence the outcome

    • Share the wealth: No agent should always get their way 



Voting protocols

  • Plurality voting: The outcome with the most votes wins

    • Irrelevant alternatives can change the outcome: The Ross Perot factor

  • Borda voting: Agents’ rankings are used as weights, which are summed across all agents

    • Agents can “spend” high rankings on losing choices, making their remaining votes less influential

  • Binary voting: Agents rank sequential pairs of choices (“elimination voting”)

    • Irrelevant alternatives can still change the outcome

    • Very order-dependent
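
A minimal sketch (not from the slides; the ballots and candidate names are made up) contrasting plurality and Borda counting over the same rankings, showing that the two protocols can elect different winners.

```python
from collections import Counter

# Each ballot ranks candidates from most to least preferred (made up).
ballots = [["a", "b", "c"], ["a", "b", "c"],
           ["b", "c", "a"], ["c", "b", "a"]]

def plurality(ballots):
    """The outcome with the most first-place votes wins."""
    return Counter(b[0] for b in ballots)

def borda(ballots):
    """Rankings become weights: with k candidates, first place is
    worth k-1 points, second k-2, ..., last 0, summed over agents."""
    scores = Counter()
    for ballot in ballots:
        k = len(ballot)
        for rank, cand in enumerate(ballot):
            scores[cand] += k - 1 - rank
    return scores

print(plurality(ballots))  # Counter({'a': 2, 'b': 1, 'c': 1}): a wins
print(borda(ballots))      # Counter({'b': 5, 'a': 4, 'c': 3}): b wins
```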



Auctions

  • Many different types and protocols

  • All of the common protocols yield Pareto-optimal outcomes

  • But… Bidders can agree to artificially lower prices in order to cheat the auctioneer

  • What about when the colluders cheat each other?

    • (Now that’s really not playing nicely in the sandbox!)
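
The slides don't single out a protocol here; as one illustration, a sketch of a second-price (Vickrey) sealed-bid auction, a common protocol in which truthful bidding is a dominant strategy, together with the collusion effect mentioned above (bidder names and valuations are made up).

```python
def vickrey_auction(bids):
    """Second-price sealed-bid auction: the highest bidder wins but
    pays the second-highest bid, so self-interested bidders do best
    by bidding their true valuations."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else 0
    return winner, price

# Honest bids: the item goes to the bidder who values it most.
print(vickrey_auction({"alice": 10, "bob": 8, "carol": 5}))  # ('alice', 8)

# Collusion: the losers agree to bid low; the allocation is the
# same, but the auctioneer is cheated out of most of the revenue.
print(vickrey_auction({"alice": 10, "bob": 1, "carol": 1}))  # ('alice', 1)
```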



Contract nets

  • Simple form of negotiation

  • Announce tasks, receive bids, award contracts

  • Many variations: directed contracts, timeouts, bundling of contracts, sharing of contracts, …

  • There are also more sophisticated dialogue-based negotiation models
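
A minimal sketch of one announce/bid/award round, with made-up agent names and a made-up suitability score as the bid value; real contract-net implementations add the timeout, bundling, and contract-sharing variations listed above.

```python
from dataclasses import dataclass

@dataclass
class Contractor:
    name: str
    capabilities: dict  # skill -> self-rated competence (made up)

    def bid(self, task):
        """Bid on an announced task, or decline (None) if unqualified."""
        return self.capabilities.get(task["skill"])

def contract_net(task, contractors):
    """One announce/bid/award round: broadcast the task, collect
    the bids, and award the contract to the highest bidder."""
    bids = {c.name: c.bid(task) for c in contractors}
    bids = {name: b for name, b in bids.items() if b is not None}
    if not bids:
        return None  # no bidders: re-announce, relax terms, or decompose
    return max(bids, key=bids.get)

team = [Contractor("a1", {"lift": 0.9}),
        Contractor("a2", {"lift": 0.4, "scan": 0.8})]
print(contract_net({"skill": "scan"}, team))  # 'a2'
print(contract_net({"skill": "lift"}, team))  # 'a1'
```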



MAS Research Directions



Agent organizations

  • “Large-scale problem solving technologies”

  • Multiple (human and/or artificial) agents

  • Goal-directed (goals may be dynamic and/or conflicting)

  • Affects and is affected by the environment

  • Has knowledge, culture, memories, history, and capabilities (distinct from individual agents)

  • Legal standing is distinct from single agent

  • Q: How are MAS organizations different from human organizations?



Organizational structures

  • Exploit structure of task decomposition

    • Establish “channels of communication” among agents working on related subtasks

  • Organizational structure:

    • Defines (or describes) roles, responsibilities, and preferences

    • Use to identify control and communication patterns:

      • Who does what for whom: Where to send which task announcements/allocations

      • Who needs to know what: Where to send which partial or complete results



Communication models

  • Theoretical models: Speech act theory

  • Practical models:

    • Shared languages like KIF, KQML, DAML

    • Service models like DAML-S

    • Social convention protocols



Communication strategies

  • Send only relevant results at the right time

    • Conserve bandwidth; reduce network congestion and the computational overhead of processing data

    • Push vs. pull

    • Reliability of communication (arrival and latency of messages)

    • Use organizational structures, task decomposition, and/or analysis of each agent’s task to determine relevance



Communication structures

  • Connectivity (network topology) strongly influences the effectiveness of an organization

  • Changes in connectivity over time can impact team performance:

    • Move out of communication range → coordination failures

    • Changes in network structure → reduced (or increased) bandwidth, increased (or reduced) latency



Learning in MAS

  • Emerging field to investigate how teams of agents can learn individually and as groups

  • Distributed reinforcement learning: Behave as an individual, receive team feedback, and learn to individually contribute to team performance

  • Distributed reinforcement learning: Iteratively allocate “credit” for group performance to individual decisions

  • Genetic algorithms: Evolve a society of agents (survival of the fittest)

  • Strategy learning: In market environments, learn other agents’ strategies
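
An illustrative sketch (assumptions: a stateless toy task, independent learners, epsilon-greedy exploration) of the first flavor of distributed reinforcement learning above: each agent updates its own Q-values, but every update uses the single shared team reward.

```python
import random

def team_q_learning(agents, actions, team_reward,
                    episodes=5000, alpha=0.1, epsilon=0.1):
    """Independent learners: each agent keeps its own (stateless)
    Q-table over its own actions, but learns from the shared team
    reward rather than an individual one."""
    q = {a: {act: 0.0 for act in actions} for a in agents}
    for _ in range(episodes):
        # Each agent chooses epsilon-greedily and independently.
        joint = {a: (random.choice(actions) if random.random() < epsilon
                     else max(q[a], key=q[a].get))
                 for a in agents}
        r = team_reward(joint)       # one scalar for the whole team
        for a in agents:             # everyone learns from the same signal
            q[a][joint[a]] += alpha * (r - q[a][joint[a]])
    return q

# Toy coordination task (made up): the team is rewarded only when
# both agents pick action "b" at the same time.
reward = lambda joint: 1.0 if set(joint.values()) == {"b"} else 0.0
q = team_q_learning(["agent1", "agent2"], ["a", "b"], reward)
print({a: max(q[a], key=q[a].get) for a in q})  # typically both learn 'b'
```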



Adaptive organizational dynamics

  • Potential for change:

    • Change parameters of organization over time

    • That is, change the structures, add/delete/move agents, …

  • Adaptation techniques:

    • Genetic algorithms

    • Neural networks

    • Heuristic search / simulated annealing

    • Design of new processes and procedures

    • Adaptation of individual agents



Conclusions and directions

  • “Agent” means many different things

  • Different types of “multi-agent systems”:

    • Cooperative vs. competitive

    • Heterogeneous vs. homogeneous

    • Micro vs. macro

  • Lots of interesting/open research directions:

    • Effective cooperation strategies

    • “Fair” coordination strategies and protocols

    • Learning in MAS

    • Resource-limited MAS (communication, …)

