Distributed Control in Multi-agent Systems: Design and Analysis


### Distributed Control in Multi-agent Systems: Design and Analysis

Kristina Lerman

Aram Galstyan

Information Sciences Institute

University of Southern California

Design of Multi-Agent Systems

Multi-agent systems must function in:

- Dynamic environments
- Unreliable communication channels
- Large systems

Solution:

- Simple agents: no reasoning, planning, or negotiation
- Distributed control: no central authority

Advantages of Distributed Control

- Robust: tolerant of agent error and failure
- Reliable: good performance in dynamic environments with unreliable communication channels
- Scalable: performance does not depend on the number of agents or task size
- Analyzable: amenable to quantitative analysis

Analysis of Multi-Agent Systems

Tools to study the behavior of multi-agent systems:

- Experiments
  - costly, time-consuming to set up and run
- Grounded simulations, e.g., sensor-based simulations of robots
  - time-consuming for large systems
- Numerical approaches
  - microscopic models, numeric simulations
- Analytical approaches
  - macroscopic mathematical models
  - predict dynamics and long-term behavior
  - give insight into system design: parameters to optimize performance, prevent instability, etc.

Distributed Control: Two Approaches and Analyses

- Biologically-inspired approach
- Local interactions among many simple agents lead to desirable collective behavior
- Mathematical models describe collective dynamics of the system
- Markov-based systems
- Application: collaboration, foraging in robots
- Market-based approach
- Adaptation via iterative games
- Numeric simulations
- Application: dynamic resource allocation

Analysis of Collective Behavior

Biologically-inspired control is modeled on social insects

- complex collective behavior arises in simple, locally interacting agents

Individual agent behavior is unpredictable

- external forces – may not be anticipated
- noise – fluctuations and random events
- other agents – with complex trajectories
- probabilistic controllers – e.g. avoidance

Collective behavior described probabilistically

Some Terms Defined

- State: labels a set of agent behaviors
  - e.g., for robots, Search State = {Wander, Detect Objects, Avoid Obstacles}
  - there is a finite number of states
  - each agent is in exactly one of the states
- Probability distribution
  - P(n, t) = probability that the system is in configuration n at time t
  - where n = (N_1, …, N_L) and N_i is the number of agents in the i-th of the L states

Markov Systems

- Markov property: the configuration at time t+Δt depends only on the configuration at time t
- also, P(n, t+Δt) = Σ_{n'} P(n, t+Δt | n', t) P(n', t)
- change in probability density: letting Δt → 0 yields the Master Equation for P(n, t)
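A standard form of the Master Equation for P(n, t), writing W(n | n') for the transition rate from configuration n' to n (this notation is assumed here, not taken from the source):

```latex
\frac{\partial P(\vec{n}, t)}{\partial t}
  = \sum_{\vec{n}\,'} \Big[ W(\vec{n} \mid \vec{n}\,')\, P(\vec{n}\,', t)
  - W(\vec{n}\,' \mid \vec{n})\, P(\vec{n}, t) \Big]
```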

Rate Equation

Derive the Rate Equation from the Master Equation

- describes how the average number of agents in state k changes in time
- a macroscopic dynamical model
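One common form of the resulting Rate Equation, writing w_{jk} for the per-agent transition rate from state j to state k (notation assumed here):

```latex
\frac{d\langle N_k \rangle}{dt}
  = \sum_{j \neq k} \Big[ w_{jk}\, \langle N_j \rangle - w_{kj}\, \langle N_k \rangle \Big]
```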

Stick-Pulling Experiments (Ijspeert, Martinoli & Billard, 2001)

- Collaboration in a group of reactive robots
- Task completed only through collaboration
- Experiments with 2 – 6 Khepera robots
- Minimalist robot controller

A. Ijspeert et al.

Experimental Results

- Key observations
  - different dynamics for different ratios of robots to sticks
  - an optimal gripping time parameter exists

Flowchart of the robot's controller (Ijspeert et al.)

(Flowchart: start → search (look for sticks) → object detected? An obstacle triggers obstacle avoidance and a return to search. A free stick is gripped: the robot grips and waits until either a teammate helps (success: the stick is pulled out and released) or the gripping time runs out (release, back to search). A stick already gripped by a teammate is pulled out, completing a successful collaboration.)
Model Variables

- Macroscopic dynamic variables

N_s(t) = number of robots in the search state at time t

N_g(t) = number of robots in the gripping state at time t

M(t) = number of uncollected sticks at time t

- Parameters
  - connect the model to the real system

α = rate of encountering a stick

α_RG = rate of encountering a gripping robot

τ = gripping time

Successful collaboration: a searching robot finds a stick already gripped by a teammate and pulls it out.

Unsuccessful collaboration: a gripping robot reaches the gripping time τ without help and releases its stick.

In a static environment, sticks are removed only through successful collaborations.

Initial conditions: N_s(0) = N, N_g(0) = 0, M(0) = M_0.
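The model above can be turned into a runnable sketch. The version below is an assumed, simplified form: the explicit gripping-time delay of the original analysis is replaced by an exponential release rate 1/τ, sticks are removed only by successful collaborations (static environment), and all parameter values are illustrative.

```python
# Simplified sketch (assumed form) of the stick-pulling rate equations,
# integrated with the forward Euler method.
def simulate_stick_pulling(N=4, M0=8, alpha=0.1, alpha_rg=0.05, tau=5.0,
                           dt=0.01, t_end=200.0):
    Ng, M, t = 0.0, float(M0), 0.0
    while t < t_end:
        Ns = N - Ng                   # searching robots (N is conserved)
        grip = alpha * M * Ns         # searching robot grips a free stick
        collab = alpha_rg * Ns * Ng   # successful collaboration
        release = Ng / tau            # unsuccessful: time out and release
        Ng += dt * (grip - collab - release)
        M += dt * (-collab)           # sticks removed only by collaboration
        t += dt
    return Ng, M
```

Sweeping `tau` in this sketch is one way to explore the optimal gripping time, and sweeping `N` and `M0` the dependence of the dynamics on the robots-to-sticks ratio.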

Mathematical Model of Collaboration: Dimensional Analysis

- Rewrite the equations in dimensionless form by rescaling the variables
  - only the parameters β and τ appear in the resulting equations and determine the behavior of the solutions
- Collaboration rate: the rate at which robots pull sticks out

Summary of Results

- Analyzed the system mathematically
  - importance of β
  - analytic expressions for β_c and τ_opt
  - superlinear performance
- Agreement with experimental data and simulations

Robot Foraging

- Collect objects scattered in the arena and assemble them at a “home” location
- Single vs group of robots
- no collaboration
- benefits of a group
- robust to individual failure
- group can speed up collection
- But, increased interference

Goldberg & Matarić

Interference & Collision Avoidance

- Collision avoidance
- Interference effects
- robot working alone is more efficient
- larger groups experience more interference
- optimal group size: beyond some group size, interference outweighs the benefits of the group’s increased robustness and parallelism

State Diagram

(State diagram: start → search (look for pucks) → object detected? An obstacle triggers avoidance before the robot resumes its interrupted behavior; both searching and homing robots can enter an avoiding state. A puck is grabbed, and the robot goes home to deliver it before resuming the search.)

Model Variables

- Macroscopic dynamic variables

N_s(t) = number of robots in the search state at time t

N_h(t) = number of robots in the homing state at time t

N_s,av(t), N_h,av(t) = number of avoiding robots (interrupted while searching or homing, respectively) at time t

M(t) = number of undelivered pucks at time t

- Parameters

α_r = rate of encountering a robot

α_p = rate of encountering a puck

τ = avoiding time

τ_h0 = homing time in the absence of interference
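A runnable sketch of the foraging rate equations in the same simplified spirit (assumed form: avoiding and homing delays are modeled as exponential rates 1/τ and 1/τ_h0, and the parameter values are illustrative, not the Player/Stage settings):

```python
# Simplified sketch (assumed form) of the foraging rate equations,
# integrated with the forward Euler method.
def simulate_foraging(N=8, M0=20, a_p=0.06, a_r=0.02, tau=2.0, tau_h=10.0,
                      dt=0.01, t_end=500.0):
    Ns, Nh = float(N), 0.0        # searching / homing robots
    Ns_av, Nh_av = 0.0, 0.0       # avoiding robots (from search / homing)
    M, t = float(M0), 0.0         # undelivered pucks, time
    while t < t_end and M > 0.05 * M0:
        pickup = a_p * M * Ns     # searching robot finds a puck
        av_s = a_r * Ns * (N - 1) # searching robot starts avoiding
        av_h = a_r * Nh * (N - 1) # homing robot starts avoiding
        deliver = Nh / tau_h      # homing robot reaches home
        Ns += dt * (-pickup - av_s + Ns_av / tau + deliver)
        Nh += dt * (pickup - av_h + Nh_av / tau - deliver)
        Ns_av += dt * (av_s - Ns_av / tau)
        Nh_av += dt * (av_h - Nh_av / tau)
        M += dt * (-pickup)
        t += dt
    return t                      # time to collect 95% of the pucks
```

Comparing small and large `N` shows the group speed-up, while increasing `a_r` or `tau` makes the interference cost visible.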

Sensor-Based Simulations

Player/Stage simulator

- number of robots = 1 to 10
- number of pucks = 20
- arena radius = 3 m
- home radius = 0.75 m
- robot radius = 0.2 m
- robot speed = 30 cm/s
- puck radius = 0.05 m
- rev. hom. time = 10 s

Summary

- Biologically inspired mechanisms are feasible for distributed control in multi-agent systems
- Methodology for creating mathematical models of collective behavior of MAS
- Rate equations
- Model and analysis of robotic systems
- Collaboration, foraging
- Future directions
- Generalized Markov systems – integrating learning, memory, decision making

Distributed Resource Allocation

- N agents use a set of M common resources with limited, time-dependent capacities L_m(t)
- At each time step the agents decide whether or not to use resource m
- The objective is to minimize the waste, where A_m(t) is the number of agents utilizing resource m
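One plausible form of the waste, consistent with the variance-based measure used in the Minority Game sections below (this exact expression is assumed, not taken from the source):

```latex
W = \sum_{m=1}^{M} \Big\langle \big( A_m(t) - L_m(t) \big)^2 \Big\rangle
```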

Minority Games

- N agents repeatedly choose between two alternatives (labeled 0 and 1), and those in the minority group are rewarded
- Each agent has a set of S strategies that prescribe a certain action given the last m outcomes of the game (memory)

(Table: an example strategy with m = 3, mapping each of the 2^3 = 8 possible input histories to an action.)

- Reinforce strategies that predicted the winning group
- Play the strategy that has predicted the winning side most often

For some memory size the waste is smaller than in the random choice game
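The game described above can be sketched directly. Everything below follows the standard Minority Game setup; the parameter values are illustrative.

```python
import random

# Sketch of the standard Minority Game: N agents, S random strategies each,
# strategies map the last m outcomes to an action in {0, 1}.
def minority_game(N=101, m=3, S=2, T=2000, seed=0):
    rng = random.Random(seed)
    H = 2 ** m                                   # possible histories
    strategies = [[[rng.randint(0, 1) for _ in range(H)] for _ in range(S)]
                  for _ in range(N)]
    scores = [[0] * S for _ in range(N)]         # virtual strategy scores
    history = rng.randrange(H)
    attendance = []
    for _ in range(T):
        actions = []
        for a in range(N):
            best = max(range(S), key=lambda s: scores[a][s])
            actions.append(strategies[a][best][history])
        A = sum(actions)                         # group that chose "1"
        winner = 1 if A < N / 2 else 0           # minority side wins
        for a in range(N):                       # reinforce predictors
            for s in range(S):
                if strategies[a][s][history] == winner:
                    scores[a][s] += 1
        history = ((history << 1) | winner) % H  # keep last m outcomes
        attendance.append(A)
    mean = sum(attendance) / T
    return sum((A - mean) ** 2 for A in attendance) / (T * N)  # sigma^2 / N
```

Scanning the returned sigma^2/N over the memory size m is how the comparison with the random choice game (sigma^2/N = 0.25) is usually made.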

MG as a Complex System

- Let A(t) be the size of the group that chooses "1" at time t
- The "waste" of the resource is measured by the standard deviation σ = √⟨(A(t) − N/2)²⟩, where ⟨·⟩ denotes an average over time
- In the default Random Choice Game (agents take either action with probability 1/2), the standard deviation is σ = √N/2
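The Random Choice Game value is just the standard deviation of a binomial(N, 1/2) count, which a quick simulation (illustrative parameters) can confirm:

```python
import math
import random

# Empirical check that N independent fair choices give a standard
# deviation of the group size A close to sqrt(N)/2.
def random_choice_sigma(N=400, trials=2000, seed=0):
    rng = random.Random(seed)
    samples = [sum(rng.randint(0, 1) for _ in range(N))
               for _ in range(trials)]
    mean = sum(samples) / trials
    return math.sqrt(sum((a - mean) ** 2 for a in samples) / trials)
```

For N = 400 the result should sit near sqrt(400)/2 = 10.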

- MG with local information
  - instead of the global history, agents may use local information (e.g., cellular-automaton-style interactions with neighbors)
- MG with arbitrary capacities
  - the winning choice is "1" if A1(t) < L(t), where L(t) is the capacity and A1(t) is the number of agents that chose "1"

To what degree can the agents (and the system as a whole) coordinate in an externally changing environment?

Global measure for optimality: σ² = ⟨(A1(t) − L(t))²⟩, the time-averaged squared deviation of the utilization from the capacity.

For the RChG (each agent chooses "1" with probability 1/2), this measure grows linearly with N.

MG on Kauffman Networks

- Set of N Boolean agents; each agent has:
  - a set of K neighbors
  - a set of S randomly chosen Boolean functions of K variables

- The dynamics are given by x_i(t+1) = f_i(x_{i1}(t), …, x_{iK}(t)), where f_i is the Boolean function the agent currently plays and x_{i1}, …, x_{iK} are the states of its K neighbors
- The winning choice is "1" if A1(t) < L(t), where L(t) is the capacity and A1(t) is the number of agents that chose "1"

(Figure: an example Kauffman network with K = 2.)
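A minimal executable sketch of this setup. Several details below are assumptions rather than facts from the source: neighbors are drawn uniformly at random, each agent plays its current best-scoring Boolean function, and only the function actually used is reinforced.

```python
import random

# Sketch (details assumed) of a generalized Minority Game on a Kauffman
# network: each agent's action is the output of one of its S random
# Boolean functions applied to its K neighbors' previous actions.
def kauffman_mg(N=99, K=2, S=2, L=None, T=500, seed=0):
    rng = random.Random(seed)
    capacity = N / 2 if L is None else L
    neighbors = [[rng.randrange(N) for _ in range(K)] for _ in range(N)]
    tables = [[[rng.randint(0, 1) for _ in range(2 ** K)] for _ in range(S)]
              for _ in range(N)]
    scores = [[0] * S for _ in range(N)]
    state = [rng.randint(0, 1) for _ in range(N)]
    series = []
    for _ in range(T):
        new_state, used = [], []
        for i in range(N):
            idx = 0
            for j in neighbors[i]:        # encode neighbor actions as bits
                idx = (idx << 1) | state[j]
            best = max(range(S), key=lambda s: scores[i][s])
            used.append(best)
            new_state.append(tables[i][best][idx])
        A = sum(new_state)                # agents that chose "1"
        winner = 1 if A < capacity else 0 # under capacity: "1" wins
        for i in range(N):                # reinforce the function used
            if new_state[i] == winner:
                scores[i][used[i]] += 1
        state = new_state
        series.append(A)
    return series
```

Plotting the fluctuations of the returned series for K = 2 versus K > 2 is one way to look for the small-fluctuation coordinated phase.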

Simulation Results

K=2 networks show a tendency towards self-organization into a coordinated phase characterized by small fluctuations and effective resource utilization.

Results (continued)

Coordination occurs even in the presence of vastly different time scales in the environmental dynamics

Scalability

For K=2 the “variance” per agent, σ²/N, is almost independent of the group size; in the absence of coordination the fluctuations grow much faster with the number of agents.

Phase Transitions in Kauffman Nets

Kauffman Nets: phase transition at K=2 separating ordered (K<2) and chaotic (K>2) phases

For K>2 one can arrive at the phase transition by tuning the homogeneity parameter P (the fraction of 0’s or 1’s in the output of the Boolean functions)

The coordinated phase might be related to the phase transition in Kauffman Nets.

Summary of Results

- Generalized Minority Games on K=2 Kauffman Nets are highly adaptive and can serve as a mechanism for distributed resource allocation
- In the coordinated phase the system is highly scalable
- The adaptation occurs even in the presence of different time scales, and without the agents explicitly coordinating or knowing the resource capacity
- For K>2 similar coordination emerges in the vicinity of the ordered/chaotic phase transitions in the corresponding Kauffman Nets

Conclusion

- Biologically-inspired and market-based mechanisms are feasible models for distributed control in multi-agent systems
- Collaboration and foraging in robots
- Resource allocation in a dynamic environment
- Studied both mechanisms quantitatively
- Analytical model of collective dynamics
- Numeric simulations of adaptive behavior
