
Distributed Control in Multi-agent Systems: Design and Analysis


Presentation Transcript


  1. Distributed Control in Multi-agent Systems: Design and Analysis
Kristina Lerman and Aram Galstyan
Information Sciences Institute, University of Southern California

  2. Design of Multi-Agent Systems
Multi-agent systems must function in:
• Dynamic environments
• Unreliable communication channels
• Large systems
Solution:
• Simple agents: no reasoning, planning, or negotiation
• Distributed control: no central authority

  3. Advantages of Distributed Control
• Robust: tolerant of agent error and failure
• Reliable: good performance in dynamic environments with unreliable communication channels
• Scalable: performance does not depend on the number of agents or task size
• Analyzable: amenable to quantitative analysis

  4. Analysis of Multi-Agent Systems
Tools to study the behavior of multi-agent systems:
• Experiments: costly and time consuming to set up and run
• Grounded simulations (e.g., sensor-based simulations of robots): time consuming for large systems
• Numerical approaches: microscopic models, numeric simulations
• Analytical approaches: macroscopic mathematical models that
  • predict dynamics and long-term behavior
  • give insight into system design
  • identify parameters that optimize system performance, prevent instability, etc.

  5. Distributed Control: Two Approaches and Analyses
• Biologically-inspired approach
  • Local interactions among many simple agents lead to desirable collective behavior
  • Mathematical models describe the collective dynamics of the system (Markov-based systems)
  • Application: collaboration and foraging in robots
• Market-based approach
  • Adaptation via iterative games
  • Numeric simulations
  • Application: dynamic resource allocation

  6. Biologically-Inspired Control

  7. Analysis of Collective Behavior
Biologically-inspired control is modeled on social insects: complex collective behavior arises in simple, locally interacting agents.
Individual agent behavior is unpredictable because of:
• external forces, which may not be anticipated
• noise: fluctuations and random events
• other agents with complex trajectories
• probabilistic controllers (e.g., obstacle avoidance)
Collective behavior is therefore described probabilistically.

  8. Some Terms Defined
• State: labels a set of agent behaviors
  • e.g., for robots, Search State = {Wander, Detect Objects, Avoid Obstacles}
  • there is a finite number of states, and each agent is in exactly one state at any time
• Probability distribution: P(n, t) = probability that the system is in configuration n = (N_1, ..., N_L) at time t, where N_i is the number of agents in the i-th of the L states

  9. Markov Systems
• Markov property: the configuration at time t+Δt depends only on the configuration at time t:
  P(n, t+Δt | n', t; n'', t-Δt; ...) = P(n, t+Δt | n', t)
• Change in the probability density:
  P(n, t+Δt) = Σ_n' P(n, t+Δt | n', t) P(n', t)

  10. Stochastic Master Equation
In the continuum limit (Δt → 0), the probability density obeys the master equation
  dP(n, t)/dt = Σ_n' [ W(n|n') P(n', t) - W(n'|n) P(n, t) ],
with transition rates W(n|n') for moving from configuration n' to configuration n.

  11. Rate Equation
Derive the Rate Equation from the Master Equation by averaging over configurations:
  dN_k/dt = Σ_j [ w_jk N_j - w_kj N_k ],
where N_k is the average number of agents in state k and w_jk is the transition rate from state j to state k.
• Describes how the average number of agents in state k changes in time
• Macroscopic dynamical model
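To make the link between the microscopic and macroscopic pictures concrete, here is a minimal sketch (not part of the original talk) that simulates a hypothetical two-state Markov agent system agent by agent and also Euler-integrates the corresponding rate equation; the state names, the transition rates W_SR and W_RS, and the population size are invented for illustration.

```python
import random

# Illustrative two-state Markov system: each agent is either SEARCHING or RESTING.
# Transition rates are made-up values chosen only for demonstration.
W_SR = 0.3   # rate of switching searching -> resting
W_RS = 0.1   # rate of switching resting -> searching

N_AGENTS = 100
DT = 0.01
STEPS = 2000

def microscopic_simulation():
    """Simulate each agent individually; return the trajectory of N_searching."""
    searching = [True] * N_AGENTS
    history = []
    for _ in range(STEPS):
        for i in range(N_AGENTS):
            if searching[i]:
                if random.random() < W_SR * DT:
                    searching[i] = False
            else:
                if random.random() < W_RS * DT:
                    searching[i] = True
        history.append(sum(searching))
    return history

def rate_equation():
    """Euler-integrate the macroscopic rate equation
       dN_s/dt = -W_SR * N_s + W_RS * (N - N_s)."""
    n_s = float(N_AGENTS)
    history = []
    for _ in range(STEPS):
        n_s += DT * (-W_SR * n_s + W_RS * (N_AGENTS - n_s))
        history.append(n_s)
    return history

if __name__ == "__main__":
    micro = microscopic_simulation()
    macro = rate_equation()
    print("stochastic N_s at end   =", micro[-1])
    print("rate-equation N_s at end =", round(macro[-1], 1))
```

Both trajectories settle near the same steady state, which is the point of the rate-equation approach: the average dynamics of many simple stochastic agents can be captured by a few coupled differential equations.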

  12. Collaboration in Robots

  13. Stick-Pulling Experiments (Ijspeert, Martinoli & Billard, 2001)
• Collaboration in a group of reactive robots
• Task can be completed only through collaboration
• Experiments with 2-6 Khepera robots
• Minimalist robot controller
[Figure: A. Ijspeert et al.]

  14. Experimental Results
Key observations:
• Different dynamics for different ratios of robots to sticks
• Optimal gripping time parameter

  15. State Diagram for a Multi-Robot System
[Figure: flowchart of the robot's controller (Ijspeert et al.), with stages such as "look for sticks"/search, "object detected?", "obstacle?"/"obstacle avoidance", "grip & wait", "time out?" leading to "release", and "teammate help?" leading to "success".]

  16. Model Variables
• Macroscopic dynamic variables
  • N_s(t) = number of robots in the search state at time t
  • N_g(t) = number of robots in the gripping state at time t
  • M(t) = number of uncollected sticks at time t
• Parameters (connect the model to the real system)
  • α = rate of encountering a stick
  • α_RG = rate of encountering a gripping robot
  • τ = gripping time

  17. find & grip sticks successful collaboration unsuccessful collaboration for static environment Initial conditions: Mathematical Model of Collaboration

  18. Dimensional Analysis
• Rewrite the equations in dimensionless form (transformations shown on the slide)
• Only two parameters, β and the dimensionless gripping time τ, appear in the equations and determine the behavior of the solutions
• Collaboration rate: the rate at which robots pull sticks out

  19. Searching Robots vs. Time
[Plot for τ = 5, β = 0.5.]

  20. Collaboration Rate vs. τ
[Plot with curves for β = 0.5, 1.0, 1.5.]
Key observations:
• critical value of β
• optimal gripping time parameter

  21. Comparison to Experimental Results
[Plot with curves for β = 0.5, 1.0, 1.5, compared against experimental data from Ijspeert et al.]

  22. Summary of Results
• Analyzed the system mathematically
  • importance of β
  • analytic expressions for β_c and τ_opt
  • superlinear performance
• Agreement with experimental data and simulations

  23. Foraging in Robots

  24. Robot Foraging
• Collect objects scattered in the arena and assemble them at a "home" location
• Single robot vs. group of robots (no collaboration required)
• Benefits of a group
  • robust to individual failure
  • a group can speed up collection
• But: increased interference
[Figure: Goldberg & Matarić]

  25. Interference & Collision Avoidance
• Collision avoidance
• Interference effects
  • a robot working alone is more efficient
  • larger groups experience more interference
  • optimal group size: beyond some group size, interference outweighs the benefits of the group's increased robustness and parallelism

  26. State Diagram
[Figure: states searching, homing, and avoiding; transitions include "look for pucks", "object detected?", "grab puck", "go home", and "obstacle?" leading to "avoid obstacle".]

  27. Model Variables
• Macroscopic dynamic variables
  • N_s(t) = number of robots in the search state at time t
  • N_h(t) = number of robots in the homing state at time t
  • N_sav(t), N_hav(t) = number of avoiding robots (from searching and homing, respectively) at time t
  • M(t) = number of undelivered pucks at time t
• Parameters
  • α_r = rate of encountering a robot
  • α_p = rate of encountering a puck
  • τ = avoiding time
  • τ_h0 = homing time in the absence of interference

  28. Mathematical Model of Foraging
[Rate equations shown on the slide couple N_s(t), N_h(t), the avoiding populations N_sav(t), N_hav(t), and M(t). The average homing time is the interference-free homing time τ_h0 increased by the time spent avoiding other robots. Initial conditions: all robots searching, N_s(0) = N_0, M(0) = M_0.]
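In the same spirit, here is a simplified numerical sketch of the foraging dynamics (again not the exact model from the talk): interference is modeled by moving searching and homing robots into avoiding states for an average time τ, and all parameter values are invented for demonstration.

```python
# Simplified foraging rate-equation sketch (illustrative only).
ALPHA_R = 0.02   # assumed rate of encountering another robot
ALPHA_P = 0.05   # assumed rate of encountering a puck
TAU = 2.0        # avoiding time
TAU_H0 = 10.0    # homing time in the absence of interference
N0, M0 = 5.0, 20.0   # robots and pucks (illustrative)
DT, T_END = 0.01, 500.0

def collection_time():
    """Integrate the simplified model and return the time to deliver most pucks."""
    n_s, n_h, n_sav, n_hav, m = N0, 0.0, 0.0, 0.0, M0
    t = 0.0
    while t < T_END and m > 0.5:
        on_ground = max(m - n_h - n_hav, 0.0)   # pucks not currently being carried
        pickup    = ALPHA_P * n_s * on_ground   # searching robot grabs a puck -> homing
        deliver   = n_h / TAU_H0                # homing robot reaches home -> searching
        s_collide = ALPHA_R * n_s * N0          # searching robots that start avoiding
        h_collide = ALPHA_R * n_h * N0          # homing robots that start avoiding
        n_s   += DT * (-pickup + deliver - s_collide + n_sav / TAU)
        n_h   += DT * ( pickup - deliver - h_collide + n_hav / TAU)
        n_sav += DT * ( s_collide - n_sav / TAU)
        n_hav += DT * ( h_collide - n_hav / TAU)
        m     += DT * (-deliver)                # a puck counts as collected on delivery
        t += DT
    return t

if __name__ == "__main__":
    print("approximate time to deliver most pucks:", round(collection_time(), 1))
```

Varying N0 in such a sketch is one way to probe the trade-off described on the interference slide: more robots deliver pucks in parallel, but also spend more time in the avoiding states.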

  29. Searching Robots and Pucks vs. Time
[Plot with curves for searching robots and for undelivered pucks.]

  30. Group Efficiency vs. Group Size
[Plot with curves for τ = 1 and τ = 5.]

  31. Sensor-Based Simulations
Player/Stage simulator
• number of robots = 1-10
• number of pucks = 20
• arena radius = 3 m
• home radius = 0.75 m
• robot radius = 0.2 m
• robot speed = 30 cm/s
• puck radius = 0.05 m
• rev. hom. time = 10 s

  32. Simulation Results

  33. Simulation Results (continued)

  34. Summary
• Biologically inspired mechanisms are feasible for distributed control in multi-agent systems
• Methodology for creating mathematical models of the collective behavior of MAS: rate equations
• Model and analysis of robotic systems: collaboration, foraging
• Future directions: generalized Markov systems integrating learning, memory, and decision making

  35. Market-Based Control

  36. Distributed Resource Allocation
• N agents use a set of M common resources with limited, time-dependent capacities L_m(t)
• At each time step each agent decides whether or not to use resource m
• The objective is to minimize the waste (defined on the slide), where A_m(t) is the number of agents utilizing resource m at time t

  37. Minority Games
• N agents repeatedly choose between two alternatives (labeled 0 and 1), and those in the minority group are rewarded
• Each agent has a set of S strategies; a strategy prescribes an action given the last m outcomes of the game (the memory)
  [Example on the slide: a strategy table for m = 3, mapping each input history to an action.]
• Strategies that predicted the winning group are reinforced
• Each agent plays the strategy that has predicted the winning side most often
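The standard minority game described here is straightforward to simulate directly. The sketch below uses illustrative values for N, the memory m, S and the number of rounds, and it breaks ties between equally scored strategies by index rather than at random, which is a simplification.

```python
import random

# Minimal minority game sketch (illustrative parameters).
N = 101        # odd number of agents, so there is always a strict minority
M = 6          # memory: agents react to the last M winning outcomes
S = 2          # strategies per agent
T = 5000       # rounds
HIST = 2 ** M  # number of possible histories

def random_strategy():
    # A strategy maps each of the 2^M possible histories to an action (0 or 1).
    return [random.randint(0, 1) for _ in range(HIST)]

agents = [{"strats": [random_strategy() for _ in range(S)], "scores": [0] * S}
          for _ in range(N)]
history = random.randrange(HIST)       # the last M outcomes packed into an integer

attendance = []
for _ in range(T):
    actions = []
    for ag in agents:
        best = max(range(S), key=lambda s: ag["scores"][s])
        actions.append(ag["strats"][best][history])
    a1 = sum(actions)                  # A(t): number of agents choosing "1"
    winner = 1 if a1 < N / 2 else 0    # the minority side wins
    attendance.append(a1)
    for ag in agents:                  # reinforce strategies that predicted the winner
        for s in range(S):
            if ag["strats"][s][history] == winner:
                ag["scores"][s] += 1
    history = ((history << 1) | winner) % HIST   # slide the M-bit history window

sigma2 = sum((a - N / 2) ** 2 for a in attendance) / T
print("waste per agent sigma^2/N =", round(sigma2 / N, 3),
      "(random choice baseline: 0.25)")
```

The printed σ²/N can be compared against the Random Choice Game baseline of 1/4 discussed on the next slide.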

  38. MG as a Complex System
• Let A(t) be the size of the group that chooses "1" at time t
• The "waste" of the resource is measured by the standard deviation σ of A(t) around N/2, averaged over time
• In the default Random Choice Game (each agent takes either action with probability ½), the standard deviation is σ = √N / 2
• Coordinated phase: for some memory sizes the waste is smaller than in the Random Choice Game

  39. Variations of MG
• MG with local information
  • Instead of the global history, agents may use local interactions (e.g., cellular automata)
• MG with arbitrary capacities
  • The winning choice is "1" if A_1(t) < L(t), where L(t) is the capacity and A_1(t) is the number of agents that chose "1"
Question: to what degree can agents (and the system as a whole) coordinate in an externally changing environment?

  40. MG on Kauffman Networks
• Set of N Boolean agents; each agent has
  • a set of K neighbors
  • a set of S randomly chosen Boolean functions of K variables
• Dynamics: each agent sets its next state by applying one of its Boolean functions to the current states of its K neighbors
• The winning choice is "1" if A_1(t) < L(t), where L(t) is the capacity and A_1(t) is the number of agents that chose "1"
• Global measure for optimality: the fluctuations of A_1(t) around the capacity, compared against the Random Choice Game (each agent chooses "1" with a fixed probability)
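Here is a sketch of one plausible reading of this setup: each agent applies its currently best-scoring Boolean function to its K neighbors' states, and functions are scored by whether their output matched the winning side. The scoring rule, the fixed capacity L = N/2, and all parameter values are assumptions for illustration, not details taken from the talk.

```python
import random

# Sketch of a minority-game-like dynamic on a Kauffman-style Boolean network.
N, K, S, T = 101, 2, 2, 3000
CAPACITY = N / 2           # L(t); a time-dependent capacity could be substituted here

def random_boolean_function():
    # A Boolean function of K inputs is a truth table with 2^K entries.
    return [random.randint(0, 1) for _ in range(2 ** K)]

neighbors = [random.sample([j for j in range(N) if j != i], K) for i in range(N)]
functions = [[random_boolean_function() for _ in range(S)] for _ in range(N)]
scores = [[0] * S for _ in range(N)]
state = [random.randint(0, 1) for _ in range(N)]

attendance = []
for _ in range(T):
    new_state, inputs = [], []
    for i in range(N):
        idx = 0
        for j in neighbors[i]:            # pack the neighbors' states into a table index
            idx = (idx << 1) | state[j]
        inputs.append(idx)
        best = max(range(S), key=lambda s: scores[i][s])
        new_state.append(functions[i][best][idx])
    a1 = sum(new_state)                   # A_1(t): number of agents choosing "1"
    winner = 1 if a1 < CAPACITY else 0    # the under-capacity choice wins
    attendance.append(a1)
    for i in range(N):                    # reinforce functions that predicted the winner
        for s in range(S):
            if functions[i][s][inputs[i]] == winner:
                scores[i][s] += 1
    state = new_state

sigma2 = sum((a - CAPACITY) ** 2 for a in attendance) / T
print("variance per agent:", round(sigma2 / N, 3))
```

Comparing the printed variance per agent for K = 2 against larger K is one way to explore the coordinated phase described on the following slides.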

  41. Simulation Results
[Plot comparing the traditional MG (m = 6) with a K = 2 Kauffman network.]
K = 2 networks show a tendency towards self-organization into a coordinated phase characterized by small fluctuations and effective resource utilization.

  42. Results (continued) Coordination occurs even in the presence of vastly different time scales in the environmental dynamics

  43. Scalability
For K = 2 the "variance" per agent is almost independent of the group size, unlike in the absence of coordination.

  44. Phase Transitions in Kauffman Nets
• Kauffman Nets show a phase transition at K = 2 separating ordered (K < 2) and chaotic (K > 2) phases
• For K > 2 one can reach the phase transition by tuning the homogeneity parameter P (the fraction of 0's or 1's in the output of the Boolean functions)
• The coordinated phase might be related to the phase transition in Kauffman Nets
[Plot shown for K = 3.]

  45. Summary of Results
• Generalized Minority Games on K = 2 Kauffman Nets are highly adaptive and can serve as a mechanism for distributed resource allocation
• In the coordinated phase the system is highly scalable
• The adaptation occurs even in the presence of different time scales, and without the agents explicitly coordinating or knowing the resource capacity
• For K > 2 similar coordination emerges in the vicinity of the ordered/chaotic phase transition in the corresponding Kauffman Nets

  46. Conclusion
• Biologically-inspired and market-based mechanisms are feasible models for distributed control in multi-agent systems
  • collaboration and foraging in robots
  • resource allocation in a dynamic environment
• Studied both mechanisms quantitatively
  • analytical model of collective dynamics
  • numeric simulations of adaptive behavior
