Multi-agent Reinforcement Learning in a Dynamic Environment



Multi-agent Reinforcement Learning in a Dynamic Environment

  • The research goal is to enable multiple agents to learn suitable behaviors in a dynamic environment using reinforcement learning.

  • We found that this approach can produce cooperative behavior among the agents without any prior knowledge.

Footnote: Work done by Sachiyo Arai and Katia Sycara.



Reinforcement Learning Approach

[Figure: agent-environment loop. The Environment supplies input to the agent's State Recognizer; the Action Selector chooses an action by looking up the table W(s, a); the Learner updates the LookUp Table from the delayed Reward.]

Feature:

  • The reward is not given immediately after the agent's action.

  • Usually, it is given only after achieving the goal.

  • This delayed reward is the only clue for the agent's learning.

Overview:

  • TD [Sutton 88], Q-learning [Watkins 92]

    • The agent can estimate a model of the state transition probabilities of E (the environment), provided E has fixed state transition probabilities (i.e., E is an MDP). A minimal update sketch follows this list.

  • Profit sharing [Grefenstette 88]

    • The agent can learn even though E does not have fixed state transition probabilities.

      c.f. Dynamic programming

    • The agent needs a perfect model of the state transition probabilities of E.
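For contrast with PSP, here is a minimal sketch of a one-step tabular Q-learning backup; the state/action encoding, learning rate, and discount factor are illustrative assumptions, not part of the original slides. The bootstrapped target is what presumes fixed (Markovian) transition probabilities.

```python
from collections import defaultdict

# Minimal tabular Q-learning update (illustrative; hyperparameters are assumptions).
# The bootstrapped target assumes the environment behaves as an MDP with fixed
# transition probabilities, which is the condition noted on this slide.
ALPHA = 0.1   # learning rate
GAMMA = 0.9   # discount factor

Q = defaultdict(float)  # maps (state, action) -> estimated return

def q_update(state, action, reward, next_state, actions):
    """One Q-learning backup for the transition (state, action, reward, next_state)."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```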



[Figure: credit assignment over one episode x1, ..., xt, xt+1, ..., xT = G. The real reward rT arrives only at the final step T; the reinforcement function f maps it to an assigned reward for every earlier time step.]

Update rule: wn+1(xt, at) = wn(xt, at) + f(rT, t)

Episode: (s,a1) - (s,a1) - (s,a1) - (s,a2) - (G), with assigned rewards r1, r2, r3, r4 for the four steps.

Irrational assignment: (r1 + r2 + r3) >= r4
  e.g., r1 = r2 = r3 = r4 = 100  ->  W(S,a1) > W(S,a2)

Rational assignment: (r1 + r2 + r3) < r4
  e.g., r1 = 12, r2 = 25, r3 = 50, r4 = 100  ->  W(S,a1) < W(S,a2)
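A tiny numeric check of the two assignments above (illustrative only; the weights simply accumulate the per-step credits of the episode):

```python
# Accumulate per-step credits onto the state-action weights for the episode
# (s,a1)-(s,a1)-(s,a1)-(s,a2)-(G).  Illustrative check of the example above.
def accumulate(credits):
    W = {("s", "a1"): 0.0, ("s", "a2"): 0.0}
    actions = ["a1", "a1", "a1", "a2"]
    for a, r in zip(actions, credits):
        W[("s", a)] += r
    return W

print(accumulate([100, 100, 100, 100]))  # irrational: W(s,a1)=300 > W(s,a2)=100
print(accumulate([12, 25, 50, 100]))     # rational:   W(s,a1)=87  < W(s,a2)=100
```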

Our Approach : Profit Sharing Plan (PSP)

Usually, a multi-agent environment is non-Markovian, because the transition probability from St to St+1 can vary due to the agents' concurrent learning and perceptual aliasing.

PSP is robust against non-Markovian environments, because PSP does not require a fixed transition probability from St to St+1.

f : the reinforcement function for temporal credit assignment.

[Rationality Theorem] (to suppress ineffective rules)

  For all t = 1, 2, ..., T :   L · (f0 + f1 + ... + ft) < ft+1

  (L : the number of available actions at each time step.)

Example:

[Figure: a state S with two available actions; action a2 leads to the goal G with reward 100. In this environment, a1 should be reinforced less than a2.]

rT : reward at time T (the goal).

Wn : weight of a state-action pair after n episodes.

(xt, at) : state and action at time t of the n-th episode.
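Below is a minimal sketch of the PSP update over one episode. The geometric reinforcement function with ratio 1/(L+1) is an assumption chosen so that the Rationality Theorem's condition L · (f0 + ... + ft) < ft+1 holds; the slides do not prescribe a specific f.

```python
# Profit Sharing Plan (PSP): after an episode ends with reward r_T at the goal,
# distribute credit back to every (state, action) pair visited in the episode.
# The geometric ratio 1/(L+1) is an assumed choice that satisfies the
# rationality condition L*(f_0 + ... + f_t) < f_{t+1}.
def psp_update(W, episode, r_T, n_actions):
    """W: dict mapping (state, action) -> weight; episode: list of (x_t, a_t)."""
    T = len(episode)
    beta = 1.0 / (n_actions + 1)
    for t, (x_t, a_t) in enumerate(episode):
        f_t = r_T * beta ** (T - t)                  # credit assigned to step t
        W[(x_t, a_t)] = W.get((x_t, a_t), 0.0) + f_t
    return W

# Example: the episode (s,a1)-(s,a1)-(s,a1)-(s,a2) reaching G with reward 100
# yields W(s,a1) ~ 16.0 < W(s,a2) ~ 33.3, a rational assignment.
W = psp_update({}, [("s", "a1"), ("s", "a1"), ("s", "a1"), ("s", "a2")], 100.0, n_actions=2)
```

The action closest to the goal receives the largest share of the reward, matching the rational-assignment example above.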



Our Experiments

1. Pursuit Game: 4 Hunters and 1 Prey
   Torus Grid World. Requires the agents' cooperative work to capture the prey.
   [Figure: initial state and goal state, showing the Hunters and the Prey.]

2. Pursuit Game: 3 Hunters and multiple Preys
   Torus Triangular World. The required cooperative work includes task scheduling to capture the preys.
   [Figure: initial state, first goal state, and second goal state.]

3. Neo "Block World" domain: 3 groups of evacuees and 3 shelters of varying degrees of safety
   Grid World. The required cooperative work includes conflict resolution and information sharing to evacuate.




Experiment 1 : 4 Hunters and 1 Prey Pursuit Game

Objective: To confirm that cooperative behavior emerges through Profit Sharing.

Hypothesis:

Cooperative behaviors such as result sharing, task sharing, and conflict resolution will emerge.

Setting : Torus Grid World, size 15x15; sight size of each agent: 5x5.

- Each hunter modifies its own lookup table by PSP independently.

- The hunters and the prey are located randomly at the initial state of each episode.

- The hunters learn by PSP; the prey moves randomly.

Modeling : Each hunter consists of a State Recognizer, an Action Selector, a LookUp Table, and a PSP module as a learner (an action-selection sketch follows the figure below).

[Figure: 4 Hunters, 1 Prey. Each hunter agent is built from a State Recognizer (input), an Action Selector (action), a LookUp Table W(S, a), and a Profit Sharing learner driven by the reward.]
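As one concrete reading of the Action Selector, here is a minimal weighted (roulette) selection over the lookup table W(S, a); the proportional rule and the small exploration floor are assumptions, since the slides do not state how actions are drawn from the weights.

```python
import random

# Illustrative Action Selector: choose an action with probability proportional
# to its learned weight W(S, a).  The small floor keeps unvisited actions
# selectable; both the rule and the floor value are assumptions.
def select_action(W, state, actions, floor=1e-3):
    weights = [W.get((state, a), 0.0) + floor for a in actions]
    return random.choices(actions, weights=weights, k=1)[0]

# Usage with the weights produced by psp_update above (hypothetical state "s"):
# action = select_action(W, "s", ["a1", "a2"])
```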



Experiment 1 : Results

1. Emerged Behavior

2. Convergence

[Figures: (1) traces of Hunters 1-4 and the Prey over a learned episode, with numbers marking time steps; (2) a convergence plot of the required steps per episode.]

  • 3. Discussion

  • 1. A hunter takes advantage of the other hunters as landmarks to capture the prey. (Result sharing)

  • 2. Each hunter plays its own role in capturing the prey. (Task sharing)

  • 3. After learning, no deadlock or conflict situations occur as long as each hunter follows its own strategy. (Conflict resolution)





Experiment 2 : 3 Hunters and Multiple Prey Pursuit Game

Objective: To confirm that "task scheduling" knowledge emerges through PSP in an environment with conjunctive multiple goals.

Which proverb holds for reinforcement learning agents?

Proverb 1: He who runs after two hares will catch neither.

Proverb 2: Kill two birds with one stone.

Hypothesis:

If the agents know the locations of the prey and of the other agents, they realize Proverb 2; sensory limitations, however, make them behave as in Proverb 1.

Setting: Torus Triangular World with 7 triangles on each edge.

- Sight size: 5 triangles on each edge, or 7 triangles on each edge.

- The prey move randomly.

- Each hunter modifies its own lookup table by PSP independently.

Modeling : Each hunter consists of a State Recognizer, an Action Selector, a LookUp Table, and a PSP module as a learner.

[Figure: 3 Hunters, 2 Prey. Each hunter agent (State Recognizer, Action Selector, LookUp Table W(S, a), Profit Sharing learner) is structured as in Experiment 1.]



Experiment 2 : Results

1. Convergence

  • 2. Discussion

  • Without a global scheduling mechanism, the hunters capture the prey in a reasonable order (e.g., the closest prey first).

  • The larger the number of prey in the environment, the more steps are required to capture the first prey, because coordinating each hunter's choice of target becomes more difficult. This implies that the hunters' targets are scattered. (Proverb 1)

  • The number of steps required to capture the last prey in the multiple-prey environment is smaller than the number required to capture the first prey in the single-prey environment. This implies that the hunters pursue multiple prey simultaneously. (Proverb 2)




Experiment 3 : Neo "Block World" domain -No.1-

Objective: To confirm that "opportunistic" knowledge emerges through PSP in an environment with disjunctive multiple goals.

When there is more than one alternative for obtaining a reward in the environment, can the agents behave reasonably?

Hypothesis:

If the agents know the locations of the "safe places" correctly, each agent can select the best place to evacuate; sensory limitations, however, make them wander back and forth in confusion.

Setting : Graph World; size 15 x 15; sight size 7 x 7.

- 2 groups of evacuees, 2 groups of shelters.

- Each group of evacuees learns by PSP independently.

- The groups and the shelters are located randomly at the initial state of each episode.

Input of a group: the group's own 7x7 input; no input sharing.

Output of a group: {walk-north, walk-south, walk-east, walk-west, stay}

Reward (a minimal sketch follows this list):

- Each group receives a reward only when it moves into a shelter.

- The amount of reward depends on the degree of the shelter's safety.

- Each shelter has unlimited capacity.

Modeling : Each agent (a group of evacuees) consists of a State Recognizer, an Action Selector, a LookUp Table, and a PSP module as a learner.
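A minimal sketch of the reward structure just described; the shelter coordinates and the specific reward amounts are hypothetical, since the slides only state that the reward depends on each shelter's degree of safety.

```python
# Illustrative delayed reward for the evacuation domain: a group is rewarded
# only on the step it enters a shelter, and the amount reflects that shelter's
# safety.  The positions and values below are hypothetical.
SHELTER_REWARD = {(3, 4): 100.0,    # safer shelter
                  (12, 9): 50.0}    # less safe shelter

def reward(group_position):
    """Non-zero only when the group stands on a shelter node."""
    return SHELTER_REWARD.get(group_position, 0.0)
```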



Experiment 3 : Results

1. Convergence

[Figure: evacuation graph world; legend: available path, unavailable path, safe node, a group of evacuees.]

2. Discussion

1. The agents learned to obtain the larger amount of reward: when shelter 1's reward equals shelter 2's, they learned stochastic policies; when the difference between the two rewards is large, they learned deterministic policies that appear to be nearly optimal.

2. In the latter case (a large reward difference), the other agent serves as a landmark in the search for the shelter.



Experiment 4 : Neo “Block World” domain

Objective: To examine the effects of "sharing sensory information" on the agents' learning and behavior.

Hypothesis:

Sharing sensory input increases the size of the state space and the time required to converge, but the resulting policies become closer to optimal than those of agents that do not share information, because sharing reduces the agents' perceptual aliasing.

Setting : Graph World; size 15 x 15; sight size 7 x 7.

- 3 groups of evacuees, 3 shelters.

- Each group of evacuees learns by PSP independently.

- The groups are located randomly at the initial state of each episode.

Input of a group: the group's own 7x7 input, plus information from the Blackboard.

Output of a group: {walk-north, walk-south, walk-east, walk-west, stay}

Reward :

- Each group receives a reward only when it moves into a shelter.

- The degree of safety is the same for every shelter.

- The rewards are not shared among the agents.

- Each shelter has unlimited capacity.



Experiment 4 : Neo “Block World” domain -No.2-

Modeling :

  • Model 1: Each agent consists of a State Recognizer, an Action Selector, a LookUp Table, and a PSP module as a learner. Agents share their sensory input by means of a BlackBoard (B.B.) and combine it with their own input (a sketch follows the figure below).

[Figure: agent architecture with a shared BlackBoard. The Environment and the BlackBoard (written to by the other agents) provide the observation Ot ∈ {O1, O2, ..., Om} (t = 1, ..., T) to the State Recognizer; the Action Selector chooses an action at ∈ {a1, a2, ..., al} (t = 1, ..., T) from the LookUp Table Wn(O, a) of size m x l; the Profit Sharing learner updates the table with f(Rn, Oj) (j = 1, ..., T), where Rn is the reward received at the end (t = T) of the n-th episode.]
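A minimal sketch of how a sharing agent might combine its own sensory input with the other agents' blackboard entries into the observation key that indexes the lookup table; the dictionary blackboard and the simple concatenation rule are assumptions, since the slides only say that the inputs are combined.

```python
# Illustrative blackboard sharing: every agent posts its local observation, and
# a sharing agent's state is its own observation combined with the others'.
# The data structures and the concatenation rule are assumed, not from the slides.
def post_observation(blackboard, agent_id, observation):
    """Write this agent's latest local observation to the shared blackboard."""
    blackboard[agent_id] = observation

def combined_state(blackboard, agent_id, own_observation):
    """Own observation plus the (sorted) observations posted by other agents."""
    others = tuple(sorted((k, v) for k, v in blackboard.items() if k != agent_id))
    return (own_observation, others)

# Usage: index the (larger) lookup table of a sharing agent, e.g.
#   W[(combined_state(bb, "group1", obs), action)]
```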



Experiment 4 : Results

1. Convergence

2. Discussion

1. In the Initial Stage: The required steps to shelter of a Non-sharing-Agent reduces faster than that of a Sharing-Agent.Non-sharing-Agent seems to be able to generalize the state and behave rationally even in inexperienced state. On the other hand, Sharing-Agent needs to experience discriminated state spaces, the numbers of which is larger than generalized state space. Therefore, it takes longer time to reduce the number of steps than Non-sharing agent does.

2. In the Final Stage: The performance of a Sharing-Agent is better than that of a Non-Sharing-Agent.Non-sharing-Agent seems to overgeneralize the spaces and to be confused by aliases. On the other hand, Sharing-Agent seems to refine the policy successfully and hard to be confused.



Conclusion

  • Agents learn suitable behaviors in a dynamic environment containing multiple agents and goals, provided there is no aliasing caused by sensory limitations, by the concurrent learning of other agents, or by the existence of multiple sources of reward.

  • Strict division of the state space causes state explosion and worse performance in the early stages of learning.

Future Works

  • Development of a structured reinforcement learning mechanism.

    Hypothesis: a structured mechanism facilitates knowledge transfer.

    • Agents learn knowledge about the appropriate generalization level of the state space.

    • Agents learn knowledge about the appropriate amount of communication with others.

  • Competitive learning

    • Agents compete for resources.

  • We need to resolve the structural credit assignment problem.

