Distributed Stochastic Optimization via Correlated Scheduling

[Diagram: sensors 1 and 2 send observations ω1(t) and ω2(t) to a Fusion Center.]

Michael J. Neely

University of Southern California

http://www-bcf.usc.edu/~mjneely

Distributed sensor reports

[Diagram: sensors 1 and 2 send observations ω1(t) and ω2(t) to a Fusion Center.]

  • ωi(t) = 0/1 if sensor i observes the event on slot t
  • Pi(t) = 0/1 if sensor i reports on slot t
  • Utility: U(t) = min[P1(t)ω1(t) + (1/2)P2(t)ω2(t), 1]

Redundant reports do not increase utility.

Maximize:    U  (time average utility)

Subject to:  P1 ≤ c  and  P2 ≤ c  (time average power)
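As a quick sanity check of the utility model, here is a minimal Python sketch (the function name and test values are illustrative, not from the slides):

```python
# Utility of one slot: U(t) = min[P1*w1 + 0.5*P2*w2, 1].
def utility(P1, w1, P2, w2):
    return min(P1 * w1 + 0.5 * P2 * w2, 1.0)

# If sensor 1 already reported its event, sensor 2's report is redundant:
assert utility(1, 1, 1, 1) == utility(1, 1, 0, 1) == 1.0
# Sensor 2 alone contributes only half the utility:
assert utility(0, 1, 1, 1) == 0.5
```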

Main ideas for this example

  • Utility function is non-separable.
  • Redundant reports do not bring extra utility.
  • A centralized algorithm would never send redundant reports (it wastes power).
  • A distributed algorithm faces these challenges:
    • Sensor 2 does not know if sensor 1 observed an event.
    • Sensor 2 does not know if sensor 1 reported anything.
Assumed structure

[Timeline: the nodes agree on a plan before time 0; slots t = 0, 1, 2, 3, 4, … follow.]

Coordinate on a plan before time 0.

Distributively implement plan after time 0.

Example “plans”

  • Example plan:
  • Sensor 1:
    • t even → Do not report.
    • t odd → Report if ω1(t)=1.
  • Sensor 2:
    • t even → Report with probability p if ω2(t)=1.
    • t odd → Do not report.
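As a sketch, such a plan is purely a function of the slot parity and each sensor's local observation (the function names and default p are illustrative):

```python
import random

# Time-division plan: sensor 1 owns odd slots, sensor 2 owns even slots.
def sensor1_report(t, w1):
    return 1 if (t % 2 == 1 and w1 == 1) else 0

def sensor2_report(t, w2, p=0.5):
    if t % 2 == 0 and w2 == 1:
        return 1 if random.random() < p else 0  # report with probability p
    return 0
```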
Common source of randomness

  • Example: 1 slot = 1 day
  • Each person looks at the Boston Globe every day:
    • If first letter is a “T” → Plan 1
    • If first letter is an “S” → Plan 2
    • Etc.
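In an implementation, a pseudorandom generator seeded with a pre-agreed value can stand in for the newspaper; a minimal sketch (the seed value is an illustrative placeholder):

```python
import random

SHARED_SEED = 12345  # agreed on before time 0 (illustrative value)

def common_plan(t, num_plans):
    """Every node evaluates this locally and gets the same plan index for slot t."""
    rng = random.Random(SHARED_SEED + t)  # same seed + slot -> same draw at every node
    return rng.randrange(num_plans)
```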
Specific example

  • Assume:
    • Pr[ω1(t)=1] = ¾, Pr[ω2(t)=1] = ½
    • ω1(t), ω2(t) independent
    • Power constraint c = 1/3
  • Approach 1: Independent reporting
    • If ω1(t)=1, sensor 1 reports with probability θ1
    • If ω2(t)=1, sensor 2 reports with probability θ2
    • Optimizing θ1, θ2 gives u = 4/9 ≈ 0.44444
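Since the expected utility is increasing in both reporting probabilities, the power constraints bind; a short sketch recovering the 4/9 figure under the assumptions above:

```python
# Power constraints: (3/4)*theta1 <= 1/3 and (1/2)*theta2 <= 1/3, both tight.
theta1 = (1/3) / (3/4)        # = 4/9
theta2 = (1/3) / (1/2)        # = 2/3
r1 = (3/4) * theta1           # Pr[sensor 1 reports] = 1/3
r2 = (1/2) * theta2           # Pr[sensor 2 reports] = 1/3
# Sensor 1's report alone yields utility 1; otherwise sensor 2's report yields 1/2:
u = r1 + (1 - r1) * r2 * 0.5
print(u)                      # 0.4444... = 4/9
```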
Approach 2: Correlated reporting

  • Pure strategy 1:
    • Sensor 1 reports if and only if ω1(t)=1.
    • Sensor 2 does not report.
  • Pure strategy 2:
    • Sensor 1 does not report.
    • Sensor 2 reports if and only if ω2(t)=1.
  • Pure strategy 3:
    • Sensor 1 reports if and only if ω1(t)=1.
    • Sensor 2 reports if and only if ω2(t)=1.
Approach 2: Correlated reporting

  • X(t) = iid random variable (commonly known):
    • Pr[X(t)=1] = θ1
    • Pr[X(t)=2] = θ2
    • Pr[X(t)=3] = θ3
  • On slot t:
    • Sensors observe X(t)
    • If X(t)=k, sensors use pure strategy k.

Optimizing θ1, θ2, θ3 gives u = 23/48 ≈ 0.47917
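This optimization is a small linear program; a sketch using scipy (the per-strategy averages follow from Pr[ω1=1]=3/4 and Pr[ω2=1]=1/2; any leftover probability mass means no one reports):

```python
from scipy.optimize import linprog

# Expected utility and power of pure strategies 1-3:
u  = [3/4, 1/4, 3/4 + (1/4) * (1/2) * (1/2)]   # strategy 3: sensor 1 covers, else sensor 2 adds 1/2
p1 = [3/4, 0.0, 3/4]                            # sensor 1 average power
p2 = [0.0, 1/2, 1/2]                            # sensor 2 average power

res = linprog(
    c=[-x for x in u],                 # linprog minimizes, so negate utility
    A_ub=[p1, p2, [1.0, 1.0, 1.0]],    # two power constraints; thetas sum to <= 1
    b_ub=[1/3, 1/3, 1.0],
    bounds=[(0, 1)] * 3,
)
print(-res.fun)                        # 0.47916... = 23/48
print(res.x)                           # optimal (theta1, theta2, theta3) = (1/3, 5/9, 1/9)
```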

Summary of approaches

  Strategy                  u
  Independent reporting     0.44444
  Correlated reporting      0.47917
  Centralized reporting     0.5

It can be shown that correlated reporting (u = 23/48) is optimal over all distributed strategies!

General distributed optimization

Maximize:    U  (time average utility)

Subject to:  Pk ≤ c  for k in {1, …, K}  (time average penalties)

ω(t) = (ω1(t), …, ωN(t))   (random event vector)

π(ω) = Pr[ω(t) = (ω1, …, ωN)]

α(t) = (α1(t), …, αN(t))   (action vector)

U(t) = u(α(t), ω(t))

Pk(t) = pk(α(t), ω(t))

Pure strategies

A pure strategy is a deterministic vector-valued function:

g(ω) = (g1(ω1), g2(ω2), …, gN(ωN))

Let M = # pure strategies:

M = |A1|^|Ω1| × |A2|^|Ω2| × … × |AN|^|ΩN|
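A quick sketch of the count, with small illustrative alphabet sizes (not the paper's):

```python
from itertools import product

A     = [2, 2]   # action alphabet sizes |A_1|, |A_2| (illustrative)
Omega = [3, 3]   # observation alphabet sizes |Omega_1|, |Omega_2| (illustrative)

# User n has |A_n| ** |Omega_n| deterministic local functions g_n: Omega_n -> A_n.
M = 1
for a, w in zip(A, Omega):
    M *= a ** w
print(M)         # 8 * 8 = 64 pure strategies

# Explicit enumeration of user 1's local functions agrees with the count:
g1_all = list(product(range(A[0]), repeat=Omega[0]))
assert len(g1_all) == A[0] ** Omega[0]
```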

Optimality Theorem

  • There exist:
    • K+1 pure strategies g(m)(ω)
    • Probabilities θ1, θ2, …, θK+1
  • such that the following distributed algorithm is optimal:
    • X(t) = iid, Pr[X(t)=m] = θm
    • Each user observes X(t)
    • If X(t)=m → use strategy g(m)(ω).
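A sketch of this algorithm in code (the strategy tables, probabilities, and shared-seed mechanism below are illustrative placeholders; the real g(m) and θm come from the LP on the next slide):

```python
import random

SHARED_SEED = 0                  # agreed on before time 0 (illustrative)
thetas = [0.5, 0.3, 0.2]         # placeholder Pr[X(t) = m]

# Each pure strategy is a per-user table mapping observation omega_n -> action:
strategies = [
    [{0: 0, 1: 1}, {0: 0, 1: 0}],   # g(1): user 1 acts on its event, user 2 idles
    [{0: 0, 1: 0}, {0: 0, 1: 1}],   # g(2): user 2 acts, user 1 idles
    [{0: 0, 1: 1}, {0: 0, 1: 1}],   # g(3): both act
]

def local_action(n, t, omega_n):
    """Runs at user n with only its own observation; all users draw the same X(t)."""
    rng = random.Random(SHARED_SEED + t)
    m = rng.choices(range(len(thetas)), weights=thetas)[0]   # common X(t)
    return strategies[m][n][omega_n]
```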
LP and complexity reduction

  • The probabilities can be found by a linear program (LP).
  • Unfortunately, the LP has M variables.
  • If (ω1(t), …, ωN(t)) are mutually independent and the utility function satisfies a preferred-action property, complexity can be reduced.
  • Example: N=2 users, |A1|=|A2|=2:
    • Old complexity = 2^(|Ω1|+|Ω2|)
    • New complexity = (|Ω1|+1)(|Ω2|+1)
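To make the gap concrete, a one-off computation with illustrative observation alphabet sizes:

```python
W1, W2 = 10, 10                 # illustrative |Omega_1|, |Omega_2|
old = 2 ** (W1 + W2)            # LP variables without the reduction (|A_n| = 2)
new = (W1 + 1) * (W2 + 1)       # with independence + the preferred-action property
print(old, new)                 # 1048576 vs. 121
```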
Discussion of Theorem 1

  • Theorem 1 solves the problem for distributed scheduling, but:
    • Requires an offline LP to be solved before time 0.
    • Requires full knowledge of π(ω) probabilities.
Online Dynamic Approach

  • We want an algorithm that:
    • Operates online
    • Does not need π(ω) probabilities.
    • Can adapt when these probabilities change.
  • Such an algorithm must use feedback:
    • Assume feedback is a fixed delay D.
    • Assume D>1.
    • Such feedback cannot improve average utility beyond the distributed optimum.
Lyapunov optimization approach

  • Define K virtual queues Q1(t), …, QK(t).
  • Every slot t, observe the queues and choose the strategy m in {1, …, M} that maximizes a queue-weighted expression.
  • Update queues with delayed feedback:
  • Qk(t+1) = max[Qk(t) + Pk(t-D) - c, 0]

In the update, the delayed power Pk(t-D) plays the role of the queue “arrivals” and the constant c the role of the “service.” If each virtual queue is stable, then the time average power satisfies Pk ≤ c.
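A minimal sketch of the queue machinery (the drift-plus-penalty weighting with parameter V is the standard Lyapunov-optimization form; how the per-strategy averages u_bar and p_bar are estimated online is an assumption here, not spelled out on the slides):

```python
def pick_strategy(Q, u_bar, p_bar, V):
    """Choose m maximizing V*u_bar[m] - sum_k Q[k]*p_bar[k][m] (queue-weighted score)."""
    M, K = len(u_bar), len(Q)
    return max(range(M),
               key=lambda m: V * u_bar[m] - sum(Q[k] * p_bar[k][m] for k in range(K)))

def update_queues(Q, P_delayed, c):
    """Qk(t+1) = max[Qk(t) + Pk(t-D) - c, 0], driven by D-slot delayed power feedback."""
    return [max(q + p - c, 0.0) for q, p in zip(Q, P_delayed)]
```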

Separable problems

  • If the utility and penalty functions are a separable sum of functions of the individual variables (αn(t), ωn(t)), then:
    • There is no optimality gap between centralized and distributed algorithms.
    • Problem complexity reduces from exponential to linear.
Simulation (non-separable problem)

  • 3-user problem
  • αn(t) in {0, 1} for n in {1, 2, 3}
  • ωn(t) in {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}
  • V = 1/ε
  • Gives an O(ε) guarantee on optimality
  • Convergence time scales with 1/ε
Utility versus V parameter (V = 1/ε)

[Plot: average utility versus V (recall V = 1/ε).]

Average power versus time

[Plot: average power up to time t versus time t, for V = 10, 50, 100, shown against the power constraint 1/3.]

Adaptation to non-ergodic changes

[Plot: achieved utility tracks the optimal utility for phase 2 and for phases 1 and 3 as the environment changes; average power oscillates about the constraint c.]

Conclusions

  • The paper introduces correlated scheduling via a common source of randomness.
  • A common source of randomness is crucial for optimality in a distributed setting.
  • In general there is an optimality gap between distributed and centralized problems (gap = 0 for separable problems).
  • The paper gives a complexity reduction technique.
  • Online implementation via Lyapunov optimization.
  • The online algorithm adapts to a changing environment.