
Stochastic Models for Communication Networks

Jean Walrand, University of California, Berkeley



Presentation Transcript


  1. Stochastic Models for Communication Networks Jean Walrand University of California, Berkeley

  2. Contents (tentative) • Big Picture • Store-and-Forward • Packets, Transactions, Users • Queuing • Little’s Law and Applications • Stability of Markov Chains • Scheduling • Markov Decision Problems • Discrete Time • DT: LP Formulation • Continuous Time • Transaction-Level Models • Models of TCP • Stability • Random Networks • Connectivity Results: Percolations

  3. Big Picture • Store-and-Forward

  4. Big Picture • Packets • Delay • Backlog • Transactions • Bit rate • Duration • Users

  5. Queuing • Little’s Law • Stability of Markov Chains

  6. Little’s Law • Roughly: Average number in system = Average arrival rate × Average time in system: L = λW • Example: 1000 packet arrivals/second; each packet spends 0.1 second in system ⇒ 100 packets in system, on average • Notes: • System does not have to be FIFO nor work-conserving; applies to any subset of customers • True under weak assumptions (stability)
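The sample-path identity behind Little’s law can be seen in a short simulation. A minimal sketch, assuming an M/M/1 FIFO model (the function name, parameter values, and seed are illustrative choices, not from the slides):

```python
import random

def simulate_mm1(lam, mu, n_jobs, seed=0):
    """Simulate an M/M/1 FIFO queue; return (L, lam_hat, W) measured over
    the window from time 0 to the last departure."""
    rng = random.Random(seed)
    t, free_at = 0.0, 0.0
    sojourns, last_departure = [], 0.0
    for _ in range(n_jobs):
        t += rng.expovariate(lam)            # Poisson arrival times
        start = max(t, free_at)              # FIFO: wait for the server
        free_at = start + rng.expovariate(mu)
        sojourns.append(free_at - t)         # time in system of this packet
        last_departure = free_at
    T = last_departure
    W = sum(sojourns) / n_jobs               # average time in system
    L = sum(sojourns) / T                    # occupancy integral / horizon
    lam_hat = n_jobs / T                     # empirical arrival rate
    return L, lam_hat, W

L, lam_hat, W = simulate_mm1(lam=1000.0, mu=1100.0, n_jobs=20000)
```

Since every sojourn completes by the last departure, the occupancy integral equals the sum of sojourn times, so on this sample path L = lam_hat × W holds exactly (up to floating-point rounding), not just in the limit.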

  7. Little’s Law… • Extension: Average income per unit time = Average arrival rate × Average cost per customer: H = λG • Example: 1000 customer arrivals/second; each customer spends $5.00 on average ⇒ the system earns $5,000.00 per second, on average

  8. Little’s Law… L = λW, H = λG • Illustration 1: Utilization. Packets arrive at rate λ (per second); average transmission time = 1/μ (seconds) ⇒ the transmitter is busy a fraction λ/μ of the time • Indeed: System = transmitter; number in system = 1 if busy, 0 otherwise; W = average time in system = 1/μ; L = average number in system = fraction of time busy. Hence, fraction of time busy = λ/μ =: link utilization

  9. Little’s Law… L = λW, H = λG • Illustration 2: Delay in the M/G/1 queue. Packets arrive at rate λ (per second); transmission time = S; E(S) = 1/μ; var(S) = σ²; ρ = λ/μ • Indeed: H = E(queuing delay) = E(Q) [see *]; G = E(SQ + S²/2) = E(S)E(Q) + E(S²)/2, where Q = queuing delay. Thus E(Q) = λ{E(S)E(Q) + E(S²)/2}. Solving, E(Q) = λE(S²)/(2(1 − ρ)); adding E(S) gives the expected delay.
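The fixed point in the slide solves to E(Q) = λE(S²)/(2(1 − ρ)), the Pollaczek–Khinchine waiting time. A minimal numerical sketch (the helper name is mine):

```python
def mg1_delay(lam, ES, ES2):
    """Expected total delay (queuing + service) in a stable M/G/1 queue,
    from E(Q) = lam * E(S^2) / (2 * (1 - rho)) with rho = lam * E(S)."""
    rho = lam * ES
    if rho >= 1:
        raise ValueError("unstable: rho >= 1")
    EQ = lam * ES2 / (2 * (1 - rho))   # expected queuing delay
    return EQ + ES                     # add the transmission time itself
```

For example, exponential service with μ = 1 and λ = 0.5 gives E(S²) = 2 and total delay 2.0, matching the M/M/1 value 1/(μ − λ); deterministic service (E(S²) = 1) gives 1.5, since the queuing part is halved.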

  10. Little’s Law… • Illustration 2: Delay in M/G/1 queue… * We used the fact that a typical arriving customer sees the typical state of the system… This is true because the arrivals are Poisson. Indeed, P[queue is in state x at t | arrival in (t, t + ε)] = P(queue is in state x at time t), because the state at time t is a function of the past of the Poisson process, and the process has independent increments • Note: Not true if the arrivals are not Poisson

  11. Little’s Law… • Illustration 2: Delay in M/G/1 queue… With E(S²) = 1/μ² + σ²: • M/M/1: σ² = 1/μ² • M/D/1: σ² = 0, so the average queuing delay is half that of M/M/1 • σ² >> 1/μ² ⇒ the average delay is very large

  12. Little’s Law… • Illustration 3: Delay in the M/G/1 queue with vacations. Model: the server goes on vacation every time the queue empties; the vacations are i.i.d. and distributed like V. Then T = T0 + E(V²)/(2E(V)), where T0 = average delay without vacations. • Derivation: Assume the server pays U when its residual vacation is U and each customer pays as in the M/G/1 queue. Then the total expected pay is the average waiting time until service: λE(QS + S²/2) + aE(V²/2) = E(Q), where a = rate of vacations. To find a, note that the server is idle for E(V)·a·t = (1 − ρ)t seconds out of t >> 1 seconds. Hence, a = (1 − ρ)/E(V). Substituting gives the result.
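Following the derivation in the slide, the vacation penalty is the mean residual vacation E(V²)/(2E(V)). A small sketch (helper name and parameterization are mine):

```python
def mg1_vacation_delay(lam, ES, ES2, EV, EV2):
    """Expected total delay in an M/G/1 queue with i.i.d. vacations V taken
    whenever the queue empties: T = T0 + E(V^2) / (2 * E(V))."""
    rho = lam * ES
    if rho >= 1:
        raise ValueError("unstable: rho >= 1")
    T0 = lam * ES2 / (2 * (1 - rho)) + ES   # M/G/1 delay without vacations
    return T0 + EV2 / (2 * EV)              # residual-vacation penalty
```

For λ = 0.5, exponential service with μ = 1 (so E(S²) = 2), and deterministic vacations V = 1, the delay is 2.0 + 0.5 = 2.5.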

  13. Little’s Law… [Figure: cumulative count of bits seen by time t at point 1 (source S) and at point 2 (destination D); the horizontal gap between the two curves at height n is the delay of bit n.]

  14. Little’s Law… [Figure: N customers arrive during [0, T]; X(t) = number in system; T(n) = time in system of customer n; S = shaded area between the arrival and departure curves.] S = T(1) + … + T(N) = integral of X(t) over [0, T]. Dividing by T: (1/T)∫₀ᵀ X(t) dt = (N/T) × (T(1) + … + T(N))/N ⇒ Average occupancy = (average delay) × (average arrival rate)

  15. Stability of Markov Chains • Markov Chain (DT): • Assume irreducible. Then • all states are null recurrent (NR), positive recurrent (PR), or transient (T) together (certainly PR if finite) • there is 0 or 1 invariant distribution: πP = π; 1 if PR, 0 otherwise • Moreover, for every state i, one has almost surely (1/n)Σ_{m≤n} 1{X(m) = i} → π(i) (interpreted as 0 if the chain is not PR) • Finally, if PR and aperiodic, then P[X(n) = i] → π(i)

  16. Stability of Markov Chains… • Pakes’ Lemma: • Assume the MC is irreducible and aperiodic on {0, 1, 2, …}. Define the one-step drift D(i) = E[X(n+1) − X(n) | X(n) = i]. Assume D(i) < ∞ for all i, and that there are some i0 and some a > 0 so that D(i) ≤ −a for all i > i0 • Then, the MC is PR • Proof idea: The MC cannot stay away from {0, 1, …, i0}; if it does for k steps, E(X(n)) decreases by ka… Formally:
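The drift condition is easy to check numerically for a concrete chain. The reflected random walk below is my own illustrative example (not from the slides); its one-step drift is −0.4 for every state i ≥ 1, so Pakes’ Lemma applies with i0 = 0 and a = 0.4:

```python
def drift(kernel, i):
    """One-step drift D(i) = E[X(n+1) - X(n) | X(n) = i] for a transition
    kernel given as a function i -> {j: P(i, j)}."""
    return sum(p * (j - i) for j, p in kernel(i).items())

def reflected_walk(i):
    """Example chain on {0, 1, 2, ...}: up w.p. 0.3, down w.p. 0.7,
    reflected at 0.  D(i) = -0.4 for all i >= 1, so the chain is PR."""
    if i == 0:
        return {0: 0.7, 1: 0.3}
    return {i - 1: 0.7, i + 1: 0.3}
```

For instance, drift(reflected_walk, 5) evaluates to 0.7·(−1) + 0.3·(+1) = −0.4.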

  17. Stability of Markov Chains… • Pakes’ Lemma … simple variation: • Pakes’ Lemma … other variation: the same conclusion holds if there is some finite m such that the m-step drift E[X(n+m) − X(n) | X(n) = i] has the properties indicated above.

  18. Stability of Markov Chains… • Application 1: (Inspired by TCP) First note that X(n) is irreducible and aperiodic. Also, the original form of Pakes’ Lemma applies.

  19. Stability of Markov Chains… • Application 2: (Inspired by switches) Virtual Output Buffer switch with arrival rates λ(1), λ(2), λ(3), λ(4). Think of Bernoulli arrivals, cells of size 1… Note λ(1) + λ(2) < 1 and λ(3) + λ(4) < 1. At each time, serve one matching: X (crossed) or = (parallel). Stability requires λ(1) + λ(3) < 1 and λ(2) + λ(4) < 1. Maximum throughput scheduling: this condition suffices.

  20. Stability of Markov Chains… • Application 2… Maximum Weighted Matching with queue lengths A = 14, B = 11, C = 15, D = 10: B + C > A + D ⇒ serve (B, C)
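For a small switch the maximum weighted matching can be found by brute force over input–output permutations. A sketch assuming the VOQ layout w[i][j] = queue from input i to output j, with A = w[0][0], B = w[0][1], C = w[1][0], D = w[1][1] (this encoding is my own):

```python
from itertools import permutations

def max_weight_matching(w):
    """Brute-force MWM for an n x n switch: w[i][j] is the length of the
    virtual output queue from input i to output j.  Returns the matching
    (input i is served to output perm[i]) and its total weight."""
    n = len(w)
    best_perm, best = None, float("-inf")
    for perm in permutations(range(n)):
        weight = sum(w[i][perm[i]] for i in range(n))
        if weight > best:
            best_perm, best = perm, weight
    return best_perm, best

# Queue lengths from the slide: A = 14, B = 11, C = 15, D = 10
perm, weight = max_weight_matching([[14, 11], [15, 10]])
```

The result is perm = (1, 0) with weight 26: input 0 to output 1 (B) and input 1 to output 0 (C), i.e., serve (B, C).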

  21. Stability of Markov Chains… • Application 2 (Tassiulas, McKeown et al.): Maximum Weighted Matching ⇒ maximum throughput. Proof: V(x) = ||x||² is a Lyapunov function. That is, E[V(X(n+1)) − V(X(n)) | X(n) = x] is finite, and ≤ −ε < 0 for x outside a finite set.

  22. Stability of Markov Chains… • Application 3: Iterated Longest Queue: serve the longest queue, then the next longest that is compatible, etc.… With queue lengths A = 14, B = 8, C = 15, D = 10, this serves: C, B
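The greedy rule can be sketched directly, again assuming the VOQ layout w[i][j] = queue from input i to output j (my own encoding); with A = 14, B = 8, C = 15, D = 10 it picks C first, then B:

```python
def iterated_longest_queue(w):
    """Greedy ILQ schedule for an n x n switch: repeatedly serve the longest
    nonempty VOQ compatible with the queues already chosen (at most one per
    input and one per output).  Returns the served (input, output) pairs."""
    n = len(w)
    queues = sorted(((w[i][j], i, j) for i in range(n) for j in range(n)),
                    reverse=True)            # longest first
    used_in, used_out, schedule = set(), set(), []
    for length, i, j in queues:
        if length > 0 and i not in used_in and j not in used_out:
            used_in.add(i)
            used_out.add(j)
            schedule.append((i, j))
    return schedule
```

On the slide’s example, iterated_longest_queue([[14, 8], [15, 10]]) returns [(1, 0), (0, 1)]: C (length 15), then B (length 8), since A and D conflict with C.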

  23. Stability of Markov Chains… • Application 3 (A. Dimakis): Iterated Longest Queue ⇒ maximum throughput. Proof: V(x) = max_i x_i is a Lyapunov function.

  24. Stability of Markov Chains… • Application 4: Wireless. [Figure: three radio links L1, L2, L3 within an interference radius, and the resulting conflict graph on {L1, L2, L3}.]

  25. Stability of Markov Chains… Resources: j ∈ J; classes: k ∈ K. [Figure: conflict graph on nodes 1, 2, 3 with arrival rates λ1, λ2, λ3; node 1 conflicts with both 2 and 3.] • Possible matches: {1}, {2, 3} • Maximum Weight Matching: {2, 3} • Longest Queue First (LQF): {1}

  26. Stability of Markov Chains… Iterated Longest Queue ⇒ maximum throughput [Dimakis]. [Figure: conflict graph on nodes 1, 2, 3 with arrival rates λ1, λ2, λ3.] • As in the previous example: • Consider fluid limits; • Use the longest queue size as a Lyapunov function. • Carries over to conflict-graph topologies that strictly include trees. • Example:

  27. Stability of Markov Chains… Iterated Longest Queue ⇒ maximum throughput? [Figure: 6-ring conflict graph on nodes 1–6.] • Nominal stability condition: λi + λi+1 < 1. • One fluid limit is unstable for λi > 3/7.* • The stochastic system is stable for almost all feasible λ’s (A. Dimakis). * The big match is picked 2/3 of the time in the fluid limit when the queues are equal, which happens 3 steps after a small match and 2 steps after a big match.

  28. Stability of Markov Chains… Metastability of ILQ in the 8-Ring (Antonios Dimakis). [Figure: 8-ring conflict graph on nodes 1–8.] • Possible matches: {1,3,5,7}, {2,4,6,8}, {1,4,6}, {2,5,7}, … • Two distinct phases: one stable, the other unstable.

  29. Scheduling • Key idea: Order of processing matters. Example: two jobs 1 and 2, processing times X1 = 1 and X2 = 10. Order 1, 2 ⇒ 1 ends at T1 = 1, 2 at T2 = 11, so the sum of completion times T1 + T2 = 12. Order 2, 1 ⇒ T2 = 10, T1 = 11, so T1 + T2 = 21. [Figure: Gantt charts of the two orders.] • For a set of jobs, shortest job first (SRPT) minimizes the sum of waiting times • For random processing times: shortest expected processing time first (SEPT) • The interesting case is when preemption is allowed and when new jobs arrive. • We explore preemption next.

  30. Scheduling … Example: two jobs 1 and 2; processing times X1 = 1 w.p. 0.9, 11 w.p. 0.1; X2 = 2. Order 1, 2 ⇒ E(T1 + T2) = E(X1 + X1 + X2) = 6. Order 2, 1 ⇒ E(T2 + T1) = E(X2 + X2 + X1) = 6. Serve 1, switch to 2 if X1 > 1, then complete 1 ⇒ E(T1 + T2) = 5.2 [T1 + T2 = 1 + 3 w.p. 0.9 and 3 + 13 w.p. 0.1.]

  31. Scheduling … Other example: jobs 1 and 2 have random processing times; one can only interrupt once a stage is completed. • Question: How to schedule to minimize E(sum of completion times)? • Intuition: Maximize the expected rate of progress

  32. Scheduling … • One more example: Thus, one should stop once one reaches a state with a lower index ν(·). Hence, the highest value of ν(·) is achieved in a single step.

  33. Scheduling … Here, the maximum ν is ν(c). The second-highest ν is ν(j) for some j in {a, b}, and it is achieved by the first time that the state leaves {j, c}. Hence, this ν is max{α(a), α(b)}. It follows that the state with the second-highest index is a, and ν(a) = 0.117. Finally, we get:

  34. Scheduling … • Another example: jobs with increasing hazard rates. Consider a set of jobs with service time X distributed on {1, 2, …} with the property that the hazard rate h(n) = P[X = n | X ≥ n] is nondecreasing. Assume also that one can switch jobs after each step. Then the optimal policy is to serve the jobs exhaustively, one by one. Proof: We show, by induction on n, that ν(n) is achieved by the completion time of the job. Assume that this is true for n > m. Then

  35. Scheduling … Claim: It is optimal to process the job with the highest index ν(·) first: the INDEX RULE. Proof: Interchange argument. Consider the following interchange step: t(1); s(2), s(3), …, s(k)

  36. Markov Decision Problems • Objective: How to make decisions in the face of uncertainty • Example: Guessing the next card • Example: Serving queues • Discrete-Time Formulation • Continuous-Time Formulation • Linear Programming Formulation

  37. Markov Decision Problems • Decisions in the face of uncertainty: Should you carry an umbrella? Should you get vaccinated against the flu? Take another card at blackjack? Buy a lottery ticket? Fill up at the next gas station? Guess that the transmitter sent a 1? Take the prelims next time? Stay on for a PhD? Marry my boyfriend? Choose another advisor? Drop out of this silly course? …

  38. Markov Decision Problems • Example (from S. Ross, “Introduction to Stochastic Dynamic Programming”): A perfectly shuffled deck of 52 cards. You may guess once that the next card is an ace: +$10.00 if correct, $0.00 if not. You can decide when to bet on the next card, and you see the cards as they are being turned over. Optimal policy: any guessing time is optimal, as long as there is still an ace in the deck. Proof: Let V(n, m) be the best chance of guessing correctly if there are still n cards with m aces in the deck (0 ≤ m ≤ n). Then V satisfies the Dynamic Programming Equations: V(n, m) = max{ m/n, (m/n)V(n − 1, m − 1) + ((n − m)/n)V(n − 1, m) }, with V(n, 0) = 0; induction gives V(n, m) = m/n.
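The DP equations can be solved in exact arithmetic. A sketch: the recursion below confirms V(n, m) = m/n, which is why any guessing time with an ace still in the deck is optimal (waiting never helps, but never hurts).

```python
from fractions import Fraction
from functools import lru_cache

@lru_cache(maxsize=None)
def V(n, m):
    """Best probability of a correct guess: n cards left, m of them aces."""
    if m == 0:
        return Fraction(0)                       # no ace left: cannot win
    guess_now = Fraction(m, n)
    if n == 1:
        return guess_now                         # must guess the last card
    wait = (Fraction(m, n) * V(n - 1, m - 1)     # next card is an ace
            + Fraction(n - m, n) * V(n - 1, m))  # next card is not an ace
    return max(guess_now, wait)
```

For the full deck, V(52, 4) = 4/52 = 1/13, the same as guessing immediately.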

  39. Markov Decision Problems • DISCRETE TIME: X(n) = DTMC with transition probabilities P(i, j, a), a ∈ A(i). Objective: minimize E{c(X(1), a(1)) + … + c(X(N), a(N))}. Solution: let V(i, n) = min E[c(X(1), a(1)) + … + c(X(n), a(n)) | X(1) = i]. Then V(i, n) = min{c(i, a) + Σ_j P(i, j, a)V(j, n − 1)}; V(·, 0) = 0, where the minimum is over a ∈ A(i). Moreover, the optimal action a = g(i, n) is the minimizing value. • Example: V(1, n) = min{1 + a² + (1 − a)V(1, n − 1), a ∈ [0, 1]} • V(1, 1) = min{1 + a²} = 1, g(1, 1) = 0 • V(1, 2) = min{1 + a² + (1 − a)·1} = 7/4, g(1, 2) = ½ • V(1, 3) = min{1 + a² + (1 − a)·7/4} = 127/64, g(1, 3) = 7/8, …
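The example’s recursion can be run exactly: the quadratic 1 + a² + (1 − a)V is minimized at a = V/2, which stays in [0, 1] because V(1, n) < 2. A sketch (function name is mine):

```python
from fractions import Fraction

def value_iteration(n_steps):
    """Backward recursion V(1, n) = min over a in [0,1] of
    1 + a^2 + (1 - a) V(1, n - 1), with V(1, 0) = 0.
    Returns (V(1, n_steps), [g(1, 1), ..., g(1, n_steps)])."""
    V, policy = Fraction(0), []
    for _ in range(n_steps):
        a = V / 2                     # unconstrained minimizer; in [0, 1] since V < 2
        policy.append(a)
        V = 1 + a * a + (1 - a) * V   # plug the minimizer back in
    return V, policy
```

Three steps reproduce the slide: values 1, 7/4, 127/64 with minimizers 0, 1/2, 7/8.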

  40. Markov Decision Problems • DISCRETE TIME: Average cost. X(n) = DTMC with P(i, j, a), a ∈ A(i). Objective: minimize E{c(X, a(X))}, where X is invariant under P(i, j, a(i)). Solution: roughly, V(i, n) ≈ nV + h(i), so that nV + h(i) = min{c(i, a) + (n − 1)V + Σ_j P(i, j, a)h(j)}. Hence, V + h(i) = min{c(i, a) + Σ_j P(i, j, a)h(j)}

  41. Markov Decision Problems • DISCRETE TIME: Average cost. V + h(i) = min{c(i, a) + Σ_j P(i, j, a)h(j)} • Example: • V + h(1) = min{1 + ar + (1 − a)h(1) + ah(0), a ∈ [0, 1]}; V + h(0) = 0 + h(1) ⇒ V = h(1) − h(0) • V + h(1) = min{1 + h(1), 1 + r + h(0)}, with a = 0 or a = 1 accordingly • h(1) − h(0) = min{1, 1 + r + h(0) − h(1)} • h(1) − h(0) = min{1, r} = V • a = 1 if r < 1 and a = 0 if r > 1

  42. Markov Decision Problems • DISCRETE TIME: Average Cost - Linear Programming
