Probabilistic Packet Scheduling (PPS)


Presentation Transcript


  1. Probabilistic Packet Scheduling (PPS) Ming Zhang, Randy Wang, Larry Peterson, Arvind Krishnamurthy Department of Computer Science Princeton University

  2. Motivation – Lottery Scheduling [Figure: an OS assigns lottery tickets to processes P1–P5 (900, 300, 1000, 500, 1000)] • OS defines a currency and assigns lottery tickets to processes • Processes divide CPU cycles in proportion to their tickets • A process can make local CPU allocation decisions

  3. Proportional Bandwidth Allocation [Figure: example topology with routers S0–S6; annotated rates: A 10Mb/s, 1000t/s; 2Mb/s, 900t/s; 1Mb/s, 2000t/s; B 10Mb/s, 500t/s; 1Mb/s, 300t/s; C 10Mb/s, 1000t/s] • A router defines a currency in tickets/s and assigns tickets to its inputs • Each link maintains a currency exchange rate • Bandwidth at a bottleneck is proportional to ticket share (a worked example follows below) • Routers make local bandwidth allocation decisions and provide isolation
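
To make the proportional split concrete, here is a minimal worked sketch in C++. The 900t/s and 300t/s inputs and the 2Mb/s capacity are taken from the figure above; the function and variable names are illustrative, not from the paper. At a bottleneck of capacity C, input i receives C * t_i / sum(t_j):

    #include <cstdio>

    // Each input i gets share_i = capacity * tickets_i / sum(tickets).
    int main() {
        const double capacity = 2.0;              // bottleneck capacity, Mb/s
        const double tickets[] = {900.0, 300.0};  // ticket rates of two inputs, t/s
        double total = tickets[0] + tickets[1];
        for (int i = 0; i < 2; i++)
            printf("input %d: %.2f Mb/s\n", i, capacity * tickets[i] / total);
        // Prints 1.50 and 0.50 Mb/s: a 3:1 ticket ratio yields a 3:1 bandwidth split.
        return 0;
    }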

  4. Algorithm in Brief • TCP source tags tickets on each packet • Each router runs a variant of RED to decide whether to drop or accept a packet • Relabel packets at each link based on currency exchange rate

  5. Ticket Tagging • OutTktRate – t/s assigned to a TCP source • AvgRate – average throughput of a flow • Tag OutTktRate / AvgRate onto each packet (see the sketch below) • The tickets on a packet are inversely proportional to the flow's average throughput
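
A minimal sketch of the tagging step, assuming a packet struct with a ticket field (the struct and names are mine, not the paper's):

    // Tag each outgoing packet with tickets = OutTktRate / AvgRate.
    // A faster flow puts fewer tickets on each packet, so per-packet
    // tickets are inversely proportional to average throughput.
    struct Packet { double tickets; /* ... headers, payload ... */ };

    void tag_packet(Packet &pkt, double out_tkt_rate, double avg_rate) {
        // out_tkt_rate: tickets/s assigned to this TCP source
        // avg_rate:     measured average throughput of the flow
        pkt.tickets = out_tkt_rate / avg_rate;
    }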

  6. Ticket-based RED (TRED) • InTkt is the tickets on an incoming packet; ExpectTkt is the tickets that “should” be on an incoming packet • ExpectTkt is computed as the average tickets over all incoming packets • Bottlenecked flows put approximately ExpectTkt tickets on their packets
     If MinThresh < AvgQLen < MaxThresh:
         compute probability p the same as in RED
         p' = p * (ExpectTkt / InTkt)^3
         drop the packet with probability p'
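
The drop rule above, sketched in C++. Only the threshold test and the (ExpectTkt/InTkt)^3 scaling come from the slide; the linear base probability p and the uniform random draw are standard-RED placeholders I have assumed:

    #include <cmath>
    #include <cstdlib>

    // TRED drop decision for one incoming packet. Packets carrying fewer
    // tickets than expected (InTkt < ExpectTkt) see their drop probability
    // scaled up by (ExpectTkt/InTkt)^3.
    bool tred_drop(double avg_qlen, double min_thresh, double max_thresh,
                   double in_tkt, double expect_tkt) {
        if (avg_qlen <= min_thresh) return false;  // below RED range: accept
        if (avg_qlen >= max_thresh) return true;   // above RED range: drop
        // Base RED probability, linear in the average queue length.
        double p = (avg_qlen - min_thresh) / (max_thresh - min_thresh);
        // TRED scaling: penalize packets carrying fewer tickets than expected.
        double p_scaled = p * std::pow(expect_tkt / in_tkt, 3.0);
        if (p_scaled > 1.0) p_scaled = 1.0;
        return (double)rand() / RAND_MAX < p_scaled;
    }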

  7. Exchange Rate • A multi-hop flow may pass through many routers • Different routers have their own currencies, so tickets must be converted between them • Exchange rate at each link: XRate = OutTktRate / InTktRate • Relabel packets according to the exchange rate: OutTkt = InTkt * XRate (sketch below)
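
A sketch of per-link relabeling under these definitions (function and parameter names are illustrative):

    // Relabel a packet's tickets when it crosses into a new router's currency.
    double relabel(double in_tkt, double out_tkt_rate, double in_tkt_rate) {
        double xrate = out_tkt_rate / in_tkt_rate;  // link's exchange rate
        return in_tkt * xrate;                      // tickets in the local currency
    }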

  8. Receiver-based Algorithm • Controls bandwidth allocation at the receiver • AckOutTktRate – t/s assigned to an output • Tagging and relabeling of ACKs work as in the sender-based scheme • Compute OutTktRate from the tickets on ACKs: OutTktRate = AckInTktRate
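
A sketch of the receiver-based rate update as I read the slide: the sender sets OutTktRate to the ticket rate measured on incoming ACKs. The EWMA measurement below is an assumed detail the slide does not specify:

    // Receiver-based control: the sender derives its data-packet ticket
    // rate from the ticket rate observed on incoming ACKs.
    struct Sender {
        double ack_in_tkt_rate = 0.0;  // measured tickets/s arriving on ACKs
        double out_tkt_rate    = 0.0;  // tickets/s to tag onto data packets

        void on_ack(double ack_tickets, double ack_interval_s) {
            const double alpha = 0.125;  // assumed EWMA weight
            double sample = ack_tickets / ack_interval_s;
            ack_in_tkt_rate = (1 - alpha) * ack_in_tkt_rate + alpha * sample;
            out_tkt_rate = ack_in_tkt_rate;  // OutTktRate = AckInTktRate
        }
    };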

  9. One-Hop Configuration [Figure: two groups of 30 senders, tagged at 100t/s and 200t/s, share one bottleneck link from P to Q (4.65Mb/s, 26ms) with a 3000t/s currency] • Simulations are run in ns-2 • Both sender-based and receiver-based variants are evaluated

  10. One-Hop Results

  11. Multi-Hop Configuration [Figure: multi-hop topology with senders A1–A10 and B1–B10, sinks S1–S20, and routers P1–P4; per-flow tickets of 100t/s, 200t/s, and 1000t/s; router currencies 11000t/s and 5500t/s; 1.65Mb/s links with 26ms delay]

  12. Multi-Hop Results

  13. Fairly Share Unused Bandwidth

  14. Multiple Bottlenecks Configuration [Figure: topology with nodes S0–S9 and sources A–D, each on a 10Mb/s access link; ticket assignments 2000, 3600, 1000, 1400, 1200, and 700; internal links of 1.2Mb/s, 5Mb/s, 3Mb/s, and one link marked “? Mb/s”]

  15. Multiple Bottlenecks Results

  16. Comparison with CSFQ

  17. One-Hop Configuration [Figure: same one-hop setup as slide 9: two groups of 30 senders (100t/s and 200t/s) sharing the 4.65Mb/s, 26ms P–Q link with a 3000t/s currency]

  18. One-Hop Results

  19. Related Work • WFQ, IntServ • DiffServ • CSFQ • User-share differentiation

  20. Conclusion and Future Work • Proportional bandwidth allocation • A modified RED algorithm (TRED): no per-flow state, scalable • Routers make local bandwidth allocation and isolation decisions • Both sender-based and receiver-based variants • Future work: experiment with more realistic traffic loads and more complex topologies
