
Lightweight Active Router-Queue Management for Multimedia Networking



Presentation Transcript


  1. Lightweight Active Router-Queue Management for Multimedia Networking • M. Parris, K. Jeffay, and F.D. Smith • Department of Computer Science • University of North Carolina • Multimedia Computing and Networking (MMCN) • January 1999

  2. Outline • Problem • Supporting multimedia on the Internet • Context • Drop Tail • RED • FRED • Approach • CBT • Evaluation • Conclusion

  3. Congestion on the Internet • Drops are the usual way congestion is indicated • TCP uses congestion avoidance to reduce its rate [Figure: throughput vs. goodput curves contrasting congestion avoidance with congestion collapse]

  4. Internet Routers • A queue holds incoming packets until they can be sent • Typically, packets are dropped when the queue is full (Drop Tail) • Who gets dropped can determine fairness [Figure: packets arriving at a router queue]
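The drop-tail behavior described above can be sketched in a few lines. This is a minimal illustration, not the paper's code; the class and method names are hypothetical.

```python
from collections import deque

class DropTailQueue:
    """Hypothetical sketch of a drop-tail router queue: FIFO service,
    arriving packets are discarded once the queue is full."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.q = deque()
        self.drops = 0

    def enqueue(self, pkt):
        if len(self.q) >= self.capacity:
            self.drops += 1          # tail drop: the arriving packet is lost
            return False
        self.q.append(pkt)
        return True

    def dequeue(self):
        return self.q.popleft() if self.q else None

q = DropTailQueue(capacity=3)
for pkt in range(5):                 # 5 arrivals into a 3-slot queue
    q.enqueue(pkt)
print(q.drops)                       # prints 2
```

Note that which packets get dropped is purely an accident of arrival timing, which is why drop-tail determines fairness so poorly.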

  5. Router-Based Congestion Control (Solution 2): Closed-loop congestion control • Normally, packets are only dropped when the queue overflows • "Drop-tail" queueing [Figure: FCFS scheduler with queued packets P1–P6 at an ISP router in an inter-network]

  6. Buffer Management & Congestion Avoidance: The case against drop-tail • Large queues in routers are a bad thing • End-to-end latency is dominated by the length of queues at switches in the network • Allowing queues to overflow is a bad thing • Connections that transmit at high rates can starve connections that transmit at low rates • Drops cause connections to synchronize their response to congestion and become unnecessarily bursty

  7. Buffer Management & Congestion Avoidance: Random early detection (RED) • Use an exponential average of the queue length to determine when to drop • Accommodates short-term bursts • Tie the drop probability to the weighted average queue length • Avoids over-reaction to mild overload conditions [Figure: average queue length over time with three regions: no drop below the min threshold, probabilistic early drop between the thresholds, forced drop above the max threshold]

  8. Buffer Management & Congestion Avoidance: Random early detection (RED) • Amount of packet loss is roughly proportional to a connection's bandwidth utilization • But there is no a priori bias against bursty sources • Average connection latency is lower • Average throughput ("goodput") is higher

  9. Buffer Management & Congestion Avoidance: Random early detection (RED) • RED is controlled by 5 parameters: • qlen: the maximum length of the queue • wq: weighting factor for the average queue length computation • minth: minimum average queue length for triggering probabilistic drops • maxth: average queue length threshold for triggering forced drops • maxp: the maximum early-drop probability

  10. Random Early Detection Algorithm • The average queue length is low-pass filtered to smooth out transients due to bursts: ave = (1 – wq)·ave + wq·q • For each packet arrival: calculate the average queue size ave; if ave < minth, do nothing; else if minth ≤ ave < maxth, calculate drop probability p and drop the arriving packet with probability p; else (maxth ≤ ave) drop the arriving packet
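The per-arrival decision above can be sketched as follows. The linear ramp for p is standard RED (Floyd/Jacobson) rather than something stated on this slide, and the count-based correction that real RED applies between drops is omitted; default parameter values are the ones used later in the CBT evaluation.

```python
import random

def red_arrival(ave, qlen, wq=0.002, minth=15, maxth=30, maxp=0.1):
    """One RED arrival: update the EWMA of the queue length and decide
    whether to enqueue or drop. Returns (new_ave, 'enqueue' | 'drop').
    Simplified sketch: omits RED's inter-drop count correction."""
    ave = (1 - wq) * ave + wq * qlen          # low-pass filter on queue length
    if ave < minth:
        return ave, 'enqueue'                 # no drop
    if ave >= maxth:
        return ave, 'drop'                    # forced drop
    # probabilistic early drop: p ramps linearly from 0 at minth to maxp at maxth
    p = maxp * (ave - minth) / (maxth - minth)
    return ave, ('drop' if random.random() < p else 'enqueue')
```

Because the average, not the instantaneous length, drives the decision, a short burst into an otherwise idle queue is admitted untouched.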

  11. Buffer Management & Congestion Avoidance: Random early detection (RED) [Figure: drop probability vs. weighted average queue length: 0 below minth, rising linearly to maxp at maxth, then jumping to 100%]

  12. Random Early DetectionPerformance • Floyd/Jacobson simulation of two TCP (ftp) flows

  13. Random Early Detection (RED) Summary • Controls average queue size • Drop early to signal impending congestion • Drops proportional to bandwidth, but drop rate equal for all flows • Unresponsive traffic will still not slow down!

  14. Vulnerability to Misbehaving Flows • TCP performance on a 10 Mbps link under RED in the face of a “UDP” blast

  15. Router-Based Congestion Control: Dealing with heterogeneous/non-responsive flows • TCP requires protection/isolation from non-responsive flows • Solutions? • Employ fair-queuing/link scheduling mechanisms • Identify and police non-responsive flows (not considered here) • Employ fair buffer allocation within a RED mechanism [Figure: classifier feeding a packet scheduler]

  16. Dealing With Non-Responsive Flows: Isolating responsive and non-responsive flows • Class-Based Queuing (CBQ) (Floyd/Jacobson) provides fair allocation of bandwidth to traffic classes • Separate queues are provided for each traffic class and serviced in round-robin order (or weighted round-robin) • n classes each receive exactly 1/n of the capacity of the link • Separate queues ensure perfect isolation between classes • Drawback: "reservation" of bandwidth and per-class state information are required [Figure: classifier feeding per-class queues and a packet scheduler]
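The weighted round-robin service described above can be sketched as follows. This is an illustrative sketch, not CBQ's actual link-sharing scheduler; the function name and packet budget are hypothetical.

```python
from collections import deque

def weighted_round_robin(queues, weights, budget):
    """Hypothetical sketch of weighted round-robin service over per-class
    queues: in each round, class i may send up to weights[i] packets.
    `budget` caps the total packets sent (standing in for link capacity)."""
    sent = []
    while budget > 0 and any(queues):
        progress = False
        for q, w in zip(queues, weights):
            for _ in range(min(w, budget)):
                if not q:
                    break
                sent.append(q.popleft())   # serve one packet from this class
                budget -= 1
                progress = True
        if not progress:                   # all queues empty this round
            break
    return sent
```

With weights (2, 1) a backlogged pair of classes splits the link 2:1, which is exactly the per-class isolation (and the per-class state cost) the slide describes.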

  17. Dealing With Non-Responsive Flows: CBQ performance [Figure: TCP throughput (KBytes/s) vs. time (s) under CBQ, RED, and FIFO during a UDP bulk transfer]

  18. Dealing With Non-Responsive Flows: Fair buffer allocation • Isolation can be achieved by reserving capacity for flows within a single FIFO queue • Rather than maintain separate queues, keep counts of packets per flow in a single queue • Lin/Morris: modify RED to perform fair buffer allocation between active flows • Independent of protection issues, fair buffer allocation between TCP connections is also desirable [Figure: classifier tracking flows f1…fn feeding one FCFS scheduler]

  19. Flow Random Early Detection (FRED) • In RED, a 10 Mbps flow is cut to 9 Mbps and a 1 Mbps flow to 0.9 Mbps • Proportional, but unfair • In FRED, the 1 Mbps flow is left untouched until the 10 Mbps flow is brought down • Separate drop probabilities per flow • "Light" flows see no drops; heavy flows see high drop rates
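The per-flow accounting behind FRED can be sketched with a shared packet count per active flow. This is a simplified illustration, not the Lin/Morris algorithm: real FRED tracks per-flow queue occupancy against dynamic minq/maxq bounds and still runs RED's average-queue test; the cap used here is a hypothetical constant.

```python
def fred_accept(flow_counts, flow_id, maxq_per_flow=4):
    """Simplified FRED-style admission check over one shared queue:
    a flow already holding maxq_per_flow packets has its arrival
    dropped, while light flows are never penalized."""
    if flow_counts.get(flow_id, 0) >= maxq_per_flow:
        return False                       # heavy flow: drop the arrival
    flow_counts[flow_id] = flow_counts.get(flow_id, 0) + 1
    return True
```

A blasting UDP flow quickly pins at its per-flow cap and absorbs all further drops, while a low-rate flow sharing the same FIFO is never touched.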

  20. Flow Random Early Detection: Performance in the face of non-responsive flows [Figure: TCP throughput (KBytes/s) vs. time (s) under FRED and RED during a UDP bulk transfer]

  21. Congestion Avoidance vs. Fair-Sharing: TCP throughput under different queue management schemes • TCP performance as a function of the state required to ensure/approximate fairness [Figure: TCP throughput (KBytes/s) vs. time (s) under CBQ, FRED, RED, and FIFO during a UDP bulk transfer]

  22. Queue Management Recommendations • Recommendations (Braden 1998, Floyd 1998): • Deploy RED • Avoid full queues; reduce latency, reduce packet drops, avoid lock-out • Continue research into ways to punish aggressive or misbehaving flows • Multimedia: • Does not use TCP • Can tolerate some loss • TCP's price in latency is too high • Often low-bandwidth • Delay sensitive

  23. Outline • Problem • Supporting multimedia on the Internet • Context • Drop Tail • RED • FRED • Approach • CBT • Evaluation • Conclusion

  24. Goals • Isolation • Responsive (TCP) from unresponsive • Unresponsive: multimedia from aggressive • Flexible fairness • Something more than equal shares for all • Lightweight • Minimal state per flow • Maintain benefits of RED • Feedback • Distribution of drops

  25. Class-Based Threshold (CBT) • Designate a set of traffic classes and allocate a fraction of the router's buffer capacity to each class • Once a class is occupying its limit of queue elements, discard all arriving packets of that class • Within a traffic class, further active queue management may be performed [Figure: classifier with per-class counts f1, f2, …, fn feeding one FCFS scheduler]

  26. Class-Based Threshold (CBT) • Isolation • Packets are classified into 1 of 3 classes • Statistics are kept for each class • Flexible fairness • Configurable thresholds determine the ratios between classes during periods of congestion • Lightweight • State per class and not per flow • Still one outbound queue • Maintain benefits of RED • Continue with RED policies for TCP

  27. CBT Implementation • Implemented in ALTQ on FreeBSD • Three traffic classes: • TCP • marked non-TCP ("well-behaved UDP") • unmarked non-TCP (all others) • TCP flows are subjected to RED; non-TCP flows to a weighted-average queue occupancy threshold test
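The classification and threshold test above can be sketched as follows. This is a sketch under stated assumptions, not the ALTQ implementation: the class names and packet representation are hypothetical, the default thresholds are the evaluation's example values, and the slide's weighted-average test is simplified to an instantaneous per-class count; the RED test for TCP is elided.

```python
class CBTQueue:
    """Class-Based Thresholds sketch: one shared FIFO plus per-class
    packet counts. A class over its threshold has arrivals dropped;
    TCP has no threshold (it would be governed by RED instead)."""
    def __init__(self, mm_th=10, udp_th=2):
        self.thresholds = {'marked-mm': mm_th, 'other-udp': udp_th}
        self.counts = {'tcp': 0, 'marked-mm': 0, 'other-udp': 0}
        self.fifo = []

    def classify(self, pkt):
        if pkt['proto'] == 'tcp':
            return 'tcp'
        return 'marked-mm' if pkt.get('marked') else 'other-udp'

    def enqueue(self, pkt):
        cls = self.classify(pkt)
        th = self.thresholds.get(cls)
        if th is not None and self.counts[cls] >= th:
            return False                 # class at its threshold: drop
        # a real implementation would subject TCP arrivals to RED here
        self.counts[cls] += 1
        self.fifo.append((cls, pkt))
        return True
```

Only three counters and two thresholds are needed, which is the "state per class and not per flow" property the previous slide claims.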

  28. Class-Based Thresholds: Evaluation • Compare: • FIFO queuing (lower-bound baseline) • RED (the Internet of tomorrow) • FRED (RED + fair allocation of buffers) • CBT • CBQ (upper-bound baseline)

  29. CBT Evaluation: Experimental design • Traffic: 240 FTP-TCP flows, 6 ProShare unresponsive multimedia flows (210 Kbps each), and 1 UDP blast (10 Mbps, 1 KB packets), phased in over a 180-second run • RED settings: qsize = 60 pkts, min-th = 15 pkts, max-th = 30 pkts, q-weight = 0.002, max-p = 0.1 • CBT settings: mm-th = 10 pkts, udp-th = 2 pkts • Metrics: throughput and latency [Figure: routers connecting the traffic sources across an inter-network]

  30. CBT Evaluation: TCP Throughput [Figure: TCP throughput (Kbps) vs. elapsed time (s) under CBQ, CBT, FRED, RED, and FIFO during a UDP bulk transfer]

  31. CBT Evaluation: TCP Throughput [Figure: the same experiment, highlighting CBQ and CBT]

  32. CBT Evaluation: ProShare (marked UDP) latency [Figure: latency (ms) vs. elapsed time (s) per scheme. Loss rates: FIFO = 32%, FRED = 36%, RED = 30%, CBT = 1%, CBQ = 0%]

  33. Conclusion • RED/FIFO scheduling not sufficient • Aggressive unresponsive flows cause trouble • Low bandwidth unresponsive (VoIP) punished • CBT provides • Benefits of RED for TCP only traffic • Isolation of TCP vs. Unresponsive • Isolation of Aggressive vs. Low Bandwidth • Lightweight overhead

  34. Future Work • How to pick thresholds? • Implies reservation • Dynamic adjustments of thresholds (D-CBT) • Additional queue management for classes • Classes use “Drop Tail” now • Extension to other classes • Voice • Video

  35. Evaluation of Science? • Category of Paper • Science Evaluation (1-10)? • Space devoted to Experiments?
