
The Effects of Active Queue Management on Web Performance



  1. The Effects of Active Queue Management on Web Performance SIGCOMM 2003 Long Le, Jay Aikat, Kevin Jeffay, F. Donelson Smith 29th January, 2004 Presented by Sookhyun Yang

  2. Contents • Introduction • Problem Statement • Related Work • Experimental Methodology • Results and Analysis • Conclusion

  3. Introduction • Drop policy • Drop tail: drop when a queue overflows • Active queue management (AQM): drop before a queue overflows • Active queue management (AQM) • Keeps the average queue size small in routers • RED (random early detection) algorithm • Most widely studied and implemented • Various design issues of AQM • How to detect congestion • How to control the queue size around a stable operating point • How the congestion signal is delivered to the sender • Implicitly, by dropping packets at the router • Explicitly, via explicit congestion notification (ECN)

  4. Problem Statement • Goal • Compare the performance of control-theoretic AQM algorithms with the original randomized-dropping paradigm • Considered AQM schemes • Control-theoretic AQM algorithms • Proportional integrator (PI) controller • Random exponential marking (REM) controller • Original randomized-dropping paradigm • Adaptive random early detection (ARED) controller • Performance metrics • Link utilization • Loss rate • Response time for each request/response transaction

  5. Contents • Introduction • Problem Statement • Related Work • Experimental Methodology • Platform • Calibration • Procedure • Results and Analysis • AQM Experiments with Packet Drops • AQM Experiments with ECN • Discussion • Conclusion

  6. Random Early Detection • Original RED • Measure of congestion: weighted-average queue size (AvgQLen) • [Figure: drop probability vs. AvgQLen; zero below minth, rising linearly from 0 to maxp between minth and maxth, then 1 (drop all packets) beyond maxth]
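
RED's congestion measure is not the instantaneous queue length but an exponentially weighted moving average of it. A minimal Python sketch; the weight w of roughly 0.002 is a commonly cited RED setting used here for illustration, not a value from the slides:

    def update_avg_qlen(avg_qlen, inst_qlen, w=0.002):
        # Exponentially weighted moving average of the instantaneous
        # queue length; a small w smooths out short-lived bursts.
        return (1 - w) * avg_qlen + w * inst_qlen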

  7. Random Early Detection • Modification of the original RED: gentle mode • Mark or drop probability increases linearly over a wider range • [Figure: drop probability vs. AvgQLen; rises linearly from 0 to maxp on [minth, maxth], then from maxp to 1 on [maxth, 2*maxth]; all packets dropped beyond 2*maxth]
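
The two drop-probability curves above (original RED and gentle mode) fit in a few lines. A minimal sketch; the thresholds in the example calls are hypothetical:

    def red_drop_prob(avg_qlen, min_th, max_th, max_p, gentle=False):
        # Below min_th: accept every packet.
        if avg_qlen < min_th:
            return 0.0
        # Between min_th and max_th: probability rises linearly to max_p.
        if avg_qlen < max_th:
            return max_p * (avg_qlen - min_th) / (max_th - min_th)
        # Gentle mode: rise linearly from max_p to 1 on [max_th, 2*max_th).
        if gentle and avg_qlen < 2 * max_th:
            return max_p + (1 - max_p) * (avg_qlen - max_th) / max_th
        # Otherwise: drop (or mark) every packet.
        return 1.0

    # Hypothetical parameters: min_th=5, max_th=15 packets, max_p=0.1
    red_drop_prob(10, 5, 15, 0.1)          # 0.05
    red_drop_prob(20, 5, 15, 0.1, True)    # 0.4 in gentle mode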

  8. Random Early Detection • Weakness of RED • Does not consider the number of flows sharing a bottleneck link • In the TCP congestion control mechanism, a packet mark or drop reduces the offered load by a factor of (1 - 0.5/n), where n is the number of flows sharing the bottleneck link • Self-configuring RED • Adjust maxp every time AvgQLen falls outside the [minth, maxth] range • ARED • Adaptive and gentle refinements to the original RED • [Figure: maxp adapted by additive/multiplicative increase and multiplicative decrease to keep AvgQLen between minth and maxth]
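
Self-configuring/adaptive RED tunes maxp at run time so that AvgQLen settles between the thresholds. A minimal sketch of an AIMD-style adaptation; the gain constants and the 40%/60% target band are illustrative, not the paper's exact settings:

    def adapt_max_p(avg_qlen, min_th, max_th, max_p, alpha=0.01, beta=0.9):
        # Aim for the middle of the [min_th, max_th] band.
        target_lo = min_th + 0.4 * (max_th - min_th)
        target_hi = min_th + 0.6 * (max_th - min_th)
        if avg_qlen > target_hi and max_p < 0.5:
            max_p += alpha      # additive increase: drop more aggressively
        elif avg_qlen < target_lo and max_p > 0.01:
            max_p *= beta       # multiplicative decrease: back off
        return max_p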

  9. Control Theoretic AQM • Misra et al. • Applied control theory to develop a model for TCP/AQM dynamics • Used this model to analyze RED • Limitations of RED • Sluggish response to changes in network traffic • Use of a weighted-average queue length • PI controller (Hollot et al.) • Regulates the queue length to a target value called the "queue reference" (qref) • Uses instantaneous samples of the queue length taken at a constant sampling frequency • Drop probability: p(kT) = a*(q(kT) - qref) - b*(q((k-1)T) - qref) + p((k-1)T), where q(kT) is the instantaneous queue-length sample, T = 1/sampling-frequency, and the coefficients a and b depend on the link capacity, the maximum RTT, and the expected number of active flows
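
The PI update translates directly into code. A minimal sketch; a and b are placeholders that, per the slide, would be derived from the link capacity, maximum RTT, and expected number of active flows:

    def pi_update(q_now, q_prev, p_prev, q_ref, a, b):
        # p(kT) = a*(q(kT) - qref) - b*(q((k-1)T) - qref) + p((k-1)T),
        # evaluated on instantaneous queue samples taken every T seconds.
        p = a * (q_now - q_ref) - b * (q_prev - q_ref) + p_prev
        return min(max(p, 0.0), 1.0)   # clamp to a valid probability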

  10. Control Theoretic AQM • REM scheme (Athuraliya et al.) • Periodically updates a congestion measure called "price" • The price p(t) reflects • Rate mismatch between the packet arrival and departure rates at the link • Queue mismatch between the actual queue length and its target value • Price update: p(t) = max(0, p(t-1) + γ*(α*(q(t) - qref) + x(t) - c)), where c is the link capacity, q(t) the queue length, qref the target queue size, and x(t) the packet arrival rate • Mark/drop probability: prob(t) = 1 - Φ^(-p(t)), where Φ > 1 is a constant
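
A minimal sketch of REM's price update and the marking probability derived from it; the values for γ, α, and Φ are illustrative defaults, not the paper's configuration:

    def rem_price(price, q_now, q_ref, x_rate, capacity,
                  gamma=0.001, alpha=0.1):
        # Price grows with both the queue mismatch (q - qref) and the
        # rate mismatch (x - c); it is floored at zero.
        return max(0.0, price + gamma * (alpha * (q_now - q_ref)
                                         + x_rate - capacity))

    def rem_mark_prob(price, phi=1.001):
        # prob(t) = 1 - phi^(-p(t)): a higher price yields a higher
        # marking probability, approaching 1 as the price grows.
        return 1.0 - phi ** (-price)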

  11. Contents • Introduction • Problem Statement • Related Work • Experimental Methodology • Platform • Calibration • Procedure • Results and Analysis • AQM Experiments with Packet Drops • AQM Experiments with ECN • Discussion • Conclusion

  12. Platform • Emulate one peering link carrying web traffic between sources and destinations • Intel-based machines (1 GHz Pentium III, 1 GB of memory) running FreeBSD 4.5 • Web request generators (browsers): 14 machines; web response generators (servers): 8 machines; 44 machines in total across both ISPs • Routers run ALTQ extensions to FreeBSD (PI, REM, ARED), with 1000-SX fiber gigabit Ethernet NICs and 100 Mbps Fast Ethernet NICs • End systems attach through 100 Mbps Ethernet interfaces to 3Com 10/100/1000 Ethernet switches • [Figure: ISP 1 and ISP 2 browser/server machines connect through Ethernet switches (100/1000 Mbps) to the ISP 1 and ISP 2 routers; the paths to the routers are uncongested 1 Gbps links, the router-to-router bottleneck runs at 100 Mbps, and network monitors tap the links]

  13. Monitoring Program • Program 1: monitoring the router interface • Captures the effects of the AQM algorithms • Log of queue size sampled every 10 ms, along with • Number of entering packets • Number of dropped packets • Program 2: link-monitoring machine • Connected to the links between the routers • Hubs on the 100 Mbps segments • Fiber splitters on the gigabit link • Collects TCP/IP headers with a locally modified version of the tcpdump utility • Log of link utilization

  14. Emulation of End-to-End Latency • The congestion control loop is influenced by RTT • Emulate a different RTT on each TCP connection (per-flow delay) • Locally modified version of the dummynet component of FreeBSD • Adds a randomly chosen minimum delay to all packets of each flow • Minimum delay • Sampled from a discrete uniform distribution • Modeled on Internet RTTs within the continental U.S. • RTT = flow's minimum delay + additional delay (caused by queues at the routers or on the end systems) • TCP window size = 16 KB on all end systems (a widely used value)
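
A minimal sketch of the per-flow minimum-delay assignment described above. The discrete uniform bounds here are hypothetical; the slides do not give the exact range used:

    import random

    flow_min_delay = {}   # flow id -> fixed minimum delay in ms

    def min_delay_for(flow_id, low_ms=10, high_ms=150):
        # Each flow draws its minimum delay once and keeps it, so every
        # packet of the flow sees the same base latency.
        if flow_id not in flow_min_delay:
            flow_min_delay[flow_id] = random.randint(low_ms, high_ms)
        return flow_min_delay[flow_id]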

  15. Web-Like Traffic Generation • Model of [13], based on empirical data • Empirical distributions describe the elements necessary to generate synthetic HTTP workloads • Browser program and server program • The browser requests a web page, "thinks" for a random time, then issues the next request; the server's service time is 0 • The browser program logs the response time for each request/response pair

  16. Calibrations • Offered load • Network traffic resulting from emulating the browsing behavior of a fixed-size population of web users • Three critical calibrations before the experiments • Only one primary bottleneck: the 100 Mbps link between the two routers • Predictably controlled offered load on the network • Resulting packet arrival time series (packet counts per ms) shows long-range dependent (LRD) behavior [14] • Calibration experiment • Configure the network connecting the routers at 1 Gbps • Drop-tail queues with 2400 queue elements

  17. Calibrations • [Figure: one direction of the 1 Gbps link]

  18. Calibrations • [Figure: heavy-tailed distributions for both user "think" time and response size [13]]

  19. Procedure • Experimental settings • Offered loads by user populations • 80%, 90%, 98%, or 105% of the capacity of the 100 Mbps link • Run for 120 min, over 10,000,000 request/response exchanges • Collect data during a 90-minute interval • Repeat three times for each AQM scheme (PI, REM, ARED) • Experimental focus • End-to-end response time for each request/response pair • Loss rate: fraction of IP datagrams dropped at the link queue • Link utilization on the bottleneck link • Number of request/response exchanges completed

  20. Contents • Introduction • Problem Statement • Related Work • Experimental Methodology • Platform • Calibration • Procedure • Results and Analysis • AQM Experiments with Packet Drops • AQM Experiments with ECN • Discussion • Conclusion

  21. AQM Experiments with Packet Drops • Two target queue lengths for PI, REM, and ARED • Tradeoff between link utilization and queuing delay • 24 packets for minimum latency • 240 packets for high link utilization • Recommended in [1,6,8] • Maximum queue size set sufficiently large to ensure tail drops do not occur • Baseline • Conventional drop-tail FIFO queues • Queue sizes for drop-tail • 24 and 240 packets: for comparison with the AQM schemes • 2400 packets: a recently favored amount of buffering, equivalent to 100 ms at the link's transmission speed (from a mailing-list discussion)

  22. Drop-Tail Performance • [Figure: queue size for drop-tail; drop-tail queue size = 240]

  23. AQM Experiments with Packet Drops • [Figure: response time at 80% load] • ARED shows some degradation relative to the results on the uncongested link at 80% load

  24. AQM Experiments with Packet Drops • [Figure: response time at 90% load]

  25. AQM Experiments with Packet Drops • [Figure: response time at 98% load] • No AQM scheme can offset the performance degradation at 98% load

  26. AQM Experiments with Packet Drops • [Figure: response time at 105% load] • All schemes degrade uniformly from the 98% case

  27. AQM Experiments with ECN • Explicitly signal congestion to end systems with an ECN bit • Procedure for signaling congestion with ECN • [Router]: mark an ECN bit in the TCP/IP header of the packet • [Receiver]: mark the TCP header of the next outbound segment (typically an ACK) destined for the sender of the original marked segment • [Original sender] • Reacts as if a single segment had been lost within a send window • Marks the next outbound segment to confirm that it reacted to the congestion • Up to 80% offered load, ECN has no effect on the response times of PI, REM, and ARED
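
At the router, the only change ECN makes is in how a congestion signal is delivered once the AQM algorithm has decided to send one. A minimal sketch of that router-side choice, using a hypothetical dict-based packet representation:

    import random

    def apply_aqm_decision(packet, drop_prob):
        # packet is a hypothetical dict with 'ect' (ECN-capable
        # transport) and 'ce' (congestion experienced) flags.
        if random.random() < drop_prob:
            if packet.get("ect"):
                packet["ce"] = True    # explicit signal: mark, don't drop
                return "marked"
            return "dropped"           # implicit signal: drop the packet
        return "forwarded"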

  28. AQM Experiments with ECN • [Figure: response time at 90% load] • Both PI and REM provide response-time performance close to that on the uncongested link

  29. AQM Experiments with ECN • [Figure: response time at 98% load] • Some degradation, but far superior to drop-tail

  30. AQM Experiments with ECN • [Figure: response time at 105% load] • REM shows the most significant improvement in performance with ECN • ECN has very little effect on the performance of ARED

  31. AQM Experiments with Packet Drops or with ECN • [Table: loss ratio, completed requests, and link utilization]

  32. Summary • At 80% load • No AQM scheme provides better response-time performance than simple drop-tail FIFO queue management • This does not change when the AQM schemes use ECN • At 90% load or greater, without ECN • PI is better than drop-tail and the other AQM schemes • With ECN • Both PI and REM provide significant response-time improvements • ARED with the recommended parameter settings • Poorest response-time performance • Lowest link utilization • Unchanged with ECN

  33. Discussion • Positive impact of ECN • Response-time performance under PI and REM with ECN at loads of 90% and 98% • At 90% load, response times approximately match those on an uncongested network

  34. Discussion • The performance gap between PI and REM with packet dropping was closed by the addition of ECN • Differences in performance between ARED and the other AQM schemes • PI and REM operate in "byte mode" by default, but ARED in "packet mode" • Gentle mode in REM • PI and REM periodically sample the queue length when deciding to mark packets, but ARED uses a weighted average

  35. Contents • Introduction • Problem Statement • Related Work • Experimental Methodology • Platform • Calibration • Procedure • Results and Analysis • AQM Experiments with Packet Drops • AQM Experiments with ECN • Summary • Conclusion

  36. Conclusion • Unlike a similar earlier study that reached negative conclusions about AQM, this work shows that the benefits of AQM with ECN can be realized in practice • Limitations of this paper • Comparison between only two classes of algorithms • Control-theoretic principles • The original randomized-dropping paradigm • Studied a link carrying only web-like traffic • A more realistic mix of HTTP, other TCP, and UDP traffic remains to be studied
