
Active Queue Management for Web Traffic
Presentation Transcript


  1. Active Queue Management for Web Traffic
Mark Claypool, Bob Kinicki and Matt Hartling
Worcester Polytechnic Institute, Computer Science Department, Worcester, MA 01609
{claypool,rek}@cs.wpi.edu
IPCCC04, April 16, 2004

  2. Outline • Motivation • RED • SHRED Algorithm • Performance Metrics • Web Traffic Model • Topology and Experimental Procedures • RED, SHRED and Drop Tail Results • Conclusions

  3. Motivation for Active Queue Management • Congestion is still an Internet problem. • Short TCP Web flows dominate the Internet. • Mice: most Web objects are small {under 2 KB}, yielding short-lived flows. • TCP slow start biases performance against short Web flows.

  4. Motivation for SHRED • TCP uses cwnd to limit a flow’s sending rate. • Fast Retransmit is ineffective when cwnd is less than four, and for the last three packets of a flow. • The Retransmission Timeout (RTO) penalty is high in the first few packets of a flow.

  5. RED Routers • Random Early Detection (RED) detects congestion “early” by maintaining an exponentially-weighted average queue size. • RED probabilistically drops packets before the queue overflows to signal congestion to TCP sources. • RED attempts to avoid global synchronization and bursty packet drops.

  6. RED [Figure: packets entering a RED queue, with the min_th and max_th thresholds marked.] min_th :: average queue length threshold for triggering probabilistic drops/marks. max_th :: average queue length threshold for triggering forced drops.

  7. RED Parameters
q_avg :: average queue size
q_avg = (1 - w_q) * q_avg + w_q * (instantaneous queue size)
w_q :: weighting factor, 0.001 <= w_q <= 0.004
max_p :: maximum dropping/marking probability
p_b = max_p * (q_avg - min_th) / (max_th - min_th)
p_a = p_b / (1 - count * p_b)
buffer_size :: the size of the router queue in packets
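
A minimal Python sketch of this computation (illustrative only; variable names follow the slide, the w_q default sits inside the slide's recommended range, and the threshold defaults are the values from the simulation-topology slide later in the deck, not anything mandated by RED itself):

    # Sketch of RED's average-queue and drop-probability computation.
    def red_update_avg(q_avg, q_inst, w_q=0.002):
        """EWMA of the queue size; w_q in the slide's 0.001-0.004 range."""
        return (1 - w_q) * q_avg + w_q * q_inst

    def red_drop_prob(q_avg, count, min_th=30, max_th=90, max_p=0.1):
        """Marking/dropping probability for an arriving packet; count is
        the number of packets enqueued since the last drop/mark."""
        if q_avg < min_th:
            return 0.0                    # below min_th: never drop early
        if q_avg >= max_th:
            return 1.0                    # above max_th: forced drop
        p_b = max_p * (q_avg - min_th) / (max_th - min_th)
        p_a = p_b / max(1e-9, 1 - count * p_b)   # spreads drops out evenly
        return min(1.0, p_a)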

  8. RED Router Mechanism [Figure: dropping/marking probability vs. average queue length (avg_q); the probability rises from 0 at the min-threshold to max_p at the max-threshold, then jumps to 1.]

  9. SHort-lived flow friendly RED (SHRED) Basic SHRED idea: lower the drop probability for flows with small cwnds and increase the drop probability for flows with relatively large cwnds.

  10. SHRED SHRED uses an ‘edge hint’: the current value of the TCP cwnd is inserted into the IP packet header. Upon packet arrival at a SHRED router:
cwnd_avg = (1 - w_c) * cwnd_avg + w_c * cwnd_sample, where w_c is set to 0.002

  11. SHRED SHRED modifies min_th and max_p:
min_th_mod = min_th + (max_th - min_th) * (1 - cwnd_sample / cwnd_avg)
max_p_mod = max_p * (max_th - min_th_mod) / (max_th - min_th)
and re-computes p_b:
p_b = max_p_mod * (q_avg - min_th_mod) / (max_th - min_th_mod)
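
A hedged Python sketch of the per-packet SHRED computation on slides 10 and 11 (structure and names are my own, the clamping of min_th_mod is my addition, and the authors' actual ns-2 implementation may differ):

    # Sketch of SHRED's per-packet threshold modification (slides 10-11).
    def shred_drop_prob(q_avg, cwnd_sample, cwnd_avg,
                        min_th=30, max_th=90, max_p=0.1, w_c=0.002):
        """Return (p_b, updated cwnd_avg) for a packet whose edge hint
        carries the sender's current cwnd (cwnd_sample)."""
        # EWMA of the cwnd hints seen at this router (slide 10).
        cwnd_avg = (1 - w_c) * cwnd_avg + w_c * cwnd_sample

        # A small cwnd (short flow) pushes min_th up toward max_th, lowering
        # the drop probability; a large cwnd pulls it down, raising it.
        min_th_mod = min_th + (max_th - min_th) * (1 - cwnd_sample / cwnd_avg)
        min_th_mod = min(max(min_th_mod, 0.0), max_th)   # clamp (my addition)

        max_p_mod = max_p * (max_th - min_th_mod) / (max_th - min_th)

        if q_avg < min_th_mod:
            p_b = 0.0
        elif q_avg >= max_th:
            p_b = 1.0
        else:
            p_b = max_p_mod * (q_avg - min_th_mod) / (max_th - min_th_mod)
        return p_b, cwnd_avg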

  12. SHRED Mechanism (using gentle RED)
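
The figure on this slide is not reproduced here, but the gentle RED base curve that SHRED builds on is standard: instead of jumping from max_p straight to forced drops at max_th, gentle RED ramps the probability linearly from max_p at max_th up to 1 at 2 * max_th. A small sketch of that base curve (standard gentle RED, not SHRED-specific code):

    # Standard gentle RED drop curve (the base that SHRED modifies).
    def gentle_red_prob(q_avg, min_th=30, max_th=90, max_p=0.1):
        if q_avg < min_th:
            return 0.0
        if q_avg < max_th:
            return max_p * (q_avg - min_th) / (max_th - min_th)
        if q_avg < 2 * max_th:
            # gentle region: ramp from max_p to 1 between max_th and 2*max_th
            return max_p + (1 - max_p) * (q_avg - max_th) / max_th
        return 1.0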

  13. Web Traffic Characterization • In general Web flow models, congestion increases response times, which in turn decreases the load generated by a Web client. • The model constructed has multiple objects per Web page, downloaded in parallel and followed by a waiting period determined by the page generation rate.

  14. Web Traffic Characterization • For ns-2 simulations: Pareto II used to generate Web object sizes {min = 12 bytes, max = 2 MB, average object size = 10 KB, shape parameter = 1.2}. • Canonical experiment :: 1 object per page (unless otherwise specified).
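
A hedged sketch of this object-size generator (Pareto II, also called Lomax, sampled by inverse CDF; the scale parameter is my derivation from the 10 KB mean via mean = scale / (shape - 1), and the hard truncation to [12 B, 2 MB] shifts the effective mean slightly, so the paper's exact ns-2 configuration may differ):

    import random

    SHAPE = 1.2
    MEAN = 10_000                       # 10 KB average object size
    SCALE = MEAN * (SHAPE - 1)          # Lomax mean = scale / (shape - 1)

    def web_object_size():
        """Draw one Web-object size in bytes from a truncated Pareto II."""
        u = random.random()
        size = SCALE * ((1 - u) ** (-1 / SHAPE) - 1)   # inverse-CDF sample
        return min(max(int(size), 12), 2_000_000)      # clamp to [12 B, 2 MB]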

  15. Performance Metrics • Object transmission time - the time to transfer a single Web object from a server to the client. • Web response time - the time to download all objects in a Web page. • Goodput (Mbps) - the rate at which packets arrive at the receiver. Goodput differs from throughput in that retransmissions are excluded from goodput.

  16. Performance Metrics • Jain’s fairness • For any given set of user throughputs (x1, x2, …, xn), the fairness index of the set is defined: f(x1, x2, …, xn) = (x1 + x2 + … + xn)^2 / (n * (x1^2 + x2^2 + … + xn^2)) • Percentage of packets dropped per flow.
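
The index is straightforward to compute; a small Python example (the throughput values below are made up for illustration):

    # Jain's fairness index: 1.0 when all flows get equal throughput,
    # approaching 1/n when a single flow takes everything.
    def jain_fairness(throughputs):
        n = len(throughputs)
        total = sum(throughputs)
        return total * total / (n * sum(x * x for x in throughputs))

    print(jain_fairness([2.5, 2.5, 2.5, 2.5]))   # 1.0   (perfectly fair)
    print(jain_fairness([9.7, 0.1, 0.1, 0.1]))   # ~0.27 (one flow dominates)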

  17. Simulation Topology [Figure: dumbbell topology. Web and FTP sources connect to the congested router over 100 Mbps, 1 ms links; a 10 Mbps, 60 ms bottleneck link joins the congested router to a second router; Web and FTP sinks attach on the far side over 100 Mbps, 1 ms links.] RED parameters: min_th = 30 pkts, max_th = 90 pkts, max_p = 0.1, w_q = 0.0008, avg pkt = 974 bytes, max_q = 225 pkts.

  18. Experimental Procedures • Simulated RED, SHRED and Drop Tail. • A few early, longer-duration experiments were conducted to determine the point at which simulations became stable. • All experiments ran for 160 simulated seconds. • Measurements were taken after a 20-second warm-up period. • Simulated both FTP traffic and Web traffic using TCP Reno.

  19. Traffic Mixes • Web-only experiments • similar to RED-Tuning paper procedures. • Web-mixed experiments • FTP flows fixed at 10. • Web flows varied from 40 to 80% of the bottleneck bandwidth (10 Mbps).

  20. Traffic Mixes (cont.) • FTP-mixed experiments • Web traffic load fixed at 50%. • FTP flows varied from 0 to 40. • FTP-only experiments • No Web flows. • FTP flows varied from 10 to 100.

  21. Fig. 3a: Web-only Transmission Time CDF, 100% load, 140 active flows

  22. Fig. 3b: Web-only Transmission Time CDF Tail, 100% load, 140 active flows

  23. Fig. 4a: Web-mixed Transmission Time CDF, 70% Web load + 10 FTP flows

  24. Fig. 4b: Web-mixed Transmission Time CDF Tail, 70% Web load + 10 FTP flows

  25. Fig. 5: Normalized Web Response Time, Web-mixed Experiments

  26. Fig. 6: Percent Drops, Web-mixed Experiments

  27. SHRED Performance • Web-only: SHRED has the best performance. • Web-mixed: SHRED is closer to uncongested performance; SHRED beats RED and Drop Tail in the “heavy tail” of the CDF due to RTO issues. • In normalized transmission time, SHRED is ~4% better than RED, which is 8% better than Drop Tail. • As the number of flows increases, SHRED’s advantage in packet drops widens.

  28. Fig. 7: Normalized Web Response Time, FTP-mixed Experiments

  29. Web Response Time • Additional simulations were run with multiple objects per page. • Objects per page drawn from uniform random distributions of (1 to 8), (1 to 16) and (1 to 32) for Web-only and Web-mixed experiments. • Response time: the time to download the whole page of objects.
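
A hedged sketch of how page response time follows from this model, assuming (per slide 13) that a page's objects download in parallel, so the page completes when its slowest object finishes; object_time and max_objects are my illustrative names, and this ignores any interaction between parallel objects at the shared bottleneck:

    import random

    def page_response_time(object_time, max_objects=8):
        """object_time: callable returning one object's transmission time
        in seconds; objects per page drawn uniformly from 1..max_objects."""
        n = random.randint(1, max_objects)             # e.g. the (1 to 8) case
        return max(object_time() for _ in range(n))    # parallel downloads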

  30. Fig. 8: Normalized Web Response Time, Web-mixed Experiments

  31. Table 1: FTP-only Goodput (Mbps)

  32. Table 2: FTP-only Jain’s Fairness

  33. Conclusions • SHRED produces lower object transmission times than either RED or Drop Tail in Web-only and mixed-traffic simulations. • SHRED yields significant response time improvement when there are multiple objects per page. • SHRED’s improvements do not negatively impact FTP traffic. • The basic ‘SH’ scheme can be applied to other AQMs (e.g., research on PISA).
