
Presentation Transcript


  1. Impact of bandwidth-delay product and non-responsive flows on the performance of queue management schemes. Zhili Zhao, A. L. Narasimha Reddy. Department of Electrical Engineering, Texas A&M University. reddy@ee.tamu.edu. June 23, 2004, ICC.

  2. Agenda • Motivation • Performance Evaluation • Results & Analysis • Discussion

  3. Current Network Workload • Traffic composition in the current network • ~60% long-term TCP (LTRFs), ~30% short-term TCP (STFs), ~10% long-term UDP (LTNRFs) • Non-responsive traffic (STF + LTNRF) is increasing • Link capacities are increasing • What is the consequence?

  4. The Trends • Long-term UDP traffic is increasing • Multimedia applications • Impact on TCP applications from the non-responsive UDP traffic [Figure: UDP arrival rate vs. UDP goodput and TCP goodput]

  5. The Trends (cont’d) • Link capacities are increasing • Larger buffer memory is required if the current rule (buffer = bandwidth × delay product) is followed • Increasing queuing delay • Larger memories constrain router speeds • What if smaller buffers are used in the future?
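As a quick illustration of that sizing rule, here is a minimal sketch (ours, not from the paper) that computes the bandwidth-delay-product buffer for the link speeds used later in the talk, assuming the 120 ms round-trip delay from the simulation setup:

```python
def bdp_bytes(link_mbps: float, rtt_s: float) -> float:
    """Bandwidth-delay product in bytes: capacity (bits/s) * RTT / 8."""
    return link_mbps * 1e6 * rtt_s / 8

# Link speeds from the evaluation; 120 ms total round-trip propagation delay.
for mbps in (5, 35, 100):
    kb = bdp_bytes(mbps, 0.120) / 1e3
    print(f"{mbps:>3} Mbps link -> {kb:.0f} KB of buffer at 1 BWDP")
```

At 100 Mbps the rule already calls for 1.5 MB of fast buffer memory on the bottleneck port, which is the cost and speed pressure the slide alludes to.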

  6. Overview of Paper • Studies buffer management policies in light of • Increasing non-responsive loads • Increasing link speeds • Policies studied • Droptail • RED • RED with ECN

  7. Queue Management Schemes • RED • RED-ECN (RED with ECN enabled) • Droptail [Figure: RED drop-probability curve — probability 0 below Minth, rising linearly to Pmax at Maxth, as a function of AvgQlen]
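A minimal sketch of the drop curve in that figure: probability 0 below Minth, rising linearly to Pmax at Maxth, and 1 beyond. The threshold values below are illustrative assumptions, not the paper's settings; with ECN enabled (RED-ECN), the same probability triggers a mark instead of a drop.

```python
import random

MINTH, MAXTH, PMAX = 5.0, 15.0, 0.1  # illustrative thresholds (packets)

def red_probability(avg_qlen: float) -> float:
    """RED drop/mark probability as a function of average queue length."""
    if avg_qlen < MINTH:
        return 0.0
    if avg_qlen >= MAXTH:
        return 1.0
    return PMAX * (avg_qlen - MINTH) / (MAXTH - MINTH)

def on_enqueue(avg_qlen: float, ecn_enabled: bool) -> str:
    """Decide the fate of an arriving packet under RED / RED-ECN."""
    if random.random() < red_probability(avg_qlen):
        # RED-ECN marks instead of dropping while the average queue is
        # still below Maxth; past Maxth both variants drop.
        return "mark" if ecn_enabled and avg_qlen < MAXTH else "drop"
    return "enqueue"
```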

  8. Agenda • Motivation • Performance Evaluation • Results & Analysis • Discussion

  9. Performance Evaluation • Different workloads with a higher non-responsive load: 60% • Different link capacities: 5 Mbps, 35 Mbps, 100 Mbps • Different buffer sizes: 1/3, 1, or 3 × BWDP (buffer size is in units of packets; 1 packet = 1000 bytes)
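The buffer-size grid on this slide can be reproduced from the bandwidth-delay product; a sketch under the deck's stated assumptions (120 ms round-trip delay, 1000-byte packets):

```python
PKT_BYTES = 1000   # 1 packet = 1000 bytes, as on the slide
RTT_S = 0.120      # total round-trip propagation delay from the setup

def buffer_pkts(link_mbps: float, fraction: float) -> int:
    """Buffer size in packets for a given fraction of the BWDP."""
    bdp = link_mbps * 1e6 * RTT_S / 8   # BWDP in bytes
    return round(fraction * bdp / PKT_BYTES)

for mbps in (5, 35, 100):
    row = [buffer_pkts(mbps, f) for f in (1 / 3, 1, 3)]
    print(f"{mbps:>3} Mbps: {row} packets at 1/3, 1, 3 x BWDP")
```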

  10. Workload Characteristics • TCP (FTP): LTRFs • UDP (CBR): LTNRFs • Load: 60%, 55%, or 30% • Per-flow rate: 1 Mbps or 0.5 Mbps • Short-term TCP: STFs • Load: 0%, 5%, or 30% • 10 packets per 10 s on average

  11. Workload Characteristics (cont’d) • Number of flows under the 35 Mbps link contributing to the 60% non-responsive load • Each LTNRF sends at 1 Mbps • Numbers of flows under the 5 Mbps and 100 Mbps links are scaled accordingly

  12. Performance Metrics • Realized TCP throughput • Average queuing delay • Link utilization • Standard deviation of queuing delay
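For concreteness, a sketch of how these four metrics could be computed from a simulation trace; the argument names are our assumptions, not the paper's actual post-processing:

```python
from statistics import mean, stdev

def metrics(tcp_bytes: int, total_bytes: int, duration_s: float,
            link_mbps: float, qdelays_s: list[float]) -> dict:
    """Compute the four slide metrics from aggregate trace quantities."""
    return {
        "tcp_throughput_mbps": tcp_bytes * 8 / duration_s / 1e6,
        "avg_qdelay_ms": mean(qdelays_s) * 1e3,
        "link_utilization": (total_bytes * 8 / duration_s) / (link_mbps * 1e6),
        "stddev_qdelay_ms": stdev(qdelays_s) * 1e3,
    }
```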

  13. Simulation Setup [Figure: dumbbell simulation topology — TCP and CBR sources feed router R1; the R1–R2 bottleneck link runs RED/Droptail with Tp = 50 ms; TCP and CBR sinks attach behind R2]

  14. Link Characteristics • Capacity between R1 and R2: 5 Mbps, 35 Mbps, or 100 Mbps • Total round-trip propagation delay: 120 ms • Queue management schemes deployed between R1 and R2: RED / RED-ECN / Droptail

  15. Agenda • Motivation • Performance Evaluation • Simulation Setup • Results & Analysis • Discussion

  16. Sets of Simulations • Changing buffer sizes • Changing link capacities • Changing STF loads

  17. Set 1: Changing Buffer Sizes • Correlation between average queuing delay and BWDP [Figure: two panels, DropTail and RED/RED-ECN]

  18. Realized TCP Throughput • 30% STF load • Changing buffer size from 1/3 to 3 BWDPs [Figure: two panels, 5 Mbps link and 100 Mbps link]

  19. Realized TCP Throughput (cont’d) • TCP throughput is higher with DropTail • The difference decreases with larger buffer sizes • Average queuing delay under the REDs is much smaller than under Droptail • RED-ECN marginally improves throughput over RED

  20. Link Utilization • 30% STF load • Droptail has higher utilization with smaller buffers • The difference decreases with larger buffers

  21. Std. Dev. of Queuing Delay • 30% STF + 30% ON/OFF LTNRF load [Figure: two panels, 5 Mbps link and 100 Mbps link]

  22. Std. Dev. of Queuing Delay (cont’d) • Droptail has comparable deviation at 5 Mbps link capacity • The REDs have less deviation at larger buffer sizes and higher bandwidths • The REDs are more suitable for jitter-sensitive applications

  23. Set 2: Changing Link Capacities • 30% STF load • Relative Avg Queuing Delay = Avg Queuing Delay / RT Propagation Delay [Figure: two panels, ECN disabled and ECN enabled]
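In code, that normalization is simply the following; the 120 ms default comes from the simulation setup:

```python
def relative_avg_qdelay(avg_qdelay_ms: float, rt_prop_ms: float = 120.0) -> float:
    """Average queuing delay as a fraction of the round-trip propagation delay."""
    return avg_qdelay_ms / rt_prop_ms

# e.g. a 40 ms average queuing delay on the 120 ms path -> ~0.33
print(relative_avg_qdelay(40.0))
```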

  24. Relative Avg Queuing Delay • Droptail’s relative average queuing delay stays close to the buffer size (x × BWDP) • The REDs have significantly smaller average queuing delay (~1/3 of DropTail’s) • Changing link capacities has almost no impact

  25. Drop/Marking Rate • 30% STF load, 1 BWDP buffer • Table footnotes — 1: format is drop rate; 2: format is drop rate / marking rate

  26. Set 3: Changing STF Loads • 1 BWDP buffer • Normalized TCP throughput = TCP throughput / (UDP + TCP) throughput [Figure: two panels, ECN disabled and ECN enabled]
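The slide's normalization as a one-line helper; the example numbers are illustrative, not results from the paper:

```python
def normalized_tcp_throughput(tcp_mbps: float, udp_mbps: float) -> float:
    """TCP's share of the combined TCP + UDP throughput."""
    return tcp_mbps / (tcp_mbps + udp_mbps)

print(normalized_tcp_throughput(tcp_mbps=20.0, udp_mbps=15.0))  # ~0.571
```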

  27. Comparison of Throughputs • STF throughputs are almost constant across the three queue management schemes • The difference in TCP throughputs decreases as the STF load increases

  28. Agenda • Motivation • Performance Evaluation • Simulation Setup • Results & Analysis • Discussion

  29. Discussion • Performance metrics of the REDs are comparable to, or better than, DropTail’s in the presence of STF load and in high-BWDP cases • RED-ECN with TCP-Sack yields only a marginal improvement in long-term TCP throughput over RED

  30. Discussion (cont’d) • Changing either link capacities or STF loads has only a minor impact on average queuing delay or TCP throughput • In the presence of STFs:

  31. Thank you. June 2004

  32. Related Work • S. Floyd et al., “Internet Needs Better Models” • C. Diot et al., “Aggregated Traffic Performance with Active Queue Management and Drop from Tail” and “Reasons Not to Deploy RED” • K. Jeffay et al., “Tuning RED for Web Traffic”
