
Buffer sizes for large multiplexers: TCP queueing theory and instability analysis



Presentation Transcript


  1. Buffer sizes for large multiplexers: TCP queueing theory and instability analysis. Gaurav Raina (Cambridge), Damon Wischik (UCL), Mark Handley (UCL)

  2. Sizing router buffers, SIGCOMM 2004. Guido Appenzeller, Isaac Keslassy, Nick McKeown (Stanford University). Abstract. All Internet routers contain buffers to hold packets during times of congestion. Today, the size of the buffers is determined by the dynamics of TCP's congestion control algorithm. In particular, the goal is to make sure that when a link is congested, it is busy 100% of the time; which is equivalent to making sure its buffer never goes empty. A widely used rule-of-thumb states that each link needs a buffer of size B = RTT*C, where RTT is the average round-trip time of a flow passing across the link, and C is the data rate of the link. For example, a 10Gb/s router linecard needs approximately 250ms*10Gb/s = 2.5Gbits of buffers; and the amount of buffering grows linearly with the line-rate. Such large buffers are challenging for router manufacturers, who must use large, slow, off-chip DRAMs. And queueing delays can be long, have high variance, and may destabilize the congestion control algorithms. In this paper we argue that the rule-of-thumb (B = RTT*C) is now outdated and incorrect for backbone routers. This is because of the large number of flows (TCP connections) multiplexed together on a single backbone link. Using theory, simulation and experiments on a network of real routers, we show that a link with N flows requires no more than B = (RTT*C)/sqrt(N), for long-lived or short-lived TCP flows. The consequences on router design are enormous: a 2.5Gb/s link carrying 10,000 flows could reduce its buffers by 99% with negligible difference in throughput; and a 10Gb/s link carrying 50,000 flows requires only 10Mbits of buffering, which can easily be implemented using fast, on-chip SRAM. http://tiny-tera.stanford.edu/~nickm/papers/index.html
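The abstract's arithmetic can be checked directly. A minimal sketch (units in bits; the 50,000-flow figure comes out at roughly 11 Mbit, consistent with the quoted ≈10 Mbit):

```python
import math

def rule_of_thumb(rtt_s, rate_bps):
    """Classic rule: B = RTT * C."""
    return rtt_s * rate_bps

def stanford_rule(rtt_s, rate_bps, n_flows):
    """Revised rule from the paper: B = RTT * C / sqrt(N)."""
    return rtt_s * rate_bps / math.sqrt(n_flows)

# 10 Gb/s linecard, 250 ms average RTT
print(rule_of_thumb(0.250, 10e9) / 1e9)          # 2.5 Gbit: off-chip DRAM territory
print(stanford_rule(0.250, 10e9, 50_000) / 1e6)  # ~11 Mbit: feasible in on-chip SRAM
```

The sqrt(N) factor is what turns a DRAM-sized buffer into an SRAM-sized one at backbone flow counts.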

  3. Queue simulations. Simulate a queue:
  • fed by N flows, each of rate x pkts/sec (x steps from 0.95 to 1.05 pkts/sec)
  • served at rate N*C (C = 1 pkt/sec)
  • with buffer size sqrt(N)*B (B = 3 pkts)
  [Plots: queue size and arrival rate x vs. time, for N = 50, 100, 500, 1000]
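The slide's experiment can be sketched as a discrete-time simulation. This is an illustrative reconstruction, not the authors' code: Bernoulli per-flow arrivals approximate Poisson traffic, and the arrival rate steps from 0.95 to 1.05 halfway through:

```python
import random

def simulate_queue(n_flows, b=3.0, c=1.0, seconds=100.0, dt=0.01, seed=0):
    """Queue fed by n_flows sources of rate x pkts/s, served at n_flows*c,
    with buffer sqrt(n_flows)*b packets. Returns the queue-size trace."""
    rng = random.Random(seed)
    buf = n_flows ** 0.5 * b
    q = 0.0
    trace = []
    steps = int(seconds / dt)
    for i in range(steps):
        x = 0.95 if i < steps // 2 else 1.05   # step the per-flow rate up
        # each flow emits a packet in this slot with probability x*dt
        arrivals = sum(1 for _ in range(n_flows) if rng.random() < x * dt)
        # add arrivals (dropping overflow), then serve at rate n_flows*c
        q = max(0.0, min(buf, q + arrivals) - n_flows * c * dt)
        trace.append(q)
    return trace
```

With larger N the buffer (in packets) grows only as sqrt(N), yet the queue trace stays well behaved — the point of the slide.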

  4. Fixed-Point Models for the End-to-End Performance Analysis of IP Networks, ITC 2000. RJ Gibbens, SK Sargood, C Van Eijl, FP Kelly, H Azmoodeh, RN Macfadyen, NW Macfadyen (Statistical Laboratory, Cambridge; and BT, Adastral Park). Abstract. This paper presents a new approach to modeling end-to-end performance for IP networks. Unlike earlier models, in which end stations generate traffic at a constant rate, the work discussed here takes the adaptive behaviour of TCP/IP into account. The approach is based on a fixed-point method which determines packet loss, link utilization and TCP throughput across the network. Results are presented for an IP backbone network, which highlight how this new model finds the natural operating point for TCP, which depends on route lengths (via round-trip times and number of resources), end-to-end packet loss and the number of user sessions. http://www.statslab.cam.ac.uk/~frank/PAPERS/fpmee.html
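The fixed-point idea can be sketched numerically. This is a toy stand-in, not the paper's model: the square-root TCP throughput formula plays the role of the adaptive sources, an M/M/1/B blocking probability plays the role of the network loss model, and a damped iteration finds the self-consistent operating point:

```python
import math

def tcp_throughput(p, rtt):
    """Square-root TCP model: per-flow rate x = sqrt(3/2)/(rtt*sqrt(p)) pkts/s."""
    return math.sqrt(1.5) / (rtt * math.sqrt(p))

def loss_prob(rho, buf):
    """M/M/1/B blocking probability, a stand-in for a real loss model."""
    if abs(rho - 1.0) < 1e-9:
        return 1.0 / (buf + 1)
    return (1.0 - rho) * rho ** buf / (1.0 - rho ** (buf + 1))

def find_fixed_point(capacity, rtt, buf, n_flows, iters=2000, damp=0.9):
    """Damped iteration: loss -> TCP rate -> offered load -> loss."""
    p = 0.01
    for _ in range(iters):
        rho = n_flows * tcp_throughput(p, rtt) / capacity
        p = damp * p + (1.0 - damp) * loss_prob(rho, buf)
    return p
```

At the fixed point, the loss probability the sources react to equals the loss probability their offered load induces — the "natural operating point" of the abstract.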

  5. Fixed-point analysis. [Plots: log10 of packet-loss probability vs. traffic intensity x/C, for C*RTT = 4, 20, 100 pkts]

  6. Fluid-based Analysis of a Network of AQM Routers Supporting TCP Flows with an Application to RED, SIGCOMM 2000. Vishal Misra, Wei-Bo Gong, Don Towsley (UMass Amherst). Abstract. In this paper we use jump process driven Stochastic Differential Equations to model the interactions of a set of TCP flows and Active Queue Management routers in a network setting. We show how the SDEs can be transformed into a set of Ordinary Differential Equations which can be easily solved numerically. Our solution methodology scales well to a large number of flows. As an application, we model and solve a system where RED is the AQM policy. Our results show excellent agreement with those of similar networks simulated using the well known ns simulator. Our model enables us to get an in-depth understanding of the RED algorithm. Using the tools developed in this paper, we present a critical analysis of the RED algorithm. We explain the role played by the RED configuration parameters on the behavior of the algorithm in a network. We point out a flaw in the RED averaging mechanism which we believe is a cause of tuning problems for RED. We believe this modeling/solution methodology has a great potential in analyzing and understanding various network congestion control algorithms. ftp://gaia.cs.umass.edu/pub/Misra00_AQM.ps.gz

  7. Stability/instability of fluid model
  • For some values of C*RTT, the differential equation is stable
  • For others it is unstable and there are oscillations (i.e. the flows are partially synchronized)
  • When it is unstable, we can calculate the amplitude of the oscillations
  [Plot: arrival rate x/C vs. time]
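The kind of delay-differential fluid model behind these slides can be integrated with a simple Euler scheme. This is a heavily simplified single-bottleneck sketch in the spirit of the Misra/Gong/Towsley model — the loss curve, parameters, and the use of the instantaneous queue for loss are illustrative assumptions, not the paper's:

```python
def fluid_sim(n=50, c=10.0, rtt=1.0, b=20.0, t_end=200.0, dt=0.01):
    """Euler integration of a simplified TCP fluid model:
    per-flow window W obeys dW/dt = 1/R - W(t)*W(t-R)*p/(2R),
    queue obeys dq/dt = n*W/R - n*c, clamped to [0, b].
    Loss probability p is an assumed polynomial in queue occupancy."""
    delay_steps = int(rtt / dt)
    W = [1.0] * (delay_steps + 1)     # window history, oldest entry first
    q = 0.0
    qtrace = []
    for _ in range(int(t_end / dt)):
        w_now, w_old = W[-1], W[0]
        p = min(1.0, (q / b) ** 4)    # loss grows sharply as the buffer fills
        dw = 1.0 / rtt - (w_now * w_old / (2.0 * rtt)) * p
        dq = n * w_now / rtt - n * c
        w_new = max(w_now + dw * dt, 0.1)
        q = min(max(q + dq * dt, 0.0), b)
        W.pop(0)
        W.append(w_new)
        qtrace.append(q)
    return qtrace
```

Varying the delay (via rtt relative to the bottleneck) moves this system between a damped equilibrium and sustained queue oscillations — the stability/instability distinction the slide is making.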

  8. Instability plot. [Plots: log10 of packet-loss probability vs. traffic intensity x/C, for C*RTT = 4, 20, 100 pkts]

  9. Illustration: 20 flows. Standard TCP, single bottleneck link, no AQM; service = 60 pkts/sec/flow, RTT = 200 ms, #flows = 20. [Plots: B = 20 pkts (Kelly rule), B = 54 pkts (Stanford rule), B = 240 pkts (rule of thumb)]

  10. Illustration: 200 flows. Standard TCP, single bottleneck link, no AQM; service = 60 pkts/sec/flow, RTT = 200 ms, #flows = 200. [Plots: B = 20 pkts (Kelly rule), B = 170 pkts (Stanford rule), B = 2,400 pkts (rule of thumb)]

  11. Illustration: 2000 flows. Standard TCP, single bottleneck link, no AQM; service = 60 pkts/sec/flow, RTT = 200 ms, #flows = 2000. [Plots: B = 20 pkts (Kelly rule), B = 537 pkts (Stanford rule), B = 24,000 pkts (rule of thumb)]
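The buffer sizes quoted across the three illustrations follow directly from the setup (bottleneck capacity C = 60*N pkts/s, RTT = 200 ms) and can be checked:

```python
import math

def buffer_rules(n_flows, per_flow_rate=60.0, rtt=0.2):
    """Buffer sizes in packets for a bottleneck of capacity
    per_flow_rate * n_flows pkts/s, under the three rules compared."""
    bdp = per_flow_rate * n_flows * rtt      # rule of thumb: C * RTT
    stanford = bdp / math.sqrt(n_flows)      # Stanford rule: C*RTT / sqrt(N)
    kelly = 20.0                             # Kelly rule: tens of pkts, independent of N
    return round(bdp), round(stanford), kelly

for n in (20, 200, 2000):
    print(n, buffer_rules(n))   # (240, 54, 20), (2400, 170, 20), (24000, 537, 20)
```

Note how the rule-of-thumb buffer grows linearly in N, the Stanford buffer only as sqrt(N), and the Kelly buffer not at all.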

  12. Some more stable alternatives
  • Rule of thumb, no AQM: buffer = bandwidth*delay; or Stanford rule: buffer = bandwidth*delay / sqrt(#flows)
  • Rule of thumb with RED: buffer = bandwidth*delay * {¼, 1, 4}
  • Kelly rule, no AQM: buffer = {10, 20, 50} pkts
  • Kelly rule, no AQM, Scalable TCP: buffer = {50, 1000} pkts

  13. Scalable TCP: improving performance in highspeed wide area networks, SIGCOMM CCR 2003. Tom Kelly (CERN -- IT division). Abstract. TCP congestion control can perform badly in highspeed wide area networks because of its slow response with large congestion windows. The challenge for any alternative protocol is to better utilize networks with high bandwidth-delay products in a simple and robust manner without interacting badly with existing traffic. Scalable TCP is a simple sender-side alteration to the TCP congestion window update algorithm. It offers a robust mechanism to improve performance in highspeed wide area networks using traditional TCP receivers. Scalable TCP is designed to be incrementally deployable and behaves identically to traditional TCP stacks when small windows are sufficient. The performance of the scheme is evaluated through experimental results gathered using a Scalable TCP implementation for the Linux operating system and a gigabit transatlantic network. The preliminary results gathered suggest that the deployment of Scalable TCP would have negligible impact on existing network traffic at the same time as improving bulk transfer performance in highspeed wide area networks. http://www-lce.eng.cam.ac.uk/~ctk21/scalable/
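The sender-side alteration is a small change to the window update. With the constants suggested in the paper (a = 0.01 added per ACK, a fraction b = 0.125 cut on each loss event), a minimal sketch:

```python
def scalable_tcp_update(cwnd, loss, a=0.01, b=0.125):
    """Scalable TCP window update (Kelly, SIGCOMM CCR 2003):
    per ACK received, cwnd grows by the constant a;
    on a loss event, cwnd is reduced by the fraction b."""
    return cwnd * (1.0 - b) if loss else cwnd + a
```

Because the per-ACK increment is constant, the per-RTT growth is proportional to cwnd, so recovery time after a loss is independent of window size — unlike standard TCP, whose additive increase makes recovery at large windows very slow.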

  14. Higher utilization
  • A fixed buffer cannot give high utilization over all values of C*RTT
  • By choosing B = c1 + c2*log(C*RTT)/log(r) we get utilization r [Towsley]
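The slide leaves the constants c1 and c2 unspecified; with placeholder values (c1 = 10, c2 = -1, chosen only so the formula yields positive, growing buffer sizes for r < 1), the formula's qualitative behaviour can be illustrated:

```python
import math

def buffer_for_utilization(c_rtt, r, c1=10.0, c2=-1.0):
    """Slide 14's rule B = c1 + c2*log(C*RTT)/log(r) for target utilization r.
    c1 and c2 here are hypothetical placeholders, not values from the talk."""
    return c1 + c2 * math.log(c_rtt) / math.log(r)
```

With these placeholders, pushing the utilization target r toward 1 or increasing C*RTT both demand a larger buffer, but the growth in C*RTT is only logarithmic — far gentler than the linear bandwidth-delay rule of thumb.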
