
Automatic TCP Buffer Tuning


Presentation Transcript


  1. Automatic TCP Buffer Tuning
  Jeffrey Semke, Jamshid Mahdavi & Matthew Mathis
  Presented By: Heather Heiman
  Cal Poly Network Performance Research Group

  2. Problem
  • A single host may have multiple connections at any one time, and each connection may have a different bandwidth.
  • Maximum transfer rates are often not achieved on each connection.
  • To improve transfer rates, systems are often manually tuned, but this requires an expert or system administrator.

  3. Problem
  • Even when systems are manually tuned, TCP performance can still suffer, because the static buffers on some connections will exceed the bandwidth-delay product while those on other connections will fall below it.
  • “The bandwidth-delay product is the buffer space required at sender and receiver to obtain maximum throughput on the TCP connection over the path.”
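  As a rough illustration of the definition above, the bandwidth-delay product can be computed directly from a path's bandwidth and round-trip time. The link rate and RTT below are example values chosen for illustration, not measurements from the paper.

      /* Rough illustration of the bandwidth-delay product (BDP): the buffer
         space needed to keep a path full.  Link rate and RTT are assumed
         example values, not from the paper. */
      #include <stdio.h>

      int main(void)
      {
          double bandwidth_bps = 100e6;  /* assumed path bandwidth: 100 Mb/s */
          double rtt_sec = 0.070;        /* assumed round-trip time: 70 ms   */

          /* BDP (bytes) = bandwidth (bytes/s) * round-trip time (s) */
          double bdp_bytes = (bandwidth_bps / 8.0) * rtt_sec;

          printf("BDP = %.0f bytes (about %.0f kB)\n", bdp_bytes, bdp_bytes / 1024.0);
          return 0;
      }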

  4. Auto-Tuning
  • Auto-tuning dynamically sizes socket buffers to match the bandwidth-delay product.
  • Sizing is based upon network conditions and system memory availability.
  • Before implementing auto-tuning, the following features should be in place:
    • TCP Extensions for High Performance (RFC 1323)
    • TCP Selective Acknowledgement Options (SACK)
    • Path MTU Discovery

  5. Auto-Tuning Implementation
  • The receive socket buffer size is set to the operating system’s maximum socket buffer size.
  • The send socket buffer size is determined by three algorithms (see the sketch below):
    • The first tracks network conditions.
    • The second balances memory usage across connections.
    • The third caps the buffer size to prevent excessive memory use.
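  The following C sketch shows how the three send-buffer rules described on this slide fit together. The function name, the fair-share heuristic, and the constants are illustrative assumptions, not the exact NetBSD kernel code from the paper.

      /* Minimal sketch of the three send-buffer rules.  Names and the
         fair-share heuristic are assumptions made for illustration. */
      #include <stddef.h>

      size_t auto_tune_sndbuf(size_t cwnd,          /* current congestion window (bytes)        */
                              size_t pool_bytes,    /* memory pool reserved for socket buffers  */
                              size_t n_connections, /* number of active TCP connections         */
                              size_t sb_max)        /* absolute per-socket cap                  */
      {
          /* 1. Track network conditions: size the buffer from the congestion
                window so it roughly covers the bandwidth-delay product.      */
          size_t want = 2 * cwnd;

          /* 2. Balance memory usage: limit each connection to a fair share of
                the pool so one connection cannot starve the others.          */
          size_t fair_share = pool_bytes / (n_connections ? n_connections : 1);
          if (want > fair_share)
              want = fair_share;

          /* 3. Cap the result to prevent excessive memory use. */
          if (want > sb_max)
              want = sb_max;

          return want;
      }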

  6. Types of Connections
  • default: connections used the NetBSD 1.2 static default socket buffer size of 16 kB.
  • hiperf: connections were hand-tuned for performance with a static socket buffer size of 400 kB, which was adequate for connections to the remote receiver but overbuffered for local connections.
  • auto: connections used dynamically adjusted socket buffer sizes according to the implementation described in Section 2 of the paper.
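  For contrast with auto-tuning, a fixed send buffer like the “default” (16 kB) or “hiperf” (400 kB) configuration is what an application would get by setting SO_SNDBUF explicitly. The helper below is an illustrative sketch, not code from the experiments, which tuned the kernel defaults rather than individual applications.

      /* Illustrative sketch: fixing a socket's send buffer, as a static
         configuration would.  Not code from the paper's experiments. */
      #include <sys/socket.h>

      int set_static_sndbuf(int sock, int bytes)
      {
          /* e.g. bytes = 16 * 1024 for "default", 400 * 1024 for "hiperf" */
          return setsockopt(sock, SOL_SOCKET, SO_SNDBUF, &bytes, sizeof(bytes));
      }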

  7. Testing Results
  Only one type of connection was run at a time, so that the performance and memory usage of each connection type could be examined in isolation.

  8. Testing Results
  Concurrent data transfers were run from the sender to both the remote receiver and the local receiver.

  9. Remaining Issues
  • In some implementations of TCP, cwnd is allowed to grow even when the connection is not limited by the congestion window, causing the dynamically sized send buffers to expand unnecessarily and waste memory (see the sketch below).
  • Allowing large windows in TCP could cause a slow control-system response due to the long queues of packets.
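  One possible guard against the first issue, sketched below, is to let the send buffer grow only while the connection is actually limited by its congestion window. The structure and field names are illustrative assumptions, not the paper's or NetBSD's.

      /* Sketch of a guard: grow the send buffer only while the sender is
         congestion-window limited.  Names are illustrative assumptions. */
      #include <stdbool.h>
      #include <stddef.h>

      struct conn_state {
          size_t cwnd;           /* congestion window (bytes)          */
          size_t bytes_inflight; /* unacknowledged data outstanding    */
          size_t sndbuf;         /* current send socket buffer (bytes) */
      };

      void maybe_grow_sndbuf(struct conn_state *c, size_t sb_max)
      {
          /* An application-limited connection keeps its current buffer,
             avoiding the wasted memory described above. */
          bool cwnd_limited = (c->bytes_inflight >= c->cwnd);
          if (cwnd_limited && 2 * c->cwnd > c->sndbuf) {
              c->sndbuf = 2 * c->cwnd;
              if (c->sndbuf > sb_max)
                  c->sndbuf = sb_max;
          }
      }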

  10. Conclusion
  • TCP needs to use resources more efficiently in order to keep connections from starving other connections of memory.
  • Auto-tuning does not allow a connection to take more than its fair share of bandwidth.
