
Maximizing End-to-End Network Performance

Thomas Hacker

University of Michigan

October 5, 2001


Introduction

  • Applications experience network performance from an end-customer perspective

  • Providing end-to-end performance has two aspects

    • Bandwidth Reservation

    • Performance Tuning

  • We have been working to improve actual end-to-end throughput using Performance Tuning

  • This work allows applications to fully exploit reserved bandwidth


Improve Network Performance

  • Poor network performance arises from subtle interactions among many components at each layer of the OSI network stack:

  • Physical

  • Data Link

  • Network

  • Transport

  • Application


TCP Bandwidth Limits – Mathis Equation

  • Based on characteristics from physical layer up to transport layer.

  • Hard limits: maximum TCP bandwidth for a given packet loss rate (see the bound below)
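The equation itself appears to have been an image on the original slide; the bound from Mathis et al. (1997) that the slide names is standardly written as:

```latex
% Steady-state TCP throughput bound (Mathis, Semke, Mahdavi, Ott 1997).
% MSS: maximum segment size, RTT: round-trip time, p: packet loss rate,
% C: a constant near 1 that depends on ACK strategy and loss model.
BW \;\le\; \frac{MSS}{RTT} \cdot \frac{C}{\sqrt{p}}
```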


Packet Loss and MSS

  • If the minimum link bandwidth between two hosts is OC-12 (622 Mbps) and the average round-trip time is 20 msec, the maximum packet loss rate that still permits 66% of the link speed (411 Mbps) is approximately 0.00018%, i.e., only about 2 packets lost out of every 1,000,000 packets.

  • If the MSS is increased from 1500 bytes to 9000 bytes (jumbo frames), the limit on TCP bandwidth rises by a factor of 6, since the bound scales linearly with MSS.
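As a sanity check on those two bullets, a minimal sketch that inverts the Mathis bound for the maximum tolerable loss rate (assuming C = 1; the slide's 0.00018% implies a constant slightly below 1):

```python
# Invert the Mathis bound BW <= (MSS/RTT) * C/sqrt(p) to find the largest
# packet loss rate p that still permits a target bandwidth.

def max_loss_rate(target_bw_bps: float, mss_bytes: int, rtt_sec: float,
                  c: float = 1.0) -> float:
    """Largest loss probability compatible with target_bw_bps."""
    return (c * mss_bytes * 8 / (rtt_sec * target_bw_bps)) ** 2

# 66% of OC-12 (411 Mb/s), RTT = 20 ms, MSS = 1500 bytes
p = max_loss_rate(411e6, 1500, 0.020)
print(f"max loss rate ~ {p:.1e}")   # ~2.1e-06, i.e. about 2 per million

# The bound scales linearly with MSS, so jumbo frames raise it 6x:
print(9000 / 1500)                  # 6.0
```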




Parallel TCP Connections…a clue

SOURCE:

Hariharan Sivakumar, Stuart Bailey, Robert L. Grossman, "PSockets: The Case for Application-level Network Striping for Data Intensive Applications using High Speed Wide Area Networks," SC2000: High-Performance Network and Computing Conference, Dallas, TX, November 2000.


Why Does This Work?

  • The assumption is that the network gives best-effort throughput to each connection

  • But end-to-end performance is often still poor, even after tuning the host, network, and application

  • Parallel sockets are used in GridFTP, Netscape, Gnutella, Atlas, the Storage Resource Broker, etc. (a minimal striping sketch follows below)
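A minimal sketch of the application-level striping idea, not the actual PSockets or GridFTP API: one buffer split across N parallel TCP connections to the same (hypothetical) receiver. Real tools add reassembly metadata, flow control, and buffer tuning on top of this.

```python
# Send one buffer across N parallel TCP connections to the same receiver.
# Host name, port, and framing are hypothetical placeholders.
import socket
from concurrent.futures import ThreadPoolExecutor

def send_chunk(host: str, port: int, chunk: bytes) -> None:
    with socket.create_connection((host, port)) as sock:
        sock.sendall(chunk)

def striped_send(host: str, port: int, data: bytes, n_conns: int) -> None:
    step = -(-len(data) // n_conns)  # ceiling division: bytes per stripe
    chunks = [data[i:i + step] for i in range(0, len(data), step)]
    with ThreadPoolExecutor(max_workers=n_conns) as pool:
        # list() forces completion and propagates any connection errors
        list(pool.map(lambda c: send_chunk(host, port, c), chunks))

# Each connection runs its own congestion-control loop, so a random loss
# on one stripe throttles only ~1/N of the aggregate sending rate.
striped_send("receiver.example.org", 5001, b"x" * 10_000_000, 8)
```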


Packet Loss

  • Bolot* found that random losses are not always due to congestion

    • Local system configuration (txqueuelen in Linux)

    • Bad cables (noisy)

  • Packet losses occur in bursts

  • TCP throttles its transmission rate on ALL packet losses, regardless of the root cause

  • Selective Acknowledgement (SACK) helps, but only so much

* Jean-Chrysostome Bolot, "Characterizing End-to-End Packet Delay and Loss in the Internet," Journal of High Speed Networks, 2(3):305–323, 1993.
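A small sketch for inspecting the two local knobs mentioned above on a Linux sender; the procfs/sysfs paths are standard on stock kernels but should be verified on your distribution, and "eth0" is an assumed interface name:

```python
# Check SACK and the interface transmit queue length on Linux.
def read_setting(path: str) -> str:
    with open(path) as f:
        return f.read().strip()

# SACK (RFC 2018): 1 means selective acknowledgements are enabled
print("tcp_sack   =", read_setting("/proc/sys/net/ipv4/tcp_sack"))

# A too-small transmit queue drops packets locally under bursty sends,
# mimicking network loss ("eth0" is an assumed interface name)
print("txqueuelen =", read_setting("/sys/class/net/eth0/tx_queue_len"))
```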



Example

MSS = 4418, RTT = 70 msec, p = 1/10000 for all connections

Number of Connections    Sum of 1/√p Factors    Aggregate Bandwidth
1                        100                    50 Mb/sec
2                        100 + 100              100 Mb/sec
3                        100 + 100 + 100        150 Mb/sec
4                        4 (100)                200 Mb/sec
5                        5 (100)                250 Mb/sec
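The aggregate numbers follow from applying the Mathis bound to each connection and summing. This is a reconstruction from the table values: the middle column (whose header did not survive the transcript) appears to list the per-connection 1/√p = √10000 = 100 factors.

```latex
% Aggregate bound over n parallel connections with loss rates p_i
BW_{agg} \;\le\; \frac{MSS}{RTT} \sum_{i=1}^{n} \frac{C}{\sqrt{p_i}}
% Worked check for one connection (C \approx 1):
\frac{4418 \cdot 8~\text{bit}}{0.07~\text{s}} \cdot \frac{1}{\sqrt{10^{-4}}}
  \approx 50.5~\text{Mb/s}
```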


Measurements

  • To validate the theoretical model, 220 four-minute transmissions were performed from U-M to NASA Ames in San Jose, CA

  • The bottleneck was OC-12, MTU = 4418

  • 7 runs with MSS = 4366, 1 to 20 sockets

  • 2 runs with MSS = 2948, 1 to 20 sockets

  • 2 runs with MSS = 1448, 1 to 20 sockets

  • Iperf was used for the transfers; Web100 was used to collect TCP observations on the sender side
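A sketch of how one such measurement series could be reproduced with classic iperf. The receiver host is a hypothetical placeholder, and while -c, -P, -t, and -M are standard iperf 2 options, the study's exact invocation is not given in the slides:

```python
# 4-minute iperf transfers with 1..20 parallel TCP streams:
# -P = parallel streams, -t = duration (sec), -M = MSS (bytes).
import subprocess

for n_sockets in range(1, 21):
    subprocess.run(
        ["iperf", "-c", "receiver.example.org",
         "-P", str(n_sockets), "-t", "240", "-M", "4366"],
        check=True,
    )
```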





Sunnyvale – Denver Abilene Link

[Charts omitted: initial tests and yearly statistics for the Sunnyvale–Denver link]



Conclusion

  • High-performance network throughput is possible with a combination of host, network, and application tuning, along with parallel TCP connections

  • Parallel TCP sockets mitigate the negative effects of packet loss in the random (non-congestion) loss regime

  • The effect of parallel TCP sockets is similar to that of using a larger MSS

  • Using parallel sockets is aggressive, but as fair as using a large MSS

