
TCP Westwood (with Faster Recovery)



  1. TCP Westwood (with Faster Recovery)
  Claudio Casetti (casetti@polito.it), Mario Gerla (gerla@cs.ucla.edu), Scott Seongwook Lee (sslee@cs.ucla.edu), Saverio Mascolo (mascolo@deemail.poliba.it), Medy Sanadidi (medy@cs.ucla.edu)
  Computer Science Department, University of California, Los Angeles, USA

  2. TCP Congestion Control
  • Based on a sliding window algorithm
  • Two stages:
    • Slow Start: initial probing for available bandwidth ("exponential" window increase until a threshold is reached)
    • Congestion Avoidance: "linear" window increase by one segment per RTT
  • Upon loss detection (coarse timeout expiration or duplicate ACKs), the window is reduced to 1 segment (TCP Tahoe)
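The two stages and the Tahoe loss reaction described on this slide can be sketched as simple window-update rules. This is an illustrative sketch, not the authors' code; the function names and the floor of 2 segments on the threshold are assumptions for the example.

```python
# Hypothetical sketch of TCP Tahoe's congestion window dynamics
# (units: segments). Not an implementation from the presentation.

def tahoe_on_ack(cwin, ssthresh):
    """Grow the window on each new ACK."""
    if cwin < ssthresh:
        return cwin + 1          # Slow Start: +1 per ACK, i.e. doubling per RTT
    return cwin + 1.0 / cwin     # Congestion Avoidance: ~+1 segment per RTT

def tahoe_on_loss(cwin):
    """On timeout or duplicate ACKs: halve the threshold, restart from 1."""
    ssthresh = max(cwin // 2, 2)  # assumed floor of 2 segments
    return 1, ssthresh            # (new cwin, new ssthresh)
```

Starting from `cwin = 1`, repeated calls to `tahoe_on_ack` reproduce the sawtooth shown on the next slide once losses trigger `tahoe_on_loss`.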

  3. Congestion Window of a TCP Connection Over Time

  4. Shortcomings of Current TCP Congestion Control
  • After a sporadic loss, the connection needs several RTTs to be restored to full capacity
  • It is not possible to distinguish between a packet loss caused by congestion (for which a window reduction is in order) and a packet loss caused by wireless interference
  • The window size selected after a loss may NOT reflect the actual bandwidth available to the connection at the bottleneck

  5. New Proposal: TCP with "Faster Recovery"
  • Estimation of the available bandwidth (BWE):
    • performed by the source
    • computed from the arrival rate of ACKs, smoothed through exponential averaging
  • Use BWE to set the congestion window and the Slow Start threshold
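The estimation step above can be sketched as follows: each ACK yields a bandwidth sample (bytes acknowledged divided by the inter-ACK time), which is folded into an exponential average. This is a minimal sketch, assuming a single smoothing gain `alpha`; the actual filter used by the authors may differ.

```python
# Illustrative sender-side bandwidth estimator (BWE): exponential
# averaging of per-ACK rate samples. The gain `alpha` is an assumed
# value, not taken from the presentation.

class BandwidthEstimator:
    def __init__(self, alpha=0.9):
        self.alpha = alpha          # smoothing gain, 0 < alpha < 1
        self.bwe = 0.0              # smoothed estimate, bytes/s
        self.last_ack_time = None   # timestamp of the previous ACK

    def on_ack(self, now, acked_bytes):
        """Update BWE from one ACK; returns the current estimate."""
        if self.last_ack_time is not None and now > self.last_ack_time:
            sample = acked_bytes / (now - self.last_ack_time)
            self.bwe = self.alpha * self.bwe + (1 - self.alpha) * sample
        self.last_ack_time = now
        return self.bwe
```

A larger `alpha` makes the estimate smoother but slower to track changes; closely spaced ACKs (e.g., compressed ACKs) produce large instantaneous samples that the averaging damps.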

  6. TCP FR: Algorithm Outline
  • When three duplicate ACKs are detected:
    • set ssthresh = BWE * RTT (instead of ssthresh = cwin/2 as in Reno)
    • if (cwin > ssthresh), set cwin = ssthresh
  • When a TIMEOUT expires:
    • set ssthresh = BWE * RTT (instead of ssthresh = cwin/2 as in Reno) and cwin = 1
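The two event handlers above can be written out directly. This is a pseudocode-level sketch of the slide's rules, not the kernel implementation; BWE is in segments per second, RTT in seconds, and the floor of 2 segments on ssthresh is an assumption.

```python
# Sketch of the Faster Recovery rules: on loss, size ssthresh from the
# bandwidth estimate (BWE * RTT = the pipe size) rather than cwin/2.

def on_three_dup_acks(cwin, bwe, rtt):
    """Three duplicate ACKs: shrink toward the estimated pipe size."""
    ssthresh = max(int(bwe * rtt), 2)   # instead of cwin/2 as in Reno
    if cwin > ssthresh:
        cwin = ssthresh
    return cwin, ssthresh

def on_timeout(bwe, rtt):
    """Coarse timeout: keep the BWE-based threshold, restart from 1."""
    ssthresh = max(int(bwe * rtt), 2)
    return 1, ssthresh                  # (new cwin, new ssthresh)
```

The key difference from Reno is that after a sporadic (e.g., wireless) loss, `ssthresh` still reflects the measured bottleneck rate, so Slow Start carries the window back to near full capacity instead of to half the previous window.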

  7. Experimental Results
  • Compare the behavior of TCP Faster Recovery with Reno and Sack
  • Compare the goodputs of TCP with Faster Recovery, TCP Reno, and TCP Sack:
    • with bursty traffic (e.g., UDP traffic)
    • over lossy links

  8. FR/Reno Comparison
  [Figure: normalized throughput of FR and Reno over time (0–800 s); 1 TCP + 1 on/off UDP source (ON = OFF = 100 s), 5 MB buffer, 1.2 s RTT, 150 Mb/s capacity]

  9. Goodput in Presence of UDP, Different Bottleneck Sizes
  [Figure: goodput [Mb/s] of FR, Reno, and Sack vs. bottleneck bandwidth (0–160 Mb/s)]

  10. Wireless and Satellite Networks
  • link capacity = 1.5 Mb/s, single "one-hop" connection
  [Figure: goodput (bits/s) of Tahoe, Reno, and FR vs. bit error rate (log scale, 1e-10 to 1e-3)]

  11. Experiment Environment
  • A new version of TCP FR, called "TCP Westwood"
  • TCP Westwood is implemented in Linux kernel 2.2.10
  • The link emulator can emulate:
    • link delay
    • loss events
  • Sources share the bottleneck through a router to the destination

  12. Goodput Comparison with Reno (Sack)
  • Bottleneck capacity: 5 Mb/s
  • Packet loss rate: 0.01
  • Link delay: 300 ms
  • Concurrent on-off UDP traffic
  • A larger pipe size corresponds to a longer delay

  13. Friendliness with Reno
  • Goodput comparison when TCP-W and Reno share the same bottleneck over a perfect link: 5 Reno connections start first, 5 Westwood connections start after 5 seconds; 100 ms link delay
  • Goodput comparison when TCP-W and Reno share the same bottleneck over a lossy link (1%): 3 Reno connections start first, then 2 Westwood; 100 ms link delay
  • TCP-W improves performance over the lossy link, but does not fully utilize it

  14. Current Status & Open Issues
  • Extended testing of TCP Westwood
  • Friendliness/greediness towards other TCP schemes
  • Refinements of the bandwidth estimation process
  • Behavior with short-lived flows, and with a large number of flows

  15. Extra slides follow

  16. Losses Caused by UDP, Different RTTs
  [Figure: goodput [Mb/s] of FR, Reno, and Sack vs. one-way RTT (0–1 s)]

  17. Losses Caused by UDP, Different Numbers of Connections
  [Figure: goodput [Mb/s] of FR, Reno, and Sack vs. number of connections (0–30)]

  18. TCP over Lossy Links, Different Bottleneck Sizes
  [Figure: goodput [Mb/s, log scale] of FR, Reno, and Sack vs. bottleneck bandwidth (0–160 Mb/s)]

  19. Bursty Traffic, Different Numbers of Connections
  [Figure: goodput [Mb/s] of FR, Reno, and Sack vs. number of connections (0–30)]

  20. Fairness of TCP Westwood
  • Cwnds of two TCP Westwood connections over a lossy link with concurrent UDP traffic, time-shifted starts; link delay 100 ms
  • Goodput of concurrent TCP-W connections: 5 connections (the other 2 are similar); link delay 100 ms
