
Inline Path Characteristic Estimation to Improve TCP Performance in High Bandwidth-Delay Networks


Presentation Transcript


  1. Inline Path Characteristic Estimation to Improve TCP Performance in High Bandwidth-Delay Networks Cesar Marcondes, Anders Persson, Medy Sanadidi, Mario Gerla, Hideyuki Shimonishi, Takayuki Hama, Tutomu Murase

  2. Outline • Inferring Path Characteristics • Bandwidth Estimation • Buffer Estimation • Description of Testbed • Improving TCP Performance using Inline Path Estimations • TCP Westwood BBE (Buffer and Bandwidth Estimation) • Conclusions and Future Work

  3. Motivation: Inferring Path Characteristics • Bandwidth estimation and buffer estimation enable: • Loss discrimination: efficiency in leaky pipes • Link capacity: efficiency in large pipes • Congestion detection: friendliness to TCP-Reno • Goal: design novel protocols that make wise use of these estimations

  4. Internet Measurement Testbed Topology (Transpacific Link)

  5. Capacity Estimation • Active tools: • Pathrate (Infocom 2001): measures the dispersion of packet pairs and packet trains, uncovers possible capacity modes, and applies statistical analysis to choose among them • CapProbe (Sigcomm 2004): packet-pair measurement that filters out delayed samples • TCP inline measurements: • TCP-Westwood (Global Internet 2005): packet-pair-like capacity estimation using TCP ACK packets • TCPProbe (Global Internet 2005): a TCP inline version of CapProbe that flips packets to obtain packet pairs in a delayed-ACK scenario
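Both the active tools and the inline estimators above rest on the same packet-pair principle: the bottleneck capacity is inferred from the dispersion of back-to-back packets, after discarding samples whose delay suggests interference from cross-traffic. The Python sketch below is a minimal illustration of that principle only; the function name, the sample format, and the minimum-delay filter are assumptions for illustration, not taken from any of the tools above.

```python
# Minimal sketch of packet-pair capacity estimation with CapProbe-style filtering.
# Each sample is assumed to be (packet_size_bytes, dispersion_s, pair_delay_s);
# the pair with the minimum combined delay is taken as undisturbed by cross-traffic.

def estimate_capacity_bps(samples):
    """Return the estimated bottleneck capacity in bits per second, or None."""
    if not samples:
        return None
    # Filter out delayed samples: keep the pair that experienced the least queuing.
    size_bytes, dispersion_s, _ = min(samples, key=lambda s: s[2])
    return (size_bytes * 8) / dispersion_s

# Example: 1500-byte packets spaced 150 microseconds apart at the receiver -> ~80 Mbps
print(estimate_capacity_bps([(1500, 150e-6, 0.135), (1500, 300e-6, 0.160)]))
```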

  6. Measurement Results (1): Off-line Measurement Convergence Speed • (Chart of convergence times for the off-line tools: 6 sec, 12 sec, 70 sec)

  7. Measurement Results (2): TCP Inline Measurement • Original TCPW: accuracy ≈ 60 Mbps, convergence < 1 sec • TCPProbe: accuracy ≈ 80 Mbps, convergence 32 sec

  8. Buffer Estimation • Idea: correlate the loss of a packet with the RTT observed just before the loss • Buffer size estimation: the RTT value at which a packet loss is likely to happen • Congestion estimation and loss discrimination: the position of the current RTT within that range • (Figure: observed buffer size ≈ 15 msec)
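A rough sketch of the idea on this slide: keep the most recent RTT sample, fold the RTT seen just before each loss into an exponential average, and read the buffering delay as the gap between that average and the minimum RTT. The class name, the smoothing gain, and the method boundaries below are assumptions for illustration, not the authors' implementation.

```python
# Sketch: correlate each packet loss with the RTT observed just before it.
# rtt_min approximates the propagation delay; rtt_cong is an exponential average
# of pre-loss RTTs, so (rtt_cong - rtt_min) approximates the bottleneck buffering
# delay (about 15 msec on the transpacific path measured in the slides).

class BufferEstimator:
    def __init__(self, alpha=0.125):   # smoothing gain is an assumption
        self.alpha = alpha
        self.rtt_min = None
        self.rtt_cong = None
        self.last_rtt = None

    def on_rtt_sample(self, rtt):
        self.last_rtt = rtt
        self.rtt_min = rtt if self.rtt_min is None else min(self.rtt_min, rtt)

    def on_loss(self):
        if self.last_rtt is None:
            return
        if self.rtt_cong is None:
            self.rtt_cong = self.last_rtt
        else:
            self.rtt_cong += self.alpha * (self.last_rtt - self.rtt_cong)

    def buffer_delay(self):
        """Estimated queuing delay added by the bottleneck buffer, in seconds."""
        if self.rtt_min is None or self.rtt_cong is None:
            return None
        return self.rtt_cong - self.rtt_min
```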

  9. Measurement Results (1): Off-line Measurement of Buffer Capacity • Traceroute + heavy cross-traffic measurements: tracked down the location of the bottleneck and observed an increase of 15 msec in the average ICMP delay • RTT histogram of TCP flows (buffer size = 15 msec): high load > 80% with 10 aggregated flows, low load < 10% with a single flow • Overall loss rate = 0.005%; loss rate for RTT < 147 msec = 0.005%: random loss?

  10. Measurement Results (2): Off-line Measurement of Random Losses • Simulation setup: 135 msec RTT + 15 msec buffer, an 80 Mbps-like bottleneck, and 4.9 × 10^-5 uniformly distributed random loss • 1 flow: simulation 17 Mbps vs. measurement 15 Mbps • 10 aggregated flows: simulation 72 Mbps vs. measurement 65 Mbps • Low-rate UDP measurement: 1-3 Mbps UDP CBR traffic gives a loss rate of 0.001% • 0.001-0.005% should be a good number for the random loss rate

  11. TCP Westwood BBE • Optimize the tradeoff between efficiency and friendliness to TCP-Reno • Robustness to variations in buffer capacity and RTT • Optimize the window reduction upon a packet loss, using: • Inline bandwidth estimation • Inline buffer estimation

  12. TCPW-BBE Buffer Estimation • RTT dynamic range: • RTTmin: minimum RTT (= propagation delay) • RTTcong: RTT value at which packet losses are likely to happen; RTTcong is an exponential average of the RTT observed right before losses • Congestion = position of the current RTT within the range [RTTmin, RTTcong] • Congestion level c = (RTT − RTTmin) / (RTTcong − RTTmin), where c near 1 indicates losses due to congestion and c near 0 indicates losses due to error • (Figure: histogram of RTT before packet losses [msec])
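A minimal sketch of the congestion-level computation described on this slide; the clamp to [0, 1] and the handling of a degenerate RTT range are assumptions for illustration.

```python
def congestion_level(rtt, rtt_min, rtt_cong):
    """Congestion level c in [0, 1]: c near 0 suggests a loss due to error,
    c near 1 suggests a loss due to congestion."""
    if rtt_cong <= rtt_min:
        return 1.0   # degenerate range: conservatively treat the loss as congestive (assumption)
    c = (rtt - rtt_min) / (rtt_cong - rtt_min)
    return max(0.0, min(1.0, c))
```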

  13. Congestion Window Reduction • (Figure: congestion window reduction after a packet loss, from 1 down to 0.5, plotted against congestion level from 0 to 1, where 1 corresponds to RTT = 2 × RTTmin; curves for TCP-Reno, original TCPW, and TCPW-BBE) • Low congestion level: the loss should be a random loss, so apply a smaller reduction • High congestion level: the loss should be a congestion loss, so halve the window
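The slide's window-reduction rule can be sketched as an interpolation between a mild decrease when the loss looks random and the Reno-style halving when it looks congestive. The linear blend and the mild-reduction factor below are assumptions for illustration; the actual TCPW-BBE curve on the slide may differ in shape. With these assumptions, c = 0 cuts the window to 90% of its previous value and c = 1 halves it.

```python
def cwnd_after_loss(cwnd, c, mild_factor=0.9):
    """Reduce cwnd after a loss based on the estimated congestion level c in [0, 1].

    c = 0: the loss looks random     -> small reduction (mild_factor is an assumption).
    c = 1: the loss looks congestive -> halve the window, as TCP-Reno would.
    """
    reduction = mild_factor + c * (0.5 - mild_factor)   # linear blend (assumption)
    return cwnd * reduction
```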

  14. Measurement Result (1): Buffer Estimation and Loss Discrimination • Buffer estimation: inline estimation of the buffer at around 15 msec, consistent with previous results • Inline loss discrimination: the window reduction depends on the estimated congestion level, making the protocol more robust to non-congestion errors

  15. Measurement Result (2): Efficiency and Friendliness • Average throughput over multiple trials: Pathload estimated 61-74 Mbps available; conjectured non-congestion error rate of 10^-5 • Cumulative throughput during 2000 sec: Pathload estimated 30 Mbps available; more congestion losses

  16. Conclusions and Future Work • The path characteristic estimations presented in this paper are reliable, fast, and non-intrusive, and offer a major advantage for future intelligent network stacks • Our measurement study presented extensive Internet results together with cross-validation based on more than one source, and the inline capacity and buffer estimations proved reliable • In the specific scenario tested, TCPW-BBE improved performance compared to NewReno in a non-congested scenario • In the future, combining BE (bandwidth estimation from TCP Westwood) with TCPProbe could lead to faster capacity estimation; new methods of using such estimates to improve start-up, identify cross-traffic, detect changes in route characteristics, and ameliorate burstiness through buffer-awareness also ought to be studied
