
Removing Exponential Backoff from TCP



Presentation Transcript


  1. Removing Exponential Backoff from TCP • Amit Mondal, Aleksandar Kuzmanovic • EECS Department, Northwestern University • http://networks.cs.northwestern.edu

  2. TCP Congestion Control • Slow-start phase • Double the sending rate each round-trip time • Reach high throughput quickly

  3. TCP Congestion Control • Additive Increase / Multiplicative Decrease (AIMD) • Fairness among flows
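
As background for slides 2 and 3, here is a minimal Python sketch of how the congestion window evolves under slow start and AIMD. The segment units, the initial ssthresh of 64, and the timing of the single loss are illustrative choices, not values from the talk.

    def next_cwnd(cwnd, ssthresh, loss):
        """One round trip of window evolution, in segments (illustrative only)."""
        if loss:
            ssthresh = max(cwnd / 2.0, 2.0)              # multiplicative decrease
            return ssthresh, ssthresh
        if cwnd < ssthresh:
            return min(cwnd * 2.0, ssthresh), ssthresh   # slow start: double per RTT
        return cwnd + 1.0, ssthresh                      # additive increase: +1 per RTT

    cwnd, ssthresh = 1.0, 64.0
    for rtt in range(12):
        cwnd, ssthresh = next_cwnd(cwnd, ssthresh, loss=(rtt == 8))
        print("RTT %2d: cwnd = %5.1f segments" % (rtt, cwnd))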

  4. TCP Congestion Control • Exponential backoff • System stability
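
To make slide 4 concrete, here is a matching sketch of exponential retransmission-timer backoff: each consecutive timeout on the same segment doubles the RTO. The 0.2 s base RTO and 60 s cap are illustrative values, not taken from the talk.

    def backed_off_rto(base_rto, consecutive_timeouts, max_rto=60.0):
        """RTO used after a given number of consecutive timeouts."""
        return min(base_rto * 2 ** consecutive_timeouts, max_rto)

    for k in range(7):
        print("after %d consecutive timeouts: RTO = %4.1f s" % (k, backed_off_rto(0.2, k)))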

  5. Our breakthrough • Exponential backoff: fundamentally wrong!

  6. Contribution • Untangle the retransmit-timer backoff mechanism • Challenge the need for exponential backoff in TCP • Demonstrate that exponential backoff can be removed from TCP without causing congestion collapse • Incrementally deployable two-step transition

  7. Implications • Dramatically improve the performance of short-lived and interactive applications • Increase TCP's resiliency against low-rate (shrew) and high-rate (bandwidth-flooding) DoS attacks • Other impacts

  8. Background • Origin of RTO backoff • Adopted from the classical Ethernet protocol • The IP gateway is treated as similar to the 'ether' in a shared-medium Ethernet network • Exponential backoff is essential for Internet stability • "an unstable system (a network subject to random load shocks and prone to congestion collapse) can be stabilized by adding some exponential damping (exponential timer backoff) to its primary excitation (senders, traffic sources)" [Jacobson88]

  9. Rationale behind revisions • No admission control in the Internet • No bound on the number of active flows • Stability results for the Ethernet protocol are not applicable • IP gateway vs. classical Ethernet • Classical Ethernet: throughput drops to zero in overloaded scenarios • IP gateway: forwards packets at full capacity even in extremely congested scenarios • Dynamic network environment • Finite flow sizes and skewed traffic distributions • Increased bottleneck capacities

  10. Implicit Packet Conservation Principle • RTO > RTT • The Karn-Partridge algorithm and Jacobson's algorithm ensure this • End-to-end performance cannot suffer if endpoints uphold the principle • Formal proof for the single-bottleneck case in the paper • Extensive evaluation on a network testbed • Single bottleneck • Multiple bottlenecks • Complex topologies
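
For reference, a minimal sketch of the standard RTO estimation the slide alludes to: RTT samples are smoothed Jacobson-style, ambiguous samples from retransmitted segments are skipped per Karn's rule, and since the computed RTO is SRTT plus four times RTTVAR it never falls below the smoothed RTT, which is the RTO > RTT condition above. The class name and default parameters are illustrative.

    class RtoEstimator:
        """Sketch of a standard retransmission-timeout estimator."""
        ALPHA, BETA, K = 1.0 / 8, 1.0 / 4, 4

        def __init__(self, init_rto=3.0, min_rto=0.2):
            self.srtt = None          # smoothed RTT
            self.rttvar = None        # RTT variation
            self.rto = init_rto       # used before the first RTT sample (initRTO)
            self.min_rto = min_rto    # floor on the computed RTO (minRTO)

        def sample(self, rtt, retransmitted=False):
            if retransmitted:         # Karn's rule: ignore ambiguous samples
                return self.rto
            if self.srtt is None:     # first measurement
                self.srtt, self.rttvar = rtt, rtt / 2.0
            else:
                self.rttvar = (1 - self.BETA) * self.rttvar + self.BETA * abs(self.srtt - rtt)
                self.srtt = (1 - self.ALPHA) * self.srtt + self.ALPHA * rtt
            # SRTT + 4 * RTTVAR >= SRTT, so the timer does not fire within one RTT
            self.rto = max(self.min_rto, self.srtt + self.K * self.rttvar)
            return self.rto

    est = RtoEstimator()
    print([round(est.sample(rtt), 3) for rtt in (0.05, 0.06, 0.05, 0.20)])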

  11. Experimental methodology • Testbed • Emulab • 64-bit Intel Xeon machines + FreeBSD 6.1 • RTTs in the 10 ms to 200 ms range • Bottleneck capacity 10 Mbps • TCP Sack + RED • Workload • Trace-II: synthetic HTTP traffic based on empirical distributions • Trace-I: skewed towards shorter file sizes • Trace-III: skewed towards longer file sizes • NS2 simulations

  12. Evaluation • TCP*(n): sub-exponential backoff algorithms • No backoff for the first n consecutive timeouts • Impact of the RTO backoff mechanism on response time • Impact of minRTO and initRTO on end-to-end performance
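
A sketch of one reading of the TCP*(n) family defined on this slide: the retransmission timer does not back off for the first n consecutive timeouts. That the timer then doubles relative to the base RTO, as assumed below, is an illustrative choice rather than a detail stated on the slide.

    import math

    def tcp_star_rto(base_rto, consecutive_timeouts, n, max_rto=60.0):
        """RTO under TCP*(n): no backoff for the first n consecutive timeouts."""
        if consecutive_timeouts <= n:
            return base_rto                                       # no backoff yet
        return min(base_rto * 2 ** (consecutive_timeouts - n), max_rto)

    # TCP*(0) behaves like classical exponential backoff; TCP*(inf) never backs off.
    for n in (0, 3, math.inf):
        print("TCP*(%s):" % n, [tcp_star_rto(0.2, k, n) for k in range(6)])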

  13. Sub-exponential backoff algorithms • End-to-end performance does not degrade after removing exponential backoff from TCP • [Figure: response-time distributions for Trace-I, Trace-II, and Trace-III]

  14. Impact of (minRTO, initRTO) parameters • RFC 2988 recommendation • (1.0s, 3.0s) • Current practice • (0.2s, 3.0s) • Aggressive version • (0.2s, 0.2s)
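
A small sketch of where the two parameters enter the timer, assuming the standard RTO computation: initRTO is the timeout used before any RTT sample exists, and minRTO is the floor applied to every computed RTO. The 50 ms example path is an assumption; the (minRTO, initRTO) pairs are the ones listed on the slide.

    def effective_rto(srtt, rttvar, min_rto):
        """Steady-state RTO once RTT samples exist, floored at minRTO."""
        return max(min_rto, srtt + 4 * rttvar)

    SETTINGS = {                      # (minRTO, initRTO) pairs from the slide
        "RFC 2988 recommendation": (1.0, 3.0),
        "current practice":        (0.2, 3.0),
        "aggressive version":      (0.2, 0.2),
    }

    srtt, rttvar = 0.05, 0.0125       # e.g. a path with a ~50 ms smoothed RTT
    for name, (min_rto, init_rto) in SETTINGS.items():
        rto = effective_rto(srtt, rttvar, min_rto)
        print("%s: initRTO = %.1f s, steady-state RTO = %.1f s" % (name, init_rto, rto))

On short-RTT paths the minRTO floor dominates the computed RTO, which is consistent with the poor performance of the (1.0s, 3.0s) pair on the next slide.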

  15. Impact of minRTO and initRTO • The (1.0s, 3.0s) RTO pair performs poorly; its CCDF tail is the heaviest • Performance improves for both the (0.2s, 3.0s) and (0.2s, 0.2s) pairs • Aggressive minRTO and initRTO parameters do not hurt end-to-end performance as long as endpoints uphold the implicit packet conservation principle • [Figure: response-time CCDFs for TCP, TCP*(∞), and TCP*(3)]

  16. Role of bottleneck capacity • TCP*(∞) outperforms classical TCP independent of the bottleneck capacity

  17. Dynamic environments • ON-OFF flow arrival period • Inter-burst: 50ms – 10s

  18. Dynamic environments • ON-OFF flow arrival period • Inter-burst: 1 s • [Figure: time series of active connections]

  19. TCP variants and queuing disciplines • TCP Tahoe, TCP Reno, TCP Sack • DropTail, RED • The backoff-less TCP stacks outperform regular stacks irrespective of TCP version and queuing discipline

  20. Multiple bottlenecks • Dead packets: packets that exhaust network resources upstream but are then dropped downstream • [Topology figure: nodes S0-S2, R1-R4, C0-C2; links L0-L2; drop rates p1, p2] • In a multiple-bottleneck scenario, dead packets may degrade the performance of flows sharing the upstream bottleneck • We use modeling and extensive experiments to explore such scenarios

  21. Impact on network efficiency • Fraction of dead packets at the upstream bottleneck: • Fewer than 5% of flows experience multiple bottlenecks • α = 0.002475 for (1%, 5%), i.e., very small
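
One way to reproduce the α on this slide (my reconstruction, not a formula quoted from the paper): treat a packet at the upstream bottleneck as dead if it belongs to a multiple-bottleneck flow, survives the upstream bottleneck, and is then dropped downstream.

    multi_bottleneck_fraction = 0.05   # "< 5% of flows experience multiple bottlenecks"
    p1, p2 = 0.01, 0.05                # upstream and downstream drop rates, the "(1%, 5%)" case

    alpha = multi_bottleneck_fraction * (1 - p1) * p2
    print(alpha)                       # 0.002475, matching the slide; a very small fraction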

  22. Impact on end-to-end performance • What happens if the percentage of multiple-bottleneck flows increases dramatically? • What is the impact of the backoff-less TCP approach on end-to-end performance in such scenarios? • Emulab experiment • Set L0/(L0+L1) = 0.25, far above the current situation

  23. Impact on end-to-end performance • Trace-II: multiple-bottlenecked flows improve their response times, while upstream single-bottlenecked flows see only marginally degraded response times • Trace-I and Trace-III: similar results to Trace-II; response-time distributions improve for both sets of flows • Multiple-bottlenecked flows improve their response times without catastrophic effects on other flows, even when their presence is significant

  24. Realistic network topologies • Orbis-scaled HOT topology • 10 Gbps core links • 100 Mbps server edge links • 1-10 Mbps client-side links • 10 ms link delay • Workload • HTTP • HTTP + P2P • Response-time distributions improve in the absence of P2P traffic; the improvement is more significant in the presence of P2P traffic

  25. Incremental deployment • TCP's performance degrades non-negligibly when competing with TCP*(∞) • Two-step transition • TCP to TCP*(3) • TCP*(3) to TCP*(∞)

  26. Summary • Challenged the need for RTO backoff in TCP • End-to-end performance can only improve if endpoints uphold the implicit packet conservation principle • Extensive testbed evaluation for single-bottleneck and multiple-bottleneck scenarios, and with complex topologies • Incrementally deployable two-step transition

  27. Thank you

