
“Congestion Avoidance and Control” [Van Jacobson and M. Karels, 1988]


Presentation Transcript


  1. CS244 Spring 2014 Congestion Control “Congestion Avoidance and Control” [Van Jacobson and M. Karels, 1988] Sachin Katti

  2. Context Van Jacobson • Formerly at LBL • Internet pioneer • Now at PARC • Inventor of tcpdump and traceroute Michael J. Karels • Deeply involved in BSD development • Replaced Bill Joy as principal BSD developer Cited almost 4,000 times.

  3. Context Q: How did TCP work before 1988? Congestion collapse: • Breakdowns in performance noted in 1986 on the NSFNet. • 32 Kb/s links operating as slowly as 40 b/s. • The NSFNet was a forerunner of today’s Internet backbone (from 1986 to 1995).

  4. Quote Chirag Sangani: One thing I like about the paper is how it recognizes that performance issues with TCP are not a fault of the protocol itself, but of its implementations.

  5. Main contributions Seven new algorithms: • RTT variance estimation • Exponential retransmit timer backoff • Slow-start • More aggressive receiver ACK policy • Dynamic window sizing on congestion • Karn’s algorithm • Fast retransmit The paper explores the first five.
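
As a rough illustration of the second item, here is a minimal sketch of exponential retransmit timer backoff: each consecutive timeout doubles the retransmission timeout (RTO), backing the sender off an overloaded path. The 64-second clamp and the names are our assumptions for the sketch, not details taken from the paper.

    # Sketch of exponential retransmit timer backoff (illustrative only).
    RTO_MAX = 64.0  # upper clamp in seconds; an assumption, not from the paper

    def backed_off_rto(base_rto: float, timeouts: int) -> float:
        """RTO after `timeouts` consecutive losses: doubled each time, clamped."""
        return min(base_rto * (2 ** timeouts), RTO_MAX)

    # e.g. backed_off_rto(1.0, n) for n = 0..6 gives 1, 2, 4, 8, 16, 32, 64 seconds.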

  6. Packet Conservation ‘Conservation of packets’ principle: For a connection ‘in equilibrium’, i.e., running stably with a full window of data in transit… A new packet shouldn’t be put into the network until an old packet leaves. Q: How does TCP accomplish this?

  7. Packet conservation “There are only three ways for packet conservation to fail: • The connection doesn’t get to equilibrium, or • A sender injects a new packet before an old packet has exited, or • The equilibrium can’t be reached because of resource limits along the path.”

  8. Getting to equilibrium: Slow-start Q: What is slow-start trying to accomplish? Q: How long does it take slow-start to reach equilibrium? Q: Why is AIMD more aggressive in reducing the window size than in increasing it?

  9. Slow-start + AIMD Window size = min(advertised window, cwnd). [Figure: cwnd vs. time t. cwnd grows exponentially during “slow start”; at each timeout the window is halved, with slow start in operation until cwnd reaches half of the previous cwnd.]
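
A toy simulation of these dynamics may help; this is our own illustration of the textbook slow-start/AIMD behavior (one loop iteration per RTT), not the BSD implementation:

    # Toy simulation of slow-start + AIMD congestion window dynamics.
    # One iteration = one round-trip time; `losses` marks rounds with a timeout.

    def simulate(rounds, losses, adv_window=64):
        cwnd, ssthresh = 1, adv_window
        trace = []
        for t in range(rounds):
            if t in losses:                   # timeout: multiplicative decrease
                ssthresh = max(cwnd // 2, 1)  # remember half the old window
                cwnd = 1                      # restart slow start from one segment
            elif cwnd < ssthresh:             # slow start: exponential growth
                cwnd = min(cwnd * 2, ssthresh)
            else:                             # congestion avoidance: +1 per RTT
                cwnd += 1
            trace.append(min(cwnd, adv_window))  # window = min(advertised, cwnd)
        return trace

    # simulate(rounds=20, losses={8, 15}) traces the sawtooth from the figure.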

  10. Maintaining equilibrium Q: What mechanisms in TCP help it to maintain equilibrium?

  11. Maintaining Equilibrium Self-clocked: ACKs strobe new packets into the network Q: What are the consequences of self-clocking?
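
As a schematic of what “ACKs strobe new packets” means, here is a sketch under our own naming (not a real TCP stack): a new segment enters the network only when an ACK reports that an old one has left, which is exactly the conservation-of-packets discipline.

    # Schematic ACK-clocked sender: each arriving ACK releases one new segment,
    # so in equilibrium packets enter the network only as old ones leave.
    from collections import deque

    def ack_clocked_transfer(segments, window):
        pending = deque(segments)
        in_flight = deque()
        while pending and len(in_flight) < window:   # fill the initial window once
            in_flight.append(pending.popleft())      # "send" a segment
        while in_flight:
            in_flight.popleft()                      # ACK arrives: old packet exits
            if pending:
                in_flight.append(pending.popleft())  # ...strobing one new packet out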

  12. RTT Variation Estimate Q: Why is it important to estimate RTT well? Q: What do they mean by: RTT and σ_R scale like (1-ρ)^-1 and (1-ρ)^-3, where ρ is the network load? Q: How did they improve RTT estimation?
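
They replaced the fixed-β RFC 793 estimator with a mean-plus-mean-deviation estimator. Here is a floating-point sketch of the Jacobson/Karels estimator (the paper’s appendix gives a scaled integer version; the 4×rttvar multiplier follows the later convention also used in RFC 6298):

    # Sketch of the Jacobson/Karels RTT estimator, in floating point for clarity.
    # Gains of 1/8 (mean) and 1/4 (deviation) follow the paper's appendix.

    class RttEstimator:
        def __init__(self, first_sample):
            self.srtt = first_sample          # smoothed round-trip time
            self.rttvar = first_sample / 2    # smoothed mean deviation

        def update(self, sample):
            err = sample - self.srtt
            self.srtt += err / 8              # srtt = 7/8 * srtt + 1/8 * sample
            self.rttvar += (abs(err) - self.rttvar) / 4

        def rto(self):
            # Retransmit timeout: mean plus four mean deviations.
            return self.srtt + 4 * self.rttvar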

  13. TCP properties Q: What is TCP congestion control trying to accomplish? What are its goals for: • Long-lived flows • Short-lived flows • The network operator • The end user Q: How well does it accomplish these goals?

  14. Quote Stephen Quinonez: Network traffic these days is largely due to Internet surfing, involving many short HTTP sessions. The congestion control algorithms from this paper are more applicable to long-lived TCP connections, like watching Hulu or Netflix.

  15. Involving the “Gateway” Q: How do they propose the router (“gateway”) get involved in identifying congestion early? Q: Why might early detection be helpful? Q: What methods have since been proposed and tried?
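
Among the methods proposed since are Random Early Detection (RED) [Floyd and Jacobson, 1993] and Explicit Congestion Notification (ECN) [RFC 3168]. A much-simplified sketch of RED’s core idea follows; real RED uses an EWMA of queue length and a count-based correction, and the thresholds here are arbitrary illustration values:

    # Simplified sketch of a RED-style early-drop decision at a gateway.
    import random

    MIN_TH, MAX_TH, MAX_P = 5.0, 15.0, 0.1   # illustrative thresholds only

    def should_drop(avg_queue_len):
        if avg_queue_len < MIN_TH:
            return False                      # queue short: never drop
        if avg_queue_len >= MAX_TH:
            return True                       # queue long: always drop
        # In between, drop probability rises linearly toward MAX_P.
        p = MAX_P * (avg_queue_len - MIN_TH) / (MAX_TH - MIN_TH)
        return random.random() < p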

  16. Violating E2E • Laza Upatising: Specifically, the paper’s reliance on existing Internet router technologies to signal congestion is a turn in the right direction. Instead of relying on external methods, such as a congestion bit in each packet, the exponential backoff algorithm relies on packet drops as the signal for congestion. To relate back to an earlier paper, this reliance on packet drops fits well with the principle of a dumb, minimal network connecting intelligent nodes. • Also CJ Cullen and others …

  17. But does it? • Jessica Fisher: An interesting idea that the authors proposed was moving the algorithm into the gateways as an attempt at fair bandwidth sharing. The paper makes an excellent point that flows converge only in gateways, so only gateways have enough information to control sharing and fair allocation, though this seems to be in direct opposition to the end-to-end principle. However, the endpoints cannot be trusted to implement a fairness algorithm or protocol, since endpoints can be self-serving and thus antagonistic toward fairness when they could take more than their fair share of the bandwidth. Thus, the end-to-end principle doesn’t really apply to fairness enforcement, so I agree with the authors that the gateway would be a great place to implement fairness protocols. • Also Jongho Shin

  18. TCP Shortcomings • Alexander Valderrama: However, for the majority of the paper there is one major assumption that they repeat which, for modern networks and devices, is no longer true: that >99% of the time packet loss is due to congestion. In modern wireless and cellular networks packets are lost all the time due to the nature of the wireless medium, and due to the difficulty of maintaining a connection while a device is in motion. • Also Wei Shi …

  19. Cheaters • David Silver: The algorithm depends on everyone else doing the right thing. If someone wanted to cheat and keep pumping packets into the network, everyone else would back off to make room for it, essentially letting it cheat. • Also Suzanne Stathatos, Jesse Goodman and Wei Shi

  20. Meta Comments Style Q: What do you think of the style of the paper? Q: Is it rigorous or intuitive? Quote: "...they did require tremendous effort: the upgrading of almost every end host on the Internet. Ironically, this helped create the Internet of today, which is so large that upgrading every host is nearly impossible."
