
Congestion Avoidance



  1. Congestion Avoidance CS 332

  2. Congestion Avoidance
  • TCP congestion control strategy:
    • Increase load until congestion occurs, then back off from that point
    • Has to create losses to determine the connection's bandwidth
  • Alternative:
    • Predict when congestion is about to happen, then reduce host sending rates just before packets start being dropped
    • Not widely adopted at this time

  3. DECbit
  • Designed for the Digital Network Architecture (DNA)
  • Connectionless network with a connection-oriented transport protocol (sound familiar?)
  • General idea:
    • Router monitors its load and sets a binary congestion bit when congestion is imminent
    • Receiver copies the congestion bit into the ACK it sends back
    • Sender cuts its sending rate

  4. DECbit Details
  • Router measures the average queue length over the previous busy+idle cycle plus the current busy cycle
    • If this average is >= 1, the router sets the congestion bit
    • A threshold of 1 seems to optimize power, the tradeoff between higher throughput and longer delay
  • Host maintains a congestion window
    • If fewer than 50% of the last window's worth of packets have the congestion bit set, increase the window by one packet
    • Else decrease the window to 0.875 times its current value
    • Note the additive increase / multiplicative decrease (a sketch of this adjustment follows below)
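A minimal sketch of the DECbit host-side window adjustment described above. The 50% threshold and the 0.875 factor come from the slide; the function name and the way the congestion bits are passed in are illustrative.

    def decbit_adjust_window(window_pkts, congestion_bits):
        """Adjust the DECbit congestion window once per window's worth of ACKs.

        window_pkts     -- current congestion window, in packets
        congestion_bits -- one boolean per ACKed packet in the last window,
                           True if its congestion bit was set
        """
        set_fraction = sum(congestion_bits) / len(congestion_bits)
        if set_fraction < 0.5:
            return window_pkts + 1          # additive increase: one packet
        return max(1, window_pkts * 0.875)  # multiplicative decrease

    # Example: 2 of 10 packets carried the bit, so the window grows by one packet
    print(decbit_adjust_window(10, [True, True] + [False] * 8))  # 11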

  5. Random Early Detection (RED)
  • Similar to DECbit
  • Invented by Sally Floyd and Van Jacobson in the early 1990s
  • Designed to be used with TCP
  • Two differences between RED and DECbit:
    • RED implicitly notifies the source of imminent congestion by dropping a packet, which causes a timeout or duplicate ACKs
    • When RED drops a packet and how it decides which packet to drop (DECbit just drops when the queue fills)

  6. RED Philosophy
  • Philosophy: drop a few packets before the buffer is exhausted, in the hope that this avoids having to drop lots of packets later (note: RED could have simply marked packets instead of dropping them)
  • Queuing philosophy: early random drop
    • Drop an arriving packet with some drop probability whenever the queue length exceeds some drop level
  • The algorithm defines:
    • How to monitor the queue length
    • When to drop a packet

  7. RED (cont.)
  • Compute the average queue length with a weighted running average, similar to the TCP timeout estimate:
    AvgLen = (1 – Weight) × AvgLen + Weight × SampleLen,   where 0 < Weight < 1
  • Effectively a low-pass filter, to handle the bursty nature of traffic (see the sketch below)
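A minimal sketch of the running-average computation above. The value of SAMPLE_WEIGHT is illustrative, not one given on the slide; the slide only requires 0 < Weight < 1.

    SAMPLE_WEIGHT = 0.002  # illustrative; small values smooth out short bursts

    def update_avg_len(avg_len, sample_len, weight=SAMPLE_WEIGHT):
        """Exponentially weighted moving average of the instantaneous queue length."""
        return (1 - weight) * avg_len + weight * sample_len

    # A short burst barely moves the average, which is the point of the filter
    avg = 2.0
    for sample in [2, 2, 50, 50, 2, 2]:
        avg = update_avg_len(avg, sample)
    print(round(avg, 3))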

  8. More RED
  • Two parameters: MinThreshold, MaxThreshold

    if (AvgLen <= MinThreshold) {
        queue_packet();
    } else if (MinThreshold < AvgLen < MaxThreshold) {
        calculate probability P;
        drop arriving packet with probability P;
    } else if (MaxThreshold <= AvgLen) {
        drop arriving packet;
    }

  9. Still More RED
  • Rationale: if AvgLen reaches MaxThreshold, the gentle approach isn't working (though research has indicated that a smoother transition to complete dropping might be more appropriate)
  • Between the two thresholds, the drop probability grows linearly with AvgLen (see the sketch below):
    P = MaxP × (AvgLen – MinThreshold) / (MaxThreshold – MinThreshold)
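A minimal sketch combining the threshold logic from slide 8 with the linear drop probability above. The threshold and MaxP values are illustrative, not taken from the slides.

    import random

    MIN_THRESHOLD = 5    # illustrative values, in packets
    MAX_THRESHOLD = 15
    MAX_P = 0.02

    def red_should_drop(avg_len):
        """Return True if the arriving packet should be dropped (basic RED, no count)."""
        if avg_len <= MIN_THRESHOLD:
            return False                      # queue the packet
        if avg_len < MAX_THRESHOLD:
            p = MAX_P * (avg_len - MIN_THRESHOLD) / (MAX_THRESHOLD - MIN_THRESHOLD)
            return random.random() < p        # drop with probability P
        return True                           # AvgLen >= MaxThreshold: always drop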

  10. More RED than you can shake a stick at
  • A problem: as is, packet drops are not well distributed in time; they occur in clusters
  • Because a connection's packets are likely to arrive in bursts, this clustering causes multiple drops within a single connection
  • Bad: only one drop per round trip is needed to slow a connection down, whereas lots of drops could send the connection into slow start

  11. RED just won't go away…
  • Solution: make P a function of both AvgLen and how long it has been since the last packet was dropped:
    TempP = MaxP × (AvgLen – MinThreshold) / (MaxThreshold – MinThreshold)
    P = TempP / (1 – count × TempP)
  • count: how many packets have been queued while AvgLen has been between the two thresholds
  • Note that a larger count => a larger P, which spreads out the occurrence of drops (see the sketch below)
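A minimal sketch of the count-adjusted drop probability above, showing how P climbs as packets are queued without a drop. The function name and parameter values are illustrative.

    def red_drop_probability(avg_len, count, min_th=5, max_th=15, max_p=0.02):
        """Count-adjusted RED drop probability (AvgLen between the thresholds)."""
        temp_p = max_p * (avg_len - min_th) / (max_th - min_th)
        return temp_p / (1 - count * temp_p)

    # With AvgLen held fixed, the probability rises as count grows,
    # so the gaps between drops become more uniform in time.
    for count in (0, 20, 40):
        print(count, round(red_drop_probability(10.0, count), 4))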

  12. RED again
  • Because packet drops are random, flows that use more bandwidth have a higher probability of having a packet dropped, so a sense of fairness is built in (sort of)
  • At times the instantaneous queue length will exceed MaxThreshold (even though AvgLen may not). The queue needs extra space above MaxThreshold to handle these bursts without forcing the router into tail-drop mode

  13. Tuning RED
  • If traffic is bursty, MinThreshold should be large enough to keep link utilization at a fairly high level
  • MaxThreshold – MinThreshold should be larger than the typical increase in the calculated queue length during one RTT (a common rule is to set MaxThreshold to twice MinThreshold)
  • The delay from when the router drops a packet to when it sees relief is at least one RTT, so it makes no sense to respond to congestion on time scales shorter than one RTT (100 ms is a good rule of thumb). Choose Weight so that changes on time scales shorter than an RTT are filtered out
  • Caveat: these settings all depend on the traffic mix (i.e. the network workload). An active area of research

  14. Source-Based Congestion Avoidance
  • Key: the source watches for clues that router queues are building up
  • Scheme 1: the congestion window increases as in TCP, but every two round-trip delays the source checks whether the current RTT is greater than the average of the minimum and maximum RTTs observed so far. If so, it decreases the window by one-eighth (see the sketch below)
  • Scheme 2: every RTT, increase the window by one packet and compare the throughput achieved to the throughput with a window one packet smaller (i.e. find the slope of the throughput-vs-window curve). If the difference is less than half the throughput achieved when only one packet is in the network, decrease the window by one packet. (Throughput is calculated as (number of bytes outstanding in the network) / RTT)
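A minimal sketch of Scheme 1 above, run once every two round-trip delays. The function name and the way the RTT history is passed in are illustrative.

    def scheme1_adjust(window_pkts, current_rtt, min_rtt, max_rtt):
        """Every two round-trip delays: back off if the current RTT suggests
        router queues are building up; otherwise leave the window alone."""
        if current_rtt > (min_rtt + max_rtt) / 2:
            return max(1, window_pkts - window_pkts / 8)  # decrease by one-eighth
        return window_pkts

    # Example: RTTs observed so far range from 100 ms to 300 ms;
    # a 250 ms sample exceeds the 200 ms midpoint, so the window shrinks.
    print(scheme1_adjust(16, current_rtt=0.250, min_rtt=0.100, max_rtt=0.300))  # 14.0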

  15. TCP Vegas
  • Metaphor: driving on ice. The speedometer (window size) says you're going 30 mph, but you know (observed throughput) you're only going 10. The extra energy is absorbed by the tires (router buffers)
  • TCP Vegas idea: measure and control the amount of "extra" data in the network (i.e. data the source would not have transmitted if it were trying to match the available bandwidth)
  • Too much extra data => delay and congestion
  • Too little extra data => slow response to transient increases in available bandwidth

  16. TCP Vegas
  [Figure: trace of a TCP connection showing the congestion window, the average sending rate (throughput), and the average queue size at the bottleneck router]

  17. TCP Vegas
  • BaseRTT: RTT of a packet when the flow is not congested (set to the minimum RTT observed)
  • ExpectedRate = CongestionWindow / BaseRTT (CongestionWindow is TCP's; it is assumed here to equal the number of bytes in transit)
  • ActualRate: record the RTT of a distinguished packet, count the bytes sent between that packet's transmission and the return of its ACK, and divide by the RTT. Done once per round trip
  • Compare ActualRate to ExpectedRate and adjust the window accordingly (the rate computation is sketched below)
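A minimal sketch of the two rate estimates above. How the per-RTT byte count and the distinguished packet's RTT are actually recorded is assumed here, not specified on the slide.

    def vegas_rates(congestion_window_bytes, base_rtt, bytes_sent_this_rtt, measured_rtt):
        """Return (ExpectedRate, ActualRate) in bytes per second.

        congestion_window_bytes -- TCP's congestion window (bytes in transit)
        base_rtt                -- minimum RTT observed so far, in seconds
        bytes_sent_this_rtt     -- bytes sent between the distinguished packet's
                                   transmission and the return of its ACK
        measured_rtt            -- RTT of the distinguished packet, in seconds
        """
        expected = congestion_window_bytes / base_rtt
        actual = bytes_sent_this_rtt / measured_rtt
        return expected, actual

    # Example: 16 KB window, 100 ms BaseRTT, but the last RTT stretched to 125 ms
    print(vegas_rates(16_000, 0.100, 16_000, 0.125))  # (160000.0, 128000.0)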

  18. TCP Vegas
  • Diff = ExpectedRate – ActualRate
    • Must be nonnegative, or we need to change BaseRTT
  • Two thresholds α, β with α < β
    • α corresponds roughly to too little extra data in the network
    • β corresponds roughly to too much extra data in the network
  • If Diff < α, increase the window linearly during the next RTT
  • If Diff > β, decrease the window linearly during the next RTT
  • If α < Diff < β, leave the window alone (see the sketch below)
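A minimal sketch of the per-RTT adjustment above, with α and β given in KBps as in slide 21's example. The helper name and the one-packet step size are illustrative assumptions; the slide only says the window changes linearly.

    ALPHA_KBPS = 30    # "too little extra data" threshold (slide 21's example value)
    BETA_KBPS = 60     # "too much extra data" threshold
    PACKET_BYTES = 1_000

    def vegas_adjust(window_bytes, expected_rate, actual_rate):
        """Adjust the congestion window once per RTT based on Diff = Expected - Actual."""
        diff_kbps = (expected_rate - actual_rate) / 1_000  # rates given in bytes/sec
        if diff_kbps < ALPHA_KBPS:
            return window_bytes + PACKET_BYTES   # linear increase next RTT
        if diff_kbps > BETA_KBPS:
            return window_bytes - PACKET_BYTES   # linear decrease next RTT
        return window_bytes                      # between alpha and beta: hold

    # Using the rates from the previous sketch: Diff = 32 KBps, so the window holds
    print(vegas_adjust(16_000, 160_000, 128_000))  # 16000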

  19. Intuition
  • The farther actual throughput falls below expected throughput, the more congestion there is in the network, so sending should be reduced
  • If actual throughput gets too close to expected throughput, the connection is in danger of underutilizing the available bandwidth
  • Goal: keep between α and β extra bytes in the network

  20. TCP Vegas
  [Figure: trace of the TCP Vegas congestion window, with ExpectedRate (colored line), ActualRate (black line), and the shaded area marking the region between the α and β thresholds]

  21. TCP Vegas
  • α and β are compared to throughput rates, so they are typically given in KBps
  • Intuition: how many extra buffers the connection is occupying in the network
  • Example: BaseRTT = 100 ms, packet size 1 KB, α = 30 KBps, β = 60 KBps. So in one RTT the connection keeps between 30 KBps × 0.1 s = 3 KB and 60 KBps × 0.1 s = 6 KB of extra data in the network (i.e. 3 to 6 packets, or equivalently 3 to 6 extra buffers)
  • In practice, setting α to one buffer and β to three buffers works well
  • TCP Vegas decreases the window linearly (so why isn't it unstable?)
