
Congestion Control




1. Congestion Control
Outline
• Queuing Discipline
• Reacting to Congestion
• Avoiding Congestion
• Quality of Service

2. Issues
[Figure: Source 1 (10-Mbps Ethernet) and Source 2 (100-Mbps FDDI) feed a router that forwards over a 1.5-Mbps T1 link to the destination]
• Two sides of the same coin
  • pre-allocate resources so as to avoid congestion
  • control congestion if (and when) it occurs
• Two points of implementation
  • hosts at the edges of the network (transport protocol)
  • routers inside the network (queuing discipline)
• Underlying service model
  • best-effort
  • multiple qualities of service (QoS)

3. Framework
[Figure: Sources 1-3 and Destinations 1-2 connected through a network of routers]
• Connectionless flows
  • sequence of packets sent between a source/destination pair
  • maintain soft state at the routers
• Taxonomy
  • router-centric versus host-centric
  • reservation-based versus feedback-based
  • window-based versus rate-based

4. Evaluation
[Figure: throughput/delay (power) versus load, peaking at the optimal load]
• Fairness
• Power (ratio of throughput to delay)

5. Queuing Discipline
[Figure: Flows 1-4 receiving round-robin service onto a single output]
• First-In-First-Out (FIFO)
  • does not discriminate between traffic sources
  • drop policy (tail-drop, random early drop)
• Fair Queuing (FQ)
  • explicitly segregates traffic based on flows
  • ensures no flow captures more than its share of capacity
  • variation: weighted fair queuing (WFQ)
• Problem?

6. FQ Algorithm
• Suppose the clock ticks each time a bit is transmitted
• Let Pi denote the length of packet i
• Let Si denote the time when the router starts to transmit packet i
• Let Fi denote the time when the router finishes transmitting packet i
• Fi = Si + Pi
• When does the router start transmitting packet i?
  • if it has not yet finished packet i-1 from this flow, then immediately after the last bit of i-1 (Fi-1)
  • if there are no current packets for this flow, then when packet i arrives (call this Ai)
• Thus: Fi = MAX(Fi-1, Ai) + Pi
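
A minimal sketch of this finish-time bookkeeping for a single flow (the function and variable names are illustrative, not from the slides):

```python
# F_i = max(F_{i-1}, A_i) + P_i, with the virtual clock ticking once per
# transmitted bit, so packet lengths and times are both measured in ticks.
def finish_time(prev_finish, arrival, length):
    return max(prev_finish, arrival) + length

# Example: an 800-bit packet arrives at tick 1000 while the flow's previous
# packet finishes at tick 1200, so it finishes at 1200 + 800 = 2000.
print(finish_time(prev_finish=1200, arrival=1000, length=800))
```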

7. FQ Algorithm (cont)
[Figure: two examples with Flow 1 and Flow 2 sharing one output, packets labeled with timestamps F = 2, 5, 8, 10, 10; panel (a) shows arriving packets, panel (b) shows a packet already transmitting]
• For multiple flows
  • calculate Fi for each packet that arrives on each flow
  • treat all Fi's as timestamps
  • next packet to transmit is the one with the lowest timestamp
• Not perfect: can't preempt the current packet
• Example
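
A possible multi-flow scheduler built on that rule, assuming a heap keyed by finish time (the class and its fields are assumptions made for illustration, not the slides' code):

```python
import heapq

class FairQueue:
    """Sketch: track per-flow finish times, transmit the lowest timestamp next."""
    def __init__(self):
        self.heap = []            # entries: (finish_time, seq, flow, length)
        self.last_finish = {}     # F_{i-1} for each flow
        self.seq = 0              # tie-breaker for equal finish times

    def enqueue(self, flow, length, arrival):
        f = max(self.last_finish.get(flow, 0), arrival) + length
        self.last_finish[flow] = f
        heapq.heappush(self.heap, (f, self.seq, flow, length))
        self.seq += 1

    def dequeue(self):
        # The packet currently on the wire is never preempted; we only choose
        # among packets that are still queued.
        return heapq.heappop(self.heap) if self.heap else None

fq = FairQueue()
fq.enqueue(flow=1, length=8, arrival=0)
fq.enqueue(flow=2, length=5, arrival=0)
print(fq.dequeue())   # flow 2's packet (F = 5) is transmitted first
```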

8. TCP Congestion Control
• Idea
  • assumes a best-effort network (FIFO or FQ routers)
  • each source determines network capacity for itself
  • uses implicit feedback
  • ACKs pace transmission (self-clocking)
• Challenge
  • determining the available capacity in the first place
  • adjusting to changes in the available capacity

9. Additive Increase/Multiplicative Decrease
• Objective: adjust to changes in the available capacity
• New state variable per connection: CongestionWindow
  • limits how much data the source has in transit
  MaxWin = MIN(CongestionWindow, AdvertisedWindow)
  EffWin = MaxWin - (LastByteSent - LastByteAcked)
• Idea:
  • increase CongestionWindow when congestion goes down
  • decrease CongestionWindow when congestion goes up
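
A small worked example of the two window formulas above (all numbers are made up for illustration):

```python
# How much new data the source may still put in transit, per MaxWin and EffWin.
congestion_window = 8 * 1460          # bytes (illustrative)
advertised_window = 16 * 1460         # bytes (illustrative)
last_byte_sent, last_byte_acked = 20000, 12000

max_win = min(congestion_window, advertised_window)
eff_win = max_win - (last_byte_sent - last_byte_acked)
print(eff_win)   # 11680 - 8000 = 3680 bytes may still be sent
```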

10. AIMD (cont)
• Question: how does the source determine whether or not the network is congested?
• Answer: a timeout occurs
  • a timeout signals that a packet was lost
  • packets are seldom lost due to transmission error
  • a lost packet implies congestion

11. AIMD (cont)
[Figure: packets flowing from source to destination with ACKs returning]
• Algorithm
  • increment CongestionWindow by one packet per RTT (linear increase)
  • divide CongestionWindow by two whenever a timeout occurs (multiplicative decrease)
• In practice: increment a little for each ACK
  Increment = (MSS * MSS)/CongestionWindow
  CongestionWindow += Increment
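
A rough sketch of these two rules as per-event window updates (the function names and byte-based units are assumptions, not TCP's actual source code):

```python
def on_ack(cwnd, mss):
    # Additive increase: a little per ACK, roughly one MSS per RTT in total.
    return cwnd + (mss * mss) / cwnd

def on_timeout(cwnd, mss):
    # Multiplicative decrease: halve the window, but keep at least one segment.
    return max(cwnd / 2, mss)
```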

12. AIMD (cont)
[Figure: trace of CongestionWindow (KB) versus time (seconds, 1.0-10.0)]
• Trace: sawtooth behavior

13. Slow Start
[Figure: packets flowing from source to destination with ACKs returning]
• Objective: determine the available capacity in the first place
• Idea:
  • begin with CongestionWindow = 1 packet
  • double CongestionWindow each RTT (increment by 1 packet for each ACK)
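
A sketch of how slow start and additive increase might be combined per ACK, assuming a ssthresh variable separating the two phases (all names are illustrative):

```python
def on_ack(cwnd, mss, ssthresh):
    if cwnd < ssthresh:
        return cwnd + mss                 # slow start: +1 packet per ACK,
                                          # which doubles the window each RTT
    return cwnd + (mss * mss) / cwnd      # otherwise linear (AIMD) increase
```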

14. Slow Start (cont)
[Figure: trace of CongestionWindow (KB) versus time (seconds, 1.0-9.0)]
• Exponential growth, but slower than all at once
• Used…
  • when first starting a connection
  • when the connection goes dead waiting for a timeout
• Trace
• Problem: can lose up to half a CongestionWindow's worth of data

15. Fast Retransmit and Fast Recovery
[Figure: sender/receiver timeline in which packet 3 is lost; packets 4-6 each produce a duplicate ACK 2, the duplicates trigger a retransmission of packet 3, and the receiver then returns ACK 6]
• Problem: coarse-grain TCP timeouts lead to idle periods
• Fast retransmit: use duplicate ACKs to trigger retransmission
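
A duplicate-ACK counter sketch of fast retransmit; the threshold of three duplicate ACKs is the usual TCP choice, and all names here are illustrative:

```python
DUP_ACK_THRESHOLD = 3

def retransmit(seq_no):
    print(f"fast retransmit of segment {seq_no}")   # stand-in for resending

def on_ack(ack_no, state):
    if ack_no == state["last_ack"]:
        state["dup_count"] += 1
        if state["dup_count"] == DUP_ACK_THRESHOLD:
            retransmit(state["last_ack"] + 1)       # resend the missing segment
    else:
        state["last_ack"] = ack_no
        state["dup_count"] = 0

state = {"last_ack": 2, "dup_count": 0}
for ack in (2, 2, 2):        # duplicate ACK 2s produced by packets 4, 5, 6
    on_ack(ack, state)       # the third duplicate triggers retransmission of 3
```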

16. Results
[Figure: trace of CongestionWindow (KB) versus time (seconds, 1.0-7.0)]
• Fast recovery
  • skip the slow start phase
  • go directly to half the last successful CongestionWindow (ssthresh)
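
A sketch of that fast-recovery adjustment (the names and the two-segment floor are assumptions, not a definitive implementation):

```python
def on_fast_retransmit(cwnd, mss):
    # Drop to half of the last successful window (ssthresh) instead of
    # restarting slow start from one packet.
    ssthresh = max(cwnd / 2, 2 * mss)
    return ssthresh, ssthresh      # new (ssthresh, CongestionWindow)
```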

17. Congestion Avoidance
• TCP's strategy
  • control congestion once it happens
  • repeatedly increase load in an effort to find the point at which congestion occurs, and then back off
• Alternative strategy
  • predict when congestion is about to happen
  • reduce rate before packets start being discarded
  • call this congestion avoidance, instead of congestion control
• Two possibilities
  • router-centric: DECbit and RED gateways
  • host-centric: TCP Vegas

18. DECbit
[Figure: queue length versus time, with the previous and current busy+idle cycles forming the averaging interval that ends at the current time]
• Add a binary congestion bit to each packet header
• Router
  • monitors average queue length over the last busy+idle cycle
  • sets the congestion bit if average queue length > 1
  • attempts to balance throughput against delay

19. DECbit (cont)
• Destination echoes the bit back to the source
• Source records how many packets resulted in a set bit
• If less than 50% of the last window's worth had the bit set
  • increase CongestionWindow by 1 packet
• If 50% or more of the last window's worth had the bit set
  • decrease CongestionWindow to 0.875 times its previous value
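
A sketch of this source-side rule, applied once per window's worth of ACKs (variable names are illustrative):

```python
def decbit_adjust(cwnd_pkts, bits_set, window_pkts):
    """cwnd_pkts: window in packets; bits_set: packets of the last window whose
    echoed congestion bit was set; window_pkts: size of that window."""
    if bits_set < 0.5 * window_pkts:
        return cwnd_pkts + 1            # additive increase by one packet
    return cwnd_pkts * 0.875            # multiplicative decrease
```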

20. Random Early Detection (RED)
• Notification is implicit
  • just drop the packet (TCP will time out)
  • could make it explicit by marking the packet
• Early random drop
  • rather than wait for the queue to become full, drop each arriving packet with some drop probability whenever the queue length exceeds some drop level

21. RED Details
[Figure: FIFO queue with MinThreshold and MaxThreshold positions marked against AvgLen]
• Compute average queue length
  AvgLen = (1 - Weight) * AvgLen + Weight * SampleLen
  0 < Weight < 1 (usually 0.002)
  SampleLen is the queue length each time a packet arrives
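
The averaging step above as a one-line update (a sketch; the 0.002 default is the typical weight the slide mentions):

```python
def update_avg_len(avg_len, sample_len, weight=0.002):
    # Exponentially weighted moving average of the instantaneous queue length.
    return (1 - weight) * avg_len + weight * sample_len
```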

22. RED Details (cont)
• Two queue length thresholds
  if AvgLen <= MinThreshold then
    enqueue the packet
  if MinThreshold < AvgLen < MaxThreshold then
    calculate probability P
    drop the arriving packet with probability P
  if MaxThreshold <= AvgLen then
    drop the arriving packet

23. RED Details (cont)
[Figure: drop probability curve, P(drop) versus AvgLen, rising from 0 at MinThresh to MaxP at MaxThresh, then jumping to 1.0]
• Computing probability P
  TempP = MaxP * (AvgLen - MinThreshold)/(MaxThreshold - MinThreshold)
  P = TempP/(1 - count * TempP)
• Drop probability curve
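
Putting the two RED slides together, a sketch of the drop decision (names are assumptions; count is the number of newly arrived packets enqueued since the last drop, which spaces drops out more evenly):

```python
import random

def red_should_drop(avg_len, count, min_th, max_th, max_p):
    if avg_len <= min_th:
        return False                      # queue is short: always enqueue
    if avg_len >= max_th:
        return True                       # queue is long: always drop
    temp_p = max_p * (avg_len - min_th) / (max_th - min_th)
    p = temp_p / (1 - count * temp_p)     # grows with time since the last drop
    return random.random() < p
```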

24. Tuning RED
• The probability of dropping a particular flow's packet(s) is roughly proportional to the share of the bandwidth that flow is currently getting
• MaxP is typically set to 0.02, meaning that when the average queue size is halfway between the two thresholds, the gateway drops roughly one out of 50 packets
• If traffic is bursty, then MinThreshold should be sufficiently large to allow link utilization to be maintained at an acceptably high level
• The difference between the two thresholds should be larger than the typical increase in the calculated average queue length in one RTT; setting MaxThreshold to twice MinThreshold is reasonable for traffic on today's Internet
• Penalty box for offenders

25. TCP Vegas
[Figure: trace of congestion window (KB), sending rate (KBps), and queue size in the router, each versus time (seconds, 0.5-8.5)]
• Idea: the source watches for some sign that the router's queue is building up and congestion will happen soon; e.g.,
  • RTT grows
  • sending rate flattens

26. Algorithm
• Let BaseRTT be the minimum of all measured RTTs (commonly the RTT of the first packet)
• If not overflowing the connection, then
  ExpectedRate = CongestionWindow/BaseRTT
• Source calculates the sending rate (ActualRate) once per RTT
• Source compares ActualRate with ExpectedRate
  Diff = ExpectedRate - ActualRate
  if Diff < a
    increase CongestionWindow linearly
  else if Diff > b
    decrease CongestionWindow linearly
  else
    leave CongestionWindow unchanged
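
A sketch of this comparison in code, using the a and b thresholds given on the next slide; the units and names are assumptions (window in packets, rates in packets per second):

```python
def vegas_adjust(cwnd_pkts, base_rtt, actual_rate, a=1, b=3):
    expected_rate = cwnd_pkts / base_rtt
    diff_pkts = (expected_rate - actual_rate) * base_rtt   # extra packets queued
    if diff_pkts < a:
        return cwnd_pkts + 1       # linear increase
    if diff_pkts > b:
        return cwnd_pkts - 1       # linear decrease
    return cwnd_pkts               # between a and b: leave the window alone
```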

27. Algorithm (cont)
[Figure: trace of congestion window (KB) and rate (KBps) versus time (seconds, 0.5-8.0)]
• Parameters
  • a = 1 packet
  • b = 3 packets
• Even faster retransmit
  • keep fine-grained timestamps for each packet
  • check for a timeout on the first duplicate ACK

28. Realtime Applications
[Figure: audio path with microphone, sampler, A→D converter, buffer, D→A converter, and speaker]
• Require "deliver on time" assurances
  • must come from inside the network
• Example application (audio)
  • sample voice once every 125 μs
  • each sample has a playback time
  • packets experience variable delay in the network
  • add a constant factor to the playback time: the playback point
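
A tiny sketch of the playback-point idea for this audio example (the 100 ms offset and all names are illustrative assumptions):

```python
PLAYBACK_OFFSET = 0.100        # seconds added to each sample's generation time

def playback_time(generation_time):
    return generation_time + PLAYBACK_OFFSET

def usable(generation_time, arrival_time):
    # A packet can be played only if it arrives before its playback point.
    return arrival_time <= playback_time(generation_time)
```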

29. Playback Buffer
[Figure: sequence number versus time, showing packet generation, network delay, packet arrival, buffering, and playback]

30. Example Distribution of Delays
[Figure: packets (%) versus delay (milliseconds, 50-200 ms), with the 90%, 97%, 98%, and 99% points marked]

31. Integrated Services
• Service classes
  • guaranteed
  • controlled-load
• Mechanisms
  • signalling protocol
  • admission control
  • policing
  • packet scheduling

32. Flowspec
• Rspec: describes the service requested from the network
  • controlled-load: none
  • guaranteed: delay target
• Tspec: describes the flow's traffic characteristics
  • average bandwidth + burstiness: token bucket filter
    • token rate r
    • bucket depth B
    • must have a token to send a byte
    • must have n tokens to send n bytes
    • start with no tokens
    • accumulate tokens at a rate of r per second
    • can accumulate no more than B tokens
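
A token-bucket sketch matching the Tspec description above (rate r in tokens per second, depth B in tokens, one token per byte; the class and its fields are illustrative):

```python
class TokenBucket:
    def __init__(self, rate, depth):
        self.rate = rate        # r: tokens accumulated per second
        self.depth = depth      # B: maximum tokens that can accumulate
        self.tokens = 0.0       # start with no tokens
        self.last = 0.0         # time of the previous update (seconds)

    def send(self, nbytes, now):
        # Add tokens for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.depth, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:       # need n tokens to send n bytes
            self.tokens -= nbytes
            return True
        return False

tb = TokenBucket(rate=1000, depth=500)   # 1000 bytes/s average, 500-byte bursts
print(tb.send(400, now=1.0))             # True: tokens accrued, capped at 500
print(tb.send(400, now=1.0))             # False: only 100 tokens remain
```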

33. Differentiated Services
• Problem with IntServ: scalability
• Idea: segregate packets into a small number of classes
  • e.g., premium vs best-effort
• Packets are marked according to class at the edge of the network
• Core routers implement some per-hop behavior (PHB)
• Example: Expedited Forwarding (EF)
  • rate-limit EF packets at the edges
  • PHB implemented with class-based priority queues or WFQ

34. DiffServ (cont)
• Assured Forwarding (AF)
  • customers sign service agreements with ISPs
  • edge routers mark packets as being "in" or "out" of profile
  • core routers run RIO: RED with in/out
[Figure: RIO drop-probability curves, P(drop) versus AvgLen, with separate Min/Max thresholds and MaxP for "out" and "in" packets]
