
Congestion Control


Presentation Transcript


  1. Congestion Control Lecturer: Carsten Griwodz Email: griff@ifi.uio.no

  2. Congestion • 2 problem areas • Receiver capacity • Approached by flow control • Network capacity • Approached by congestion control • Possible approach to avoid both bottlenecks • Receiver capacity: “actual window”, credit window • Network capacity: “congestion window” • Valid send window = min(actual window, congestion window) • Terms • Traffic • All packets from all sources • Traffic class • All packets from all sources with a common distinguishing property, e.g. priority
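The slide's rule for the valid send window can be stated as a one-liner. Below is a minimal sketch, assuming hypothetical names (credit_window, congestion_window, bytes_in_flight are not from the slides):

```python
# Sketch: the sender may only have outstanding what both the receiver
# (credit window) and the network (congestion window) can absorb.
def valid_send_window(credit_window: int, congestion_window: int) -> int:
    """Bytes the sender is allowed to have in flight."""
    return min(credit_window, congestion_window)

def sendable_now(bytes_in_flight: int, credit_window: int, congestion_window: int) -> int:
    """How many additional bytes may be sent right now."""
    return max(0, valid_send_window(credit_window, congestion_window) - bytes_in_flight)
```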

  3. Congestion • Persistent congestion • Router stays congested for a long time • Excessive traffic offered • Transient congestion • Congestion occurs for a while • Router is temporarily overloaded • Often due to burstiness • Burstiness • Average rate r • Burst size b (number of packets that appear at the same time) • Token bucket model
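The token bucket model referred to here bounds burstiness by the average rate r and the burst size b; written out as the standard constraint (a restatement, not taken verbatim from the slides):

```latex
% Token-bucket bound: the traffic A(t) a source may emit in any interval
% of length t is limited by the average rate r and the burst size b.
A(t) \le r \cdot t + b
```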

  4. Congestion • Reasons for congestion, among others • Incoming traffic overloads outgoing lines • Router too slow for routing algorithms • Too little buffer space in router • When too much traffic is offered • Congestion sets in • Performance degrades sharply • [Figure: packets delivered vs. packets sent by application, comparing the perfect, desirable, and congested curves against the maximum transmission capacity of the subnet] • Congestion tends to amplify itself • Network layer: unreliable service • Router simply drops packet due to congestion • Transport layer: reliable service • Packet is retransmitted • Congestion => more delays at end-systems • Higher delays => retransmissions • Retransmissions => additional traffic

  5. Congestion Control • General methods of resolution • Increase capacity • Decrease traffic • Strategies • Repair • When congestion is noticed • Explicit feedback (packets are sent from the point of congestion) • Implicit feedback (source infers congestion from local observations, e.g. timeouts) • Methods: drop packets, choke packets, hop-by-hop choke packets, fair queuing, ... • Avoid • Before congestion happens • Initiate countermeasures at the sender • Initiate countermeasures at the receiver • Methods: leaky bucket, token bucket, isarithmic congestion control, reservation, …

  6. Repair • Principle • No resource reservation • Necessary steps • Congestion detected • Introduce appropriate procedures for reduction

  7. Repair by Packet dropping • Principle • At each intermediate system • Queue length is tested • Incoming packet is dropped if it cannot be buffered • Dropping may have to start before the queue is entirely full • To provide • Connectionless service • No preparations necessary • Connection-oriented service • Buffer packet until reception has been acknowledged

  8. Repair by Packet dropping • Assigning buffers to queues at output lines • 1. Maximum number of buffers per output line • Packet may be dropped although buffers for other lines are free • 2. Minimal number of buffers per output line • Sequences to the same output line (“bursts”) lead to drops • 3. Dynamic buffer assignment • Lightly used lines may be starved

  9. Repair by Packet dropping • 4. Content-related dropping: relevance • Relevance of a data connection as a whole, or of every packet from one end system to another end system • Examples • Favor IPv6 packets with flow id 0x4b5 over all others • Favor packets of TCP connection (65.246.255.51, 80, 129.240.69.49, 53051) over all others • Relevance of a traffic class • Examples • Favor ICMP packets over IP packets • Favor HTTP traffic (all TCP packets with source port 80) over FTP traffic • Favor packets from 65.246.0.0/16 over all others

  10. Repair by Packet dropping • Properties • Very simple • But • Retransmitted packets waste bandwidth • A packet has to be sent on average 1 / (1 - p) times before it is accepted • (p ... probability that the packet will be dropped) • Optimization necessary to reduce the waste of bandwidth • Drop packets that have not traveled far yet • e.g. Choke packets
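The 1 / (1 - p) figure is the expectation of a geometric distribution over independent drop events; a short derivation with a worked example (p = 0.2 is illustrative):

```latex
% Expected number of transmissions when each attempt is dropped
% independently with probability p:
E[\text{transmissions}] \;=\; \sum_{k=1}^{\infty} k\,(1-p)\,p^{\,k-1}
  \;=\; \frac{1}{1-p},
\qquad \text{e.g. } p = 0.2 \;\Rightarrow\; \tfrac{1}{0.8} = 1.25
```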

  11. Repair by Choke Packets • Principle • Reduce traffic during congestion by telling the source to slow down • Procedure for router • Each outgoing line has one variable • Utilization u ( 0 ≤ u ≤ 1 ) • Calculating u: the router checks the line usage f periodically (f is 0 or 1) • u = a * u + ( 1 - a ) * f • 0 ≤ a ≤ 1 determines to what extent "history" is taken into account • u > threshold: line changes to condition "warning" • Send choke packet to source (indicating destination) • Tag packet (to avoid further choke packets from downstream routers) & forward it
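A minimal sketch of the router-side utilization estimate, assuming the line-busy flag f is sampled periodically; the values of a and the warning threshold are illustrative, not from the slides:

```python
# Sketch of the per-line estimate u = a*u + (1 - a)*f from the slide.
# 'a' close to 1 weights history heavily; threshold 0.8 is illustrative.
class LineMonitor:
    def __init__(self, a: float = 0.9, threshold: float = 0.8):
        self.a = a
        self.threshold = threshold
        self.u = 0.0

    def sample(self, line_busy: bool) -> bool:
        """Update u from the instantaneous usage f; True means 'warning'."""
        f = 1.0 if line_busy else 0.0
        self.u = self.a * self.u + (1.0 - self.a) * f
        return self.u > self.threshold   # warning: send/tag choke packets
```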

  12. Repair by Choke Packets • Principle • Reduce traffic during congestion by telling the source to slow down • Procedure for source • Source receives the choke packet • Reduces the data traffic to the destination in question by X1% • Source recognizes 2 phases (a gate time so that the algorithm can take effect) • Ignore: source ignores further choke packets until a timeout • Listen: source listens whether more choke packets are arriving • yes: further reduction by X2%; go to Ignore phase • no: increase the data traffic
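A sketch of the source-side reaction with its Ignore and Listen phases; the reduction factors X1 and X2, the gate time, and the increase step are illustrative placeholders:

```python
import time

# Sketch of the choke-packet reaction at the source: reduce by X1 on the
# first choke of an episode, ignore chokes for a gate time, reduce by X2 if
# more arrive while listening, increase again after a quiet Listen period.
class ChokeReactingSource:
    def __init__(self, rate: float, x1: float = 0.50, x2: float = 0.25,
                 gate_time: float = 1.0):
        self.rate = rate
        self.x1, self.x2, self.gate_time = x1, x2, gate_time
        self.ignore_until = 0.0   # while now < ignore_until: Ignore phase
        self.in_episode = False   # already reduced for this congestion episode?

    def on_choke_packet(self) -> None:
        now = time.monotonic()
        if now < self.ignore_until:
            return                                   # Ignore phase: drop it
        factor = self.x2 if self.in_episode else self.x1
        self.rate *= (1.0 - factor)                  # reduce traffic
        self.in_episode = True
        self.ignore_until = now + self.gate_time     # re-enter Ignore

    def on_quiet_listen_period(self) -> None:
        """No choke packets arrived during a whole Listen period."""
        self.in_episode = False
        self.rate *= 1.1                             # increase traffic again
```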

  13. Repair by Choke Packets • Hop-by-Hop Choke Packets • Principle • React to choke packets already at each router (not only at the end system) • [Figure: two copies of a network with nodes A-F, comparing plain choke packets with hop-by-hop choke packets] • Plain choke packets: • A heavy flow is established • Congestion is noticed at D • A choke packet is sent to A • The flow is reduced at A • The flow is reduced at D • Hop-by-hop choke packets: • A heavy flow is established • Congestion is noticed at D • A choke packet is sent to A • The flow is reduced at F • The flow is reduced at D

  14. Repair by Choke Packets • Variation • u > threshold: line changes to condition "warning" • Procedure for router • Do not send a choke packet to the source (indicating destination) • Tag packet (to avoid further choke packets from downstream routers) & forward it • Procedure at receiver • Send choke packet to sender • Other variations • Varying choke packets depending on the state of congestion • Warning • Acute warning • Instead of utilization, base u on • Queue length • ...

  15. Repair by Choke Packets • Properties • Effective procedure • But • Possibly many choke packets in the network • Even if choke bits are piggybacked in the senders' data packets to reduce the return traffic • End systems can (but do not have to) adjust the traffic • Choke packets take time to reach the source • Transient congestion may have passed when the source reacts • Oscillations • Several end systems reduce speed because of choke packets • Seeing no more choke packets, all increase speed again

  16. Repair with Fair Queuing • Background • End systems adapting to traffic (e.g. by the choke-packet algorithm) should not be disadvantaged • Principle • On each outgoing line, each end system receives its own queue • Packets are sent based on round-robin (one packet from each queue, i.e. from each sender, per round) • Enhancement "Fair Queuing with Byte-by-Byte Round Robin" • Adapt round-robin to the packet length • But weighting is not taken into account • Enhancement "Weighted Fair Queuing" • Favoring (statistically) certain traffic • Criteria variants • In relation to VPs (virtual paths) • Service specific (individual quality of service) • etc.
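As an illustration of the byte-by-byte idea, the sketch below uses deficit round robin, a common approximation in which each sender's queue gets a byte budget per pass; the quantum and the packet representation are illustrative:

```python
from collections import deque

# Sketch: per-sender queues served round-robin with a byte budget per pass
# (deficit round robin), approximating byte-by-byte fair queuing.
class FairQueue:
    def __init__(self, quantum: int = 1500):
        self.quantum = quantum
        self.queues = {}     # sender -> deque of (length, payload)
        self.deficit = {}    # sender -> accumulated byte budget

    def enqueue(self, sender, length: int, payload) -> None:
        self.queues.setdefault(sender, deque()).append((length, payload))
        self.deficit.setdefault(sender, 0)

    def next_round(self):
        """One round-robin pass; yields the packets that may be sent."""
        for sender, q in list(self.queues.items()):
            if not q:
                self.deficit[sender] = 0          # empty queues keep no credit
                continue
            self.deficit[sender] += self.quantum
            while q and q[0][0] <= self.deficit[sender]:
                length, payload = q.popleft()
                self.deficit[sender] -= length
                yield sender, payload
```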

  17. Congestion Avoidance

  18. Avoidance • Principle • Appropriate communication system behavior and design • Policies at various layers can affect congestion • Data link layer • Flow control • Acknowledgements • Error treatment / retransmission / FEC • Network layer • Datagram (more complex) vs. virtual circuit (more procedures available) • Packet queueing and scheduling in router • Packet dropping in router (including packet lifetime) • Selected route • Transport layer • Basically the same as for the data link layer • But some issues are harder (determining timeout interval)

  19. Avoidance by Traffic Shaping • [Figure: original packet arrivals at the peak rate vs. the smoothed stream over time] • Motivation • Congestion is often caused by bursts • Bursts are relieved by smoothing the traffic (at the price of a delay) • Procedure • Negotiate the traffic contract beforehand (e.g., flow specification) • The traffic is shaped by the sender • Average rate and • Burstiness • Applied • In ATM • In the Internet ("DiffServ" - Differentiated Services)

  20. Traffic Shaping with Leaky Bucket • [Figure: implementation with limited buffers between input and output lines; symbolically, a bucket with a constant outflow per unit of time] • Principle • Continuous outflow • Congestion corresponds to data loss • Described by • Packet rate • Queue length • Implementation • Easy if packet length stays constant (like ATM cells)
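A minimal sketch of a leaky bucket with limited buffers, assuming time is divided into ticks; the rate and buffer size are illustrative:

```python
# Sketch of a leaky bucket shaper: packets drain at a constant rate, and
# arrivals that do not fit into the limited buffer are dropped.
class LeakyBucket:
    def __init__(self, rate_per_tick: int = 1, buffer_size: int = 10):
        self.rate_per_tick = rate_per_tick
        self.buffer_size = buffer_size
        self.queue = []

    def arrive(self, packet) -> bool:
        """Returns False if the packet is dropped (buffer overflow)."""
        if len(self.queue) >= self.buffer_size:
            return False
        self.queue.append(packet)
        return True

    def tick(self) -> list:
        """Once per time unit: release at most rate_per_tick packets."""
        released = self.queue[:self.rate_per_tick]
        self.queue = self.queue[self.rate_per_tick:]
        return released
```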

  21. Traffic Shaping with Token Bucket • Principle • Permit a certain amount of data to flow off within a certain amount of time • Controlled by "tokens" • Number of tokens limited • Number of queued packets limited • Implementation • Add tokens periodically (until the maximum has been reached) • Remove tokens depending on the length of the packet (byte counter) • Comparison • Leaky Bucket • Max. constant rate (at any point in time) • Token Bucket • Permits a limited burst • [Figure: a packet burst passing the token bucket]
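A companion sketch for the token bucket, counting tokens in bytes as the slide suggests; the fill rate and capacity are illustrative:

```python
# Sketch of a byte-counting token bucket: tokens accumulate at a fixed rate
# up to a maximum (the burst size); a packet passes when enough tokens are
# available, otherwise it must wait or be dropped by the shaper.
class TokenBucket:
    def __init__(self, tokens_per_tick: int = 1000, capacity: int = 8000):
        self.tokens_per_tick = tokens_per_tick   # average rate (bytes/tick)
        self.capacity = capacity                 # burst size (bytes)
        self.tokens = capacity

    def tick(self) -> None:
        """Add tokens periodically, until the maximum has been reached."""
        self.tokens = min(self.capacity, self.tokens + self.tokens_per_tick)

    def conforms(self, packet_len: int) -> bool:
        """Remove tokens matching the packet length if enough are stored."""
        if packet_len <= self.tokens:
            self.tokens -= packet_len
            return True
        return False
```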

  24. Avoidance by Reservation: Admission Control • [Figure: two views of a subnet between end systems A and B] • Principle • Prerequisite: virtual circuits • Reserving the necessary resources (incl. buffers) during connect • If buffers or other resources are not available • Alternative path • Desired connection refused • Example • Network layer may adjust routing based on congestion when the actual connect occurs

  25. Avoidance by Reservation: Admission Control • Sender-oriented • Sender initiates the reservation • Must know target addresses (participants) • Not scalable • Good security • [Figure: reservation messages travel from the sender towards the receiver along the data flow (reserve steps 1-3)]

  26. Avoidance by Reservation: Admission Control • Receiver-oriented • Receiver initiates the reservation • Needs an advertisement before the reservation • Must know "flow" addresses • Sender • Need not know the receivers • More scalable • Insecure • [Figure: reservation messages travel from the receiver towards the sender, against the data flow (reserve steps 1-3)]

  27. Avoidance by Reservation: Admission Control • Combination? • Start with a sender-oriented reservation • [Figure: sender-initiated reservation combined with a reservation from the nearest router towards the receiver (reserve steps 1-3)]

  28. Avoidance by Buffer Reservation • Principle • Buffer reservation • Implementation variant: Stop-and-Wait protocol • One buffer per router and connection (simplex, VC = virtual circuit) • Implementation variant: Sliding Window protocol • m buffers per router and (simplex) connection • Properties • Congestion not possible • Buffers remain reserved, even if there is no data transmission for some periods • Usually only with applications that require low delay & high bandwidth • [Figure: a router's buffer pool with buffers reserved for connections 1, 2 and 3 and some unreserved buffers]

  29. Avoidance by Isarithmic Congestion Control • Principle • Limit the number of packets in the network by assigning "permits" • A fixed amount of "permits" circulates in the network • A "permit" is required for sending • When sending: a "permit" is destroyed • When receiving: a "permit" is generated • Problems • Parts of the network may be overloaded • Equal distribution of the "permits" is difficult • Additional bandwidth for the transfer of "permits" is necessary • Bad for transmitting large amounts of data (e.g. file transfer) • Loss of "permits" is hard to detect

  30. Avoidance: combined approaches • Controlled load • Traffic in the controlled load class experiences the network as (nearly) unloaded • Approach • Allocate a few buffers for this class on each router • Use admission control for these few buffers • Reservation is in packets/second (or a token bucket specification) • Router knows its transmission speed • Router knows the number of packets it can store • Strictly prioritize traffic in the controlled load class • Effect • Controlled load traffic is hardly ever dropped • Overtakes other traffic

  31. Avoidance: combined approaches • Expedited forwarding • Very similar to controlled load • A differentiated services PHB (per-hop behavior) • [Figure: IPv4 header with the DS field carrying the EF codepoint 101110] • Approach • Set aside a few buffers for this class on each router • Police the traffic • Shape or mark the traffic • Only at senders, or at some routers • Strictly prioritize traffic in this class • Effect • Shapers drop excessive traffic • EF traffic is hardly ever dropped • Overtakes other traffic

  32. Internet Congestion Control: TCP Congestion Control

  33. TCP Congestion Control • TCP limits sending rate as a function of perceived network congestion • Little traffic – increase sending rate • Much traffic – reduce sending rate • TCP’s congestion algorithm has four major “components”: • Additive-increase • Multiplicative-decrease (together AIMD algorithm) • Slow-start • Reaction to timeout events

  34. TCP Congestion Control • [Figure: packets sent per round (congestion window) between sender and receiver, doubling from 1 towards 16 over rounds 1-4] • Initially, the CONGESTION WINDOW is 1 MSS (maximum segment size) • Then, the size increases by 1 MSS for each received ACK until a threshold is reached or an ACK is missing

  35. TCP Congestion Control • [Figure: congestion window (packets sent per round) over time, showing slow start up to the threshold and the reaction to a lost packet] • Normally, the initial threshold is 64 KB • Losing a single packet (TCP Tahoe): • threshold drops to half the CONGESTION WINDOW • CONGESTION WINDOW back to 1 • Losing a single packet (TCP Reno): • if notified by timeout: like TCP Tahoe • if notified by fast retransmit: threshold drops to half the CONGESTION WINDOW • CONGESTION WINDOW back to the new threshold

  36. AIMD • Threshold • Adaptive • Parameter in addition to the actual and the congestion window • Assumption • Threshold, i.e. adaptation to the network: "sensible window size" • Use: on missing acknowledgements • Threshold is set to half of the current congestion window • Congestion window is reduced • Implementation- and situation-dependent: to 1 or to the new threshold • Use slow start if the congestion window is below the threshold • Use: on timeout • Threshold is set to half of the current congestion window • Congestion window is reset to one maximum segment • Use slow start to determine what the network can handle • Exponential growth stops when the threshold is hit • From there the congestion window grows linearly (by 1 segment per round trip) on successful transmission
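A hedged sketch of the window bookkeeping described on the last three slides, counting in whole segments; the initial threshold and the Tahoe/Reno split are simplified:

```python
# Sketch of slow start, congestion avoidance, and the loss reactions.
class CongestionState:
    def __init__(self, initial_threshold: int = 64):
        self.cwnd = 1.0                     # congestion window (segments)
        self.ssthresh = initial_threshold   # slow-start threshold (segments)

    def on_ack(self) -> None:
        if self.cwnd < self.ssthresh:
            self.cwnd += 1                  # slow start: doubles per round trip
        else:
            self.cwnd += 1 / self.cwnd      # linear growth: ~1 segment per RTT

    def on_timeout(self) -> None:
        """Tahoe-style reaction (Reno reacts the same way to a timeout)."""
        self.ssthresh = max(2, int(self.cwnd // 2))
        self.cwnd = 1.0                     # restart with slow start

    def on_fast_retransmit(self) -> None:
        """Reno-style reaction to a fast retransmit."""
        self.ssthresh = max(2, int(self.cwnd // 2))
        self.cwnd = float(self.ssthresh)    # continue at the new threshold
```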

  37. TCP Congestion Control • Some parameters • 65,536 bytes max. per segment • IP recommended value for the TTL interval: 2 min • Optimization for low throughput rates • Problem • 1 byte of data requires 162 bytes incl. ACKs (if, at any given time, it shows up just by itself) • Algorithm • Acknowledgement delayed by 500 msec because of window adaptation • Comment • Often part of TCP implementations

  38. TCP Congestion Control • TCP assumes that every loss is an indication of congestion • Not always true • Packets may be discarded because of bit errors • Low bit error rates • Optical fiber • Copper cable under normal conditions • Mobile phone channels (link layer retransmission) • High bit error rates • Modem cables • Copper cable in settings with high background noise • HAM radio (IP over radio) • TCP variations exist

  39. TCP Congestion Control • TCP congestion control is based on the notion that the network is a "black box" • Congestion indicated by a loss • Sufficient for best-effort applications, but losses might severely hurt traffic like audio and video streams → an explicit congestion indication is better, enabling features like quality adaptation • Approaches • Use the ACK rate rather than losses for bandwidth estimation • Example: TCP Westwood • Use active queue management to detect congestion

  40. Internet Congestion Control: TCP Congestion Avoidance

  41. Random Early Detection (RED) • Random Early Detection (discard/drop) (RED) uses active queue management • Drops packets in an intermediate node when the average queue length exceeds a threshold • TCP receiver reports the loss in an ACK • Sender applies multiplicative decrease • Idea • Congestion should be attacked as early as possible • Some transport protocols (e.g., TCP) react to lost packets by rate reduction

  42. Random Early Detection (RED) • Router drops some packets before congestion becomes significant (i.e., early) • Gives time to react • Dropping starts when the moving avg. of the queue length exceeds a threshold • Small bursts pass through unharmed • Only affects sustained overloads • Packet drop probability is a function of the mean queue length • Prevents severe reaction to mild overload • RED improves performance of a network of cooperating TCP sources • No bias against bursty sources • Controls queue length regardless of endpoint cooperation
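A sketch of the drop decision described above: a moving average of the queue length and a drop probability that rises between two thresholds; all parameter values are illustrative, not taken from any particular RED deployment:

```python
import random

# Sketch of RED: exponentially weighted average of the queue length, no
# drops below min_th, certain drop above max_th, and a linearly rising
# drop probability in between.
class RedQueue:
    def __init__(self, min_th: float = 5, max_th: float = 15,
                 max_p: float = 0.1, weight: float = 0.002):
        self.min_th, self.max_th, self.max_p = min_th, max_th, max_p
        self.weight = weight
        self.avg = 0.0

    def should_drop(self, current_queue_len: int) -> bool:
        self.avg = (1 - self.weight) * self.avg + self.weight * current_queue_len
        if self.avg < self.min_th:
            return False                    # small bursts pass unharmed
        if self.avg >= self.max_th:
            return True                     # sustained overload
        p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        return random.random() < p          # probabilistic early drop
```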

  43. Explicit Congestion Notification (ECN) • Explicit Congestion Notification (ECN) - RFC 2481 • An end-to-end congestion avoidance mechanism • Implemented in routers and supported by end-systems • Not multimedia-specific, but very TCP-specific • Two IP header bits used • ECT - ECN Capable Transport, set by sender • CE - Congestion Experienced, may be set by router • Extends RED • if packet has ECT bit set • ECN node sets CE bit • TCP receiver sets ECN-Echo bit in ACK • sender applies multiplicative decrease (AIMD) • else • Act like RED
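Continuing the RED sketch above, a hedged sketch of how the ECT/CE decision might extend it at a router (the RedQueue class is the illustrative one from the previous sketch):

```python
# Sketch: if RED decides to act and the packet's transport is ECN-capable
# (ECT set), mark Congestion Experienced (CE) instead of dropping.
def handle_packet(red: "RedQueue", queue_len: int, ect: bool, ce: bool):
    """Returns (drop, ce) for one arriving packet."""
    if red.should_drop(queue_len):
        if ect:
            return False, True    # mark instead of dropping
        return True, ce           # non-ECN traffic: fall back to RED drop
    return False, ce
```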

  44. Explicit Congestion Notification (ECN) • Effects • Congestion does not oscillate - RED & ECN • ECN packets are never lost on uncongested links • Receiving an ECN mark means • TCP window decrease • No packet loss • No retransmission

  45. Endpoint Admission Control • Motivation • Let end-systems test whether a desired throughput can be supported • In case of success, start transmission • Applicability • Only for some kinds of traffic (traffic classes) • Inelastic flows • Requires exclusive use of some resources for this traffic • Assumes short queues in that traffic class • Send probes at desired rate • Routers can mark or drop probes • Probe packets can have separate queues or use main queue

  46. Endpoint Admission Control • Thrashing and Slow Start Probing • Thrashing • Many endpoints probe concurrently • Probes interfere with each other and all deduce insufficient bandwidth • Bandwidth is underutilized • Slow start probing • Probe for small bandwidth • Probe for twice the amount of bandwidth • … • Until desired speed is reached • Start sending
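A minimal sketch of the slow-start probing loop, assuming a hypothetical probe(rate) helper that sends probe packets at the given rate and reports whether they passed without marks or drops:

```python
# Sketch of slow-start probing: double the probed rate until the desired
# rate is reached, abort as soon as a probe indicates congestion.
def slow_start_probe(desired_rate: float, probe, start_rate: float = 1.0) -> bool:
    rate = start_rate
    while rate < desired_rate:
        if not probe(rate):
            return False                      # insufficient bandwidth
        rate = min(desired_rate, 2 * rate)    # probe for twice the amount
    return probe(desired_rate)                # admit only if the target passes
```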

  47. XCP: eXplicit Control Protocol • [Figure: an XCP congestion header between the IP header and the TCP header (ahead of the payload); it carries the round-trip time, the desired congestion window, and a feedback field that the sender initializes and routers update to provide feedback]

  48. XCP: eXplicit Control Protocol • Congestion Controller • Goal: match input traffic to link capacity • Computes an average RTT for all connections • Looks at the queue • The combined traffic sendable per RTT changes by Φ • Φ ~ spare bandwidth, Φ ~ -queue size • So, Φ = α · Spare - β · Queue • Fairness Controller • Goal: divide Φ between flows to converge to fairness • Looks at state in the XCP header • If Φ > 0: divide Φ equally between flows • If Φ < 0: divide Φ between flows proportionally to their current rates
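A sketch of the two controllers' computations per control interval (one average RTT), following the Φ = α · Spare - β · Queue rule above; the constants and the byte-based units are illustrative, not prescriptive:

```python
# Sketch of an XCP router's per-interval computation: the congestion
# (efficiency) controller computes the aggregate feedback phi, the fairness
# controller distributes it over the flows.
ALPHA, BETA = 0.4, 0.226   # stability parameters (illustrative values)

def aggregate_feedback(spare_bandwidth: float, queue_bytes: float) -> float:
    """phi = alpha * Spare - beta * Queue, in bytes per control interval."""
    return ALPHA * spare_bandwidth - BETA * queue_bytes

def split_feedback(phi: float, flow_rates: list) -> list:
    """Positive phi is split equally; negative phi proportionally to rates."""
    if not flow_rates:
        return []
    if phi >= 0:
        return [phi / len(flow_rates)] * len(flow_rates)
    total = sum(flow_rates) or 1.0
    return [phi * rate / total for rate in flow_rates]
```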

  49. TCP Friendliness

  50. TCP Friendliness - TCP Compatible • A TCP connection's throughput is bounded • wmax - maximum retransmission window size • RTT - round-trip time • Congestion window size changes • AIMD (additive increase, multiplicative decrease) algorithm • TCP is said to be fair • Streams that share a path will reach an equal share • A protocol is TCP-friendly if • Colloquial • Its long-term average throughput is not bigger than TCP's • Formal • Its arrival rate is at most some constant over the square root of the packet loss rate
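The two bounds from this slide written out, assuming the widely used simplified steady-state TCP model (the constant C ≈ 1.22 comes from that model, not from the slide):

```latex
% Throughput bound from the maximum window and the round-trip time:
T \;\le\; \frac{w_{\max}}{RTT}
% Formal TCP-friendliness: the arrival rate is at most a constant over the
% square root of the packet loss rate p (MSS = maximum segment size):
T \;\le\; \frac{MSS}{RTT} \cdot \frac{C}{\sqrt{p}}, \qquad C \approx 1.22
```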
