
Week 11 TCP Congestion Control



  1. Week 11: TCP Congestion Control

  2. Principles of Congestion Control • Congestion, informally: “too many sources sending too much data too fast for the network to handle” • different from flow control! • manifestations: lost packets (buffer overflow at routers), long delays (queueing in router buffers) • a top-10 problem!

  3. Causes/costs of congestion: scenario 1 • two senders, two receivers • one router, infinite buffers • no retransmission • λin: original data; λout: throughput • large delays when congested • maximum achievable throughput [Figure: Host A and Host B send into an unlimited shared output link buffer.]

  4. Causes/costs of congestion: scenario 2 • one router, finite buffers • sender retransmits lost packets • λin: original data; λ'in: original data plus retransmitted data [Figure: Host A and Host B share finite output link buffers.]

  5. Causes/costs of congestion: scenario 2 (continued) • always: λin = λout (goodput) • “perfect” retransmission only when loss: λ'in > λout • retransmission of a delayed (not lost) packet makes λ'in larger (than the perfect case) for the same λout [Figure: three plots (a, b, c) of λout vs. λin, levelling off at R/2, R/3, and R/4.] “costs” of congestion: • more work (retransmissions) for a given goodput • unneeded retransmissions: the link carries multiple copies of a packet

  6. Causes/costs of congestion: scenario 3 • four senders • multihop paths • timeout/retransmit • λin: original data; λ'in: original data plus retransmitted data • Q: what happens as λin and λ'in increase? [Figure: Host A and Host B over finite shared output link buffers.]

  7. Example • What happens when each demand peaks at unity rate? Throughput = 1.52? • (How?) At twice the unity rate, T = 1.07?

  8. Max-min fair allocation • Given a network and a set of sessions, we would like to find a maximal flow allocation that is fair • We will see different definitions of max-min fairness and learn a flow control algorithm • The tutorial will give an understanding of what max-min fairness is

  9. How to define fairness? • Any session is entitled to as much network use as any other. • Allocate the same share to all.

  10. Max-Min Flow Control Rule • The rule: maximize the network use allocated to the sessions with the minimum allocation • An alternative definition: maximize the allocation of each session i under the constraint that an increase in i’s allocation does not cause a decrease in the allocation of some other session whose rate is the same as or smaller than i’s

  11. Example • The maximal fair flow division gives sessions 0, 1, 2 a flow rate of 1/3 each and session 3 a flow rate of 2/3 [Figure: sessions 0–3 share two links, each of capacity C = 1.]

  12. Notation • G = (N, A) – directed network graph (N is the set of vertices and A the set of edges) • Ca – the capacity of link a • Fa – the flow on link a • P – the set of sessions • rp – the rate of session p • We assume a fixed, single-path routing method

  13. Definitions • We have the following constraints on the vector r = {rp | p ∈ P}: rp ≥ 0 for all p ∈ P, and Fa = Σ (p crossing a) rp ≤ Ca for all a ∈ A • A vector r satisfying these constraints is said to be feasible

  14. Definitions • A vector of rates r is said to be max-min fair if it is feasible and, for each p ∈ P, rp cannot be increased while maintaining feasibility without decreasing rp’ for some session p’ for which rp’ ≤ rp • We want to find a rate vector that is max-min fair

  15. Bottleneck Link for a Session • Given some feasible flow r, we say that a is a bottleneck link with respect to r, for a session p crossing a, if Fa = Ca and rp ≥ rp’ for all sessions p’ crossing link a. [Figure: five sessions over links a–d, all of capacity 1; rates: session 1: 2/3, session 2: 1/3, session 3: 1/3, session 4: 1, session 5: 1/3.] Bottlenecks for sessions 1, 2, 3, 4, 5 respectively are: c, a, a, d, a. Note: c is not a bottleneck for 5 and b is not a bottleneck for 1

  16. Max-Min Fairness Definition Using Bottleneck • Theorem: A feasible rate vector r is max-min fair if and only if each session has a bottleneck link with respect to r
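The bottleneck condition in the theorem can be checked mechanically. A minimal sketch, assuming a dictionary layout (`capacities`, `sessions`, `rates`) that is illustrative rather than from the slides:

```python
def is_bottleneck(link, session, capacities, sessions, rates, eps=1e-9):
    """True iff `link` is saturated (Fa = Ca) and `session` has the largest
    rate among the sessions crossing `link` (rp >= rp' for all p')."""
    # total flow carried by the link
    flow = sum(rates[p] for p in sessions if link in sessions[p])
    saturated = abs(flow - capacities[link]) < eps
    maximal = all(rates[session] >= rates[p] - eps
                  for p in sessions if link in sessions[p])
    return saturated and maximal
```

With the five-session example of slide 15 (all capacities 1), this confirms that a is a bottleneck for sessions 2, 3, 5, that c is a bottleneck for session 1 but not for session 5, and that b is a bottleneck for no session.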

  17. Algorithm for Computing Max-Min Fair Rate Vectors • The idea of the algorithm: bring all the sessions to the state where they have a bottleneck link; then, according to the theorem, the result is the max-min fair flow • We start with an all-zero rate vector and increase the rates on all paths together until Fa = Ca for one or more links a • At this point, each session using a saturated link has the same rate as every other session using that link. Thus, these saturated links serve as bottleneck links for all sessions using them

  18. Algorithm for Computing Max-Min Fair Rate Vectors • At the next step, all sessions not using the saturated links are incremented equally in rate until one or more new links become saturated • Note that the sessions using the previously saturated links might also be using these newly saturated links (at a lower rate) • The algorithm continues from step to step, always equally incrementing all sessions not passing through any saturated link, until all sessions pass through at least one such link

  19. Algorithm for Computing Max-Min Fair Rate Vectors • Init: k = 1, Fa0 = 0, rp0 = 0, P1 = P and A1 = A 1. For all a ∈ A, nak := number of sessions p ∈ Pk crossing link a 2. Δr = min over a ∈ Ak of (Ca − Fak−1)/nak (find increment size) 3. For all p ∈ Pk, rpk := rpk−1 + Δr (increment); for all other p, rpk := rpk−1 4. Fak := Σ (p crossing a) rpk (update flow) 5. Ak+1 := the set of unsaturated links 6. Pk+1 := all p such that p crosses only links in Ak+1 7. k := k + 1 8. If Pk is empty then stop, else go to step 1
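The algorithm above can be sketched directly. This is a minimal implementation assuming fixed single-path routing given as a set of links per session; the names (`max_min_fair`, `capacities`, `sessions`) are illustrative:

```python
def max_min_fair(capacities, sessions):
    """capacities: {link: capacity}; sessions: {session: set of links crossed}.
    Returns {session: max-min fair rate}."""
    rates = {p: 0.0 for p in sessions}
    flow = {a: 0.0 for a in capacities}
    active = set(sessions)             # sessions not yet behind a saturated link
    unsaturated = set(capacities)
    while active:
        # number of active sessions crossing each unsaturated link (n_a^k)
        n = {a: sum(1 for p in active if a in sessions[p]) for a in unsaturated}
        # equal increment until some link saturates (delta r)
        dr = min((capacities[a] - flow[a]) / n[a] for a in unsaturated if n[a] > 0)
        for p in active:
            rates[p] += dr
        # recompute link flows (F_a^k)
        for a in capacities:
            flow[a] = sum(rates[p] for p in sessions if a in sessions[p])
        unsaturated = {a for a in unsaturated if capacities[a] - flow[a] > 1e-9}
        # keep only sessions crossing unsaturated links exclusively
        active = {p for p in active if sessions[p] <= unsaturated}
    return rates
```

On the slide 11 topology (sessions 0, 1, 2 on one unit-capacity link, session 2 continuing onto a second unit-capacity link shared with session 3), this yields rates 1/3, 1/3, 1/3 and 2/3, matching the example.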

  20. Example of Algorithm Running • [Figure: the five sessions over links a–d from slide 15; all link capacities are 1.] • Step 1: all sessions get a rate of 1/3, because of a, and link a is saturated. • Step 2: sessions 1 and 4 get an additional rate increment of 1/3, for a total of 2/3. Link c is saturated now. • Step 3: session 4 gets an additional rate increment of 1/3, for a total of 1. Link d is saturated. • End

  21. Example revisited • Max-min fair vector if Tij = ∞: r = (½, ½, ½, ½), T = 2 > 1.52 • What if the demands T13 = T31 = ¼, T24 = ½, T42 = ∞? r = (¼, ½, ¼, ¾)

  22. Causes/costs of congestion: scenario 3 • Another “cost” of congestion: when a packet is dropped, any upstream transmission capacity used for that packet was wasted! [Figure: Host A, Host B; λout.]

  23. Approaches towards congestion control • Two broad approaches: • End-end congestion control: no explicit feedback from the network; congestion inferred from end-system observed loss and delay; the approach taken by TCP • Network-assisted congestion control: routers provide feedback to end systems – a single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM), or an explicit rate the sender should send at

  24. Case study: ATM ABR congestion control • ABR (available bit rate): “elastic service” – if the sender’s path is “underloaded”, the sender should use the available bandwidth; if the sender’s path is congested, the sender is throttled to a minimum guaranteed rate • RM (resource management) cells: sent by the sender, interspersed with data cells; bits in the RM cell are set by switches (“network-assisted”) – NI bit: no increase in rate (mild congestion); CI bit: congestion indication • RM cells are returned to the sender by the receiver, with the bits intact

  25. Case study: ATM ABR congestion control (continued) • two-byte ER (explicit rate) field in the RM cell; a congested switch may lower the ER value in the cell; the sender’s send rate is thus the minimum supportable rate on the path • EFCI bit in data cells: set to 1 in a congested switch; if the data cell preceding an RM cell has EFCI set, the receiver sets the CI bit in the returned RM cell

  26. TCP Congestion Control • end-end control (no network assistance) • sender limits transmission: LastByteSent − LastByteAcked ≤ CongWin • roughly, rate = CongWin/RTT bytes/sec • CongWin is dynamic, a function of perceived network congestion • How does the sender perceive congestion? loss event = timeout or 3 duplicate ACKs; the TCP sender reduces its rate (CongWin) after a loss event • three mechanisms: AIMD, slow start, conservative behavior after timeout events

  27. TCP AIMD • multiplicative decrease: cut CongWin in half after a loss event • additive increase: increase CongWin by 1 MSS every RTT in the absence of loss events: probing [Figure: sawtooth CongWin of a long-lived TCP connection.]

  28. Additive Increase • increase CongWin by 1 MSS every RTT in the absence of loss events: probing • cwnd += SMSS*SMSS/cwnd (*) • This adjustment is executed on every incoming non-duplicate ACK. • Equation (*) provides an acceptable approximation to the underlying principle of increasing cwnd by 1 full-sized segment per RTT.
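The per-ACK update in equation (*) can be sketched as below; the function name is illustrative, and the numbers only demonstrate the approximation:

```python
def on_nondup_ack(cwnd, smss):
    """Congestion-avoidance increment from equation (*): cwnd += SMSS*SMSS/cwnd,
    applied on every incoming non-duplicate ACK."""
    return cwnd + smss * smss / cwnd

# One RTT with cwnd = 10 MSS produces ~10 non-duplicate ACKs, and the ten
# small increments sum to just under one full-sized segment, as claimed.
SMSS = 1460.0
cwnd = 10 * SMSS
for _ in range(10):
    cwnd = on_nondup_ack(cwnd, SMSS)
```

The increase is slightly less than 1 MSS per RTT because cwnd grows during the round, shrinking each subsequent increment; that is why the slide calls (*) an acceptable approximation rather than an exact rule.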

  29. TCP Slow Start • When the connection begins, CongWin = 1 MSS • Example: MSS = 500 bytes & RTT = 200 msec, so the initial rate = 20 kbps • the available bandwidth may be >> MSS/RTT, so it is desirable to quickly ramp up to a respectable rate • When the connection begins, increase the rate exponentially fast until the first loss event

  30. TCP Slow Start (more) • When the connection begins, increase the rate exponentially until the first loss event: double CongWin every RTT, done by incrementing CongWin for every ACK received • Summary: the initial rate is slow but ramps up exponentially fast [Figure: Host A sends one segment, then two, then four, each batch one RTT apart.]

  31. Refinement • After 3 dup ACKs: CongWin is cut in half and Threshold is set to CongWin; the window then grows linearly • But after a timeout event: Threshold is set to CongWin/2 and CongWin is instead set to 1 MSS; the window then grows exponentially up to the threshold, then grows linearly • Philosophy: 3 dup ACKs indicate the network is capable of delivering some segments; a timeout before 3 dup ACKs is “more alarming”

  32. Refinement (more) • Q: When should the exponential increase switch to linear? A: When CongWin gets to 1/2 of its value before timeout. • Implementation: a variable Threshold; at a loss event, Threshold is set to 1/2 of CongWin just before the loss event

  33. Summary: TCP Congestion Control • When CongWin is below Threshold, sender in slow-start phase, window grows exponentially. • When CongWin is above Threshold, sender is in congestion-avoidance phase, window grows linearly. • When a triple duplicate ACK occurs, Threshold set to CongWin/2 and CongWin set to Threshold. • When timeout occurs, Threshold set to CongWin/2 and CongWin is set to 1 MSS.
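The four summary rules can be combined into a small state machine. This is a toy sketch in units of MSS, driven by an event trace; the names (`tcp_evolve`, `'ack'`, `'dupack3'`, `'timeout'`) are illustrative:

```python
def tcp_evolve(events, threshold=8):
    """events: 'ack' (one full RTT of ACKs), 'dupack3' (triple duplicate ACK),
    or 'timeout'. Returns the CongWin trace in MSS after each event."""
    cwnd = 1
    trace = []
    for ev in events:
        if ev == 'ack':
            # slow start below Threshold, linear growth above
            cwnd = cwnd * 2 if cwnd < threshold else cwnd + 1
        elif ev == 'dupack3':
            # Threshold := CongWin/2, CongWin := Threshold
            threshold = max(cwnd // 2, 1)
            cwnd = threshold
        elif ev == 'timeout':
            # Threshold := CongWin/2, CongWin := 1 MSS
            threshold = max(cwnd // 2, 1)
            cwnd = 1
        trace.append(cwnd)
    return trace
```

Running it on a short trace shows the characteristic shape: exponential growth to the threshold, linear probing, a halving on triple duplicate ACK, and a restart from 1 MSS on timeout.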

  34. TCP sender congestion control

  35. TCP Futures • Example: 1500-byte segments, 100 ms RTT, want 10 Gbps throughput • Requires window size W = 83,333 in-flight segments • Throughput in terms of loss rate: throughput ≈ 1.22·MSS/(RTT·√p) • → p = 2·10⁻¹⁰ • New versions of TCP for high speed are needed!
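The two numbers on this slide follow from the macroscopic model's throughput relation, throughput ≈ 1.22·MSS/(RTT·√p). A worked check:

```python
MSS = 1500 * 8        # segment size in bits
RTT = 0.1             # round-trip time in seconds
target = 10e9         # desired throughput: 10 Gbps

# window needed to keep the pipe full: throughput * RTT / MSS
W = target * RTT / MSS                    # ~83,333 in-flight segments

# loss rate that sustains the target, solving 1.22*MSS/(RTT*sqrt(p)) = target
p = (1.22 * MSS / (RTT * target)) ** 2    # ~2e-10
```

A loss rate of about 2·10⁻¹⁰ means at most one loss per ~5 billion segments, which is why the slide concludes that standard TCP cannot realistically sustain 10 Gbps.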

  36. Macroscopic TCP model • deterministic packet losses • 1/p packets transmitted in a cycle [Figure: sawtooth window; each cycle ends in a loss.]

  37. TCP Model Contd • Equate the trapezoid area under the sawtooth, (3/8)W², to 1/p: from 3W²/8 = 1/p we get W = √(8/(3p)), and the average throughput ≈ (3/4)·W·MSS/RTT ≈ 1.22·MSS/(RTT·√p)

  38. TCP Fairness • Fairness goal: if K TCP sessions share the same bottleneck link of bandwidth R, each should have an average rate of R/K [Figure: TCP connections 1 and 2 share a bottleneck router of capacity R.]

  39. Why is TCP fair? • Two competing sessions: additive increase gives a slope of 1 as throughput increases; multiplicative decrease decreases throughput proportionally [Figure: connection 2 throughput vs. connection 1 throughput, both axes up to R; the trajectory alternates between additive increase (congestion avoidance) along slope 1 and window halving on loss, converging toward the equal-bandwidth-share line.]
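The convergence argument can be simulated in a few lines: two flows sharing a link of capacity R increase additively each round and both halve when their sum exceeds R. Additive increase preserves the gap between them while multiplicative decrease halves it, so the rates converge. The values and names here are illustrative:

```python
def aimd_two_flows(x1, x2, capacity, rounds):
    """Round-based toy model of two AIMD flows sharing one link."""
    for _ in range(rounds):
        if x1 + x2 > capacity:
            x1, x2 = x1 / 2, x2 / 2    # loss: multiplicative decrease halves both
        else:
            x1, x2 = x1 + 1, x2 + 1    # congestion avoidance: additive increase
    return x1, x2
```

Starting from a very unequal split (e.g. 1 vs. 70 on a capacity-100 link), the difference between the two rates shrinks by half at every loss event and soon becomes negligible, illustrating the equal-share fixed point of the figure.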

  40. Fairness (more) • Fairness and UDP: multimedia apps often do not use TCP – they do not want their rate throttled by congestion control; instead they use UDP: pump audio/video at a constant rate, tolerate packet loss; research area: TCP-friendly rate control, DCCP • Fairness and parallel TCP connections: nothing prevents an app from opening parallel connections between 2 hosts; web browsers do this • Example: a link of rate R supporting 9 connections; a new app asking for 1 TCP gets rate R/10; a new app asking for 10 TCPs gets R/2!

  41. Queuing Disciplines • Each router must implement some queuing discipline • Queuing allocates both bandwidth and buffer space: • Bandwidth: which packet to serve (transmit) next • Buffer space: which packet to drop next (when required) • Queuing also affects latency

  42. Typical Internet Queuing • FIFO + drop-tail • Simplest choice • Used widely in the Internet • FIFO (first-in-first-out) • Implies single class of traffic • Drop-tail • Arriving packets get dropped when queue is full regardless of flow or importance • Important distinction: • FIFO: scheduling discipline • Drop-tail: drop policy

  43. FIFO + Drop-tail Problems • Leaves the responsibility for congestion control completely to the edges (e.g., TCP) • Does not distinguish between different flows • No policing: send more packets → get more service • Synchronization: end hosts react to the same events

  44. FIFO + Drop-tail Problems • Full queues • Routers are forced to have large queues to maintain high utilization • TCP detects congestion from loss • Forces the network to have long standing queues in steady state • Lock-out problem • Drop-tail routers treat bursty traffic poorly • Traffic gets synchronized easily → allows a few flows to monopolize the queue space

  45. Active Queue Management • Design active router queue management to aid congestion control • Why? • Router has unified view of queuing behavior • Routers see actual queue occupancy (distinguish queue delay and propagation delay) • Routers can decide on transient congestion, based on workload

  46. Design Objectives • Keep throughput high and delay low • High power (throughput/delay) • Accommodate bursts • Queue size should reflect ability to accept bursts rather than steady-state queuing • Improve TCP performance with minimal hardware changes

  47. Lock-out Problem • Random drop • Packet arriving when queue is full causes some random packet to be dropped • Drop front • On full queue, drop packet at head of queue • Random drop and drop front solve the lock-out problem but not the full-queues problem

  48. Full Queues Problem • Drop packets before queue becomes full (early drop) • Intuition: notify senders of incipient congestion • Example: early random drop (ERD): • If qlen > drop level, drop each new packet with fixed probability p • Does not control misbehaving users

  49. Random Early Detection (RED) • Detect incipient congestion • Assume hosts respond to lost packets • Avoid window synchronization • Randomly mark packets • Avoid bias against bursty traffic

  50. RED Algorithm • Maintain running average of queue length • If avg < minth do nothing • Low queuing, send packets through • If avg > maxth, drop packet • Protection from misbehaving sources • Else mark packet in a manner proportional to queue length • Notify sources of incipient congestion
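The RED decision rule above can be sketched as a single per-packet function. The EWMA weight and threshold values are illustrative parameters, not mandated ones, and the name `red_decide` is an assumption:

```python
import random

def red_decide(avg, qlen, min_th, max_th, max_p, w=0.002):
    """One RED arrival decision. Returns (new_avg, action),
    action in {'enqueue', 'drop'}."""
    # maintain a running (exponentially weighted) average of queue length
    avg = (1 - w) * avg + w * qlen
    if avg < min_th:
        return avg, 'enqueue'          # low queueing: send packets through
    if avg > max_th:
        return avg, 'drop'             # protection from misbehaving sources
    # in between: mark/drop with probability rising with the average
    p = max_p * (avg - min_th) / (max_th - min_th)
    return avg, ('drop' if random.random() < p else 'enqueue')
```

Averaging over the instantaneous queue length is what lets RED accommodate bursts (the average lags behind short spikes) while still notifying sources of incipient, persistent congestion.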
