
Congestion Control


Presentation Transcript


1. Congestion Control
UIUC CS438: Communication Networks, Summer 2014. Fred Douglas. Slides: Fred, Kurose & Ross (some edited)

2. Chapter 3 outline
3.1 transport-layer services
3.2 multiplexing and demultiplexing
3.3 connectionless transport: UDP
3.4 principles of reliable data transfer
3.5 connection-oriented transport: TCP (segment structure, reliable data transfer, flow control, connection management)
3.6 principles of congestion control
3.7 TCP congestion control

3. The Situation
• a network of hosts + routers
• connected by links with finite bandwidth
• routers have finite queues, to avoid (when possible) dropping packets during bursts of traffic
[Figure: example topology with links of 1 Mbps, 10 Mbps, 20 Mbps, and 100 Mbps, and a router with a finite queue]

4. Principles of congestion control
congestion: "too many sources sending data faster than the network can handle"
manifestations:
• lost packets (buffer overflow at routers)
• long delays (queueing in router buffers)
possible solutions:
1. the network drops our packets (on its own, not good enough: congestion collapse; but there is a danger of deadlocks if we never drop packets)
2. the network tells us to slow down (how?)
3. we observe performance and decide to slow down
4. centralized scheduling (not on the whole Internet!)
workable solutions: 1+2, 1+3, 2+3, (4)

5. The costs of congestion: scenario 1
• two senders, each sending at a fixed rate
• one router with infinite buffers (unlimited shared output link buffers)
• finite output link capacity R
• no retransmission
• (assuming Poisson process traffic)
• original data rate: λin; throughput: λout
• maximum per-connection throughput: R/2
• large delays as the arrival rate λin approaches capacity (Little's Law)
[Figure: Host A and Host B share one router; throughput λout levels off at R/2 as λin grows, while queueing delay blows up as λin approaches R/2]
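(Not on the slide: a minimal sketch of why delay blows up near capacity. The slide only invokes Little's Law; modeling the shared link as an M/M/1 queue with Poisson arrivals and exponential service is an added assumption.)

```latex
% Two senders give a total arrival rate of 2\lambda_{in}; the link serves at rate R.
% Under an M/M/1 assumption, the average delay is
T \;=\; \frac{1}{R - 2\lambda_{in}},
% and by Little's Law the average number of packets queued is
N \;=\; 2\lambda_{in}\,T \;=\; \frac{2\lambda_{in}}{R - 2\lambda_{in}}.
% Both grow without bound as \lambda_{in} \to R/2, matching the delay curve on the slide.
```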

6. The costs of congestion: scenario 2
• one router, finite buffers
• sender retransmits timed-out packets
• application-layer input = application-layer output: λin = λout
• transport-layer input includes retransmissions: λ'in ≥ λin
• wasted bandwidth! (goodput vs. throughput)
[Figure: Host A and Host B share a router with finite output link buffers; λin is original data, λ'in is original data plus retransmitted data, λout is throughput]
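(Not on the slide: a small sketch of the goodput/throughput distinction, assuming losses are independent with probability p and every lost packet is retransmitted until delivered; that loss model is an assumption, not part of the lecture.)

```python
def offered_load_for_goodput(goodput, loss_rate):
    """Transport-layer send rate (throughput, lambda'_in) needed to deliver
    `goodput` (lambda_out) of original data when a fraction `loss_rate` of
    packets is dropped and dropped packets are retransmitted until delivered.
    Assumes independent losses -- a modeling assumption."""
    return goodput / (1.0 - loss_rate)

# Example: to deliver 1.0 Mbps of original data across a hop dropping 20% of
# packets, the sender must transmit 1.25 Mbps; the extra 0.25 Mbps is
# throughput that is not goodput, i.e. wasted bandwidth.
print(offered_load_for_goodput(1.0, 0.2))   # 1.25
```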

7. The costs of congestion: worst-case scenario 2
• four senders (Hosts A, B, C, D)
• multihop paths
• timeout/retransmit
Q: what happens as λin and λ'in increase?
A: as red λ'in increases, all arriving blue packets at the upper queue are dropped, and blue throughput → 0
[Figure: four hosts sending over multihop paths through routers with finite shared output link buffers; λin is original data, λ'in is original data plus retransmitted data]

8. The costs of congestion: worst-case scenario 2 (continued)
[Figure: throughput λout vs. λ'in rises toward C/2, then collapses as λ'in keeps increasing]
• When a packet is dropped, any upstream transmission capacity used for that packet was wasted!

9. Approaches towards congestion control
two broad approaches towards congestion control:
• end-to-end congestion control: no explicit feedback from the network (just random early drops); congestion is inferred from end-system observed loss and delay; this is TCP's approach
• network-assisted congestion control: routers provide feedback to end systems, either a single bit indicating congestion (TCP/IP ECN) or an explicit rate for the sender to send at (XCP, RCP)

10. Chapter 3 outline
3.1 transport-layer services
3.2 multiplexing and demultiplexing
3.3 connectionless transport: UDP
3.4 principles of reliable data transfer
3.5 connection-oriented transport: TCP (segment structure, reliable data transfer, flow control, connection management)
3.6 principles of congestion control
3.7 TCP congestion control

11. TCP Congestion Control
• sender limits transmission: LastByteSent − LastByteAcked ≤ cwnd
• cwnd is dynamic, a function of perceived network congestion
• TCP sending rate, roughly: send cwnd bytes, wait one RTT for ACKs, then send more bytes
• rate ≈ cwnd / RTT bytes/sec
[Figure: sender sequence number space, showing the last byte ACKed, bytes sent but not yet ACKed ("in-flight"), the last byte sent, and the cwnd-sized window]
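(Not on the slide: a minimal sketch of the window limit and the resulting rate estimate; the helper names and example numbers are made up for illustration.)

```python
def can_send(last_byte_sent, last_byte_acked, cwnd):
    """Window limit: the amount of unACKed ("in-flight") data
    may not exceed the congestion window cwnd (all in bytes)."""
    return (last_byte_sent - last_byte_acked) < cwnd

def approx_send_rate(cwnd_bytes, rtt_seconds):
    """Rough sending rate: cwnd bytes per RTT."""
    return cwnd_bytes / rtt_seconds   # bytes/sec

# Example: cwnd = 14,600 bytes (ten 1460-byte segments), RTT = 100 ms
# gives roughly 146,000 bytes/sec, i.e. a bit over 1 Mbps.
print(approx_send_rate(14_600, 0.100))
```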

12. TCP Reno: additive increase, multiplicative decrease (AIMD)
• approach: the sender increases its transmission rate (window size), probing for usable bandwidth, until loss occurs
• additive increase: increase cwnd by 1 MSS every RTT until loss is detected
• multiplicative decrease: cut cwnd in half after loss
[Figure: AIMD sawtooth, probing for bandwidth; cwnd (TCP sender congestion window size) vs. time, growing additively until a loss, then cut in half]
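(Not on the slide: a tiny simulation of the AIMD sawtooth, one step per RTT. The fixed loss point is an illustrative assumption, just to show the shape.)

```python
LOSS_AT = 16   # pretend a loss is detected whenever cwnd reaches 16 MSS (illustrative only)

def aimd(rtts):
    """One step per RTT: additive increase of 1 MSS, halve on (simulated) loss."""
    cwnd = 1              # congestion window, counted in MSS
    trace = []
    for _ in range(rtts):
        trace.append(cwnd)
        if cwnd >= LOSS_AT:
            cwnd = max(cwnd // 2, 1)   # multiplicative decrease
        else:
            cwnd += 1                  # additive increase

    return trace

# The familiar sawtooth: ..., 14, 15, 16, 8, 9, 10, ...
print(aimd(40))
```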

13. TCP Slow Start
• when a connection begins, increase the rate exponentially until the first loss event:
• initially cwnd = 1 MSS
• double cwnd every RTT
• done by incrementing cwnd by 1 MSS for every ACK received
• summary: the initial rate is slow but ramps up exponentially fast
[Figure: Host A sends one segment, then two segments, then four segments, doubling each RTT as ACKs return from Host B]
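(Not on the slide: a quick check that the per-ACK increment really gives per-RTT doubling; the counts are illustrative.)

```python
def slow_start(rtts):
    """Slow start counted in segments: cwnd grows by 1 segment per ACK, and a
    full window of ACKs arrives each RTT, so cwnd doubles every RTT."""
    cwnd_segments = 1
    history = [cwnd_segments]
    for _ in range(rtts):
        acks = cwnd_segments          # one ACK per segment sent last RTT
        cwnd_segments += acks         # +1 segment per ACK  =>  doubling
        history.append(cwnd_segments)
    return history

print(slow_start(5))   # [1, 2, 4, 8, 16, 32]
```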

14. TCP: detecting, reacting to loss
• loss indicated by timeout: cwnd is set to 1 MSS; the window then grows exponentially (as in slow start) to a threshold, then grows linearly
• loss indicated by 3 duplicate ACKs (TCP Reno): dup ACKs indicate the network is still capable of delivering some segments, so cwnd is cut in half and the window then grows linearly
• TCP Tahoe always sets cwnd to 1 MSS (on timeout or 3 duplicate ACKs)
(see the state-machine sketch after slide 16)

15. TCP: switching from slow start to congestion avoidance (CA)
Q: when should the exponential increase switch to linear?
A: when cwnd gets to 1/2 of its value before timeout.
Implementation: a variable ssthresh; on a loss event, ssthresh is set to 1/2 of cwnd just before the loss event.

16. Summary: TCP Congestion Control (Reno state machine)
Initialization: cwnd = 1 MSS, ssthresh = 64 KB, dupACKcount = 0
Slow start:
• new ACK: cwnd = cwnd + MSS; dupACKcount = 0; transmit new segment(s), as allowed
• duplicate ACK: dupACKcount++
• cwnd > ssthresh: enter congestion avoidance
• timeout: ssthresh = cwnd/2; cwnd = 1 MSS; dupACKcount = 0; retransmit missing segment
• dupACKcount == 3: ssthresh = cwnd/2; cwnd = ssthresh + 3 MSS; retransmit missing segment; enter fast recovery
Congestion avoidance:
• new ACK: cwnd = cwnd + MSS·(MSS/cwnd); dupACKcount = 0; transmit new segment(s), as allowed
• duplicate ACK: dupACKcount++
• timeout: ssthresh = cwnd/2; cwnd = 1 MSS; dupACKcount = 0; retransmit missing segment; enter slow start
• dupACKcount == 3: ssthresh = cwnd/2; cwnd = ssthresh + 3 MSS; retransmit missing segment; enter fast recovery
Fast recovery:
• duplicate ACK: cwnd = cwnd + MSS; transmit new segment(s), as allowed
• new ACK: cwnd = ssthresh; dupACKcount = 0; enter congestion avoidance
• timeout: ssthresh = cwnd/2; cwnd = 1 MSS; dupACKcount = 0; retransmit missing segment; enter slow start
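(Not in the transcript: a compact sketch of the state machine above as code. It only models the cwnd/ssthresh bookkeeping from the slide; event names and the MSS value are illustrative, and this is not a real TCP implementation.)

```python
MSS = 1460  # bytes (a typical value; illustrative)

class RenoSender:
    """Simplified TCP Reno congestion control, mirroring the slide-16 state machine."""

    def __init__(self):
        self.cwnd = 1 * MSS
        self.ssthresh = 64 * 1024
        self.dup_acks = 0
        self.state = "slow_start"

    def on_new_ack(self):
        if self.state == "slow_start":
            self.cwnd += MSS                      # exponential growth
            if self.cwnd > self.ssthresh:
                self.state = "congestion_avoidance"
        elif self.state == "congestion_avoidance":
            self.cwnd += MSS * MSS // self.cwnd   # roughly +1 MSS per RTT
        else:                                     # fast recovery ends on the first new ACK
            self.cwnd = self.ssthresh
            self.state = "congestion_avoidance"
        self.dup_acks = 0

    def on_dup_ack(self):
        if self.state == "fast_recovery":
            self.cwnd += MSS                      # inflate the window per extra dup ACK
            return
        self.dup_acks += 1
        if self.dup_acks == 3:                    # fast retransmit
            self.ssthresh = self.cwnd // 2
            self.cwnd = self.ssthresh + 3 * MSS
            self.state = "fast_recovery"

    def on_timeout(self):
        self.ssthresh = self.cwnd // 2
        self.cwnd = 1 * MSS
        self.dup_acks = 0
        self.state = "slow_start"
```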

17. TCP throughput
• avg. TCP throughput as a function of window size and RTT?
• ignore slow start, assume there is always data to send
• W: window size (measured in bytes) at which loss occurs
• avg. window size (# in-flight bytes) is 3/4 W
• avg. throughput is 3/4 W per RTT:
avg TCP throughput = (3/4) · W / RTT  bytes/sec
[Figure: sawtooth of window size oscillating between W/2 and W]
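(Not spelled out on the slide: the one-line averaging step behind the 3/4 factor, assuming the sawtooth grows linearly between W/2 and W.)

```latex
% Average of a linear sawtooth that oscillates between W/2 and W:
\bar{W} \;=\; \frac{\tfrac{W}{2} + W}{2} \;=\; \frac{3}{4}W
\qquad\Longrightarrow\qquad
\text{avg throughput} \;=\; \frac{\bar{W}}{\mathrm{RTT}} \;=\; \frac{3}{4}\,\frac{W}{\mathrm{RTT}}\ \text{bytes/sec}
```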

18. TCP Fairness
fairness goal: if K TCP sessions share the same bottleneck link of bandwidth R, each should have an average rate of R/K
[Figure: TCP connection 1 and TCP connection 2 share a bottleneck router of capacity R]

19. Why is TCP fair?
two competing sessions:
• additive increase gives a slope of 1 as throughput increases
• multiplicative decrease decreases throughput proportionally
[Figure: Connection 1 throughput vs. Connection 2 throughput, both axes up to R; alternating rounds of "congestion avoidance: additive increase" and "loss: decrease window by factor of 2" converge toward the equal-bandwidth-share line]
http://youtu.be/-RlMM4NjUls?t=1m23s
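(Not on the slide: a tiny simulation of the phase-plot argument, assuming both flows see a loss whenever their combined rate exceeds the link capacity; that synchronized-loss idealization is an assumption.)

```python
R = 100.0   # bottleneck capacity (arbitrary units)

def aimd_pair(x1, x2, rounds=200, add=1.0):
    """Two AIMD flows sharing one bottleneck: both add `add` per round, and
    both halve whenever their sum exceeds R (idealized synchronized loss)."""
    for _ in range(rounds):
        x1 += add
        x2 += add
        if x1 + x2 > R:
            x1 /= 2
            x2 /= 2
    return x1, x2

# Starting far apart (90 vs. 10), the gap between the two rates is halved at
# every loss event, so they converge toward an equal share of R.
print(aimd_pair(90.0, 10.0))
```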

20. Fairness (more)
Fairness and UDP:
• multimedia apps often do not use TCP; they do not want their rate throttled by congestion control
• instead they use UDP: send audio/video at a constant rate, tolerate packet loss (and maybe drop quality if loss is too bad)
Fairness and parallel TCP connections:
• an application can open multiple parallel connections between two hosts; web browsers do this
• e.g., a link of rate R with 9 existing connections: a new app asking for 1 TCP connection gets rate R/10; a new app asking for 11 TCP connections gets about R/2 (11 of 20 connections, so 11R/20)
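(Not on the slide: the small per-connection-share calculation behind those numbers, assuming the bottleneck's capacity is split equally per TCP connection.)

```python
def app_share(R, existing_connections, new_connections):
    """If rate R is split equally per TCP connection, an app opening
    `new_connections` of them gets this much of R."""
    total = existing_connections + new_connections
    return R * new_connections / total

R = 1.0
print(app_share(R, 9, 1))    # 0.10  -> R/10
print(app_share(R, 9, 11))   # 0.55  -> 11R/20, roughly R/2
```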

21. TCP Futures: TCP over "long, fat pipes"
• example: 1500-byte segments, 100 ms RTT, want 10 Gbps throughput
• requires W = 83,333 in-flight segments
• throughput in terms of segment loss probability, L [Mathis 1997]:
TCP Reno throughput = 1.22 · MSS / (RTT · √L)
➜ to achieve 10 Gbps throughput, need a loss rate of L = 2·10⁻¹⁰, a very small loss rate!
• many new versions of TCP (like a billion of them) for high-BDP paths
• TCP CUBIC (now default in Linux)
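(Not on the slide: a quick numeric check of that loss rate and window size using the Mathis formula above; expressing MSS and throughput in bits is my unit convention, not the lecture's.)

```python
MSS = 1500 * 8        # segment size in bits
RTT = 0.100           # seconds
target = 10e9         # 10 Gbps

# Mathis et al.: throughput = 1.22 * MSS / (RTT * sqrt(L)); solve for L.
L = (1.22 * MSS / (RTT * target)) ** 2
print(L)                     # ~2.1e-10, matching the slide's L = 2e-10

# Required window: the bandwidth-delay product, in segments.
print(target * RTT / MSS)    # ~83,333 in-flight segments
```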

22. TCP CUBIC
• window grows as a cubic function of the time since the last congestion event
• Benefits:
• quickly recovers after a drop
• caution near the previous loss point keeps the window large for longer (cf. TCP Reno)
• rapid increase in "new territory" takes advantage of major changes (like a competing flow quitting)
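(Not on the slide: a sketch of the cubic growth function as specified in RFC 8312; the constants C and β below are the RFC's defaults, not values given in the lecture.)

```python
BETA = 0.7    # multiplicative decrease factor (RFC 8312 default)
C = 0.4       # scaling constant (RFC 8312 default)

def cubic_window(t, w_max):
    """CUBIC congestion window (in MSS) as a function of the time t (seconds)
    since the last congestion event, where w_max is the window at that event.
    K is the time at which the curve returns to w_max: the curve is flat
    (cautious) around t = K and steep (aggressive) well before or after it."""
    k = ((w_max * (1 - BETA)) / C) ** (1.0 / 3.0)
    return C * (t - k) ** 3 + w_max

# Example: after a loss at w_max = 100 MSS, the window starts near 70 MSS,
# plateaus around 100 MSS, then accelerates into "new territory".
for t in (0, 2, 4, 6, 8):
    print(t, round(cubic_window(t, 100.0), 1))
```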
