
Router-assisted congestion control


Presentation Transcript


  1. Router-assisted congestion control Lecture 8 CS 653, Fall 2010

  2. TCP congestion control performs poorly as bandwidth or delay increases. Shown analytically in [Low01] and via simulations. [Two plots: Avg. TCP Utilization vs. Bottleneck Bandwidth (Mb/s) at RTT = 80 ms, and Avg. TCP Utilization vs. Round Trip Delay (sec) at BW = 155 Mb/s; both with 50 flows in each direction and Buffer = BW x Delay.]
  • Because TCP lacks fast response: when spare bandwidth is available, TCP increases by only 1 pkt/RTT, even if the spare bandwidth is huge.
  • When a TCP flow starts, it increases exponentially → too many drops → flows then ramp up by 1 pkt/RTT, taking forever to grab the large bandwidth.
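  To make the 1 pkt/RTT ramp-up concrete, a back-of-the-envelope sketch in Python (the 10 Gb/s bottleneck and 1500-byte packets are assumed for illustration; only the 80 ms RTT comes from the plot):

    # How long does additive increase (+1 packet per RTT) take to fill a big pipe?
    BW = 10e9           # assumed bottleneck bandwidth, bits/s
    RTT = 0.08          # round-trip time from the slide, seconds
    PKT = 1500 * 8      # assumed packet size, bits

    cwnd_target = BW * RTT / PKT     # packets in flight needed to fill the pipe
    ramp_time = cwnd_target * RTT    # growing from ~0 at +1 packet per RTT

    print(f"target window: {cwnd_target:.0f} packets")
    print(f"ramp-up time: {ramp_time:.0f} s (~{ramp_time / 3600:.1f} hours)")
    # ~66667 packets and ~5333 s: roughly 1.5 hours to grab the spare bandwidth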

  3. Proposed Solution: Decouple Congestion Control from Fairness. [Diagram: the congestion-control task delivers high utilization, small queues, and few drops; the fairness task implements the bandwidth allocation policy.]

  4. Proposed Solution: Decouple Congestion Control from Fairness. The two are coupled because a single mechanism controls both; for example, in TCP, Additive-Increase Multiplicative-Decrease (AIMD) controls both. How does decoupling solve the problem?
  • To control congestion: use MIMD, which gives fast response.
  • To control fairness: use AIMD, which converges to fairness.
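  A minimal sketch of the two control laws, applied once per RTT (the increase/decrease constants here are illustrative, not from the slides):

    def aimd(cwnd, loss, a=1.0, b=0.5):
        # Additive Increase, Multiplicative Decrease: slow to grab spare
        # bandwidth, but competing flows converge to a fair allocation.
        return b * cwnd if loss else cwnd + a

    def mimd(cwnd, loss, a=2.0, b=0.5):
        # Multiplicative Increase, Multiplicative Decrease: fast response
        # to spare bandwidth, but does not converge to fairness on its own.
        return b * cwnd if loss else a * cwnd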

  5. Characteristics of Solution
  • Improved congestion control (in high bandwidth-delay and conventional environments): small queues, almost no drops.
  • Improved fairness.
  • Flexible bandwidth allocation: max-min fairness, proportional fairness, differential bandwidth allocation, …
  • Scalable (no per-flow state).

  6. XCP: An eXplicit Control Protocol • Congestion Controller • Fairness Controller

  7. How does XCP Work? Each packet carries a Congestion Header with three fields: Round Trip Time, Congestion Window, and Feedback. The sender fills in its current RTT and congestion window, plus an initial feedback request, e.g. Feedback = +0.1 packet.

  8. How does XCP Work? Routers along the path update the Feedback field in the Congestion Header; a congested router can reduce it, e.g. from Feedback = +0.1 packet to Feedback = -0.3 packet.

  9. How does XCP Work? The feedback is returned to the sender, which applies it: Congestion Window = Congestion Window + Feedback. XCP extends ECN and CSFQ. Routers compute feedback without any per-flow state.
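  A sketch of the congestion header and the sender-side update (the field and function names are mine, for illustration, not from the XCP spec):

    from dataclasses import dataclass

    @dataclass
    class CongestionHeader:
        rtt: float        # sender's current RTT estimate, carried in each packet
        cwnd: float       # sender's current congestion window, in packets
        feedback: float   # desired window change; routers may only reduce it

    def on_ack(cwnd: float, hdr: CongestionHeader) -> float:
        # Congestion Window = Congestion Window + Feedback
        return max(1.0, cwnd + hdr.feedback)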

  10. How Does an XCP Router Compute the Feedback?
  Congestion Controller. Goal: match input traffic to link capacity and drain the queue. Looks at aggregate traffic and queue; uses MIMD. Algorithm: aggregate traffic changes by Φ, where Φ grows with spare bandwidth and shrinks with queue size: Φ = α · d_avg · Spare - β · Queue.
  Fairness Controller. Goal: divide Φ between flows so as to converge to fairness. Looks at a flow's state in the Congestion Header; uses AIMD. Algorithm: if Φ > 0, divide Φ equally between flows; if Φ < 0, divide Φ between flows proportionally to their current rates.
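  A sketch of the congestion controller's aggregate feedback (α = 0.4 and β = 0.226 are the constants from the XCP paper; everything else is simplified):

    def aggregate_feedback(capacity, input_rate, queue, d_avg,
                           alpha=0.4, beta=0.226):
        # Phi = alpha * d_avg * (spare bandwidth) - beta * (persistent queue):
        # positive when the link is underused, negative when a queue builds up.
        spare = capacity - input_rate
        return alpha * d_avg * spare - beta * queue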

  11. Getting the devil out of the details …
  Congestion Controller: Φ = α · d_avg · Spare - β · Queue. Theorem: the system converges to optimal utilization (i.e., is stable) for any link bandwidth, delay, and number of sources if 0 < α < π/(4√2) and β = α²√2 (proof based on the Nyquist criterion).
  Fairness Controller, with no per-flow state: if Φ > 0, divide Φ equally between flows; if Φ < 0, divide Φ between flows proportionally to their current rates. This requires an estimate of the number of flows N, computed from RTT_pkt (Round Trip Time in header), Cwnd_pkt (Congestion Window in header), and T (counting interval), as sketched below.
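  One way to estimate N from the header fields alone (a sketch): a flow with window cwnd and round-trip time rtt sends about T * cwnd / rtt packets in an interval T, so if every packet contributes rtt / (T * cwnd), each flow's contributions sum to about 1.

    def estimate_flows(header_samples, T):
        # header_samples: (RTT_pkt, Cwnd_pkt) pairs from packets seen during T.
        # Each flow's packets contribute ~1 in total, so the sum estimates N
        # without keeping any per-flow state.
        return sum(rtt / (T * cwnd) for rtt, cwnd in header_samples)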

  12. Implementation
  • The implementation uses only a few multiplications and additions per packet: practical!
  • Liars? Policing agents at the edges of the network, or statistical monitoring; easier to detect than in TCP.
  • Gradual deployment: XCP can co-exist with TCP and can be deployed gradually.

  13. Performance

  14. Subset of Results. [Topology: senders S1, S2, …, Sn share a single bottleneck link to receivers R1, R2, …, Rn.] Similar behavior over: …

  15. XCP Remains Efficient as Bandwidth or Delay Increases. [Two plots: Avg. Utilization vs. Bottleneck Bandwidth (Mb/s), and Avg. Utilization vs. Round Trip Delay (sec).]

  16. XCP Remains Efficient as Bandwidth or Delay Increases. α and β are chosen to make XCP robust to delay; XCP increases proportionally to spare bandwidth. [Same two plots: Avg. Utilization vs. Bottleneck Bandwidth (Mb/s), and Avg. Utilization vs. Round Trip Delay (sec).]

  17. XCP Shows Faster Response than TCP. [Time series with "Start 40 Flows" and "Stop the 40 Flows" events marked: XCP re-allocates bandwidth quickly when flows start or stop.] XCP shows fast response!

  18. XCP Deals Well with Short Web-Like Flows. [Three plots vs. Arrivals of Short Flows/sec: Average Utilization, Average Queue, and Drops.]

  19. XCP is Fairer than TCP. [Two plots of Avg. Throughput vs. Flow ID: one where all flows share the same RTT, one where RTTs differ, ranging from 40 ms to 330 ms.]

  20. XCP Summary
  • XCP outperforms TCP: efficient for any bandwidth, efficient for any delay, scalable.
  • Benefits of decoupling: use MIMD for congestion control, which can grab/release large bandwidth quickly; use AIMD for fairness, which converges to a fair bandwidth allocation.

  21. XCP Pros and Cons
  • Long-lived flows: works well. Convergence to fair share rates, high link utilization, small queue, low loss.
  • Mix of flow lengths: deviates from processor sharing. Non-trivial convergence time; flow durations are longer than under processor sharing.

  22. ATM ABR congestion control
  • ABR (available bit rate): "elastic service". If the sender's path is "underloaded", the sender should use the available bandwidth; if the sender's path is congested, the sender is throttled to its minimum guaranteed rate.
  • RM (resource management) cells: sent by the sender, interspersed with data cells. Bits in the RM cell are set by switches ("network-assisted"): NI bit = no increase in rate (mild congestion); CI bit = congestion indication. RM cells are returned to the sender by the receiver, with the bits intact.

  23. ATM ABR congestion control
  • Two-byte ER (explicit rate) field in the RM cell: a congested switch may lower the ER value in the cell, so the sender's send rate becomes the minimum supportable rate on the path.
  • EFCI bit in data cells: set to 1 by a congested switch. If the data cell preceding an RM cell has EFCI set, the receiver sets the CI bit in the returned RM cell.
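  A sketch of the switch-side marking rules (struct and function names are mine; ER is modeled as a plain number rather than the two-byte encoding):

    from dataclasses import dataclass

    @dataclass
    class RMCell:
        ni: bool = False            # no-increase bit (mild congestion)
        ci: bool = False            # congestion-indication bit
        er: float = float("inf")    # explicit rate field

    def switch_mark(cell, mild_congestion, congestion, supportable_rate):
        # A switch only ever tightens the feedback; the receiver then returns
        # the cell to the sender with the bits intact.
        if mild_congestion:
            cell.ni = True
        if congestion:
            cell.ci = True
        cell.er = min(cell.er, supportable_rate)   # switches may only lower ER
        return cell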

  24. ATM ERICA Switch Algorithm. ERICA: Explicit Rate Indication for Congestion Avoidance. Goals:
  • Utilization: allocate all available capacity to ABR flows.
  • Queueing delay: keep the queue small.
  • Fairness: max-min, "sought only after utilization achieved" (decoupled from utilization?).
  • Stability, i.e., reaches a steady state; and robustness, i.e., degrades gracefully when there is no steady state.

  25. ERICA: Setting explicit rate (ER)
  Initialization:
      MaxAllocPrev = MaxAllocCur = FairShare
  End of averaging interval:
      Total ABR Cap. = Link Cap. - VBR Cap.
      Target ABR Cap. = Fraction * Total ABR Cap.
      Z = ABR Input Rate / Target ABR Cap.        (overload factor)
      FairShare = Target ABR Cap. / # Active VCs
      Goto Initialization
  During congestion:
      VCShare = VCRate / Z
      If (Z > 1 + δ)  ER = max(FairShare, VCShare)
      Else            ER = max(MaxAllocPrev, VCShare)
      MaxAllocCur = max(MaxAllocCur, ER)
      If (ER > FairShare and VCRate < FairShare)  ER = FairShare
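  The same logic as a runnable Python sketch. Names follow the slide; the capacity fraction and δ are illustrative defaults, Z is reconstructed as input rate over target capacity (ERICA's overload factor), and the end-of-interval update follows the usual ERICA formulation, where MaxAllocPrev takes the previous interval's MaxAllocCur, which the slide abbreviates as "Goto Initialization":

    class EricaSwitch:
        def __init__(self, link_cap, fraction=0.95, delta=0.1):
            self.link_cap = link_cap      # total link capacity
            self.fraction = fraction      # fraction of ABR capacity to target
            self.delta = delta            # overload tolerance around Z = 1
            self.fair_share = 0.0
            self.max_alloc_prev = self.max_alloc_cur = 0.0
            self.z = 1.0

        def end_of_interval(self, abr_input_rate, vbr_cap, n_active_vcs):
            target = self.fraction * (self.link_cap - vbr_cap)  # Target ABR Cap.
            self.z = abr_input_rate / target                    # overload factor
            self.fair_share = target / max(1, n_active_vcs)
            self.max_alloc_prev = self.max_alloc_cur            # carry over
            self.max_alloc_cur = self.fair_share                # reset tracker

        def on_rm_cell(self, vc_rate, er_in_cell):
            vc_share = vc_rate / self.z
            if self.z > 1 + self.delta:                # link overloaded
                er = max(self.fair_share, vc_share)
            else:                                      # underloaded: allow ramp-up
                er = max(self.max_alloc_prev, vc_share)
            self.max_alloc_cur = max(self.max_alloc_cur, er)
            if er > self.fair_share and vc_rate < self.fair_share:
                er = self.fair_share
            return min(er_in_cell, er)                 # switches only lower ER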

  26. ABR vs. XCP or RCP? • Similarities? • Differences?
