
The Buffer - Bandwidth Tradeoff in Lossless Networks



Presentation Transcript


  1. The Buffer - Bandwidth Tradeoff in Lossless Networks Alex Shpiner, Eitan Zahavi, Ori Rottenstreich April 2014

  2. Background: Network Incast Congestion. Three flows of 100% each converge on a single output link, offering 300% load to a 100% link. Source: http://theithollow.com/2013/05/flow-control-explained/

  3. Background: Pause Frame Flow Control. When the switch buffer fills, the switch sends pause frames upstream to stop the sender. Source: http://theithollow.com/2013/05/flow-control-explained/

  4. Background: Incast with Pause Frame Flow Control. Pause frames throttle the three senders to 33% each, together filling the 100% output link. Source: http://theithollow.com/2013/05/flow-control-explained/

  5. Background: Congestion Spreading Problem
  • This flow is also paused, since pause flow control does not distinguish between flows: it gets 33% instead of 66%.
  • Small buffers → link pauses → congestion spreading → effective link bandwidth decreases.
  • Effective link bandwidth = link bandwidth * %unpaused.
  • To deal with incast we can:
    • Increase buffers
    • Increase link bandwidth
  • Trade-off between the two.
  Source: http://theithollow.com/2013/05/flow-control-explained/
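The effective-bandwidth formula on this slide is easy to check numerically. A minimal sketch (the function name and the 100%-link example are ours, chosen to match the 33% figures on the slides):

```python
# Effective bandwidth under pause-frame flow control, per the slide's
# formula: effective = link bandwidth * fraction of time unpaused.

def effective_bandwidth(link_bw: float, unpaused_fraction: float) -> float:
    """Bandwidth actually delivered on a link that is paused part of the time."""
    return link_bw * unpaused_fraction

# Three senders incast into one 100% output link: each sender is unpaused
# one third of the time, so each effectively delivers about 33% (slide 4).
per_sender = effective_bandwidth(100.0, 1 / 3)
print(f"each sender: {per_sender:.1f}% effective")
```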

  6. Buffer-Bandwidth Tradeoff
  • Higher bandwidth allows:
    • Faster buffer draining.
    • Pausing the link for longer periods while achieving the same effective bandwidth → reduced buffering demand.
  • Aim: evaluate the buffer-bandwidth tradeoff.
  • Assumptions:
    • Lossless network.
    • Congestion spreading avoidance is desired.
  • Effective bandwidth = link bandwidth * %unpaused.
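A consequence of the first bullet can be made concrete: if the link runs α times faster but only the original effective bandwidth C is required, then effective = α·C * %unpaused ≥ C permits a pause fraction of up to 1 − 1/α. A small sketch under that reading of the slide (the α values are illustrative):

```python
# If the link is accelerated by a factor alpha but only the original
# effective bandwidth C is required, then effective = alpha*C*unpaused >= C
# allows a pause fraction of up to 1 - 1/alpha (our reading of the slide).

def allowed_pause_fraction(alpha: float) -> float:
    """Maximum fraction of time the link may stay paused, for alpha >= 1."""
    if alpha < 1.0:
        raise ValueError("accelerated link must be at least the original rate")
    return 1.0 - 1.0 / alpha

for alpha in (1.0, 1.4, 2.0):
    print(f"alpha = {alpha:.1f}: may pause "
          f"{allowed_pause_fraction(alpha):.0%} of the time")
```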

  7. Evaluation Flow
  1) Assume a network with links of bandwidth C and buffers of size B.
  2) Define the most challenging workload the network can handle without congestion spreading.
  3) Increase link bandwidth by a factor α and reduce buffer size by a factor β; transmit the workload at the same rate, keeping the same effective link bandwidth.
  4) Evaluate the relation between α and β that can still handle the workload from step 2.

  8. Network Model. The most challenging workload: synchronized bursts.

  9. Determining the Workload Parameters
  • Assumption: the output link is 100% utilized.
  • Arrival rate: N senders transmitting simultaneously at C give an aggregate arrival rate of N·C into the queue.
  • Burst length: it takes T = B / (N·C − C) = B / ((N−1)·C) to fill a buffer of size B at arrival rate N·C and departure rate C.
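The fill time on this slide can be computed directly: with N senders arriving at C each while the output drains at C, the queue grows at (N−1)·C. A numeric sketch (the buffer size, incast degree, and link rate below are illustrative, not from the slides):

```python
# Time to fill the switch buffer in an N-to-1 incast: arrivals at N*C,
# departures at C, so the queue grows at rate (N-1)*C.

def fill_time_us(buffer_bytes: float, n_senders: int, link_gbps: float) -> float:
    """Microseconds until a buffer of buffer_bytes fills at rate (N-1)*C."""
    growth_bits_per_us = (n_senders - 1) * link_gbps * 1000.0  # Gb/s -> bits/us
    return buffer_bytes * 8.0 / growth_bits_per_us

# Illustrative numbers: 1 MB buffer, 4-to-1 incast, 40 Gb/s links
# -> the buffer fills in roughly 67 microseconds.
print(f"{fill_time_us(1_000_000, 4, 40.0):.1f} us")
```

At these rates the buffer fills in tens of microseconds, which is why pause frames (rather than end-to-end reaction) are needed to keep the network lossless.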

  10. Effect of Buffer Size Reduction with Link Acceleration
  (Figure: arriving traffic and input queue size, before and after acceleration.)
  • Conclusions:
    • We can push more traffic.
    • When the buffer is full, the link is in paused mode (congestion spreading).

  11. Effect of Buffer Size Reduction with Link Acceleration – cont.
  • When the buffer is full, the link is in paused mode.
  • Paused mode → congestion spreading.
  • By how much can the buffer be reduced (β) while still avoiding congestion spreading?
  • For a low incast degree: 40% of buffer saving!
  • For a high incast degree: only 5% of buffer saving.
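One simple fluid model reproduces savings of this shape (a sketch under our own assumptions, not necessarily the slides' equations (1) and (2)): if the original buffer is sized as B = (N−1)·C·T for a burst of length T, then after accelerating the link to α·C the same burst fills the buffer at only (N−α)·C, so a fraction β = (N−α)/(N−1) of the original buffer avoids pausing. With α = 1.4 this gives a 40% saving at N = 2 but only 5% at N = 9, consistent with the figures quoted above:

```python
# Fluid-model sketch (our assumptions, not the slides' equations (1)-(2)):
# the original buffer absorbs a burst of length T at fill rate (N-1)*C,
# i.e. B = (N-1)*C*T.  After accelerating the link to alpha*C the same
# burst fills the buffer at (N-alpha)*C, so a fraction
# beta = (N-alpha)/(N-1) of the original buffer suffices to avoid pausing
# (clamped at 0 when alpha >= N).

def min_buffer_fraction(n_senders: int, alpha: float) -> float:
    """Smallest beta avoiding pause in an N-incast with link acceleration alpha."""
    return max(0.0, (n_senders - alpha) / (n_senders - 1))

for n in (2, 4, 9):
    beta = min_buffer_fraction(n, 1.4)
    print(f"N = {n}: beta = {beta:.2f}, buffer saving = {1 - beta:.0%}")
```

Note how the saving (α − 1)/(N − 1) shrinks as the incast degree N grows, matching the contrast between the large and small savings on this slide.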

  12. Effect of Buffer Size Reduction with Link Acceleration – cont.
  • Observation: we are allowed to pause the link, since we increased the link capacity.
  • For α = 1.4 (a 40% link-rate increase), using (1) and (2): we can save 47% of the buffer size and get the same performance!
  • And we can also push more data in total (e.g. 56 Gbps vs. 40 Gbps), at the cost of congestion spreading.

  13. Asymptotic Analysis
  • For α ≥ 2, no buffering is required in the switches!
  • By increasing link bandwidth by X%, the buffer saving is at least X%, for any incast load.

  14. Multiple Incast Cascade Analysis
  • The last rank defines the workload parameters: Rank 1, Rank 2, …, Rank n.
  • For rank n with α > 1, β < 1: the analysis is similar to the single-rank case, but the traffic now arrives at a rate determined by the previous ranks.

  15. Conclusions
  • We can reduce switch buffer size while still pushing the same traffic.
  • But we pay with:
    • Congestion spreading (pause frames).
    • Buffers at the traffic sources (NICs).
  • By increasing link bandwidth, we can reduce the congestion spreading and push more traffic.
  • We can save X% of buffer size with an X% link-rate increase (for any incast).
  • When increasing link bandwidth by a factor of at least 2, no buffering is required at the switches.
  • The results also hold for the multiple incast cascade.
