
Sharing Cloud Networks

Lucian Popa, Gautam Kumar, Mosharaf Chowdhury, Arvind Krishnamurthy, Sylvia Ratnasamy, Ion Stoica. UC Berkeley.


Presentation Transcript


  1. Sharing Cloud Networks. Lucian Popa, Gautam Kumar, Mosharaf Chowdhury, Arvind Krishnamurthy, Sylvia Ratnasamy, Ion Stoica. UC Berkeley.

  2. State of the Cloud Network???

  3. Guess the Share
  Alice's VMs A1, A2, A3 and Bob's VMs B1, B2 send traffic across a shared link L. Alice : Bob = ? : ? Depending on the sharing policy (TCP per-flow, per-source, per-destination, or per-VM proportional), the split comes out as 1 : 1, 2 : 1, 3 : 2, or 3 : 1.
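The policy-dependence of the split can be reproduced numerically. A minimal sketch, assuming a hypothetical set of flows crossing link L (A1 and A2 sending to A3, B1 sending to B2); the flow set and helper names are illustrative, not from the slides:

```python
from collections import defaultdict

# Hypothetical flows crossing link L: (source VM, destination VM, tenant).
flows = [("A1", "A3", "Alice"), ("A2", "A3", "Alice"), ("B1", "B2", "Bob")]

def shares(weight_of_flow):
    """Split the link in proportion to per-flow weights, summed per tenant."""
    w = defaultdict(float)
    for f in flows:
        w[f[2]] += weight_of_flow(f)
    total = sum(w.values())
    return {t: v / total for t, v in w.items()}

# Per-flow (TCP-like): every flow has equal weight.
per_flow = shares(lambda f: 1.0)

# Per-source: each source VM's weight is split across its flows.
src_count = defaultdict(int)
for s, d, t in flows:
    src_count[s] += 1
per_source = shares(lambda f: 1.0 / src_count[f[0]])

# Per-destination: each destination VM's weight is split across its flows.
dst_count = defaultdict(int)
for s, d, t in flows:
    dst_count[d] += 1
per_dest = shares(lambda f: 1.0 / dst_count[f[1]])

print(per_flow)    # Alice owns 2 of the 3 flows -> 2/3 : 1/3
print(per_source)  # one flow per source VM here -> also 2/3 : 1/3
print(per_dest)    # one destination VM per tenant -> 1/2 : 1/2
```

Even for this tiny flow set the policies disagree, which is the point of the slide: the "fair" answer depends entirely on what you count.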

  4. Challenges
  The network share of a virtual machine (VM) V depends on
  • collocated VMs,
  • placement of destination VMs, and
  • cross-traffic on each link used by V.
  The network differs from CPU or RAM:
  • it is a distributed resource, and
  • usage attribution is unclear (source, destination, or both?).
  Traditional link-sharing concepts need rethinking.

  5. Requirements
  • Min Bandwidth Guarantee: introduce performance predictability.
  • Aggregate Proportionality: network shares proportional to the number of VMs.
  • High Utilization: do not leave bandwidth unused if there is demand.

  6. Requirement 1: Guaranteed Minimum B/W
  Provides a minimum bandwidth guarantee (BminA, BminB, ..., BminX) for each VM.
  Captures the desire of tenants to get performance isolation for their applications.

  7. Requirement 2: Aggregate Proportionality
  Shares network resources across tenants in proportion to the number of their VMs.
  Captures payment proportionality, similar to other resources like CPU, RAM, etc.
  Desirable properties:
  • Strategy-proofness: allocations cannot be gamed.
  • Symmetry: reversing the directions of flows does not change the allocation.

  8. Design Space: requirements High Utilization, Aggregate Proportionality, Min Bandwidth Guarantee; properties Symmetry, Strategy-Proofness.

  9. Requirement 3: High Utilization
  Provides incentives such that throughput is only constrained by the network capacity, not by the inefficiency of the allocation or by disincentives for users to send traffic.
  Desirable properties:
  • Work conservation: full utilization of bottleneck links.
  • Independence: independent allocation of one VM's traffic across independent paths.

  10. Design Space, Tradeoff 1: Min Bandwidth Guarantee vs. Aggregate Proportionality (alongside High Utilization; properties: Work Conservation, Symmetry, Strategy-Proofness, Independence).

  11. Tradeoff 1: Min B/W vs. Proportionality
  Example on a link L with capacity C: as tenant B adds VMs (B3, ...), aggregate proportionality keeps shrinking tenant A's share (from C/2 toward C/3 and below in the figure). The share of tenant A can decrease arbitrarily, so it cannot stay above any fixed minimum guarantee.
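The arithmetic behind this tradeoff is short: under aggregate proportionality, Alice's share of a capacity-C link is C * N_A / (N_A + N_B), which tends to zero as Bob adds VMs. A quick illustration with assumed numbers (2 Alice VMs, growing Bob VM counts):

```python
# Aggregate proportionality on a link of capacity C: a tenant's share is
# proportional to its VM count. As Bob adds VMs, Alice's share shrinks
# without bound, so no fixed minimum bandwidth guarantee can hold.
C = 1.0
alice_vms = 2
for bob_vms in (1, 2, 3, 10, 100):
    alice_share = C * alice_vms / (alice_vms + bob_vms)
    print(bob_vms, round(alice_share, 4))
# With 1 Bob VM Alice gets 2C/3; with 2, C/2; with 100, under 2% of C.
```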

  12. Design Space, Tradeoff 2: Aggregate Proportionality vs. High Utilization (properties: Work Conservation, Symmetry, Strategy-Proofness, Independence).

  13. Tradeoff 2: Proportionality vs. Utilization
  Example: on a link L with capacity C, the flows of tenants A (A1-A4) and B (B1-B4) each receive C/4. To maintain proportionality, an equal amount of traffic must be moved from A1-A2 to A1-A3, leading to underutilization of A1-A3.

  14. Per-link Proportionality
  Restrict proportionality to congested links only: the share of a tenant on a congested link is proportional to the number of its VMs sending traffic on that link.

  15. Per Endpoint Sharing (PES)
  Five identical VMs (with unit weights) share a link L: A1, A2, A3 (tenant A) and B1, B2 (tenant B).

  16. Per Endpoint Sharing (PES)
  The weight of a flow between VMs A and B on link L generalizes to W_A-B = W_A / N_A + W_B / N_B, where N_X is the number of flows of VM X on link L. In the example, the two A flows share a common endpoint with N_A = 2, so each gets weight 1/1 + 1/2 = 3/2; the single B flow gets weight 1/1 + 1/1 = 2. Resulting weights of the three flows: 3/2, 3/2, and 2.
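The PES weights on this slide can be checked with a short sketch. The helper name and the exact flow endpoints are assumptions (chosen to match the stated result of two A flows sharing one endpoint); the formula W_A-B = W_A / N_A + W_B / N_B is from the slide:

```python
from collections import defaultdict

def pes_weights(flows, vm_weight):
    """Per Endpoint Sharing: the weight of flow A-B on a link is
    W_A / N_A + W_B / N_B, where N_X counts VM X's flows on that link."""
    n = defaultdict(int)
    for a, b in flows:
        n[a] += 1
        n[b] += 1
    return {(a, b): vm_weight[a] / n[a] + vm_weight[b] / n[b]
            for a, b in flows}

# Five unit-weight VMs; two of Alice's flows share endpoint A1 (N = 2).
w = {v: 1.0 for v in ("A1", "A2", "A3", "B1", "B2")}
flows = [("A2", "A1"), ("A3", "A1"), ("B1", "B2")]
print(pes_weights(flows, w))  # each A flow gets 1.5; the B flow gets 2.0
```

Note the proportionality property from the next slide holds here: tenant A's flow weights sum to 3, the total weight of its three communicating VMs.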

  17. Per Endpoint Sharing (PES)
  • Symmetric: W_A-B = W_B-A = W_A / N_A + W_B / N_B.
  • Proportional: the sum of the weights of a tenant's flows on a link L equals the sum of the weights of its VMs communicating on that link.
  • Work conserving.
  • Independent.
  • Strategy-proof on congested links.

  18. Generalized PES
  Scale the weight of A by α and the weight of B by β:
  W_A-B = W_B-A = α · W_A / N_A + β · W_B / N_B
  Choose α > β if link L is more important to A than to B (e.g., L is A's access link).

  19. One-Sided PES (OSPES)
  W_A-B = W_B-A = α · W_A / N_A + β · W_B / N_B, with α = 1, β = 0 if A is closer to L, and α = 0, β = 1 if B is closer to L.
  Behaves as per-source sharing on links closer to the source and per-destination sharing on links closer to the destination.
  Provides the highest bandwidth guarantee in the hose model.
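Slides 18 and 19 differ only in how α and β are chosen, which a few lines make concrete. A minimal sketch (function names are illustrative) of the generalized PES weight and the OSPES special case:

```python
def gpes_weight(w_a, n_a, w_b, n_b, alpha, beta):
    """Generalized PES: weight of flow A-B on link L is
    alpha * W_A / N_A + beta * W_B / N_B."""
    return alpha * w_a / n_a + beta * w_b / n_b

def ospes_weight(w_a, n_a, w_b, n_b, a_closer_to_link):
    """One-Sided PES: count only the endpoint nearer the link
    (alpha = 1, beta = 0 if A is closer; alpha = 0, beta = 1 if B is)."""
    if a_closer_to_link:
        return gpes_weight(w_a, n_a, w_b, n_b, 1.0, 0.0)
    return gpes_weight(w_a, n_a, w_b, n_b, 0.0, 1.0)

# Unit-weight VMs, A with 2 flows on the link, B with 1.
# Near A the weight depends only on A (per-source-like sharing):
print(ospes_weight(1.0, 2, 1.0, 1, a_closer_to_link=True))   # 0.5
# Near B it depends only on B (per-destination-like sharing):
print(ospes_weight(1.0, 2, 1.0, 1, a_closer_to_link=False))  # 1.0
```

Setting α = β = 1 recovers plain PES, so the three schemes are one formula with different endpoint emphasis.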

  20. Comparison

  21. Full-bisection B/W Network
  OSPES in a full-bisection-bandwidth network: Tenant 1 has a one-to-one communication pattern; Tenant 2 has an all-to-all communication pattern.

  22. MapReduce Workload
  OSPES with tenant weights W1 : W2 : W3 : W4 : W5 = 1 : 2 : 3 : 4 : 5.

  23. Summary
  Sharing cloud networks is all about making tradeoffs:
  • Min b/w guarantee vs. proportionality
  • Proportionality vs. utilization
  The desired solution is not obvious:
  • it depends on several conflicting requirements and properties, and
  • it is influenced by the end goal.
