
Virtual Layer 2: A Scalable and Flexible Data-Center Network


Presentation Transcript


  1. Virtual Layer 2: A Scalable and Flexible Data-Center Network
  Microsoft Research, Changhoon Kim
  Work with Albert Greenberg, James R. Hamilton, Navendu Jain, Srikanth Kandula, Parantap Lahiri, David A. Maltz, Parveen Patel, and Sudipta Sengupta

  2. Tenets of Cloud-Service Data Center
  • Agility: assign any servers to any services
    • Boosts cloud utilization
  • Scaling out: use large pools of commodity hardware
    • Achieves reliability, performance, and low cost

  3. What is VL2?
  • Why is agility important?
    • Today's DC network inhibits the deployment of other technical advances toward agility
  • With VL2, cloud DCs can enjoy agility in full
  The first DC network that enables agility in a scaled-out fashion

  4. Status Quo: Conventional DC Network
  [Diagram: the Internet feeds Core Routers (DC-Layer 3), which connect to Access Routers, then down to Ethernet Switches (DC-Layer 2) and racks of application servers]
  • Key: CR = Core Router (L3); AR = Access Router (L3); S = Ethernet Switch (L2); A = Rack of app. servers
  • ~1,000 servers/pod == IP subnet
  Reference: "Data Center: Load Balancing Data Center Services", Cisco 2004

  5. Conventional DC Network Problems
  • Dependence on high-cost proprietary routers
  • Extremely limited server-to-server capacity
  [Diagram: oversubscription relative to the servers' NICs grows up the hierarchy, from ~5:1 at the rack switches to ~40:1 at the aggregation layer and ~200:1 toward the core]
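A back-of-the-envelope sketch (mine, not from the talk) of what these ratios imply: since each ratio is measured against the servers' NIC capacity, the largest ratio crossed by a path bounds the bandwidth two servers in different pods can get. The 1 Gbps NIC speed is an illustrative assumption.

    # Rough model of worst-case server-to-server capacity under the slide's
    # oversubscription ratios. NIC speed is an illustrative assumption.
    NIC_GBPS = 1.0
    oversubscription = {"rack": 5, "aggregation": 40, "core": 200}

    def worst_case_gbps(nic_gbps: float, ratios: dict[str, int]) -> float:
        """Each ratio is cumulative (relative to NIC capacity), so the
        bottleneck is the largest ratio crossed by the path."""
        return nic_gbps / max(ratios.values())

    # Cross-pod traffic traverses the core, so the ~200:1 ratio applies:
    print(f"{worst_case_gbps(NIC_GBPS, oversubscription) * 1000:.0f} Mbps")  # -> 5 Mbps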

  6. And More Problems …
  • Resource fragmentation, significantly lowering cloud utilization (and cost-efficiency)
  [Diagram: the hierarchy is partitioned into IP subnet (VLAN) #1 and IP subnet (VLAN) #2, so spare capacity in one subnet cannot be used by services in the other]

  7. And More Problems …
  • Resource fragmentation, significantly lowering cloud utilization (and cost-efficiency)
  • Reassigning servers across subnets requires complicated manual L2/L3 re-configuration
  [Diagram: same two-VLAN hierarchy as the previous slide]

  8. And More Problems …
  • Resource fragmentation, significantly lowering cloud utilization (and cost-efficiency)
  [Diagram: one subnet is overloaded (revenue lost) while the other sits underused (expense wasted)]

  9. Know Your Cloud DC: Challenges
  • Instrumented a large cluster used for data mining and identified distinctive traffic patterns
  • Traffic patterns are highly volatile
    • A large number of distinctive patterns, even within a day
  • Traffic patterns are unpredictable
    • Correlation between patterns is very weak
  Optimization should be done frequently and rapidly

  10. Know Your Cloud DC: Opportunities
  • The DC controller knows everything about hosts
  • Host OSes are easily customizable
  • Probabilistic flow distribution would work well enough, because …
    • Flows are numerous and not huge – no elephants!
    • Commodity switch-to-switch links are substantially thicker (~10x) than the maximum thickness of a flow
  The DC network can be made simple (see the toy simulation below)
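A toy simulation (not from the talk) of the claim that probabilistic flow distribution suffices: with many small flows, uniformly random per-flow assignment keeps the most-loaded of a set of equal links close to the mean.

    import random

    def max_over_mean(num_flows: int, num_links: int, seed: int = 0) -> float:
        """Assign each unit-size flow to a uniformly random link and report
        the max link load divided by the mean load (1.0 = perfect balance)."""
        rng = random.Random(seed)
        load = [0] * num_links
        for _ in range(num_flows):
            load[rng.randrange(num_links)] += 1
        return max(load) / (num_flows / num_links)

    print(f"{max_over_mean(10_000, 10):.2f}")  # many small flows: close to 1.0
    print(f"{max_over_mean(20, 10):.2f}")      # few flows: noticeably skewed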

  11. All We Need is Just a Huge L2 Switch, or an Abstraction of One
  [Diagram: the CR/AR/switch hierarchy collapses into one giant switch connecting every rack of servers]

  12. All We Need is Just a Huge L2 Switch, or an Abstraction of One
  The illusion of a huge L2 switch:
  1. L2 semantics
  2. Uniform high capacity
  3. Performance isolation
  [Diagram: all servers attached to a single virtual switch]

  13. Specific Objectives and Solutions

  Objective                                | Approach                                           | Solution
  1. Layer-2 semantics                     | Employ flat addressing                             | Name-location separation & resolution service
  2. Uniform high capacity between servers | Guarantee bandwidth for hose-model traffic         | Flow-based random traffic indirection (Valiant LB)
  3. Performance isolation                 | Enforce hose model using existing mechanisms only  | TCP

  14. Addressing and Routing: Name-Location Separation
  Cope with host churn with very little overhead
  • Switches run link-state routing and maintain only switch-level topology
  • Servers use flat names, resolved through a directory service
  [Diagram: the directory service maps names to ToR locators, e.g. x → ToR2, y → ToR3, z → ToR4; after z migrates, its entry becomes z → ToR3. A sender issues a Lookup & Response for the destination name, and the payload is tunneled to the returned ToR]

  15. Addressing and Routing: Name-Location Separation
  Cope with host churn with very little overhead
  • Allows the use of low-cost switches
  • Protects the network and hosts from host-state churn
  • Obviates host and switch reconfiguration
  [Diagram: same directory-service example as the previous slide]
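A minimal sketch (mine, not the authors' implementation, which is replicated and optimized) of the name-location separation idea: a flat application name maps to the locator of the ToR where the host currently lives, and only that one mapping changes on migration.

    # Minimal sketch of VL2-style name-location separation (illustrative only).

    class DirectoryService:
        """Maps a flat application name to its ToR's locator address."""

        def __init__(self):
            self._name_to_tor: dict[str, str] = {}

        def update(self, name: str, tor: str) -> None:
            # Called when a host boots or migrates; only this mapping changes,
            # so switches and other hosts see no churn.
            self._name_to_tor[name] = tor

        def lookup(self, name: str) -> str:
            return self._name_to_tor[name]

    directory = DirectoryService()
    directory.update("y", "ToR3")
    directory.update("z", "ToR4")
    directory.update("z", "ToR3")  # z migrates; nothing else is reconfigured

    # A sender's host agent resolves the name, then tunnels toward that ToR.
    print(directory.lookup("z"))  # -> ToR3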

  16. Example Topology: Clos Network
  Offers huge aggregate capacity and multiple paths at modest cost
  [Diagram: the VL2 Clos network: a layer of Intermediate (Int) switches meshed with K Aggregation (Aggr) switches of D ports each, ToR switches below, 20 servers per ToR, 20*(DK/4) servers in total]

  17. Example Topology: Clos Network
  (Animation step: same diagram and caption as slide 16)
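A quick worked example of the 20*(DK/4) count (my arithmetic, using the construction from the VL2 paper: each aggregation switch splits its D ports half down to ToRs and half up to intermediate switches, each ToR has 2 aggregation uplinks, and each ToR hosts 20 servers).

    def vl2_server_count(d_ports: int, k_aggr: int,
                         tor_uplinks: int = 2, servers_per_tor: int = 20) -> int:
        """Servers supported by a VL2 Clos built from K aggregation switches
        with D ports each (half facing ToRs, half facing intermediates)."""
        tor_facing_ports = k_aggr * (d_ports // 2)
        num_tors = tor_facing_ports // tor_uplinks   # = D*K/4 with 2 uplinks
        return num_tors * servers_per_tor            # = 20 * (D*K/4)

    # Example: 72 aggregation switches with 144 ports each:
    print(vl2_server_count(d_ports=144, k_aggr=72))  # -> 51840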

  18. Traffic Forwarding: Random Indirection
  Cope with arbitrary TMs (traffic matrices) with very little overhead
  [Diagram: a sender addresses packets to the anycast address IANY; each flow bounces off a randomly chosen intermediate switch and comes down to the destination's ToR (e.g. T3 for y, T5 for z). Separate link sets serve up paths and down paths]

  19. Traffic Forwarding: Random Indirection
  Cope with arbitrary TMs (traffic matrices) with very little overhead
  [ECMP + IP Anycast]
  • Harness huge bisection bandwidth
  • Obviate esoteric traffic engineering or optimization
  • Ensure robustness to failures
  • Work with switch mechanisms available today
  [Diagram: same Valiant load-balancing example as the previous slide]
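A sketch (my illustration, not switch firmware) of how the [ECMP + IP Anycast] bullet yields per-flow random indirection: every intermediate switch advertises the same anycast address IANY, and an ECMP-style hash of the 5-tuple pins each flow to one intermediate (preserving packet order) while spreading distinct flows uniformly.

    import hashlib

    INTERMEDIATES = ["Int1", "Int2", "Int3"]  # all advertise the anycast IANY

    def pick_intermediate(src_ip: str, dst_ip: str, proto: str,
                          src_port: int, dst_port: int) -> str:
        """ECMP-style choice: hash the flow's 5-tuple so one flow always
        takes one path, while many flows spread evenly across paths."""
        key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
        digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
        return INTERMEDIATES[digest % len(INTERMEDIATES)]

    # Same flow -> same intermediate; a different flow may bounce elsewhere.
    print(pick_intermediate("10.0.0.1", "10.0.1.9", "tcp", 5555, 80))
    print(pick_intermediate("10.0.0.1", "10.0.1.9", "tcp", 5556, 80))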

  20. Does VL2 Ensure Uniform High Capacity?
  • How "high" and "uniform" can it get?
  • Performed all-to-all data shuffle tests, then measured aggregate and per-flow goodput
    • Achieved 94% of the optimal aggregate goodput – the cost of flow-based random spreading
  • Fairness of Aggr-to-Int links' utilization: Jain's fairness index§ stayed near 0.995 throughout the 500-second run
  § Jain's fairness index is defined as (∑xi)² / (n·∑xi²)
  [Plot: fairness index (y-axis, 0.80–1.00) vs. time (x-axis, 0–500 s)]
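For reference, the footnoted metric is easy to compute; this snippet just restates the slide's formula in code.

    def jains_fairness(xs: list[float]) -> float:
        """Jain's fairness index: (sum x_i)^2 / (n * sum x_i^2).
        1.0 when all values are equal; approaches 1/n as one value dominates."""
        n = len(xs)
        total = sum(xs)
        return total * total / (n * sum(x * x for x in xs))

    print(jains_fairness([1.0, 1.0, 1.0, 1.0]))             # -> 1.0 (perfectly fair)
    print(round(jains_fairness([10.0, 0.1, 0.1, 0.1]), 3))  # -> 0.265 (skewed)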

  21. VL2 Conclusion
  • VL2 achieves agility at scale via
    • L2 semantics
    • Uniform high capacity between servers
    • Performance isolation between services
  • Lessons
    • Randomization can tame volatility
    • Add functionality where you have control
    • There's no need to wait!
