Zeppelin - A Third Generation Data Center Network Virtualization Technology Based on SDN and MPLS



Zeppelin - A Third Generation Data Center Network Virtualization Technology based on SDN and MPLS

James Kempf, Ying Zhang, Ramesh Mishra, Neda Beheshti

Ericsson Research



Outline

  • Motivation

  • Our approach

  • Design choices

  • Unicast Routing

    • Basic Data Plane Label Based Forwarding

    • Example Control Plane: VM Activation

  • Evaluation

  • Conclusions



Motivation: Drawbacks in Existing Network Virtualization Technology

  • Lack of performance guarantees

    • No QoS or traffic engineering

  • Coupling with wide area network is weak

    • Cumbersome gateway/tunnel endpoints required

  • Efficient traffic isolation techniques are needed

    • Performance isolation

    • Disruption minimization

    • DoS attack prevention

  • Existing solutions are insufficient or proprietary

    • VLANs, MAC address tunnels, and IP overlays are difficult to scale

    • IP overlay based approaches are difficult to manage

    • Proprietary versions of TRILL and MAC address tunneling

    • FlowVisor approach makes debugging difficult and requires the OpenFlow controller to handle multicast and broadcast



Our Approach

  • Zeppelin: an MPLS based network virtualization technology

    • SDN controller manages MPLS-based forwarding elements

      • Simple OpenFlow control plane

    • Labels for tenant and last hop link are used for forwarding between TORS and VMs

    • Hierarchical labels to improve scalability

      • Virtual network labels

      • Aggregation network labels



Design Choices: MPLS Review

  • Existing applications of MPLS mostly in carrier networks

    • Transport networks (MPLS-TP), L2VPN, EVPN, L3VPN, traffic engineering

  • 20-bit labels specify the next hop

  • Extremely simple data plane:

    • Push: push a 20-bit label onto the top of the stack

    • Pop: pop top label

    • Swap: replace the top label with a new label

  • Horrendously complex control plane

    • Historically constrained by linking with IP

    • BGP, LDP, RSVP-TE, Netconf/YANG, etc., etc.

    • Every new MPLS application results in a new control plane

Fundamentally, MPLS addresses links, while IP addresses nodes
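A minimal sketch of these three data-plane operations, modeling the label stack as a Python list (top of stack is the last element; this is a toy model, not a packet-level implementation):

```python
# Toy model of the MPLS data-plane primitives. Real labels are 20-bit
# values carried in 32-bit stack entries; here they are plain ints.
def push(stack, label):
    """Push a label onto the top of the stack."""
    return stack + [label]

def pop(stack):
    """Pop the top label off the stack."""
    return stack[:-1]

def swap(stack, new_label):
    """Replace the top label with a new one (equivalent to pop + push)."""
    return stack[:-1] + [new_label]

s = push([], 100)   # [100]
s = push(s, 200)    # [100, 200]
s = swap(s, 300)    # [100, 300]
s = pop(s)          # [100]
```

Because every hop only pushes, pops, or swaps the top of the stack, a forwarding element never needs to parse anything deeper than one label.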



Design Choices: Why MPLS?

  • Simple data plane, simple control plane

    • Replace control plane with OpenFlow

  • Available in most switches

    • New low cost OpenFlow enabled switches support it (Pica8, Centec)

    • And most moderate cost Ethernet switches do too

  • Widely used in wide area network VPNs

    • Simplifies gateway between WAN VPN and DC VPN



Design Choices: Why NOT MPLS?

  • Ethernet is entrenched everywhere and in the data center in particular

    • Low cost hardware, management software

    • Merchant chip OpenFlow hardware has constrained flow scalability

      • TCAM is costly and consumes power

    • Network processors have good flow scalability but may not be cost competitive

  • IP overlay techniques like GRE/VXLAN are gaining favor

    • Only require changing software

      • Virtual switch at hypervisor and gateway to WAN

    • Easily manageable IP routed network underlay for aggregation

      • Lots of tooling, sysadmin expertise in IP network management

    • Easy to switch out underlay network


Unicast Routing: Data Plane Label Based Forwarding

[Figure: end-to-end label-based forwarding from Rack 1 to Rack m, each rack holding Green Tenant (GT) and Blue Tenant (BT) VMs (tenant VM IPs such as 10.22.30.2 and 10.22.30.3 are shown). The source virtual switch pushes the tenant label and the destination link label (Ln) onto the tenant's Ethernet frame. The source TORS pushes an inter-TORS label for one of the R1-Rm LSPs (LSP-1 via one aggregation/core path, LSP-2 via another). The destination TORS pops the inter-TORS label and forwards on the link label. The destination virtual switch pops the link and tenant labels and delivers the frame to the tenant VM.]
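The push/pop sequence in the figure can be sketched as a toy walk (names like GT, Ln, and R1Rm-LSP follow the figure; a Python list models the MPLS stack, top of stack last):

```python
# Toy walk of Zeppelin's unicast data plane. Each hop only touches the
# top of the label stack, per the figure.
def source_vs(frame, tenant_label, dest_link_label):
    # Source virtual switch: push tenant label, then destination link label.
    return frame + [tenant_label, dest_link_label]

def source_tors(pkt, inter_tors_label):
    # Source TORS: push the inter-TORS (aggregation) label on top.
    return pkt + [inter_tors_label]

def dest_tors(pkt):
    # Destination TORS: pop the inter-TORS label, forward on the link label.
    return pkt[:-1]

def dest_vs(pkt):
    # Destination virtual switch: pop link and tenant labels, deliver frame.
    return pkt[:-2]

frame = ["GT-ethernet-frame"]
pkt = source_vs(frame, "GT", "Ln")
pkt = source_tors(pkt, "R1Rm-LSP")   # ["GT-ethernet-frame", "GT", "Ln", "R1Rm-LSP"]
frame_out = dest_vs(dest_tors(pkt))
assert frame_out == frame            # original tenant frame is restored
```

The aggregation and core switches in between would only swap or match the outer inter-TORS label, never the inner tenant/link labels.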


Unicast Routing: Example Control Plane: VM Activation

[Figure: message sequence among the Cloud Execution Manager, Cloud Network Manager, and server virtual switch. The Cloud Execution Manager starts a new Green Tenant VM (<GT ID, GT MAC, Srvr MAC>) and informs the Cloud Network Manager about the new VM. The Cloud Network Manager records the new tenant VM addresses in the Cloud Network Mapping Table, looks up the VS-TORS link label and tenant label using the tenant MAC and ID as key, then sends an OpenFlow FlowMod to the virtual switch on the server, causing the tenant and link labels on incoming packets to be popped and the packet forwarded to the tenant VM.]
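A hedged sketch of this activation flow (the Cloud Network Mapping Table name follows the slides; the class, field, and FlowMod encoding are illustrative, not the paper's actual implementation):

```python
# Illustrative Cloud Network Manager handling a new-VM notification.
class CloudNetworkManager:
    def __init__(self, vs_tors_link_labels, tenant_labels):
        self.vs_tors_link_labels = vs_tors_link_labels  # server MAC -> link label
        self.tenant_labels = tenant_labels              # tenant ID -> tenant label
        self.mapping_table = {}                         # Cloud Network Mapping Table

    def vm_activated(self, tenant_id, vm_mac, server_mac):
        # 1. Record the new tenant VM addresses in the mapping table.
        self.mapping_table[(tenant_id, vm_mac)] = server_mac
        # 2. Look up the VS-TORS link label and tenant label.
        link_label = self.vs_tors_link_labels[server_mac]
        tenant_label = self.tenant_labels[tenant_id]
        # 3. Build the FlowMod for the server's virtual switch: match the
        #    two labels, pop both, forward to the tenant VM's port.
        return {"match": {"labels": [link_label, tenant_label]},
                "actions": ["pop_mpls", "pop_mpls", "output:vm_port"]}

cnm = CloudNetworkManager({"srvr-mac": 12}, {"GT": 77})
flowmod = cnm.vm_activated("GT", "gt-vm-mac", "srvr-mac")
assert flowmod["match"]["labels"] == [12, 77]
```

The key point is that activation only touches the destination-side virtual switch; no per-VM state is installed in the aggregation network.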



Evaluation: Implementation

  • Use Mininet to emulate the data center network

  • Implement the control plane on the NOX controller

  • Modify Mininet to store node metadata for switches and hosts



Evaluation: Simulation

  • Metric was average number of rules per VS and TORS

  • Simulation parameters

    • 12 racks

    • 20 servers per rack

    • Random number of VMs to connect

    • Average 5 and 10 connections per VM

  • Results show good scalability

    • 5 session average within current gen TORS flow table scale

    • 10 session average within next gen TORS flow table scale

  • Difference from other OpenFlow network virtualization schemes

    • As number of flows per VM increases, TORS rules get reused

    • Existing switch MPLS support can be used to move flow table rules out of TCAM
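A back-of-the-envelope check of the rule-reuse claim (the rack and server counts come from the slide; the VMs-per-server and connections-per-VM numbers are invented for illustration, and the model ignores group entries and default rules):

```python
# Why TORS rules get reused as flows grow: Zeppelin's TORS table is
# bounded by topology (remote-rack LSPs + in-rack links), while a
# per-flow scheme grows with connection count.
racks, servers_per_rack, vms_per_server = 12, 20, 10  # 12 racks, 20 servers/rack

def per_flow_rules(connections_per_vm):
    # A per-flow scheme: one TORS rule per VM connection in the rack.
    return servers_per_rack * vms_per_server * connections_per_vm

def zeppelin_rules():
    # Zeppelin: one rule per remote-rack LSP plus one per in-rack link,
    # independent of how many connections each VM opens.
    return (racks - 1) + servers_per_rack

assert zeppelin_rules() == 31
assert per_flow_rules(10) == 2 * per_flow_rules(5)   # per-flow doubles...
assert zeppelin_rules() == zeppelin_rules()          # ...Zeppelin stays flat
```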


Conclusion


  • Presented the design and implementation of Zeppelin, a third generation data center network virtualization scheme

  • Zeppelin uses two levels of MPLS labels

    • One level identifies the destination link and tenant network

    • The other identifies the routing path through the aggregation network

  • Future work

    • Extend Zeppelin to multicast and couple with existing WAN MPLS VPNs

    • Implement on actual OpenFlow hardware

    • Study actual data center traffic



Backup Slides



Changes in Cloud Operating System

[Figure: new mapping tables in the cloud operating system. The SMVL table maps the server MAC (MAC Server-T1-VM) to the TORS link label (Label TORS1); the TITL table maps the tenant ID (TID1) to the tenant label (Label tid1); the CNM mapping table records, per tenant VM, the VM IP (IP T1-VM), tenant ID (TID1), VM MAC (MAC T1-VM), server MAC (MAC Server-T1-VM), and TORS (TORS1); the TLVLL table maps the TORS (TORS1) to its label (Label TORS1).]


TORS Flow Table Rules for Packets Outgoing from the Rack

[Figure: TORS flow table rules for outgoing packets. Rules match on destination fields (MAC T-VM, MAC Server, IP T-VM, other fields) together with the destination link label (Li, Lj, Lk, Ln) and send the packet to the corresponding group (Li Group, Lk Group, Ln Group). Each group hashes the packet header and, depending on the hash, pushes one of two backbone labels and forwards out the corresponding port (e.g., push BBLabel-1 and forward on Port1, or push BBLabel-2 and forward on Port2).]
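The group-based forwarding this slide depicts can be sketched as follows (a toy model: the hash function and bucket layout are illustrative, not OpenFlow's actual select-group implementation):

```python
import zlib

def group_select(header, buckets):
    # Deterministic hash over header fields picks a (push-label, port)
    # bucket, so all packets of one flow take the same backbone path.
    h = zlib.crc32(repr(sorted(header.items())).encode())
    return buckets[h % len(buckets)]

# One group per destination link label, two ECMP-style buckets each.
lk_group = [("BBLabel-1", "Port1"), ("BBLabel-2", "Port2")]
hdr = {"src": "10.22.30.2", "dst": "10.22.30.3", "dport": 80}
label, port = group_select(hdr, lk_group)
assert (label, port) in lk_group
assert group_select(hdr, lk_group) == (label, port)  # flow affinity holds
```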



Control Plane Messages for VM IP Address Configuration

[Figure: message sequence among the Green Tenant VM, server virtual switch, and Cloud Network Manager. The VM sends a DHCP Request; the virtual switch forwards it to the Cloud Network Manager, which finds an IP address (acting as DHCP relay or server), installs an OpenFlow FlowMod, and returns the DHCP Reply.]



Control Plane Messages for Destination IP and MAC Discovery

[Figure: message sequence among the Green Tenant VM, source server virtual switch, Cloud Network Manager, source/destination TORS, and destination virtual switch. The VM sends an ARP Request for the green tenant destination IP; the source virtual switch forwards it to the Cloud Network Manager, which installs OpenFlow FlowMods on the source and destination TORS and on the destination virtual switch (see Figure 7 and text for these FlowMods), installs a FlowMod on the source virtual switch, and returns the ARP Reply with the green tenant destination MAC.]



Control Plane Messages for VM Movement

[Figure: components involved in VM movement. The Cloud Execution Manager and Cloud Network Manager (with its CNM mapping table and a packet buffer) coordinate moving a green tenant VM (GT-VM) from the source server to the destination server. Each server runs a hypervisor and a virtual switch (data plane) on blade + NIC hardware; the source and destination virtual switch flow tables are updated as part of the move.]


Inter-TORS LSP Configuration

  • When data center boots up or a new rack is added, each TORS is configured with labels for links in the rack in Table 2

  • Rule: match label against labels for the rack. Action: forward on the matched link to the server

  • Only configure TORS for tunnels into rack

    • Number of table entries for servers in rack is limited
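The boot-time rule shape described above can be sketched as (port names and label values are hypothetical; the one-rule-per-link structure follows the bullet):

```python
# Illustrative boot-time TORS configuration: one rule per in-rack
# server link, matching the link label and forwarding on that link.
def configure_tors(rack_link_labels):
    """rack_link_labels: link label -> server-facing port."""
    return [
        {"match": {"mpls_label": label}, "actions": [{"output": port}]}
        for label, port in rack_link_labels.items()
    ]

rules = configure_tors({12: "port-1", 13: "port-2"})
assert len(rules) == 2  # table size bounded by the servers in the rack
```

Because only tunnels into the rack are configured, the TORS table grows with rack size, not with data center size.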



