
Warp-speed Open vSwitch: Turbo Charge VNFs to 100Gbps in NextGen SDN/NFV Datacenter

Learn how to boost NFV performance in next-generation data centers using Open vSwitch, OVS offload options, and full OVS offload for maximum throughput with zero CPU consumption. Discover how Warp-speed Open vSwitch delivers high bandwidth, low latency, and optimized resource usage in SDN/NFV environments.



Presentation Transcript


  1. Warp-speed Open vSwitch: Turbo Charge VNFs to 100Gbps in NextGen SDN/NFV Datacenter
  OpenStack Summit, November 2017
  Mark Iskra, Technical Marketing
  Anita Tragler, Product Manager, Networking/NFV
  Ash Bhalgat, Sr. Director, Cloud Marketing

  2. Agenda
  • Requirements for Next-Gen SDN/NFV Data Centers
  • Boosting NFV performance
  • Datapath options: OVS, SR-IOV, OVS-DPDK
  • OVS Offload options: Full and Partial Offloads
  • Why Full OVS Offload? NIC architecture and packet flow
  • Open Source Community Contributions
  • Benchmark Testing Setup
  • Performance Results
  • References

  3. Next-Gen Data Center Needs: NFV, 5G and IoT
  • Scale-up services need high bandwidth
    • Millions of mobile flows (voice, video, data)
    • 100 billion 1-10Gbps virtual connections
  • High performance:
    • 32-64 cores/socket, PCIe Gen4
    • DDR4, all-flash arrays (NVMe)
    • 25G to 100G per server NIC port
    • Low latency (1ms RTT)
    • Optimized resource usage (save CapEx)
  • Multi-site: efficient multi-tenancy w/ SDN overlay
  • Integrated end-to-end solution w/ no vendor lock-in
  (Chart: "Which OpenStack Network Driver Is Popular?", OpenStack Survey, April 2017)

  4. Datapath Options Today - 10G Server
  • VNF with Open vSwitch (kernel datapath): the OpenStack default; switching, bonding, overlay, live migration
  • DPDK VNF with Open vSwitch + DPDK: user-space datapath, DPDK direct IO to the NIC or vNIC; switching, bonding, overlay
  • DPDK VNF with SR-IOV (Single-Root IO Virtualization): hardware dependent, NIC line rate with no CPU overhead; switching done in the ToR
  (Diagram: user-space vs. kernel-space paths, with VF0/VF1 exposed to the VNFs)
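
For illustration, a minimal sketch of how these three datapaths are typically wired on the hypervisor. Bridge names, the eth0 netdev and the 0000:03:00.0 PCI address are placeholders, not values from the slides.

    # OVS kernel datapath: a normal bridge; VMs attach via tap/vhost ports
    ovs-vsctl add-br br-int

    # OVS-DPDK: user-space datapath, the NIC added as a DPDK port
    ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true
    ovs-vsctl add-br br-dpdk -- set bridge br-dpdk datapath_type=netdev
    ovs-vsctl add-port br-dpdk dpdk-p0 -- set Interface dpdk-p0 type=dpdk \
        options:dpdk-devargs=0000:03:00.0

    # SR-IOV: carve VFs out of the PF and pass them straight to the VNF VMs
    echo 4 > /sys/class/net/eth0/device/sriov_numvfs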

  5. OVS Offload – SR-IOV w/ OVS Control Plane
  • Support for offload includes:
    • OVS rule match/classification & action
    • QoS marking
    • Overlay tunneling (VXLAN, GRE, QinQ)
  • Most use cases need security groups:
    • Basic: firewall (stateless), filtering
    • Advanced: connection tracking, NAT
  • Fall back to OVS on the host (slow path) [work in progress upstream]
  (Diagram: OpenStack controller (Nova, Neutron) and SDN controller drive ovsdb-server/ovs-vswitchd on the KVM hypervisor; the kernel OVS bridge offloads flows to the NIC eswitch via TC/flower, with SR-IOV VFs attached directly to the VNF VMs)
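
As a rough sketch of how this SR-IOV-with-OVS-control-plane mode is usually enabled on the host (the 0000:03:00.0 PCI address and the enp3s0f0 PF netdev are placeholder names, not from the slides):

    # Switch the NIC eswitch to switchdev mode so VF representor ports appear
    devlink dev eswitch set pci/0000:03:00.0 mode switchdev

    # Enable TC flower hardware offload on the PF
    ethtool -K enp3s0f0 hw-tc-offload on

    # Tell OVS to push kernel-datapath flows down to the NIC via TC
    ovs-vsctl set Open_vSwitch . other_config:hw-offload=true
    systemctl restart openvswitch

    # On newer OVS releases, list the flows that actually landed in hardware
    ovs-appctl dpctl/dump-flows type=offloaded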

  6. OVS Offload – Virtio Options
  • OVS embedded in the NIC: no host OVS; requires tighter integration testing
  • OVS-DPDK partial offload: QoS, security groups, conntrack and overlay stay in the host OVS-DPDK bridge, with flow processing partially offloaded to the NIC eswitch
  (Diagram: two stacks, each driven by the OpenStack controller (Nova, Neutron) and an SDN controller such as ODL or OVN; guests attach through virtio vNICs while the NIC eswitch carries the offloaded flows)
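
A hedged sketch of the OVS-DPDK side of this picture: a virtio guest attached through a vhost-user port, with partial offload enabled so flow lookups can be pushed to the NIC via rte_flow. Port names and the socket path are placeholders, and the hw-offload knob for the netdev datapath arrived in later OVS releases.

    # Virtio to the guest: vhost-user client port on the OVS-DPDK bridge
    ovs-vsctl add-port br-dpdk vhost-vm1 -- set Interface vhost-vm1 \
        type=dpdkvhostuserclient options:vhost-server-path=/var/run/vhost-vm1.sock

    # Partial offload: let OVS-DPDK program matching rules into the NIC (rte_flow)
    ovs-vsctl set Open_vSwitch . other_config:hw-offload=true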

  7. ASAP2 Direct: Full OVS Offload – Best of Both Worlds
  • Server NICs: 100G is the new 40G, 25G is the new 10G
  • Accelerate the OVS data path with the standard OVS control plane (Mellanox ASAP2) – in other words, support most SDN controllers with an SR-IOV data plane
  • OVS Offload beats OVS-DPDK: up to 10x PPS performance with zero CPU consumption (Mellanox lab results)
  (Diagram: SR-IOV VFs and the PF attached through the NIC eswitch)
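
For context, a hedged example of how a VNF port is typically requested for this full-offload path from the OpenStack side (network, flavor, image and port names are made up for illustration):

    # Neutron port whose VF is wired through the offloaded OVS eswitch
    openstack port create --network provider-net --vnic-type direct \
        --binding-profile '{"capabilities": ["switchdev"]}' offload-port0

    # Boot the VNF on that port; Nova plugs the VF representor into OVS
    openstack server create --flavor vnf.large --image vnf-image \
        --nic port-id=offload-port0 vnf-vm1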

  8. Open Source Community Upstream Contributions
  OpenStack community:
  • OVS ML2 driver to bind to a new VIF/port type (SR-IOV + OVS)
  • Nova new VIF support
  • Disable OVS in the host
  • TripleO installer
  Linux kernel community:
  • Representor ports
  • TC (traffic control) and flower offload hooks to the NIC
  • Conntrack offload
  OVS userspace:
  • Flow offload via TC or DPDK
  • Policy mechanism
  • Conntrack offload
  • OVN flow offload
  DPDK community:
  • Flow offload from DPDK (rte_flow)
  • DPDK conntrack offload
  (Slide legend distinguishes items that are currently available from future work)
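
To make the TC/flower hook concrete, a hedged example of the kind of rule OVS installs on a VF representor and that the NIC driver can place in hardware (the representor name enp3s0f0_0 and uplink enp3s0f0 are placeholders):

    # Flower classifier on the VF representor, redirected to the uplink;
    # skip_sw asks the driver to keep the rule in NIC hardware only
    tc qdisc add dev enp3s0f0_0 ingress
    tc filter add dev enp3s0f0_0 ingress protocol ip flower skip_sw \
        ip_proto tcp dst_port 80 \
        action mirred egress redirect dev enp3s0f0

    # Inspect the rule, its in_hw flag and hardware counters
    tc -s filter show dev enp3s0f0_0 ingress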

  9. Benchmark Testing Configuration
  Server specs:
  • E5-2667 v3 @ 3.20GHz
  • Mellanox ConnectX-5 NIC (100Gbps)
  • RHEL 7.4
  Network configuration:
  • Switch: Mellanox SN2100 (100Gbps)
  • NIC: Mellanox ConnectX-5 100G
  • MTU: 9K (underlay), 1500 (overlay)
  • SDN: Nuage Virtualized Cloud Services v5.1u1
  • VXLAN tunnels between back-to-back hypervisors
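
A minimal sketch of the underlay/overlay plumbing this configuration implies: a jumbo-MTU underlay and a VXLAN tunnel between the two hypervisors. In the actual setup the Nuage SDN creates the tunnels; the interface name and the 192.0.2.2 remote address below are placeholders.

    # Jumbo frames on the underlay NIC (MTU 9K); overlay ports stay at 1500
    ip link set enp3s0f0 mtu 9000

    # VXLAN tunnel port toward the peer hypervisor on the integration bridge
    ovs-vsctl add-port br-int vxlan0 -- set interface vxlan0 type=vxlan \
        options:remote_ip=192.0.2.2 options:key=flow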

  10. Line Rate Throughput w/ Zero CPU Utilization
  • OVS ASAP2 achieves ~line rate (94Gbps) for large packets over VXLAN tunnels
  • OVS virtio can't scale beyond 30Gbps
  • OVS ASAP2 CPU utilization is ~50% lower than OVS virtio (comparison only possible up to 30Gbps)
  • OVS ASAP2 CPU utilization at 94G is lower than OVS virtio at 30G
  (Chart: CPU utilization at each data rate; lower is better)
  Testing methodology:
  • iperf v2 load generation
  • 12 CPU cores dedicated to testing
  • Measure the difference in CPU utilization
  • CPU utilization numbers include iperf
  Ping latency with 20Gbps background load:
  • OVS kernel-virtio: 0.110 ms
  • ASAP2: 0.06 ms
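
A hedged sketch of the kind of iperf v2 run described above (the 10.0.0.2 address, stream count and duration are placeholders, not the exact parameters used in the tests):

    # Receiver in one VNF VM
    iperf -s

    # Sender in the peer VNF VM: parallel TCP streams over the VXLAN overlay
    iperf -c 10.0.0.2 -P 12 -t 60 -i 10

    # Latency probe while the background load is running
    ping -c 100 10.0.0.2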

  11. Highest PPS Performance w/ Zero CPU Utilization
  • OVS ASAP2 achieves ~60 Mpps for small packets over VXLAN tunnels
  • CPU utilization: the entire test-bed consumption is only 18%
  • Zero CPU utilization for OVS ASAP2 packet processing
  Testing methodology:
  • TRex load generation
  • 6 CPU cores dedicated to TestPMD
  Flat CPU consumption shows:
  • 18% @ 56 Mpps from the test bench
  • 0% from OVS ASAP2 packet processing
  Ping latency with 20Gbps background load:
  • Virtio: 0.110 ms
  • ASAP2 Direct OVS Offload: 0.06 ms
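
A hedged sketch of the TestPMD side of such a setup, forwarding between two VF ports inside the VNF VM (core list, PCI addresses and queue counts are placeholders; -w is the device whitelist option of DPDK releases from this era):

    # 6 forwarding cores plus one main core, MAC forwarding between the two VFs
    testpmd -l 0-6 -n 4 -w 0000:00:05.0 -w 0000:00:06.0 -- \
        --forward-mode=mac --nb-cores=6 --rxq=3 --txq=3 --auto-start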

  12. References
  • Nuage: www.nuagenetworks.net
    - Nuage Developer Experience: http://nuagex.io
    - Nuage Networks ML2 Community: https://github.com/nuagenetworks/nuage-openstack-neutron
    - Nuage for OSPD: https://github.com/nuagenetworks/nuage-ospdirector
  • Mellanox: www.mellanox.com
    - Using SR-IOV offloads with Open-vSwitch and similar applications: https://netdevconf.org/1.2/papers/efraim-gerlitz-sriov-ovs-final.pdf
    - OVS patch series for all changes, including the new offloading API: https://mail.openvswitch.org/pipermail/ovs-dev/2017-April/330606.html
    - DPDK rte_flow API: https://rawgit.com/6WIND/rte_flow/master/rte_flow.pdf
  • Red Hat: www.redhat.com
    - OVS Offload SmartNIC enablement: OpenStack blueprints, specs and reviews [Pike]
      https://review.openstack.org/#/c/275616/ (neutron)
      https://review.openstack.org/#/c/398265/ (nova)
      https://review.openstack.org/#/c/398277/ (os-vif)

  13. Thank-you
