
OpenFlow in Service Provider Networks AT&T Tech Talks October 2010


Presentation Transcript


  1. Rob Sherwood, Saurav Das, Yiannis Yiakoumis OpenFlow in Service Provider Networks, AT&T Tech Talks, October 2010

  2. Talk Overview • Motivation • What is OpenFlow • Deployments • OpenFlow in the WAN • Combined Circuit/Packet Switching • Demo • Future Directions

  3. We have lost our way • Routing, management, mobility management, access control, VPNs, … • Millions of lines of source code • 5400 RFCs • Barrier to entry • 500M gates, 10 Gbytes RAM • Bloated, power hungry (Figure: App, App, App running on an Operating System over Specialized Packet Forwarding Hardware)

  4. • Many complex functions baked into the infrastructure • OSPF, BGP, multicast, differentiated services, Traffic Engineering, NAT, firewalls, MPLS, redundant layers, … • An industry with a “mainframe-mentality” (Figure: a router split into Software Control and a Hardware Datapath, running iBGP, eBGP, IPSec, OSPF-TE and RSVP-TE HELLOs, MPLS, VLAN, L2/L3 VPN, NAT, IPv6, anycast, multicast, Mobile IP, firewall, authentication, security, access control, multi-layer multi-region functions)

  5. Glacial process of innovation made worse by captive standards process • Idea → Standardize (wait 10 years) → Deployment • Driven by vendors • Consumers largely locked out • Glacial innovation

  6. New Generation Providers Already Buy into It • In a nutshell: driven by cost and control; started in data centers… • What new-generation providers have been doing within their datacenters: • Buy bare-metal switches/routers • Write their own control/management applications on a common platform

  7. Change is happening in non-traditional markets (Figure: many switches/routers, each with its own Operating System, applications, and Specialized Packet Forwarding Hardware, alongside a common Network Operating System hosting applications)

  8. The “Software-Defined Network” • 1. Open interface to hardware • 2. At least one good network operating system • Extensible, possibly open-source • 3. Well-defined open API (Figure: App, App, App on a Network Operating System controlling many pieces of Simple Packet Forwarding Hardware)

  9. Trend • Computer industry: x86 (computer) → virtualization layer → Linux / Mac OS / Windows (OS) → App, App, App • Network industry: OpenFlow → virtualization or “slicing” → NOX (network OS) / other network OSes → Controller 1, Controller 2, … • Simple common stable hardware substrate below + programmability + strong isolation model + competition above = Result: faster innovation

  10. What is OpenFlow?

  11. Short Story: OpenFlow is an API • Control how packets are forwarded • Implementable on COTS hardware • Make deployed networks programmable • not just configurable • Makes innovation easier • Result: • Increased control: custom forwarding • Reduced cost: API → increased competition

  12. Ethernet Switch/Router

  13. Control Path (Software) above the Data Path (Hardware)

  14. OpenFlow Controller, speaking the OpenFlow Protocol (SSL/TCP) to the switch's OpenFlow control path, above the Data Path (Hardware)

  15. OpenFlow Flow Table Abstraction • A controller PC speaks to OpenFlow firmware in the switch (software layer), which programs the Flow Table in the hardware layer • Example entry: MAC src = *, MAC dst = *, IP src = *, IP dst = 5.6.7.8, TCP sport = *, TCP dport = * → Action: forward to port 1 (Figure: a four-port switch with hosts 1.2.3.4 and 5.6.7.8 attached)

  16. OpenFlow Basics: Flow Table Entries • Rule: match on header fields (Switch Port, MAC src, MAC dst, Eth type, VLAN ID, IP src, IP dst, IP prot, TCP sport, TCP dport) + mask selecting which fields to match • Action: • Forward packet to port(s) • Encapsulate and forward to controller • Drop packet • Send to normal processing pipeline • Modify fields • Stats: packet + byte counters
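
A minimal sketch of the rule/action/stats structure described above, using plain Python. This is a conceptual model for illustration only, not any particular switch or controller API; field names mirror the slide.

    # Illustrative model of an OpenFlow 1.0-style flow entry (not a real API).
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class FlowEntry:
        # Rule: None means "wildcard" (field is not matched)
        in_port: Optional[int] = None
        eth_src: Optional[str] = None
        eth_dst: Optional[str] = None
        eth_type: Optional[int] = None
        vlan_id: Optional[int] = None
        ip_src: Optional[str] = None
        ip_dst: Optional[str] = None
        ip_proto: Optional[int] = None
        tcp_sport: Optional[int] = None
        tcp_dport: Optional[int] = None
        # Action: e.g. ("output", 6), ("controller",), ("drop",)
        actions: tuple = ("drop",)
        # Stats: per-entry counters
        packet_count: int = 0
        byte_count: int = 0

        def matches(self, pkt: dict) -> bool:
            """A packet matches if every non-wildcard field is equal."""
            for f in ("in_port", "eth_src", "eth_dst", "eth_type", "vlan_id",
                      "ip_src", "ip_dst", "ip_proto", "tcp_sport", "tcp_dport"):
                want = getattr(self, f)
                if want is not None and pkt.get(f) != want:
                    return False
            return True

    # The entry from the previous slide: "IP dst 5.6.7.8 -> forward to port 1"
    entry = FlowEntry(ip_dst="5.6.7.8", actions=("output", 1))
    pkt = {"in_port": 3, "ip_src": "1.2.3.4", "ip_dst": "5.6.7.8", "tcp_dport": 80}
    if entry.matches(pkt):
        entry.packet_count += 1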

  17. Examples • Switching: MAC dst = 00:1f:.., all other fields wildcarded → forward to port6 • Flow Switching: Switch Port = port3, MAC src = 00:20.., MAC dst = 00:1f.., Eth type = 0800, VLAN ID = vlan1, IP src = 1.2.3.4, IP dst = 5.6.7.8, IP prot = 4, TCP sport = 17264, TCP dport = 80 → forward to port6 • Firewall: TCP dport = 22, all other fields wildcarded → drop

  18. Examples • Routing: IP dst = 5.6.7.8, all other fields wildcarded → forward to port6 • VLAN Switching: VLAN ID = vlan1, MAC dst = 00:1f.., all other fields wildcarded → forward to port6, port7, port9
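
The same example entries from slides 17 and 18, written out as simple match/action pairs plus a first-match lookup. Purely illustrative; fields omitted from "match" are wildcards, and the abbreviated MAC addresses and the IP prot value are taken verbatim from the slides.

    # The example entries above as a tiny flow table. Illustrative only.
    flow_table = [
        # Switching: L2 forwarding on destination MAC
        {"match": {"eth_dst": "00:1f:.."}, "actions": [("output", 6)]},
        # Flow switching: exact match on the full 10-tuple (values as on the slide)
        {"match": {"in_port": 3, "eth_src": "00:20:..", "eth_dst": "00:1f:..",
                   "eth_type": 0x0800, "vlan_id": "vlan1",
                   "ip_src": "1.2.3.4", "ip_dst": "5.6.7.8", "ip_proto": 4,
                   "tcp_sport": 17264, "tcp_dport": 80},
         "actions": [("output", 6)]},
        # Firewall: drop all traffic to TCP port 22
        {"match": {"tcp_dport": 22}, "actions": [("drop",)]},
        # Routing: L3 forwarding on destination IP
        {"match": {"ip_dst": "5.6.7.8"}, "actions": [("output", 6)]},
        # VLAN switching: forward within a VLAN toward a MAC on several ports
        {"match": {"vlan_id": "vlan1", "eth_dst": "00:1f:.."},
         "actions": [("output", 6), ("output", 7), ("output", 9)]},
    ]

    def lookup(pkt: dict, table: list) -> list:
        """Return the actions of the first entry whose match fields all agree."""
        for entry in table:
            if all(pkt.get(k) == v for k, v in entry["match"].items()):
                return entry["actions"]
        return [("controller",)]  # table miss: send the packet to the controller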

  19. OpenFlow Usage: Dedicated OpenFlow Network (Figure: a controller PC runs the user's code (“Aaron's code”) and speaks the OpenFlow Protocol to several OpenFlow switches; each switch holds Rule / Action / Statistics flow entries. OpenFlowSwitch.org)

  20. Network Design Decisions • Forwarding logic (of course) • Centralized vs. distributed control • Fine- vs. coarse-grained rules • Reactive vs. proactive rule creation • Likely more: open research area

  21. Centralized vs. Distributed Control (Figure: centralized control, one controller managing all OpenFlow switches; distributed control, several controllers each managing a subset of the switches)

  22. Flow Routing vs. Aggregation: both models are possible with OpenFlow • Flow-based: • Every flow is individually set up by the controller • Exact-match flow entries • Flow table contains one entry per flow • Good for fine-grained control, e.g. campus networks • Aggregated: • One flow entry covers large groups of flows • Wildcard flow entries • Flow table contains one entry per category of flows • Good for large numbers of flows, e.g. backbone

  23. Reactive vs. Proactive: both models are possible with OpenFlow • Reactive: • First packet of a flow triggers the controller to insert flow entries • Efficient use of the flow table • Every flow incurs a small additional flow-setup time • If the control connection is lost, the switch has limited utility • Proactive: • Controller pre-populates the flow table in the switch • Zero additional flow-setup time • Loss of the control connection does not disrupt traffic • Essentially requires aggregated (wildcard) rules
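
A small sketch contrasting the two models. The switch object, its install/send methods, and the packet format are assumptions made for illustration, not a specific controller framework's API.

    # Illustrative contrast between proactive and reactive rule creation.
    # "switch.install(...)" and "switch.send(...)" are hypothetical methods.

    def proactive_setup(switch):
        """Pre-populate aggregated (wildcard) rules before any traffic arrives."""
        switch.install({"match": {"ip_dst_prefix": "5.6.0.0/16"},
                        "actions": [("output", 6)]})
        switch.install({"match": {"tcp_dport": 22}, "actions": [("drop",)]})

    def on_packet_in(switch, pkt):
        """Reactive model: the first packet of a flow reaches the controller,
        which computes a path and installs an exact-match entry for that flow."""
        out_port = compute_route(pkt["ip_dst"])          # controller's own logic
        switch.install({"match": {"ip_src": pkt["ip_src"],
                                  "ip_dst": pkt["ip_dst"],
                                  "tcp_sport": pkt.get("tcp_sport"),
                                  "tcp_dport": pkt.get("tcp_dport")},
                        "actions": [("output", out_port)]})
        switch.send(pkt, out_port)                       # forward the buffered packet

    def compute_route(ip_dst):
        # Placeholder routing decision for the sketch
        return 6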

  24. OpenFlow Application: Network Slicing • Divide the production network into logical slices • each slice/service controls its own packet forwarding • users pick which slice controls their traffic: opt-in • existing production services run in their own slice • e.g., Spanning tree, OSPF/BGP • Enforce strong isolation between slices • actions in one slice do not affect another • Allows the (logical) testbed to mirror the production network • real hardware, performance, topologies, scale, users • Prototype implementation: FlowVisor

  25. Add a Slicing Layer Between Planes (Figure: Slice 1, Slice 2, and Slice 3 controllers sit above a slicing layer governed by slice policies; rules flow down and exceptions flow up over the control/data protocol to the data plane)

  26. Network Slicing Architecture • A network slice is a collection of sliced switches/routers • Data plane is unmodified • Packets forwarded with no performance penalty • Slicing works with existing ASICs • Transparent slicing layer • each slice believes it owns the data path • enforces isolation between slices • i.e., rewrites or drops rules so they adhere to the slice policy • forwards exceptions to the correct slice(s)

  27. Slicing Policies • The policy specifies resource limits for each slice: • Link bandwidth • Maximum number of forwarding rules • Topology • Fraction of switch/router CPU • FlowSpace: which packets does the slice control?
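
To make the policy concrete, here is a small sketch of how a per-slice policy might be represented. The field names and the example values are assumptions for illustration; FlowVisor's actual configuration format differs.

    # Hypothetical representation of a slice policy (not FlowVisor's real config).
    from dataclasses import dataclass
    from typing import List, Dict

    @dataclass
    class SlicePolicy:
        name: str
        link_bandwidth_mbps: int       # bandwidth cap on each sliced link
        max_flow_rules: int            # forwarding-rule quota in each switch
        switches: List[str]            # topology: which switches the slice sees
        cpu_fraction: float            # share of switch/router control CPU
        flowspace: List[Dict]          # which packets this slice controls

    voip_slice = SlicePolicy(
        name="slice2-voip",
        link_bandwidth_mbps=100,
        max_flow_rules=1000,
        switches=["sw1", "sw2", "sw3"],
        cpu_fraction=0.25,
        flowspace=[{"ip_proto": 17, "udp_dport": 5060}],  # assumed: SIP signalling
    )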

  28. FlowSpace: Maps Packets to Slices

  29. Real User Traffic: Opt-In • Allow users to Opt-In to services in real-time • Users can delegate control of individual flows to Slices • Add new FlowSpace to each slice's policy • Example: • "Slice 1 will handle my HTTP traffic" • "Slice 2 will handle my VoIP traffic" • "Slice 3 will handle everything else" • Creates incentives for building high-quality services
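
A sketch of how opt-in FlowSpace might map one user's packets to slices, following the example above. The header fields used to recognize HTTP and VoIP traffic, and the slice names, are assumptions for illustration.

    # Map a user's traffic to slices per the opt-in example above. Illustrative only.
    def slice_for_packet(pkt: dict) -> str:
        if pkt.get("tcp_dport") == 80 or pkt.get("tcp_sport") == 80:
            return "slice1"      # "Slice 1 will handle my HTTP traffic"
        if pkt.get("ip_proto") == 17 and pkt.get("udp_dport") == 5060:
            return "slice2"      # "Slice 2 will handle my VoIP traffic" (SIP assumed)
        return "slice3"          # "Slice 3 will handle everything else"

    assert slice_for_packet({"tcp_dport": 80}) == "slice1"
    assert slice_for_packet({"ip_proto": 17, "udp_dport": 5060}) == "slice2"
    assert slice_for_packet({"tcp_dport": 22}) == "slice3"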

  30. FlowVisor Implemented on OpenFlow (Figure: without FlowVisor, a single OpenFlow controller on a server (custom control plane) speaks the OpenFlow protocol to the stub control plane (OpenFlow firmware) above the data path in each switch/router; with FlowVisor, multiple OpenFlow controllers on servers connect to FlowVisor, which in turn speaks OpenFlow to the switches/routers, sitting between the control planes and the data plane)

  31. FlowVisor Message Handling (Figure: Alice's, Bob's, and Cathy's controllers connect over OpenFlow to FlowVisor, which connects to the switch's OpenFlow firmware above the data path. Rules going down pass a policy check: “Is this rule allowed?” Exception packets going up pass a policy check: “Who controls this packet?” Matched traffic is forwarded in hardware at full line rate.)
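
A rough sketch of the two policy checks. This is a deliberate simplification (assumed data structures, exact-containment check); FlowVisor's real logic also rewrites rules and handles overlapping flowspace.

    # Simplified FlowVisor-style message handling. Illustrative, not FlowVisor code.
    def within_flowspace(match: dict, flowspace: list) -> bool:
        """A rule is allowed here only if it pins every field that one of the
        slice's flowspace entries pins, to the same value (a simplification)."""
        return any(all(match.get(k) == v for k, v in fs.items()) for fs in flowspace)

    def handle_flow_mod(slice_policies: dict, slice_name: str, rule: dict):
        """Downward path: is this rule allowed for the slice that sent it?"""
        fs = slice_policies[slice_name].get("flowspace", [])
        if within_flowspace(rule["match"], fs):
            return ("install", rule)          # forward the rule to the switch
        return ("reject", rule)               # or rewrite it to fit the flowspace

    def handle_packet_in(slice_policies: dict, pkt: dict):
        """Upward path: which slice controls this exception packet?"""
        for name, policy in slice_policies.items():
            for fs in policy.get("flowspace", []):
                if all(pkt.get(k) == v for k, v in fs.items()):
                    return ("send_to", name)  # deliver the packet-in to that slice
        return ("drop", None)                 # no slice opted in to this traffic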

  32. OpenFlow Deployments

  33. OpenFlow has been prototyped on… • Ethernet switches • HP, Cisco, NEC, Quanta, + more underway • IP routers • Cisco, Juniper, NEC • Switching chips • Broadcom, Marvell • Transport switches • Ciena, Fujitsu • WiFi APs and WiMAX base stations • Most (all?) hardware switches now based on Open vSwitch…

  34. Deployment: Stanford • Our real, production network • 15 switches, 35 APs • 25+ users • 1+ year of use • my personal email and web-traffic! • Same physical network hosts Stanford demos • 7 different demos

  35. Demo Infrastructure with Slicing

  36. Deployments: GENI

  37. (Public) Industry Interest • Google has been a main proponent of new OpenFlow 1.1 WAN features • ECMP, MPLS-label matching • MPLS LDP-OpenFlow speaking router: NANOG50 • NEC has announced commercial products • Initially for datacenters, talking to providers • Ericsson • “MPLS Openflow and the Split Router Architecture: A Research Approach” at MPLS2010

  38. OpenFlow in the WAN

  39. CAPEX: 30-40%, OPEX: 60-70% … and yet service providers own & operate two such networks: IP and Transport

  40. Motivation • IP & Transport networks are separate • managed and operated independently • resulting in duplication of functions and resources in multiple layers • and significant capex and opex burdens • … well known (Figure: IP/MPLS router islands interconnected over a GMPLS-controlled transport network)

  41. Motivation • IP & Transport networks do not interact • IP links are static • and are supported by static circuits or lambdas in the Transport network (Figure: IP/MPLS router islands interconnected over a GMPLS-controlled transport network)

  42. What does it mean for the IP network? • IP backbone network design: router connections hardwired by lambdas over DWDM • 4X to 10X over-provisioned • for peak traffic • and for protection • Big problem: more over-provisioned links, bigger routers • How is this scalable?? (*April, 02)

  43. Bigger Routers? • Dependence on large backbone routers • Expensive • Power hungry • e.g., Juniper TX8/T640, Cisco CRS-1 • How is this scalable??

  44. Functionality Issues! • Dependence on large backbone routers • Complex & unreliable (Network World, 05/16/2007) • Dependence on packet switching • Traffic mix tipping heavily towards video • Questionable if per-hop, packet-by-packet processing is a good idea • Dependence on over-provisioned links • Over-provisioning masks the fact that packet switching is simply not very good at providing bandwidth, delay, jitter and loss guarantees

  45. How can Optics help? • Optical Switches • 10X more capacity per unit volume (Gb/s/m3) • 10X less power consumption • 10X less cost per unit capacity (Gb/s) • Five 9’s availability • Dynamic Circuit Switching • Recover faster from failures • Guaranteed bandwidth & Bandwidth-on-demand • Good for video flows • Guaranteed low latency & jitter-free paths • Help meet SLAs – lower need for over-provisioned IP links

  46. Motivation • IP & Transport networks do not interact • IP links are static • and are supported by static circuits or lambdas in the Transport network (Figure: IP/MPLS router islands interconnected over a GMPLS-controlled transport network)

  47. What does it mean for the Transport network? • Without interaction with a higher layer • there is really no need to support dynamic services • and thus no need for an automated control plane • and so the Transport network remains manually controlled via NMS/EMS • and circuits to support a service take days to provision • Without visibility into higher-layer services • the Transport network reduces to a bandwidth-seller • The Internet can help… • wide variety of services • with different requirements that can take advantage of dynamic-circuit characteristics (*April, 02)

  48. What is needed • … Converged Packet and Circuit Networks • manage and operate commonly • benefit from both packet and circuit switches • benefit from dynamic interaction between packet switching and dynamic-circuit-switching • … Requires • a common way to control • a common way to use
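
One way to picture “a common way to control”: extend the flow abstraction so a single controller programs both packet matches and circuit cross-connects. The sketch below is a hypothetical illustration of that idea; it is not the actual OpenFlow circuit-switching extensions, and the controller API is assumed.

    # Hypothetical unified packet + circuit flow entries under one controller.
    # Not the real OpenFlow circuit extension format; purely illustrative.
    packet_flow = {
        "switch": "packet-sw-1",
        "match": {"ip_dst_prefix": "5.6.0.0/16"},
        "actions": [("set_vlan", 100), ("output", "port-to-transport")],
    }

    circuit_flow = {
        "switch": "transport-sw-1",
        # A circuit "match" is a cross-connect: an input port and lambda...
        "match": {"in_port": 3, "lambda": "1550.12nm"},
        # ...mapped to an output port and lambda, with no per-packet lookups.
        "actions": [("cross_connect", {"out_port": 7, "lambda": "1550.12nm"})],
    }

    def provision_video_service(controller):
        """A single control program sets up both halves of the service."""
        controller.install(packet_flow)    # hypothetical controller API
        controller.install(circuit_flow)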

  49. But • … convergence is hard • … mainly because the two networks have very different architectures, which makes integrated operation hard • … and previous attempts at convergence have assumed that the networks remain the same • … making what goes across them bloated and complicated, and ultimately unusable • We believe true convergence will come about from architectural change!

  50. (Figure: the IP/MPLS-over-GMPLS-transport topology converged into a single Flow Network under a unified control plane (UCP))
