
pac.c Packet & Circuit Convergence with OpenFlow


Presentation Transcript


  1. http://openflowswitch.org pac.c Packet & Circuit Convergence with OpenFlow Saurav Das, Guru Parulkar, & Nick McKeown Stanford University http://www.openflowswitch.org/wk/index.php/PAC.C Ciena India, April 2nd 2010

  2. Internet has many problems Plenty of evidence and documentation The Internet’s “root cause problem”: it is closed to innovation

  3. We have lost our way
  Routing, management, mobility management, access control, VPNs, …
  • Millions of lines of source code, 5400 RFCs: a barrier to entry
  • 500M gates, 10 Gbytes of RAM: bloated, power hungry
  (Diagram: App / App / App over an Operating System over Specialized Packet Forwarding Hardware.)

  4. Many complex functions baked into the infrastructure
  • OSPF, BGP, multicast, differentiated services, traffic engineering, NAT, firewalls, MPLS, redundant layers, …
  • An industry with a “mainframe mentality”
  (Diagram: a router split into a hardware datapath and software control, crowded with iBGP, eBGP, OSPF-TE and RSVP-TE HELLOs, IPSec, authentication/security/access control, firewall, NAT, IPv6, multicast, anycast, Mobile IP, VLAN, MPLS, L2/L3 VPN, multi-layer multi-region functions.)

  5. Glacial process of innovation, made worse by a captive standards process
  (Cycle: Idea → Standardize → Wait 10 years → Deployment)
  • Driven by vendors
  • Consumers largely locked out
  • Glacial innovation

  6. Change is happening in non-traditional markets
  (Diagram: several boxes of App / App / App over an Operating System over Specialized Packet Forwarding Hardware, giving way to apps running over a Network Operating System.)

  7. The “Software-defined Network”: 1. Open interface to hardware 2. At least one good operating system (extensible, possibly open-source) 3. Well-defined open API
  (Diagram: App / App / App over a Network Operating System over several boxes of Simple Packet Forwarding Hardware.)

  8. Trend
  (Diagram: the computer industry stack of x86 (computer), a virtualization layer, Windows / Linux / Mac OS, and apps, alongside the network industry stack of OpenFlow, virtualization or “slicing”, NOX and other network operating systems, and controllers/apps.)
  Simple, common, stable hardware substrate below + programmability + strong isolation model + competition above = result: faster innovation

  9. The Flow Abstraction
  Exploit the flow table in switches, routers, and chipsets. Each flow (Flow 1 … Flow N) has:
  • Rule (exact & wildcard), e.g. port, VLAN ID, L2, L3, L4, …
  • Action, e.g. unicast, mcast, map-to-queue, drop
  • Statistics: count packets & bytes; expiration time/count
  A Default Action covers packets that match no flow.
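A minimal sketch in Python of the flow abstraction on this slide: each table entry pairs a matching rule (exact or wildcard fields) with actions, per-flow counters, and an expiration, plus a default action for table misses. The class and field names are illustrative assumptions, not any particular switch's or controller's API.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class FlowEntry:
    rule: Dict[str, object]          # e.g. {"vlan_id": 7, "ip_dst": "5.6.7.8"}; missing keys are wildcards
    actions: List[str]               # e.g. ["output:6"], ["map-to-queue:2"], ["drop"]
    packets: int = 0                 # statistics: packet count
    bytes: int = 0                   # statistics: byte count
    idle_timeout: Optional[int] = None  # expiration time/count

    def matches(self, pkt_headers: Dict[str, object]) -> bool:
        # A packet matches if every field named in the rule equals the packet's value.
        return all(pkt_headers.get(k) == v for k, v in self.rule.items())

class FlowTable:
    def __init__(self, default_action: str = "send-to-controller"):
        self.entries: List[FlowEntry] = []
        self.default_action = default_action

    def lookup(self, pkt_headers: Dict[str, object], pkt_len: int) -> List[str]:
        for e in self.entries:
            if e.matches(pkt_headers):
                e.packets += 1           # update per-flow statistics on a hit
                e.bytes += pkt_len
                return e.actions
        return [self.default_action]     # default action on a table miss
```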

  10. OpenFlow Switching
  An OpenFlow Switch pairs a hardware Flow Table with a software Secure Channel (SSL) to the Controller. The OpenFlow Protocol carries:
  • Add/delete flow entry
  • Encapsulated packets
  • Controller discovery
  A Flow is any combination of the header fields described in the Rule.
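A hedged sketch of the exchange on this slide: the switch encapsulates an unmatched packet to the controller over the secure channel, and the controller replies with an "add flow entry" message plus a "packet out" for the buffered packet. The dict message layout is an illustration, not the actual OpenFlow wire protocol.

```python
def handle_packet_in(pkt_headers, buffer_id, learned_ports):
    """Return the controller's reply messages for one encapsulated packet."""
    out_port = learned_ports.get(pkt_headers.get("mac_dst"))
    if out_port is None:
        # Unknown destination: flood this one packet, install nothing.
        return [{"type": "packet_out", "buffer_id": buffer_id, "actions": ["flood"]}]
    return [
        # Add a flow entry so later packets of this flow are switched in hardware.
        {"type": "flow_mod", "command": "add",
         "match": {"mac_dst": pkt_headers["mac_dst"]},
         "actions": [f"output:{out_port}"], "idle_timeout": 60},
        # And forward the buffered packet itself.
        {"type": "packet_out", "buffer_id": buffer_id,
         "actions": [f"output:{out_port}"]},
    ]

# Example: a packet for a known host triggers a flow_mod plus a packet_out.
print(handle_packet_in({"mac_dst": "00:1f:aa:bb:cc:dd"}, buffer_id=7,
                       learned_ports={"00:1f:aa:bb:cc:dd": 6}))
```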

  11. Flow Example
  A Routing Controller uses the OpenFlow Protocol to install a Rule / Action / Statistics entry in each switch along the path. A Flow is the fundamental unit of manipulation within a switch.

  12. OpenFlow is Backward Compatible
  Match fields: Switch Port, MAC src, MAC dst, Eth type, VLAN ID, IP Src, IP Dst, IP Prot, TCP sport, TCP dport → Action
  • Ethernet switching: MAC dst = 00:1f:.., all other fields wildcarded → port6
  • IP routing: IP Dst = 5.6.7.8, all other fields wildcarded → port6
  • Application firewall: TCP dport = 22, all other fields wildcarded → drop
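The three backward-compatible behaviors on this slide, written as wildcard flow entries in the spirit of the FlowEntry sketch above. Plain dicts and field names are illustrative, not a real switch's table format; unlisted fields are wildcarded.

```python
ethernet_switching = {"match": {"mac_dst": "00:1f:.."},   # forward on L2 destination
                      "actions": ["output:6"]}
ip_routing         = {"match": {"ip_dst": "5.6.7.8"},     # forward on L3 destination
                      "actions": ["output:6"]}
app_firewall       = {"match": {"tcp_dport": 22},         # block ssh
                      "actions": ["drop"]}

flow_table = [ethernet_switching, ip_routing, app_firewall]

def first_match(flow_table, pkt):
    """Return the actions of the first entry whose listed fields all match."""
    for entry in flow_table:
        if all(pkt.get(k) == v for k, v in entry["match"].items()):
            return entry["actions"]
    return ["send-to-controller"]

# An inbound packet to TCP port 22 falls through to the firewall rule and is dropped.
print(first_match(flow_table, {"ip_dst": "1.2.3.4", "tcp_dport": 22}))
```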

  13. OpenFlow allows layers to be combined
  • Flow switching (exact match on all fields): port3, MAC src 00:2e.., MAC dst 00:1f.., Eth type 0800, vlan1, IP 1.2.3.4 → 5.6.7.8, IP prot 4, TCP 17264 → 80, action: port6
  • VLAN + App: VLAN ID = vlan1, TCP dport = 80, rest wildcarded, action: port6, port7
  • Port + Ethernet + IP: port3, MAC src 00:2e.., Eth type 0800, IP Dst 5.6.7.8, IP prot 4, rest wildcarded, action: port 10

  14. A Clean Slate Approach
  Goal: put an open platform in the hands of researchers/students to test new ideas at scale.
  Approach:
  • Define the OpenFlow feature
  • Work with vendors to add OpenFlow to their switches
  • Deploy on college campus networks
  • Create experimental open-source software, so researchers can build on each other’s work

  15. OpenFlow Hardware
  Juniper MX-series, NEC IP8800, HP ProCurve 5400, Cisco Catalyst 6k, Ciena CoreDirector, Arista 7100 series (Fall 2009), Quanta LB4G (Fall 2009), WiFi APs, WiMAX (NEC)

  16. OpenFlow Deployments Research and Production Deployments on commercial hardware Juniper, HP, Cisco, NEC, (Quanta), … • Stanford Deployments • Wired: CS Gates building, EE CIS building, EE Packard building (soon) • WiFi: 100 OpenFlow APs across SoE • WiMAX: OpenFlow service in SoE • Other deployments • Internet2 • JGN2plus, Japan • 10-15 research groups have switches

  17. Nationwide OpenFlow Trials
  UW, Univ Wisconsin, Princeton, Indiana Univ, Rutgers, Stanford, Clemson, Georgia Tech, interconnected over NLR and Internet2. Production deployments before end of 2010.

  18. Motivation
  IP & transport networks (carrier’s view)
  • are separate networks, managed and operated independently
  • resulting in duplication of functions and resources in multiple layers
  • and significant capex and opex burdens
  • … well known
  (Diagram: IP/MPLS packet networks and a GMPLS-controlled transport network, drawn as separate layers.)

  19. Motivation
  • … Convergence is hard, mainly because the two networks have very different architectures, which makes integrated operation hard
  • … and previous attempts at convergence have assumed that the networks remain the same, making what goes across them bloated, complicated, and ultimately unusable
  We believe true convergence will come about from architectural change!

  20. (Diagram: the separate IP/MPLS and GMPLS-controlled transport networks of slide 18, replaced by a single flow network under a unified control plane (UCP).)

  21. pac.c Research Goal
  Packet and circuit flows commonly controlled & managed: a simple network of flow switches under a simple, unified, automated control plane … switching at different granularities: packet, time-slot, lambda & fiber.

  22. OpenFlow & Circuit Switches
  Packet flows exploit the flow table (Switch Port, MAC src, MAC dst, Eth type, VLAN ID, IP Src, IP Dst, IP Prot, TCP sport, TCP dport → Action).
  Circuit flows exploit the cross-connect table in circuit switches (In Port, In Lambda, Starting Time-Slot, Signal Type, VCG ↔ Out Port, Out Lambda, Starting Time-Slot, Signal Type, VCG).
  The Flow Abstraction presents a unifying abstraction … blurring the distinction between underlying packet and circuit, and regarding both as flows in a flow-switched network.
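A minimal sketch of the circuit-flow counterpart of a packet flow entry, using the fields named on this slide (in/out port, lambda, starting time-slot, signal type, VCG). Purely illustrative; real cross-connect provisioning is not exposed through a class like this.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CircuitFlowEntry:
    in_port: int
    out_port: int
    in_lambda: Optional[str] = None         # e.g. "1553.3nm"; None if not a WDM hop
    out_lambda: Optional[str] = None
    in_start_timeslot: Optional[int] = None  # first time slot of the allocation
    out_start_timeslot: Optional[int] = None
    signal_type: str = "VC4"                 # e.g. VC4, STS-192
    vcg: Optional[int] = None                # virtual concatenation group carrying packet traffic

# Example (made-up values): cross-connect VCG 3's traffic between two TDM ports.
xconnect = CircuitFlowEntry(in_port=2, out_port=1,
                            in_start_timeslot=4, out_start_timeslot=1,
                            signal_type="VC4", vcg=3)
print(xconnect)
```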

  23. pac.c Example
  (Diagram: a hybrid switch with GE ports into a packet switch fabric and TDM ports into a TDM circuit switch fabric, both running OpenFlow in software. Packet-flow entries map traffic into virtual concatenation groups, e.g. IN: P3, IP 11.12.0.0 → OUT: tag VLAN2, P1; IN: IP 11.13.0.0, TCP 80, VLAN 1025 → OUT: tag VLAN7, P2; VLAN2 → VCG3, VLAN7 → VCG5. Circuit-flow entries cross-connect the VCGs onto VC-4 time slots of an STS-192 on the TDM ports, e.g. VCG3 → P2, VC4 #4; VCG5 → P1, VC4 #1 and VC4 #10.)
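An illustrative sketch of the two-stage lookup in the hybrid switch on this slide: a packet-flow entry steers a flow to a VLAN tag and a VCG on the packet fabric, and a circuit-flow entry cross-connects that VCG onto TDM time slots. The values loosely echo the slide's example table; the dict layout is an assumption.

```python
packet_flow_table = [
    {"match": {"in_port": "P3", "ip_dst_prefix": "11.12.0.0/16"},
     "actions": ["push_vlan:2", "output:VCG3"]},
    {"match": {"ip_dst_prefix": "11.13.0.0/16", "tcp_dport": 80, "vlan_id": 1025},
     "actions": ["push_vlan:7", "output:VCG5"]},
]

circuit_flow_table = [
    {"vcg": "VCG3", "tdm_port": "P2", "signal": "VC4", "start_timeslot": 4},
    {"vcg": "VCG5", "tdm_port": "P1", "signal": "VC4", "start_timeslot": 1},
]

def circuit_for_vcg(vcg):
    """Find the cross-connect entry carrying a given virtual concatenation group."""
    return next(e for e in circuit_flow_table if e["vcg"] == vcg)

# The first packet entry steers matching traffic into VCG3, which this cross-connect
# carries on TDM port P2 starting at VC-4 time slot 4.
print(circuit_for_vcg("VCG3"))
```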

  24. Unified Architecture
  Networking applications (App, App, …) run on a unified control plane: the network operating system. A unifying abstraction, the OpenFlow protocol, connects it to the underlying data-plane switching: packet switches, circuit switches, and combined packet & circuit switches.

  25. Example Network Services • Static “VLANs” • New routing protocol: unicast, multicast, multipath, load-balancing • Network access control • Mobile VM management • Mobility and handoff management • Energy management • Packet processor (in controller) • IPvX • Network measurement and visualization • …

  26. Converged packets & dynamic circuits open up new capabilities
  Network recovery, congestion control, routing, traffic engineering, QoS, power management, VPNs, discovery, … all built above the OpenFlow protocol.

  27. Example Application: Congestion Control … via Variable Bandwidth Packet Links

  28. OpenFlow Demo at SC09
  • We demonstrated ‘Variable Bandwidth Packet Links’ at SuperComputing 2009, a joint demo with Ciena Corp.
  • Ciena CoreDirector switches: packet (Ethernet) and circuit (SONET TDM) switching fabrics and interfaces, with native support of OpenFlow for both switching technologies
  • The Network OS controls both switching fabrics
  • A network application establishes packet & circuit flows and modifies circuit bandwidth in response to packet flow needs
  • http://www.openflowswitch.org/wp/2009/11/openflow-demo-at-sc09/
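A hedged sketch of the application logic described above: poll the byte counters of the packet flows riding a circuit and ask the controller to grow or shrink the circuit when utilization crosses thresholds (in VC-4-sized steps here, an assumption). The callables `get_flow_byte_rate` and `resize_circuit` are placeholders for controller calls, not a real API.

```python
VC4_BPS = 150_000_000          # nominal VC-4 payload rate used as the adjustment step

def adjust_circuit(circuit, flows, get_flow_byte_rate, resize_circuit,
                   high=0.8, low=0.3):
    """Grow the circuit when packet demand nears capacity, shrink it when demand falls."""
    demand_bps = sum(get_flow_byte_rate(f) * 8 for f in flows)
    capacity_bps = circuit["timeslots"] * VC4_BPS
    if demand_bps > high * capacity_bps:
        resize_circuit(circuit, circuit["timeslots"] + 1)       # add a time slot
    elif circuit["timeslots"] > 1 and demand_bps < low * capacity_bps:
        resize_circuit(circuit, circuit["timeslots"] - 1)       # release a time slot

# Toy run: a 2-slot circuit whose flows currently demand about 272 Mb/s.
circuit = {"timeslots": 2}
rates = {"f1": 20e6, "f2": 14e6}           # bytes/s per flow
adjust_circuit(circuit, rates,
               get_flow_byte_rate=rates.get,
               resize_circuit=lambda c, n: c.update(timeslots=n))
print(circuit)   # demand exceeds 80% of 300 Mb/s, so a slot is added
```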

  29. OpenFlow Demo at SC09

  30. OpenFlow Testbed
  (Diagram: an OpenFlow controller speaking the OpenFlow protocol to two NetFPGA-based OpenFlow packet switches (NF1, NF2) and a WSS-based OpenFlow circuit switch built around a 1x9 Wavelength Selective Switch. GE to DWDM SFP convertors at λ1 = 1553.3 nm and λ2 = 1554.1 nm, an AWG mux/demux, 25 km of SMF, taps to an OSA, a video server, and video clients at 192.168.3.10 / .12 / .15.)

  31. Lab Demo with Wavelength Switches
  (Diagram: two OpenFlow packet switches connected through GE-optical convertors, a mux/demux, and an OpenFlow circuit switch over 25 km of SMF.)

  32. pac.c next step: a larger demonstration of capabilities enabled by converged networks

  33. Demo Goals
  • The next big demo of OpenFlow capabilities @ GEC8 (July 20th): merge the aggregation demo (SIGCOMM’09) with the UCP & dynamic circuits demo (SC’09), and provide differential treatment to aggregated packet flows
  • OpenFlow allows for the dynamic definition of flow granularity, enabling packet flow aggregation based on any of the packet headers without any encapsulation or tagging, and circuit flows of varying bandwidths, from 50 Mbps to 40 Gbps
  • By merging the two, we can demonstrate L1–L7 control and dynamic & flexible treatment of traffic: best-effort packets (over a shared static circuit) for apps like http, ftp, smtp; low-bandwidth, minimum-propagation-delay paths for applications like VoIP; variable bandwidth (BoD) service for applications like streaming video
  • Possible extensions include varying levels of network recovery: re-routing packet flows, protected circuit flows, etc.
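A hedged sketch of the differential treatment listed above: classify an aggregated packet flow by application class and pick how it should be carried over the circuit network. The class names mirror the bullets; the port-based policy table is an illustrative assumption, not the demo's actual configuration.

```python
POLICY = {
    "best_effort": {"ports": {80, 21, 25},      # http, ftp, smtp
                    "treatment": "shared static circuit"},
    "voip":        {"ports": {5060},            # SIP signalling as a stand-in for VoIP
                    "treatment": "low-bandwidth, minimum-propagation-delay path"},
    "video":       {"ports": {554},             # RTSP as a stand-in for streaming video
                    "treatment": "variable-bandwidth (BoD) dynamic circuit"},
}

def treatment_for(tcp_dport: int) -> str:
    """Map a flow's destination port to its circuit treatment."""
    for cls in POLICY.values():
        if tcp_dport in cls["ports"]:
            return cls["treatment"]
    return "shared static circuit"   # default: best effort

print(treatment_for(554))   # streaming video -> BoD dynamic circuit
```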

  34. Demo Topology
  (Diagram: applications over a network operating system controlling Ethernet packet switches at the edges and hybrid packet/TDM SONET switches in the core.)

  35. Demo Methodology
  (Diagram: the same demo topology as slide 34, used to walk through the steps that follow.)

  36. Step 1: Aggregation into Fixed Circuits
  Packet flows are aggregated into static circuits … for best-effort traffic: http, smtp, ftp, etc.

  37. Step 2: Aggregation into Dynamic Circuits
  Streaming video flows are initially muxed into the static circuits; then streaming video traffic increases.

  38. Step 2: Aggregation into Dynamic Circuits
  … which leads to the video flows being aggregated and packed into a dynamically created circuit that bypasses the intermediate packet switch.

  39. Step 2: Aggregation into Dynamic Circuits
  … an even greater increase in video traffic results in a dynamic increase of the circuit bandwidth.

  40. Step 3: Fine-grained Control
  VoIP flows are aggregated over a dynamic low-bandwidth circuit with minimum propagation delay.
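A sketch of the "minimum propagation delay" path selection mentioned above: a plain shortest-path run over per-link propagation delays (roughly 5 µs per km of fiber). The topology and delay figures are made-up illustrations, not the demo's network.

```python
import heapq

def min_delay_path(links, src, dst):
    """links: dict {(a, b): delay_seconds}; returns (total_delay, [nodes])."""
    graph = {}
    for (a, b), d in links.items():
        graph.setdefault(a, []).append((b, d))
        graph.setdefault(b, []).append((a, d))
    heap, seen = [(0.0, src, [src])], set()
    while heap:
        delay, node, path = heapq.heappop(heap)
        if node == dst:
            return delay, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, d in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(heap, (delay + d, nxt, path + [nxt]))
    return float("inf"), []

# 25 km of fiber is roughly 125 µs of propagation delay.
links = {("A", "B"): 125e-6, ("B", "C"): 125e-6, ("A", "C"): 400e-6}
print(min_delay_path(links, "A", "C"))   # prefers A-B-C (250 µs) over A-C (400 µs)
```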

  41. Step 3: Fine-grained Control
  Decreasing video traffic leads to removal of the dynamic circuit.

  42. Step 4: Network Recovery
  • Circuit flow recovery via a previously allocated backup circuit (protection) or a dynamically created circuit (restoration)
  • Packet flow recovery via rerouting
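A sketch contrasting the two circuit-recovery options named on this slide: protection switches to a pre-allocated backup, restoration computes and signals a new circuit after the failure. The callable `compute_and_signal_circuit` is a hypothetical stand-in for whatever the controller would actually do.

```python
def recover_circuit(circuit, compute_and_signal_circuit):
    """Return the circuit that carries traffic after the working path fails."""
    if circuit.get("backup") is not None:
        # Protection: the backup was provisioned in advance, so switchover is fast.
        return circuit["backup"]
    # Restoration: build a replacement circuit on demand (slower, but no idle spare).
    return compute_and_signal_circuit(circuit["endpoints"])

# Toy usage: a protected circuit falls back to its backup without recomputation.
working = {"endpoints": ("NY", "SF"),
           "backup": {"endpoints": ("NY", "SF"), "via": "CHI"}}
print(recover_circuit(working, compute_and_signal_circuit=lambda ep: {"endpoints": ep}))
```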

  43. Demo References
  Aggregation: http://openflow.smugmug.com/OpenFlow-Videos/Aggregation-Demo/9651006_JGGzo#651126002_QybPc-L-LB
  Packet and Circuit Convergence: http://www.openflowswitch.org/wk/index.php/PAC.C

  44. pac.c business models

  45. Demo Motivation
  • It is well known that Transport Service Providers dislike giving up manual control of their networks to an automated control plane, no matter how intelligent that control plane may be. How to convince them?
  • It is also well known that converged operation of packet & circuit networks is a good idea for those that own both types of networks, e.g. AT&T, Verizon. BUT what about those who own only packet networks, e.g. Google? They do not wish to buy circuit switches. How to convince them?
  • We believe the answer to both lies in virtualization (or slicing).

  46. Demo Goals
  • The 3rd big demo of OpenFlow capabilities with circuit switches, potentially targeted for SuperComputing 2010 (November 15th)
  • Goal #1: demonstrate OpenFlow as a unified virtualization platform for packet and circuit switches.
  • Goal #2: demonstrate a deployment scenario for converged packet and circuit networks owned by different service providers; essentially a technical/business model which TSPs can be comfortable with and which ISPs can buy into.

  47. Basic Idea: Unified Virtualization
  (Diagram: client controllers speak the OpenFlow protocol to a FlowVisor, which in turn speaks the OpenFlow protocol to the underlying packet (P) and circuit (CK) switches.)

  48. Deployment Scenario: Different SPs
  (Diagram: client controllers for ISP ‘A’, ISP ‘B’, and a private-line client sit above a FlowVisor under Transport Service Provider (TSP) control. The OpenFlow protocol connects them, through the FlowVisor, to a single physical infrastructure of packet & circuit switches, carved into isolated client network slices.)

  49. Demo Topology
  (Diagram: the TSP’s virtualized network of packet/TDM SONET switches, managed through the TSP’s FlowVisor and NMS/EMS. ISP #1’s NetOS and ISP #2’s NetOS each control their own OF-enabled network plus a slice of the TSP’s network; a private-line customer is served directly by the TSP.)

  50. Demo Methodology
  We will show:
  • The TSP can virtualize its network with the FlowVisor while maintaining operator control via NMS/EMS.
  • The FlowVisor will manage slices of the TSP’s network for ISP customers, where { slice = bandwidth + control of part of TSP’s switches }
  • NMS/EMS can be used to manually provision circuits for private-line customers.
  • Importantly, every customer (ISP #1, ISP #2, private line) is isolated from other customers’ slices. ISP #1 is free to do whatever it wishes within its slice, e.g. use an automated control plane (like OpenFlow) and bring up and tear down links as dynamically as it wants; ISP #2 is free to do the same within its slice. Neither can control anything outside its slice, nor interfere with other slices. The TSP can still use NMS/EMS for the rest of its network.
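A minimal sketch of the slicing policy described above: each slice couples a bandwidth allocation with the subset of switch ports its controller may touch, and the FlowVisor-like layer rejects flow entries that reach outside the slice. This illustrates the idea only; it is not FlowVisor's actual policy language, and the slice contents are made up.

```python
SLICES = {
    "ISP1": {"bandwidth_mbps": 1000, "ports": {("tdm1", 1), ("tdm2", 1), ("pkt1", 3)}},
    "ISP2": {"bandwidth_mbps": 500,  "ports": {("tdm1", 2), ("tdm3", 1)}},
    # Private-line customers are provisioned manually via NMS/EMS, outside any slice.
}

def allowed(slice_name, switch, port):
    """May this slice's controller install flow entries on (switch, port)?"""
    return (switch, port) in SLICES[slice_name]["ports"]

print(allowed("ISP1", "tdm1", 1))   # True: inside ISP1's slice
print(allowed("ISP2", "pkt1", 3))   # False: isolated from ISP1's ports
```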
