OpenFlow in Service Provider Networks
AT&T Tech Talks, October 2010

Rob Sherwood, Saurav Das, Yiannis Yiakoumis




Talk Overview

  • Motivation

  • What is OpenFlow

  • Deployments

  • OpenFlow in the WAN

    • Combined Circuit/Packet Switching

    • Demo

  • Future Directions


We have lost our way

Routing, management, mobility management, access control, VPNs, … — 5,400 RFCs

[Figure: a closed router — App / App / App on a proprietary Operating System over Specialized Packet Forwarding Hardware. Millions of lines of source code, 500M gates, 10 GBytes RAM: bloated, power hungry, a barrier to entry]


[Figure: inside a router — software control (iBGP, eBGP, IPSec, OSPF-TE, RSVP-TE, HELLO, L2/L3 VPN, VLAN, MPLS, NAT, multicast, anycast, Mobile IP, IPv6, firewall; authentication, security, access control; multi-layer, multi-region) stacked over the hardware datapath]

  • Many complex functions baked into the infrastructure

    • OSPF, BGP, multicast, differentiated services, Traffic Engineering, NAT, firewalls, MPLS, redundant layers, …

  • An industry with a “mainframe-mentality”


  • Glacial process of innovation made worse by captive standards process

    Idea → Standardize (wait 10 years) → Deployment

    • Driven by vendors

    • Consumers largely locked out

    • Glacial innovation


    New Generation Providers Already Buy into It

    In a nutshell: driven by cost and control, it started in the data centers…

    What new generation providers have been doing within their datacenters:

    • Buy bare-metal switches/routers

    • Write their own control/management applications on a common platform


    Change is happening in non-traditional markets

    [Figure: many closed boxes — App / App / App on an Operating System over Specialized Packet Forwarding Hardware — giving way to apps running on a shared Network Operating System]


    The “Software-defined Network”

    1. Open interface to hardware

    2. At least one good operating system

      • Extensible, possibly open-source

    3. Well-defined open API

    [Figure: App / App / App on a Network Operating System, controlling many boxes of simple packet forwarding hardware through the open interface]


    Trend

    Computer industry: x86 (the computer) → virtualization layer → Windows / Linux / Mac OS → apps

    Network industry: OpenFlow → virtualization or “slicing” → NOX and other network operating systems → controllers and apps

    Simple, common, stable hardware substrate below + programmability + strong isolation model + competition above = faster innovation


    What is OpenFlow?


    Short Story: OpenFlow is an API

    • Control how packets are forwarded

    • Implementable on COTS hardware

    • Make deployed networks programmable

      • not just configurable

    • Makes innovation easier

    • Result:

      • Increased control: custom forwarding

      • Reduced cost: API → increased competition


    [Figure: an Ethernet switch/router split into a control path (software) and a data path (hardware); with OpenFlow, the control path talks to an external OpenFlow Controller over the OpenFlow protocol (SSL/TCP), while forwarding stays in the hardware data path]


    OpenFlow Flow Table Abstraction

    MAC src | MAC dst | IP Src | IP Dst  | TCP sport | TCP dport | Action
    *       | *       | *      | 5.6.7.8 | *         | *         | port 1

    [Figure: a Controller PC speaks OpenFlow to the switch’s OpenFlow firmware (software layer), which programs the flow table in the hardware layer; hosts 1.2.3.4 and 5.6.7.8 hang off ports 1–4, and the entry above steers traffic for 5.6.7.8 out port 1]


    OpenFlow Basics: Flow Table Entries

    Rule | Action | Stats (packet + byte counters)

    Rule: Switch Port, VLAN ID, MAC src, MAC dst, Eth type, IP Src, IP Dst, IP Prot, TCP sport, TCP dport (+ mask: which fields to match)

    Action:

    • Forward packet to port(s)

    • Encapsulate and forward to controller

    • Drop packet

    • Send to normal processing pipeline

    • Modify fields
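    To make the Rule/Action/Stats triple concrete, here is a toy sketch in Python (the names `FlowEntry`, `lookup`, and the field strings are ours, not from the OpenFlow spec): wildcarded fields match anything, the first matching entry wins, and its counters are updated.

```python
# Toy model of an OpenFlow 1.0-style flow table (illustrative only).
FIELDS = ["in_port", "vlan_id", "mac_src", "mac_dst", "eth_type",
          "ip_src", "ip_dst", "ip_prot", "tcp_sport", "tcp_dport"]

class FlowEntry:
    def __init__(self, rule, action):
        self.rule = rule        # dict: field -> value, "*" wildcards a field
        self.action = action    # e.g. ("forward", "port6") or ("drop",)
        self.packets = 0        # stats: packet counter
        self.bytes = 0          # stats: byte counter

    def matches(self, pkt):
        # A packet matches if every non-wildcarded field agrees.
        return all(self.rule.get(f, "*") in ("*", pkt.get(f))
                   for f in FIELDS)

def lookup(flow_table, pkt):
    # First matching entry wins; update its counters.
    for entry in flow_table:
        if entry.matches(pkt):
            entry.packets += 1
            entry.bytes += pkt.get("len", 0)
            return entry.action
    return ("to_controller",)   # table miss: encapsulate to controller
```

    A wildcard-heavy entry behaves like the "Switching" and "Firewall" examples on the next slides; an entry pinning all ten fields behaves like "Flow Switching".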


    Examples

    Switching:

    Switch Port=*, MAC src=*, MAC dst=00:1f:.., Eth type=*, VLAN ID=*, IP Src=*, IP Dst=*, IP Prot=*, TCP sport=*, TCP dport=* → forward port6

    Flow Switching:

    Switch Port=port3, MAC src=00:20.., MAC dst=00:1f.., Eth type=0800, VLAN ID=vlan1, IP Src=1.2.3.4, IP Dst=5.6.7.8, IP Prot=4, TCP sport=17264, TCP dport=80 → forward port6

    Firewall:

    Switch Port=*, MAC src=*, MAC dst=*, Eth type=*, VLAN ID=*, IP Src=*, IP Dst=*, IP Prot=*, TCP sport=*, TCP dport=22 → drop


    Examples

    Routing:

    Switch Port=*, MAC src=*, MAC dst=*, Eth type=*, VLAN ID=*, IP Src=*, IP Dst=5.6.7.8, IP Prot=*, TCP sport=*, TCP dport=* → forward port6

    VLAN Switching:

    Switch Port=*, MAC src=*, MAC dst=00:1f.., Eth type=*, VLAN ID=vlan1, IP Src=*, IP Dst=*, IP Prot=*, TCP sport=*, TCP dport=* → forward port6, port7, port9


    OpenFlow Usage: Dedicated OpenFlow Network

    [Figure: a Controller PC running custom code (“Aaron’s code”) speaks the OpenFlow protocol to several OpenFlow switches, each holding Rule / Action / Statistics flow tables — OpenFlowSwitch.org]


    Network Design Decisions

    • Forwarding logic (of course)

    • Centralized vs. distributed control

    • Fine- vs. coarse-grained rules

    • Reactive vs. proactive rule creation

    • Likely more: open research area


    Centralized vs Distributed Control

    [Figure: centralized control — a single controller managing every OpenFlow switch; distributed control — several controllers, each managing its own subset of switches]


    Flow Routing vs. Aggregation: Both models are possible with OpenFlow

    Flow-Based

    • Every flow is individually set up by controller

    • Exact-match flow entries

    • Flow table contains one entry per flow

    • Good for fine-grained control, e.g. campus networks

    Aggregated

    • One flow entry covers large groups of flows

    • Wildcard flow entries

    • Flow table contains one entry per category of flows

    • Good for large numbers of flows, e.g. backbone


    Reactive vs. Proactive: Both models are possible with OpenFlow

    Reactive

    • First packet of flow triggers controller to insert flow entries

    • Efficient use of flow table

    • Every flow incurs small additional flow setup time

    • If control connection lost, switch has limited utility

    Proactive

    • Controller pre-populates flow table in switch

    • Zero additional flow setup time

    • Loss of control connection does not disrupt traffic

    • Essentially requires aggregated (wildcard) rules
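    As a rough sketch of the reactive model (the controller and switch classes below are our own toy stand-ins, not a real OpenFlow library): a table miss hands the first packet to the controller, which computes a path and installs an exact-match entry, so only that first packet pays the flow-setup round-trip.

```python
# Toy reactive controller: packet-in -> compute route -> install entry.
class ReactiveController:
    def __init__(self, routing_fn):
        self.route = routing_fn   # flow key -> output port

    def on_packet_in(self, switch, pkt):
        key = (pkt["ip_src"], pkt["ip_dst"],
               pkt["tcp_sport"], pkt["tcp_dport"])
        port = self.route(key)
        # Install an exact-match entry; subsequent packets of this
        # flow are forwarded in hardware without controller involvement.
        switch.install(match=key, action=("forward", port))
        switch.send(pkt, port)

class FakeSwitch:
    # Stand-in for a switch's flow table and output ports.
    def __init__(self):
        self.table = {}
        self.sent = []
    def install(self, match, action):
        self.table[match] = action
    def send(self, pkt, port):
        self.sent.append((pkt["ip_dst"], port))
```

    In the proactive model, the same `install` calls would simply be issued up-front with wildcarded matches, before any traffic arrives.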


    OpenFlow Application: Network Slicing

    • Divide the production network into logical slices

      • each slice/service controls its own packet forwarding

      • users pick which slice controls their traffic: opt-in

      • existing production services run in their own slice

        • e.g., Spanning tree, OSPF/BGP

    • Enforce strong isolation between slices

      • actions in one slice do not affect another

    • Allows the (logical) testbed to mirror the production network

      • real hardware, performance, topologies, scale, users

    • Prototype implementation: FlowVisor


    Add a Slicing Layer Between Planes

    [Figure: Slice 1 / Slice 2 / Slice 3 controllers sit above a slicing layer that holds the slice policies; the slicing layer speaks the control/data protocol to the data plane, pushing rules down and passing exceptions up to the right slice]


    Network Slicing Architecture

    • A network slice is a collection of sliced switches/routers

      • Data plane is unmodified

      • Packets forwarded with no performance penalty

      • Slicing with existing ASIC

    • Transparent slicing layer

      • each slice believes it owns the data path

      • enforces isolation between slices

        • i.e., rewrites or drops rules to adhere to the slice policy

      • forwards exceptions to correct slice(s)


    Slicing Policies

    • The policy specifies resource limits for each slice:

      • Link bandwidth

      • Maximum number of forwarding rules

      • Topology

      • Fraction of switch/router CPU

      • FlowSpace: which packets does the slice control?


    FlowSpace: Maps Packets to Slices
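    A minimal sketch of the idea, under our own simplifications (real FlowVisor FlowSpace entries carry priorities and ranges): each slice's policy names the region of header space it controls, and a packet is mapped to every slice whose FlowSpace it falls into. Opt-in simply adds entries to a slice's policy.

```python
# Toy FlowSpace: a dict of required header-field values; "*" (or an
# absent field) means "any". Field names follow the flow-table slides.
def in_flowspace(flowspace, pkt):
    return all(v == "*" or pkt.get(f) == v for f, v in flowspace.items())

def slices_for(policies, pkt):
    # policies: slice name -> FlowSpace. Returns every owning slice.
    return [name for name, fs in policies.items() if in_flowspace(fs, pkt)]
```

    With the opt-in examples from the previous slide, "Slice 1 handles my HTTP traffic" becomes a FlowSpace entry pinning `tcp_dport` to 80 (OpenFlow's transport-port fields cover UDP as well).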


    Real User Traffic: Opt-In

    • Allow users to Opt-In to services in real-time

      • Users can delegate control of individual flows to Slices

      • Add new FlowSpace to each slice's policy

    • Example:

      • "Slice 1 will handle my HTTP traffic"

      • "Slice 2 will handle my VoIP traffic"

      • "Slice 3 will handle everything else"

    • Creates incentives for building high-quality services


    FlowVisor Implemented on OpenFlow

    [Figure: custom control planes (OpenFlow controllers on servers) connect over the OpenFlow protocol to the FlowVisor; the FlowVisor speaks OpenFlow down to the stub control plane (OpenFlow firmware) sitting above each switch/router’s data path]


    FlowVisor Message Handling

    [Figure: Alice’s, Bob’s, and Cathy’s controllers sit above the FlowVisor. A rule pushed down is policy-checked — “Is this rule allowed?” — before reaching the OpenFlow firmware. An exception packet coming up is policy-checked — “Who controls this packet?” — and delivered to the owning slice’s controller. Everything else is forwarded at full line rate in the data path]
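    The downward policy check can be sketched as follows (our own simplification — FlowVisor actually intersects and rewrites rules against the slice's FlowSpace rather than only accepting or rejecting them): a rule from a slice's controller is allowed only if every packet it matches lies inside that slice's FlowSpace.

```python
# Toy "Is this rule allowed?" check against a slice's FlowSpace.
def field_ok(slice_val, rule_val):
    # "*" in the slice's FlowSpace permits anything for that field;
    # otherwise the rule must pin the field to the slice's value
    # (a wildcarded rule field would spill outside the slice).
    return slice_val == "*" or rule_val == slice_val

def rule_allowed(flowspace, rule):
    fields = set(flowspace) | set(rule)
    return all(field_ok(flowspace.get(f, "*"), rule.get(f, "*"))
               for f in fields)
```

    This is what enforces isolation: a slice that controls only port-80 traffic cannot install a rule that captures SSH, nor an all-wildcard rule.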


    OpenFlow Deployments


    OpenFlow has been prototyped on…

    Most (all?) hardware switches are now based on Open vSwitch…

    • Ethernet switches

      • HP, Cisco, NEC, Quanta, + more underway

    • IP routers

      • Cisco, Juniper, NEC

    • Switching chips

      • Broadcom, Marvell

    • Transport switches

      • Ciena, Fujitsu

    • WiFi APs and WiMAX Basestations


    Deployment: Stanford

    • Our real, production network

      • 15 switches, 35 APs

      • 25+ users

      • 1+ year of use

      • my personal email and web-traffic!

    • Same physical network hosts Stanford demos

      • 7 different demos


    Demo Infrastructure with Slicing


    Deployments: GENI


    (Public) Industry Interest

    • Google has been a main proponent of new OpenFlow 1.1 WAN features

      • ECMP, MPLS-label matching

      • MPLS LDP-OpenFlow speaking router: NANOG50

    • NEC has announced commercial products

      • Initially for datacenters, talking to providers

    • Ericsson

      • “MPLS Openflow and the Split Router Architecture: A Research Approach” at MPLS 2010


    OpenFlow in the WAN


    CAPEX: 30–40% / OPEX: 60–70%

    … and yet service providers own & operate 2 such networks: IP and Transport


    Motivation

    IP & Transport Networks are separate

    [Figure: IP/MPLS packet networks (C = control, D = data) ride over a GMPLS-capable transport network of circuit switches]

    • managed and operated independently

    • resulting in duplication of functions and resources in multiple layers

    • and significant capex and opex burdens

    • … well known


    Motivation

    IP & Transport Networks do not interact

    [Figure: the same IP/MPLS-over-transport picture]

    • IP links are static

    • and supported by static circuits or lambdas in the Transport network


    What does it mean for the IP network?

    IP backbone network design: router connections hardwired by lambdas over DWDM

    • 4X to 10X over-provisioned, for peak traffic and protection

    • Big problem: more over-provisioned links and bigger routers

    How is this scalable?

    *April, 02


    Bigger Routers?

    • Dependence on large backbone routers (Juniper TX8/T640, Cisco CRS-1)

    • Expensive

    • Power hungry

    How is this scalable?


    Functionality Issues!

    • Dependence on large backbone routers

      • Complex & unreliable (Network World, 05/16/2007)

    • Dependence on packet switching

      • Traffic mix tipping heavily towards video

      • Questionable if per-hop, packet-by-packet processing is a good idea

    • Dependence on over-provisioned links

      • Over-provisioning masks that packet switching is simply not very good at providing bandwidth, delay, jitter and loss guarantees


    How can Optics help?

    • Optical Switches

      • 10X more capacity per unit volume (Gb/s/m3)

      • 10X less power consumption

      • 10X less cost per unit capacity (Gb/s)

      • Five 9’s availability

    • Dynamic Circuit Switching

      • Recover faster from failures

      • Guaranteed bandwidth & Bandwidth-on-demand

      • Good for video flows

      • Guaranteed low latency & jitter-free paths

      • Help meet SLAs – lower need for over-provisioned IP links


    Motivation

    IP & Transport Networks do not interact

    • IP links are static

    • and supported by static circuits or lambdas in the Transport network


    What does it mean for the Transport network?

    • Without interaction with a higher layer

      • there is really no need to support dynamic services

      • and thus no need for an automated control plane

      • and so the Transport network remains manually controlled via NMS/EMS

      • and circuits to support a service take days to provision

    • Without visibility into higher-layer services

      • the Transport network reduces to a bandwidth-seller

    • The Internet can help…

      • a wide variety of services with different requirements can take advantage of dynamic circuit characteristics


    What is needed

    • … Converged Packet and Circuit Networks

      • manage and operate commonly

      • benefit from both packet and circuit switches

      • benefit from dynamic interaction between packet switching and dynamic-circuit-switching

    • … Requires

      • a common way to control

      • a common way to use


    But…

    • Convergence is hard

      • mainly because the two networks have very different architectures, which makes integrated operation hard

    • and previous attempts at convergence

      • have assumed that the networks remain the same

      • making what goes across them bloated, complicated, and ultimately unusable

    We believe true convergence will come about from architectural change!


    [Figure: today’s separate IP/MPLS and GMPLS-controlled transport networks evolve, via a unified control plane (UCP), into a single flow network of packet and circuit switches]


    pac.c

    Research Goal: Packet and Circuit Flows Commonly Controlled & Managed

    A simple, unified, automated control plane over a simple network of flow switches … that switch at different granularities: packet, time-slot, lambda & fiber


    … a common way to control

    Packet flows: the usual header match — Switch Port, MAC src, MAC dst, Eth type, VLAN ID, IP Src, IP Dst, IP Prot, TCP sport, TCP dport → Action

    Circuit flows: exploit the cross-connect table in circuit switches — In Port, In Lambda, Starting Time-Slot, Signal Type, VCG → Out Port, Out Lambda, Starting Time-Slot, Signal Type, VCG

    The Flow Abstraction presents a unifying abstraction … blurring the distinction between underlying packet and circuit, and regarding both as flows in a flow-switched network
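    As an illustration of that unifying abstraction (the field names and helper functions are ours, not pac.c's actual encoding), both a packet match and a circuit cross-connect can be written as flow entries with a match and an action, so one controller can program both kinds of switch:

```python
# Toy unified flow entries: packet granularity and circuit granularity
# expressed in the same match -> action shape.
def packet_flow(ip_dst, out_port):
    # Packet granularity: classic header match -> forward.
    return {"match": {"ip_dst": ip_dst},
            "action": ("forward", out_port)}

def circuit_flow(in_port, in_lambda, out_port, out_lambda,
                 start_slot=None, signal_type="GE"):
    # Circuit granularity: an entry in the cross-connect table —
    # (in port, in lambda [, starting time-slot], signal type)
    # -> (out port, out lambda).
    return {"match": {"in_port": in_port, "in_lambda": in_lambda,
                      "start_slot": start_slot, "signal": signal_type},
            "action": ("xconnect", out_port, out_lambda)}
```

    A controller holding both kinds of entry can, for instance, map an aggregated packet flow onto a freshly cross-connected lambda.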


    … a common way to use

    Unified Architecture

    [Figure: networking applications — variable-bandwidth packet links, dynamic optical bypass, unified recovery, application-aware QoS, traffic engineering — run on a Network Operating System over a virtualization (slicing) plane; the OpenFlow protocol is the unifying abstraction down to the underlying data plane of packet switches, circuit switches, and combined packet & circuit switches]


    Example Application: Congestion Control via Variable-Bandwidth Packet Links


    OpenFlow Demo at SC09


    Lab Demo with Wavelength Switches

    [Figure: an OpenFlow Controller speaks the OpenFlow protocol to two NetFPGA-based OpenFlow packet switches (NF1, NF2) and a WSS-based OpenFlow circuit switch built from a 1x9 Wavelength Selective Switch (WSS) and an AWG, linked by 25 km of SMF. GE-to-DWDM SFP convertors provide E-O/O-E conversion at λ1 = 1553.3 nm and λ2 = 1554.1 nm, with taps to an OSA. A video server (192.168.3.15) streams to video clients (192.168.3.10, 192.168.3.12)]


    [Photo: the lab setup — the two OpenFlow packet switches, 25 km SMF, GE-optical convertors, mux/demux, and the OpenFlow circuit switch]


    OpenFlow Enabled Converged Packet and Circuit Switched Network

    Stanford University and Ciena Corporation

    • Demonstrate a converged network, where OpenFlow is used to control both packet and circuit switches.

    • Dynamically define flow granularity to aggregate traffic moving towards the network core.

    • Provide differential treatment to different types of aggregated packet flows in the circuit network:

      • VoIP : Routed over minimum delay dynamic-circuit path

      • Video: Variable-bandwidth, jitter free path bypassing intermediate packet switches

      • HTTP: Best-effort over static-circuits

  • Many more new capabilities become possible in a converged network


    [Figure: demo topology spanning SAN FRANCISCO, NEW YORK, and HOUSTON, all under one Controller speaking the OpenFlow protocol. Aggregated packet flows: VoIP traffic in dynamic, minimum-propagation-delay paths; web traffic in static, predefined circuits; video traffic in dynamic, jitter-free, variable-bandwidth circuits]


    Demo Video


    Issues with GMPLS

    • GMPLS original goal: UCP across packet & circuit (2000)

      • Today – the idea is dead

        • Packet vendors and ISPs are not interested

        • Transport n/w SPs view it as a signaling tool available to the mgmt system for provisioning private lines (not related to the Internet)

      • After 10 yrs of development, next-to-zero significant deployment as UCP


    Issues with GMPLS

    • Issues are when considered as a unified architecture and control plane

      • control plane complexity escalates when unifying across packets and circuits because it

        • makes the basic assumption that the packet network remains the same: an IP/MPLS network with many years of legacy L2/3 baggage

        • and that the transport network remains the same – multiple layers and multiple vendor domains

      • use of fragile distributed routing and signaling protocols with many extensions, increasing switch cost & complexity, while decreasing robustness

      • does not take into account the conservative nature of network operation

        • can IP networks really handle dynamic links?

        • Do transport network service providers really want to give up control to an automated control plane?

      • does not provide easy path to control plane virtualization


    Conclusions

    • Current networks are complicated

    • OpenFlow is an API

      • Interesting apps include network slicing

    • Nation-wide academic trials underway

    • OpenFlow has potential for Service Providers

      • Custom control for Traffic Engineering

      • Combined Packet/Circuit switched networks

    • Thank you!




    Backup


    Practical Considerations

    • It is well known that Transport Service Providers dislike giving up manual control of their networks

      • to an automated control plane

      • no matter how intelligent that control plane may be

      • how to convince them?

    • It is also well known that converged operation of packet & circuit networks is a good idea

      • for those that own both types of networks – eg AT&T, Verizon

      • BUT what about those who own only packet networks –eg Google

        • they do not wish to buy circuit switches

        • how to convince them?

    • We believe the answer to both lies in virtualization (or slicing)


    Basic Idea: Unified Virtualization

    [Figure: several client controllers (C) speak the OpenFlow protocol to a FlowVisor, which speaks OpenFlow down to a shared substrate of packet (P) and circuit (CK) switches]


    Deployment Scenario: Different SPs

    [Figure: ISP ‘A’, ISP ‘B’, and Private Line client controllers speak the OpenFlow protocol to a FlowVisor under Transport Service Provider (TSP) control; the FlowVisor speaks OpenFlow to a single physical infrastructure of packet (P) and circuit (CK) switches, carved into isolated client network slices]


    Demo Topology

    [Figure: the Transport Service Provider’s (TSP) virtualized network of SONET/TDM circuit switches and Ethernet packet switches, managed by the TSP’s FlowVisor and NMS/EMS. ISP #1’s and ISP #2’s network operating systems (each running their own apps) control OpenFlow-enabled slices of the TSP’s network; a private-line customer is served via the TSP’s NMS/EMS]


    Demo Methodology

    • We will show:

    • TSP can virtualize its network with the FlowVisor while maintaining operator control via NMS/EMS.

      • The FlowVisor will manage slices of the TSP’s network for ISP customers, where { slice = bandwidth + control of part of TSP’s switches }

      • NMS/EMS can be used to manually provision circuits for Private Line customers

    • Importantly, every customer (ISP# 1, ISP# 2, Pline) is isolated from other customer’s slices.

      • ISP#1 is free to do whatever it wishes within its slice

        • eg. use an automated control plane (like OpenFlow)

        • bring up and tear-down links as dynamically as it wants

      • ISP#2 is free to do the same within its slice

      • Neither can control anything outside its slice, nor interfere with other slices

      • TSP can still use NMS/EMS for the rest of its network


    ISP #1’s Business Model

    • ISP# 1 pays for a slice = { bandwidth + TSP switching resources }

    • Part of the bandwidth is for static links between its edge packet switches (like ISPs do today)

    • and some of it is for redirecting bandwidth between the edge switches (unlike current practice)

    • The sum of both static bandwidth and redirected bandwidth is paid for up-front.

    • The TSP switching resources in the slice are needed by the ISP to enable the redirect capability.


    ISP #1’s network

    [Figure: ISP #1’s packet (virtual) topology mapped onto the actual topology of Ethernet and SONET/TDM switches. Notice the spare interfaces on the edge packet switches, and the spare bandwidth in the slice]


    [Figure: the same topology — ISP #1 redirects bandwidth between the spare interfaces to dynamically create new packet links!]


    ISP #1’s Business Model Rationale

    • Q. Why have spare interfaces on the edge switches? Why not use them all the time?

    • A. Spare interfaces on the edge switches cost less than bandwidth in the core

      • sharing expensive core bandwidth between cheaper edge ports is more cost-effective for the ISP

      • gives the ISP flexibility in using dynamic circuits to create new packet links where needed, when needed

      • The comparison is between (in the simple network shown)

        • 3 static links + 1 dynamic link = 3 ports/edge switch + static & dynamic core bandwidth

        • vs. 6 static links = 4 ports/edge switch + static core bandwidth

        • as the number of edge switches increase, the gap increases
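    A toy model of that comparison (our own numbers and topology assumptions, not the talk's): a full mesh of static core links grows quadratically with the number of edge switches, while a sparse static topology plus a few redirectable dynamic circuits grows roughly linearly, so the gap widens as the edge grows.

```python
# Toy link-count comparison: static full mesh vs. sparse static
# topology plus shared dynamic circuits (illustrative assumptions only).
def full_mesh_links(n_edge):
    # Static full mesh: every pair of edge switches gets its own link.
    return n_edge * (n_edge - 1) // 2

def static_plus_dynamic_links(n_edge, concurrent_dynamic=1):
    # A ring of static links, plus a small number of dynamic circuits
    # that can be re-pointed between spare edge ports as needed.
    return n_edge + concurrent_dynamic

def link_gap(n_edge):
    # Links (and hence core bandwidth and edge ports) saved by sharing
    # dynamic circuits instead of provisioning a static full mesh.
    return full_mesh_links(n_edge) - static_plus_dynamic_links(n_edge)
```

    Under these assumptions the saving is 1 link at 4 edge switches but 34 links at 10, which is the slide's point: the gap increases with the number of edge switches.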


    ISP #2’s Business Model

    • ISP# 2 pays for a slice = { bandwidth + TSP switching resources }

    • Only the bandwidth for static links between its edge packet switches is paid for up-front.

    • Extra bandwidth is paid for on a pay-per-use basis

    • TSP switching resources are required to provision/tear-down extra bandwidth

    • Extra bandwidth is not guaranteed


    ISP #2’s network

    [Figure: ISP #2’s packet (virtual) topology mapped onto the actual topology. Only static link bandwidth is paid for up-front; ISP #2 uses variable-bandwidth packet links (our SC09 demo)!]


    ISP #2’s Business Model Rationale

    • Q. Why use variable bandwidth packet links? In other words why have more bandwidth at the edge (say 10G) and pay for less bandwidth in the core up-front (say 1G)

    • Again it is for cost-efficiency reasons.

      • ISPs today would pay for the 10G in the core up-front and then run their links at 10% utilization.

      • Instead they could pay for say 2.5G or 5G in the core, and ramp up when they need to or scale back when they don’t – pay per use.

