High-Level Abstractions for Programming Software Defined Networks

Jennifer Rexford

Princeton University

http://www.cs.princeton.edu/~jrex

Joint with Nate Foster, David Walker, Arjun Guha, Rob Harrison, Chris Monsanto, Joshua Reich, Mark Reitblatt, Cole Schlesinger


Software Defined Networks

Decouple control and data planes by providing an open standard API


(Logically) Centralized Controller

[Diagram: a logically centralized controller platform managing the switches]


Protocols → Applications

[Diagram: a controller application running on top of the controller platform]



Payoff

  • Cheaper equipment

  • Faster innovation

  • Easier management



But How Should We Program SDNs?

Network-wide visibility and control, with direct control via an open interface

[Diagram: controller application on the controller platform]

Today’s controller APIs are tied to the underlying hardware



OpenFlow Networks



Data Plane: Packet Handling

  • Simple packet-handling rules

    • Pattern: match packet header bits

    • Actions: drop, forward, modify, send to controller

    • Priority: disambiguate overlapping patterns

    • Counters: #bytes and #packets

  1. src=1.2.*.*, dest=3.4.5.* → drop

  2. src=*.*.*.*, dest=3.4.*.* → forward(2)

  3. src=10.1.2.3, dest=*.*.*.* → send to controller
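As a rough illustration of how a switch applies such prioritized rules, here is a minimal Python sketch; the dictionary-based rule format and field names are assumptions made for this example, not the OpenFlow wire format.

    import ipaddress

    def matches(pattern, packet):
        # A pattern matches when every field it names agrees with the packet;
        # IP fields match by prefix, everything else by exact value.
        for field, value in pattern.items():
            if field in ("src", "dest"):
                if ipaddress.ip_address(packet[field]) not in ipaddress.ip_network(value):
                    return False
            elif packet.get(field) != value:
                return False
        return True

    def handle_packet(flow_table, packet):
        # Highest-priority matching rule wins; unmatched packets go to the controller.
        for rule in sorted(flow_table, key=lambda r: r["priority"], reverse=True):
            if matches(rule["pattern"], packet):
                rule["counters"]["packets"] += 1
                rule["counters"]["bytes"] += packet["length"]
                return rule["actions"]
        return ["send_to_controller"]

    flow_table = [
        {"priority": 2, "pattern": {"src": "1.2.0.0/16", "dest": "3.4.5.0/24"},
         "actions": ["drop"], "counters": {"packets": 0, "bytes": 0}},
        {"priority": 1, "pattern": {"dest": "3.4.0.0/16"},
         "actions": ["forward(2)"], "counters": {"packets": 0, "bytes": 0}},
    ]
    print(handle_packet(flow_table, {"src": "1.2.9.9", "dest": "3.4.5.7", "length": 60}))  # ['drop']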



Control Plane: Programmability

[Diagram: controller application on the controller platform, exchanging events and commands with the switches]

Events from switches: topology changes, traffic statistics, arriving packets

Commands to switches: (un)install rules, query statistics, send packets
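The following toy Python sketch illustrates this event/command loop for a MAC-learning style application; the Controller class and its method names are stand-ins invented for this example, not any real controller platform's API.

    class Controller:
        """Toy stand-in for a controller platform: events in, commands out."""

        def __init__(self):
            self.hosts = {}  # srcmac -> (switch, port), learned from packet-in events

        # Commands to switches (a real platform would emit OpenFlow messages).
        def install_rule(self, switch, pattern, actions):
            print(f"install on {switch}: {pattern} -> {actions}")

        def send_packet(self, switch, packet, port):
            print(f"send packet out {switch} port {port}")

        # Events from switches.
        def on_packet_in(self, switch, packet):
            # Learn the sender's location, then install a rule so later packets
            # toward that host stay in the data plane.
            self.hosts[packet["srcmac"]] = (switch, packet["inport"])
            if packet["dstmac"] in self.hosts:
                out_switch, out_port = self.hosts[packet["dstmac"]]
                self.install_rule(out_switch, {"dstmac": packet["dstmac"]},
                                  [("forward", out_port)])
                self.send_packet(switch, packet, out_port)

    ctrl = Controller()
    ctrl.on_packet_in("s1", {"srcmac": "aa:aa", "dstmac": "bb:bb", "inport": 3})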



E.g.: Server Load Balancing

  • Pre-install load-balancing policy

  • Split traffic based on source IP

[Diagram: traffic split by source IP prefix, src=0* to one replica and src=1* to the other]



Seamless Mobility/Migration

  • See host sending traffic at new location

  • Modify rules to reroute the traffic



Programming Abstractions for Software Defined Networks


Network Control Loop

[Diagram: the control loop: read state from the OpenFlow switches, compute policy, write policy back to the switches]



Reading State

SQL-Like Query Language



Reading State: Multiple Rules

  • Traffic counters

    • Each rule counts bytes and packets

    • Controller can poll the counters

  • Multiple rules

    • E.g., Web server traffic except for source 1.2.3.4

  • Solution: predicates

    • E.g., (srcip != 1.2.3.4) && (srcport == 80)

    • Run-time system translates into switch patterns

  Compiled switch rules:

    1. srcip = 1.2.3.4, srcport = 80
    2. srcport = 80
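A minimal Python sketch of how a run-time system might perform this translation; the rule dictionaries and the compile_negation helper are hypothetical, used only to show why the higher-priority shadowing rule is needed.

    def compile_negation(excluded_field, excluded_value, base_pattern):
        # (field != value) && base_pattern  ==>  two prioritized switch rules:
        # the higher-priority rule catches the excluded traffic so it never
        # lands in the counter the query reports.
        return [
            {"priority": 2,
             "pattern": {**base_pattern, excluded_field: excluded_value},
             "count_for_query": False},
            {"priority": 1,
             "pattern": dict(base_pattern),
             "count_for_query": True},
        ]

    rules = compile_negation("srcip", "1.2.3.4", {"srcport": 80})
    # rules[0]: srcip=1.2.3.4, srcport=80   (shadows the excluded source)
    # rules[1]: srcport=80                  (its counter answers the query)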



Reading State: Unfolding Rules

  • Limited number of rules

    • Switches have limited space for rules

    • Cannot install all possible patterns

  • Must add new rules as traffic arrives

    • E.g., histogram of traffic by IP address

    • … packet arrives from source 5.6.7.8

  • Solution: dynamic unfolding

    • Programmer specifies GroupBy(srcip)

    • Run-time system dynamically adds rules

  Rule table before: srcip = 1.2.3.4. After a packet arrives from 5.6.7.8: srcip = 1.2.3.4, srcip = 5.6.7.8.
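A minimal Python sketch of dynamic unfolding for GroupBy(srcip); the data structures and the commented-out install step are illustrative assumptions, not the Frenetic run-time's internals.

    installed = {}  # srcip -> exact-match rule with its own counters

    def on_unmatched_packet(srcip):
        # The first packet from an unseen source reaches the controller; the
        # run-time reacts by installing a per-source counting rule so later
        # packets are counted entirely in the data plane.
        if srcip not in installed:
            installed[srcip] = {"pattern": {"srcip": srcip},
                                "counters": {"packets": 0, "bytes": 0}}
            # install_rule(switch, installed[srcip])  # switch command goes here

    on_unmatched_packet("1.2.3.4")
    on_unmatched_packet("5.6.7.8")
    print(sorted(installed))  # ['1.2.3.4', '5.6.7.8'] -- one rule per observed source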



Reading: Extra Unexpected Events

  • Common programming idiom

    • First packet goes to the controller

    • Controller application installs rules




Reading: Extra Unexpected Events

  • More packets arrive before rules installed?

    • Multiple packets reach the controller




Reading: Extra Unexpected Events

  • Solution: suppress extra events

    • Programmer specifies “Limit(1)”

    • Run-time system hides the extra events, so they are not seen by the application



Frenetic SQL-Like Query Language

  • Get what you ask for

    • Nothing more, nothing less

  • SQL-like query language

    • Familiar abstraction

    • Returns a stream

    • Intuitive cost model

  • Minimize controller overhead

    • Filter using high-level patterns

    • Limit the # of values returned

    • Aggregate by #/size of packets

Traffic monitoring:

    Select(bytes) *
    Where(in:2 & srcport:80) *
    GroupBy([dstmac]) *
    Every(60)

Learning host location:

    Select(packets) *
    GroupBy([srcmac]) *
    SplitWhen([inport]) *
    Limit(1)
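As a small usage sketch, an application might consume the stream returned by the traffic-monitoring query roughly as follows; the register_callback call is a hypothetical placeholder rather than the actual Frenetic API.

    def report(buckets):
        # buckets: {dstmac: byte_count} delivered every 60 seconds for traffic
        # that entered on port 2 with source port 80.
        for dstmac, nbytes in sorted(buckets.items(), key=lambda kv: -kv[1]):
            print(f"{dstmac}: {nbytes} bytes in the last interval")

    # query.register_callback(report)   # registration API assumed for illustration
    report({"00:11:22:33:44:55": 12000, "66:77:88:99:aa:bb": 3400})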



Computing Policy

Parallel and Sequential Composition

Abstract Topology Views



Combining Many Networking Tasks

A monolithic application (Monitor + Route + FW + LB) on the controller platform is hard to program, test, debug, reuse, port, …



Modular Controller Applications

A module for each task (Monitor, Route, FW, LB) runs on the controller platform

Easier to program, test, and debug

Greater reusability and portability



Beyond Multi-Tenancy

Each module controls a different portion of the traffic (slice 1, slice 2, …, slice n) on the shared controller platform

Relatively easy to partition rule space, link bandwidth, and network events across modules



Modules Affect the Same Traffic

Each module (FW, LB, Monitor, Route) partially specifies the handling of the traffic on the shared controller platform

How to combine modules into a complete application?



Parallel Composition [ICFP’11, POPL’12]

Monitor on source IP:

    srcip = 5.6.7.8 → count
    srcip = 5.6.7.9 → count

Route on destination prefix:

    dstip = 1.2/16 → fwd(1)
    dstip = 3.4.5/24 → fwd(2)

Monitor + Route, as installed by the controller platform:

    srcip = 5.6.7.8, dstip = 1.2/16 → fwd(1), count
    srcip = 5.6.7.8, dstip = 3.4.5/24 → fwd(2), count
    srcip = 5.6.7.9, dstip = 1.2/16 → fwd(1), count
    srcip = 5.6.7.9, dstip = 3.4.5/24 → fwd(2), count
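A minimal Python sketch of the idea behind parallel composition: intersect the patterns of each pair of rules and take the union of their actions. This simplified version assumes exact field values and ignores priorities and overlapping prefixes; it is an illustration, not the Frenetic compiler.

    def intersect(p1, p2):
        # Merge two patterns; if they constrain the same field to different
        # values, the combined rule would match nothing.
        merged = dict(p1)
        for field, value in p2.items():
            if field in merged and merged[field] != value:
                return None
            merged[field] = value
        return merged

    def parallel(rules_a, rules_b):
        combined = []
        for ra in rules_a:
            for rb in rules_b:
                pattern = intersect(ra["pattern"], rb["pattern"])
                if pattern is not None:
                    combined.append({"pattern": pattern,
                                     "actions": ra["actions"] + rb["actions"]})
        return combined

    monitor = [{"pattern": {"srcip": "5.6.7.8"}, "actions": ["count"]},
               {"pattern": {"srcip": "5.6.7.9"}, "actions": ["count"]}]
    route = [{"pattern": {"dstip": "1.2/16"}, "actions": ["fwd(1)"]},
             {"pattern": {"dstip": "3.4.5/24"}, "actions": ["fwd(2)"]}]
    # parallel(monitor, route) yields the four combined rules listed above.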



Example: Server Load Balancer

  • Spread client traffic over server replicas

    • Public IP address for the service

    • Split traffic based on client IP

    • Rewrite the server IP address

  • Then, route to the replica

[Diagram: clients reach the service's public IP 1.2.3.4 at the load balancer, which spreads traffic over server replicas 10.0.0.1, 10.0.0.2, and 10.0.0.3]



Sequential Composition [NSDI’13]

Load Balancer:

    srcip = 0*, dstip = 1.2.3.4 → dstip = 10.0.0.1
    srcip = 1*, dstip = 1.2.3.4 → dstip = 10.0.0.2

Routing:

    dstip = 10.0.0.1 → fwd(1)
    dstip = 10.0.0.2 → fwd(2)

Load Balancer >> Routing, as installed by the controller platform:

    srcip = 0*, dstip = 1.2.3.4 → dstip = 10.0.0.1, fwd(1)
    srcip = 1*, dstip = 1.2.3.4 → dstip = 10.0.0.2, fwd(2)
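A minimal Python sketch of the idea behind sequential composition (>>): the first policy may rewrite header fields, and the second policy is matched against the rewritten packet. The rule format is an assumption made for this example, not Frenetic/Pyretic's actual representation.

    def sequential(first, second):
        combined = []
        for r1 in first:
            # The packet seen by the second policy has r1's rewrites applied.
            rewritten = {**r1["pattern"], **r1.get("rewrite", {})}
            for r2 in second:
                if all(rewritten.get(f) == v for f, v in r2["pattern"].items()):
                    combined.append({"pattern": r1["pattern"],
                                     "rewrite": r1.get("rewrite", {}),
                                     "actions": r2["actions"]})
        return combined

    load_balancer = [
        {"pattern": {"srcip": "0*", "dstip": "1.2.3.4"}, "rewrite": {"dstip": "10.0.0.1"}},
        {"pattern": {"srcip": "1*", "dstip": "1.2.3.4"}, "rewrite": {"dstip": "10.0.0.2"}},
    ]
    routing = [
        {"pattern": {"dstip": "10.0.0.1"}, "actions": ["fwd(1)"]},
        {"pattern": {"dstip": "10.0.0.2"}, "actions": ["fwd(2)"]},
    ]
    # sequential(load_balancer, routing) yields the two combined rules listed above.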



Dividing the Traffic Over Modules

  • Predicates

    • Specify which traffic traverses which modules

    • Based on input port and packet-header fields

E.g., non-web traffic (dstport != 80) goes through Monitor + Routing, while web traffic (dstport = 80) goes through Load Balancer >> Routing.
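A per-packet Python sketch of how a predicate might steer traffic between the two compositions; the two policy functions are placeholders standing in for the composed modules from the previous examples.

    def lb_then_route(packet):
        # Placeholder for Load Balancer >> Routing.
        return ["dstip=10.0.0.1", "fwd(1)"]

    def monitor_plus_route(packet):
        # Placeholder for Monitor + Routing.
        return ["count", "fwd(1)"]

    def policy(packet):
        # The predicate decides which composition handles the packet; a compiler
        # would instead fold the predicate into each module's rule patterns.
        if packet.get("dstport") == 80:
            return lb_then_route(packet)
        return monitor_plus_route(packet)

    print(policy({"dstport": 80}))   # handled by the load-balancing pipeline
    print(policy({"dstport": 22}))   # handled by monitoring + routing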



High-Level Architecture

[Diagram: modules M1, M2, and M3 combined by a composition spec on top of the controller platform]



Partially Specifying Functionality

  • A module should not specify everything

    • Leave some flexibility to other modules

    • Avoid tying the module to a specific setting

  • Example: load balancer plus routing

    • Load balancer spreads traffic over replicas

    • … without regard to the network paths

Load Balancer >> Routing: avoid custom interfaces between the modules.



Abstract Topology Views [NSDI’13]

  • Present abstract topology to the module

    • Implicitly encodes the constraints

    • Looks just like a normal network

    • Prevents the module from overstepping

[Diagram: the real network vs. the abstract view presented to the module]



Separation of Concerns

  • Hide irrelevant details

    • Load balancer doesn’t see the internal topology or any routing changes

[Diagram: the routing view vs. the load-balancer view]



High-Level Architecture

[Diagram: modules M1, M2, and M3 combined by view definitions and a composition spec on top of the controller platform]



Supporting Topology Views

  • Virtual ports

    • (V, 1): [(P1,2)]

    • (V, 2): [(P2, 5)]

  • Simple firewall policy

    • in=1 out=2

  • Virtual headers

    • Push virtual ports

    • Route on these ports

    • From (P1,2) to (P2,5)

[Diagram: virtual switch V (ports 1 and 2) running the firewall, mapped onto physical switches P1 and P2 running the routing policy]
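A minimal Python sketch of the virtual-to-physical port mapping from this slide; the lower() helper is a hypothetical name used only to show how a rule written against the view could be translated into physical endpoints.

    # Virtual switch "V" exposes two ports; each maps to a (physical switch, port).
    virtual_ports = {
        ("V", 1): ("P1", 2),
        ("V", 2): ("P2", 5),
    }

    def lower(view_rule):
        # Translate a rule written on the view (e.g., in=1 -> out=2) into the
        # physical endpoints it must connect; the routing module then picks the
        # actual path, and virtual headers carry the port tag along the way.
        src = virtual_ports[("V", view_rule["in"])]
        dst = virtual_ports[("V", view_rule["out"])]
        return {"from": src, "to": dst}

    print(lower({"in": 1, "out": 2}))  # {'from': ('P1', 2), 'to': ('P2', 5)}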



Writing State

Consistent Updates



Writing Policy: Avoiding Disruption

  • Invariants

    • No forwarding loops

    • No black holes

    • Access control

    • Traffic waypointing



Writing Policy: Path for New Flow

  • Rules along a path installed out of order?

    • Packets reach a switch before the rules do


Must think about all possible packet and event orderings.



Writing Policy: Update Semantics

  • Per-packet consistency

    • Every packet is processed by policy P1 or policy P2

    • E.g., access control, no loops or black holes

  • Per-flow consistency

    • Sets of related packets are processed by policy P1 or policy P2

    • E.g., server load balancer, in-order delivery, …




Writing Policy: Policy Update

  • Simple abstraction

    • Update entire configuration at once

  • Cheap verification

    • If P1 and P2 satisfy an invariant

    • Then the invariant always holds

  • Run-time system handles the rest

    • Constructing schedule of low-level updates

    • Using only OpenFlow commands!




Writing Policy: Two-Phase Update

  • Version numbers

    • Stamp packet with a version number (e.g., VLAN tag)

  • Unobservable updates

    • Add rules for P2 in the interior, matching on version # P2

  • One-touch updates

    • Add rules to stamp packets with version # P2 at the edge

  • Remove old rules

    • Wait for some time, then remove all version # P1 rules
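The steps above could be sequenced roughly as in this Python sketch; the install/remove/drain callables are toy stand-ins for switch commands, not a real controller API.

    def two_phase_update(edge, interior, install, remove, drain):
        OLD, NEW = 1, 2
        # Unobservable phase: pre-install P2's rules in the interior, matching
        # only packets stamped with the new version (no packet carries it yet).
        for sw in interior:
            install(sw, version=NEW)
        # One-touch phase: flip the edge to stamp packets with the new version;
        # from now on each packet sees either all-P1 or all-P2 rules, never a mix.
        for sw in edge:
            install(sw, version=NEW, stamp=True)
        # Wait for old-tagged packets to drain, then delete P1's rules everywhere.
        drain()
        for sw in edge + interior:
            remove(sw, version=OLD)

    log = lambda sw, **kw: print(sw, kw)
    two_phase_update(["e1"], ["s1", "s2"], install=log, remove=log, drain=lambda: None)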



Writing Policy: Optimizations

  • Avoid two-phase update

    • Naïve version touches every switch

    • Doubles rule space requirements

  • Limit scope

    • Portion of the traffic

    • Portion of the topology

  • Simple policy changes

    • Strictly adds paths

    • Strictly removes paths



Frenetic Abstractions

[Diagram: policy composition, consistent updates, and SQL-like queries layered above the OpenFlow switches]



Related Work

  • Programming languages

    • FRP: Yampa, FrTime, Flask, Nettle

    • Streaming: StreamIt, CQL, Esterel, Brooklet, GigaScope

    • Network protocols: NDLog

  • OpenFlow

    • Language: FML, SNAC, Resonance

    • Controllers: ONIX, POX, Floodlight, Nettle, FlowVisor

    • Testing: NICE, FlowChecker, OF-Rewind, OFLOPS

  • OpenFlow standardization

    • http://www.openflow.org/

    • https://www.opennetworking.org/



Conclusion

  • SDN is exciting

    • Enables innovation

    • Simplifies management

    • Rethinks networking

  • SDN is happening

    • Practice: useful APIs and good industry traction

    • Principles: start of higher-level abstractions

  • Great research opportunity

    • Practical impact on future networks

    • Placing networking on a strong foundation



Frenetic Project

  • Programming languages meets networking

    • Cornell: Nate Foster, Gun Sirer, Arjun Guha, Robert Soule, Shrutarshi Basu, Mark Reitblatt, Alec Story

    • Princeton: Dave Walker, Jen Rexford, Josh Reich, Rob Harrison, Chris Monsanto, Cole Schlesinger, Praveen Katta, Nayden Nedev

http://frenetic-lang.org

Short overview at http://www.cs.princeton.edu/~jrex/papers/frenetic12.pdf

