
Control Plane Software – or packet service hypervisors – and network virtualization


Presentation Transcript


  1. Control Plane Software – or packet service hypervisors – and network virtualization
     Jon Crowcroft
     http://www.cl.cam.ac.uk/~jac22
     Jon.crowcroft@cl.cam.ac.uk

  2. Virtual networks/services
  • Multiplex & isolate
    • Explicit label (MPLS, VLAN ID)
    • Implicit label (5-tuple) (see the classifier sketch below)
    • No label (bucket of bits)
  • Classifiers increasingly expensive (incl. h/w support)
    • But buy increasing flexibility
  • Isolation
    • For QoS
    • For security
    • Or both?
  • Schedulers and crypto (both could be distributed/NFV)
    • Both cost a lot
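
A minimal sketch of the "implicit label" case above: classify packets by their 5-tuple and map each flow to an isolated virtual network, with an unlabelled "bucket of bits" default. Plain Python, purely illustrative; the field names and table shape are assumptions, not any particular switch API.

    # Implicit-label (5-tuple) classifier sketch: each flow is identified by
    # (src IP, dst IP, protocol, src port, dst port) and mapped to a virtual
    # network / service instance; unknown flows fall into the unlabelled bucket.
    from collections import namedtuple

    FiveTuple = namedtuple("FiveTuple", "src_ip dst_ip proto src_port dst_port")

    class FlowClassifier:
        def __init__(self):
            self.table = {}          # 5-tuple -> virtual network id
            self.default_vnet = 0    # "bucket of bits": no label, best effort

        def bind(self, flow, vnet_id):
            """Pin a flow to a virtual network (isolation for QoS/security)."""
            self.table[flow] = vnet_id

        def classify(self, pkt):
            """Look up the packet's 5-tuple; fall back to the unlabelled bucket."""
            key = FiveTuple(pkt["src_ip"], pkt["dst_ip"], pkt["proto"],
                            pkt["src_port"], pkt["dst_port"])
            return self.table.get(key, self.default_vnet)

    # Example: isolate one TCP flow (protocol 6) into virtual network 7
    clf = FlowClassifier()
    clf.bind(FiveTuple("10.0.0.1", "10.0.0.2", 6, 5000, 80), 7)
    print(clf.classify({"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2",
                        "proto": 6, "src_port": 5000, "dst_port": 80}))   # -> 7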

  3. NIC, Node, Net level functions
  • Some things can be near the edge
    • Labeling (in server NIC or edge switch)
    • Core-stateless FQ (see the edge-labeling sketch below)
    • E2e crypto
    • New application-specific functions
  • Some things in core only
    • TE
    • Anti-DDoS measures (ingress too)
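
The "labeling at the edge plus core-stateless FQ" split can be pictured as follows: the edge NIC or switch keeps per-flow state, estimates each flow's rate and stamps it into a label, so core switches can enforce fairness from the label alone. A rough sketch in the spirit of core-stateless fair queueing; the averaging window, names and packet layout are illustrative assumptions.

    # Edge-labeling sketch for core-stateless fair queueing: the edge keeps
    # per-flow state and stamps an estimated rate into each packet; the core
    # decides from the label alone, with no per-flow state.
    import math, random, time

    K = 0.1  # assumed rate-averaging window, in seconds

    class EdgeLabeler:
        def __init__(self):
            self.rate = {}    # flow id -> estimated arrival rate (bytes/s)
            self.last = {}    # flow id -> last packet arrival time

        def label(self, flow_id, pkt_len, now=None):
            """Return the rate estimate to stamp into the packet header."""
            now = time.monotonic() if now is None else now
            gap = max(now - self.last.get(flow_id, now - K), 1e-9)
            inst = pkt_len / gap                    # instantaneous rate
            w = math.exp(-gap / K)                  # exponential averaging
            est = (1 - w) * inst + w * self.rate.get(flow_id, inst)
            self.rate[flow_id], self.last[flow_id] = est, now
            return est

    def core_forward(label_rate, fair_rate):
        """Stateless core decision: drop with probability
        max(0, 1 - fair_rate/label_rate); returns True to forward."""
        return random.random() >= max(0.0, 1.0 - fair_rate / label_rate)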

  4. Smart switches and nets
  • Control via OpenFlow (remember GSMP)
  • Switch resource partitions
    • ClickOS, NetVM, smartswitch, FlowVisor etc
  • Centralised programming (OK for a datacenter or small ISP)
  • Decentralised programming
    • OpenDaylight, POX, DISCO etc
    • Scales, but needs synchronisation
      • Either consistent update (see the two-phase sketch below)
      • Or triggered by app flow
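
The "consistent update" bullet is the part that is easiest to get wrong once control is decentralised. Below is a small sketch of the two-phase, version-stamped idea: install the new rules on internal switches first, then flip the version tag at the ingress, so no packet ever sees a mix of old and new rules. The class and function names are invented for illustration; this is not the OpenDaylight or POX API.

    # Two-phase consistent-update sketch: rules are stamped with a config
    # version; internal switches receive the new rules first, and only then
    # does the ingress start tagging packets with the new version.
    class Switch:
        def __init__(self, name):
            self.name, self.rules = name, {}       # (version, match) -> action

        def install(self, version, match, action):
            self.rules[(version, match)] = action

    def consistent_update(switches, ingress, new_version, new_rules):
        # Phase 1: push version-tagged rules to every switch on the path.
        for sw in switches:
            for match, action in new_rules.get(sw.name, []):
                sw.install(new_version, match, action)
        # Phase 2: retag traffic at the ingress, atomically activating the config.
        for sw in ingress:
            sw.install(new_version, "any", "set-version %d" % new_version)

    core = [Switch("s1"), Switch("s2")]
    edge = [Switch("e1")]
    consistent_update(core + edge, edge, 2,
                      {"s1": [("dst=10.0.1.0/24", "fwd port 3")]})
    print(core[0].rules)   # the version-2 rule is now installed on s1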

  5. Node/NIC level API and Resources
  • What do we have in a node/NIC?
  • How do we make it easy to use?
  • How do you make a lot of them easy to use together?
  • Who are the users?
  • Network services
    • Multicast
    • Aggregator
    • DPI filter
    • Normaliser
    • Transcoder
    • …
  • Application services
    • Aggregator
    • Disaggregator
    • Crypto (SHE?) (c.f. Mylar/privacy-preserving search)
    • …

  6. Typical Node Resources
  • Lots of h/w threads & cores
  • Some shared, some non-shared memory
  • Some exotic memory (hash/TCAM)
  • Some special instructions
    • Probably not in the x86 instruction set
  • Some isolation (sometimes)
    • E.g. execution context/CPU priority etc
    • Or virtual memory support
    • Or …

  7. Net services (again)
  • Multicast – see Brad Cain's work (Nortel) on the minimum router functions needed
    • Incast
    • Min h/w support: per-(S,G) op (see the forwarding-table sketch below)
  • Filtering
    • Regex
    • Stateful: per source or ingress
    • State latched the same way as the protocol, so same as a normalizer
  • How much state???
    • In shared memory, or core affinity
    • Kick into slow path if out of resource
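
A minimal sketch of the per-(S,G) state and the "kick into slow path if out of resource" behaviour above: a bounded fast-path table maps (source, group) to output-port sets, and anything that does not fit is punted to a software slow path. The table size and names are assumptions, not a real NIC or switch API.

    # Per-(S,G) multicast forwarding sketch with a bounded fast-path table:
    # when the hardware-sized table is full, new groups fall to the slow path.
    FAST_TABLE_SIZE = 1024   # assumed size of the NIC/switch (S,G) table

    class MulticastForwarder:
        def __init__(self):
            self.fast = {}   # (source, group) -> set of output ports

        def join(self, source, group, port):
            key = (source, group)
            if key in self.fast:
                self.fast[key].add(port)
            elif len(self.fast) < FAST_TABLE_SIZE:
                self.fast[key] = {port}
            else:
                return "slow-path"        # out of hardware resource
            return "fast-path"

        def forward(self, source, group):
            ports = self.fast.get((source, group))
            return sorted(ports) if ports else "punt to slow path"

    fwd = MulticastForwarder()
    fwd.join("10.0.0.1", "239.1.1.1", 2)
    fwd.join("10.0.0.1", "239.1.1.1", 5)
    print(fwd.forward("10.0.0.1", "239.1.1.1"))   # [2, 5]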

  8. Application Service (warning – mostly academic or data center, prob not “carrier”)
  • Aggregator
  • Disaggregator
    • Note: both are finite iteration over data
  • Poster-child case is the reduce on the shuffle of Map/Reduce (see the aggregation sketch below)
    • E.g. find min, ave or max of a set of task outputs
    • And return to the next phase for all tasks
    • Trades off work (min, ave, max) against packet reduction/incast reduction
  • Crypto service
    • Imagine Cloud 3.0 does Somewhat Homomorphic Encryption
    • Data stored and transmitted encrypted
    • What about processing too?
    • Use SHE, garbled circuits or Mylar-type ops
    • Offloaded in net…
    • E.g. privacy-preserving search
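
The Map/Reduce aggregation case can be pictured as an in-network combiner: instead of every map task sending its full output to the reducer (incast), a switch- or NIC-resident aggregator folds min/max/average per key and forwards one record, trading a little compute for a large reduction in shuffle traffic. Purely illustrative; the class and record layout are assumptions.

    # In-network shuffle aggregator sketch: fold commutative/associative
    # reductions (min, max, sum-for-average) per key, then emit one record
    # per key instead of one per mapper (n:1 incast reduction).
    from collections import defaultdict

    class ShuffleAggregator:
        def __init__(self):
            # key -> (min, max, sum, count) folded over all mapper outputs seen
            self.state = defaultdict(lambda: (float("inf"), float("-inf"), 0.0, 0))

        def ingest(self, key, value):
            lo, hi, total, n = self.state[key]
            self.state[key] = (min(lo, value), max(hi, value), total + value, n + 1)

        def emit(self, key):
            """One aggregated record replaces n mapper records."""
            lo, hi, total, n = self.state[key]
            return {"min": lo, "max": hi, "ave": total / n, "records_folded": n}

    agg = ShuffleAggregator()
    for v in [3.0, 9.0, 6.0]:          # outputs from three map tasks
        agg.ingest("word-count", v)
    print(agg.emit("word-count"))       # {'min': 3.0, 'max': 9.0, 'ave': 6.0, ...}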

  9. New control plane is a distributed OS problem (warning – assumes capabilities + verified s/w!)
  • Needs guarantees for its own traffic
    • E.g. VM load and control (OpenFlow etc) packets
    • Min requirement – see the QJump code; avoid the CAP problem
  • Needs version control/fallback
  • Needs failsafe configs (dom0) and debugging & reset systems
  • New programming languages and runtime
    • Declarative/functional or declarative logic
    • E.g. Frenetic (or OCaml)
  • Allow offline verification of a service (see the policy-check sketch below)
    • E.g. NICE, NetPlumber, VeriFlow etc
    • Safer (less likely to test on the customer)
    • Possibly can extrapolate performance too
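
One way to picture the "declarative policy plus offline verification" point: express the service as first-match (predicate, action) pairs and check a simple property (here, that no modelled packet falls into a black hole) before anything is pushed to switches. This is plain Python in the spirit of Frenetic/NICE-style checking, not their actual APIs; field names and domains are assumptions.

    # Declarative policy sketch with a tiny offline check.
    from itertools import product

    # A policy is a list of (predicate, action) pairs over abstract packet fields.
    POLICY = [
        ({"dst_subnet": "10.0.1.0/24"}, ("forward", "port1")),
        ({"dst_subnet": "10.0.2.0/24"}, ("forward", "port2")),
        ({},                            ("drop", None)),      # explicit default
    ]

    def evaluate(policy, packet):
        """First-match semantics: return the action of the first matching rule."""
        for pred, action in policy:
            if all(packet.get(f) == v for f, v in pred.items()):
                return action
        return None   # no rule matched: a black hole

    def verify_total(policy, field_domains):
        """Offline check: every packet in the modelled space hits some rule."""
        fields = list(field_domains)
        for values in product(*(field_domains[f] for f in fields)):
            pkt = dict(zip(fields, values))
            if evaluate(policy, pkt) is None:
                return False, pkt     # counterexample packet
        return True, None

    ok, witness = verify_total(POLICY, {"dst_subnet": ["10.0.1.0/24",
                                                       "10.0.2.0/24",
                                                       "192.168.0.0/16"]})
    print(ok)   # True, because the policy ends with an explicit drop-all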

  10. Verification…
  • Of node/NetVM code
    • Crash-proof
    • No isolation violations
  • Of local net properties
    • E.g. multicast doesn’t leak
    • Mobile handover doesn’t loop (see the loop-check sketch below)
    • Middlebox doesn’t break TCP/SCTP/etc
  • Can we test global properties?
    • Trickier – OK for routing/TE/QoS ones, probably
    • See e.g. the Metarouting project in Cambridge
    • http://www.cl.cam.ac.uk/~tgg22/metarouting/
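
As a concrete instance of one local property above ("mobile handover doesn’t loop"), here is a tiny offline check: model the installed forwarding state as a next-hop map and detect cycles. The topology and names are made up for illustration.

    # Loop check over forwarding state: follow next hops from every node and
    # flag any revisit as a forwarding loop.
    def has_forwarding_loop(next_hop):
        """next_hop maps node -> node (or None at the egress); detect any cycle."""
        for start in next_hop:
            seen, node = set(), start
            while node is not None:
                if node in seen:
                    return True, node      # revisited a node: forwarding loop
                seen.add(node)
                node = next_hop.get(node)
        return False, None

    # After a handover, s3 was (wrongly) repointed back towards s1:
    print(has_forwarding_loop({"s1": "s2", "s2": "s3", "s3": "s1"}))  # (True, 's1')
    print(has_forwarding_loop({"s1": "s2", "s2": "s3", "s3": None}))  # (False, None)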

  11. Summary and Conclusion
  • Raise the game for s/w architecture in switches and NICs
  • Use the same futuristic s/w architecture as in new cloud VMs
  • Use less exotic h/w resource, as s/w can generate it efficiently at compile time if not present at runtime…
  • Challenges:
    • Simplifying programming of diverse novel/exotic switch/NIC hardware – “paravirtualizing” NPs/NetFPGA etc?
    • Scaling consistent state updates in distributed controllers – Software Transactional Networks?
    • Correctness of SDN applets “by design” – metarouting?
  • Questions?

  12. http://nymote.org/ Cheers! Thank You! And Questions?
