
Network Applications in the Data-Plane

This presentation discusses the motivation and benefits of software-defined networking (SDN) and the evolution of network control interfaces. It also explores the concept of the programmable dataplane and its advantages: faster network evolution, protection of hardware investment, and the ability to realize new functions in the datapath.





Presentation Transcript


  1. Network Applications in the Data-Plane CHAN Mun Choon (in collaboration with Pravein Govindan Kannan, Raj Joshi & Qu Ting) School of Computing, National University of Singapore

  2. Motivation for Software Defined Networking (SDN) • Networks are: • notoriously difficult to manage • slow to evolve • Abstraction is the key to extracting simplicity: it makes the programs that manage and control the network easier to write, maintain, and reason about. Ref: Scott Shenker et al., The Future of Networking, and the Past of Protocols, Open Networking Summit, 2011

  3. In the Pre-SDN era… CLI/custom-script interfaces → device/vendor-specific. [Figure: a "closed" control plane atop a fixed-function ASIC with closed APIs: parser → match-action pipeline (VLAN, ACL, L2/MAC, L3) → deparser.]

  4. Before OpenFlow… • Open Signaling (1990s): make network control functions more open, extensible, and programmable • Separation between hardware and control software • Access to the network hardware via open, programmable network interfaces • Focused on connection-oriented network services in the early days • IETF RFC 3294 (2003): General Switch Management Protocol (GSMP) • IEEE P1520 (1998): standards initiative for programmable network interfaces

  5. 2008-09: SDN Era (OpenFlow/SDN 1.0) Open, vendor-agnostic interface: ✓ easier network management ✓ centralized control ✓ code reuse/interoperability. [Figure: the same fixed-function ASIC as before (parser → match-action pipeline with VLAN, ACL, L2/MAC, L3 → deparser); the "closed" control plane gains an open interface.]

  6. 2008-09: SDN Era (OpenFlow/SDN 1.0) It didn't change the core network functionality! (The control plane just became a bit more programmable.) Open, vendor-agnostic interface: ✓ easier network management ✓ centralized control ✓ code reuse/interoperability. [Figure: same as slide 5.]

  7. 2013-14: Programmable Switches (SDN 2.0) Changing interface: ✓ C/Python APIs (auto-generated) ✓ P4Runtime (led by Google). [Figure: an "open" control plane over ASIC APIs (closed/licensed); programmable parser → programmable match-action pipeline (ACL, L3, MPLS) → programmable deparser.]

  8. Benefits of Dataplane Programmability • Flexible parsing and matching on non-standard fields • Faster and easier network evolution with new protocols/headers: traditionally, adding a new protocol takes 4-5 years! • Hardware upgrades become software upgrades: protection of investment • Hardware goes beyond this: it exposes other datapath processing primitives (existing + new), accessible and programmable via P4, a high-level DSL • New functions (though not fully arbitrary ones) can be realized in the datapath • Researchers can propose an ASIC-level solution for new/existing problems and readily "realize" it in production hardware; a minimal P4 sketch follows
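To make "hardware upgrades become software upgrades" concrete, here is a minimal, self-contained P4_16 sketch for the v1model reference architecture: it parses a hypothetical custom header after Ethernet and forwards on one of its fields. The header, its fields, the 0x88B5 EtherType, and the table are illustrative assumptions, not code from the talk.

    // Minimal P4_16 sketch (v1model): a hypothetical custom protocol,
    // parsed and matched entirely in software-defined pipeline stages.
    #include <core.p4>
    #include <v1model.p4>

    header ethernet_t {
        bit<48> dstAddr;
        bit<48> srcAddr;
        bit<16> etherType;
    }

    header myproto_t {          // illustrative custom header
        bit<32> flowId;
        bit<8>  hopCount;
    }

    struct headers_t  { ethernet_t ethernet; myproto_t myproto; }
    struct metadata_t { }

    parser MyParser(packet_in pkt, out headers_t hdr,
                    inout metadata_t meta,
                    inout standard_metadata_t std_meta) {
        state start {
            pkt.extract(hdr.ethernet);
            transition select(hdr.ethernet.etherType) {
                0x88B5: parse_myproto;   // local-experimental EtherType
                default: accept;
            }
        }
        state parse_myproto {
            pkt.extract(hdr.myproto);
            transition accept;
        }
    }

    control MyVerify(inout headers_t hdr, inout metadata_t meta) { apply { } }

    control MyIngress(inout headers_t hdr, inout metadata_t meta,
                      inout standard_metadata_t std_meta) {
        action forward(bit<9> port) { std_meta.egress_spec = port; }
        action drop() { mark_to_drop(std_meta); }
        table route_on_flowid {              // match on a custom field
            key = { hdr.myproto.flowId : exact; }
            actions = { forward; drop; }
            default_action = drop();
        }
        apply { if (hdr.myproto.isValid()) { route_on_flowid.apply(); } }
    }

    control MyEgress(inout headers_t hdr, inout metadata_t meta,
                     inout standard_metadata_t std_meta) { apply { } }

    control MyCompute(inout headers_t hdr, inout metadata_t meta) { apply { } }

    control MyDeparser(packet_out pkt, in headers_t hdr) {
        apply { pkt.emit(hdr.ethernet); pkt.emit(hdr.myproto); }
    }

    V1Switch(MyParser(), MyVerify(), MyIngress(), MyEgress(),
             MyCompute(), MyDeparser()) main;

A program of this shape should compile with the open-source p4c/bmv2 toolchain; supporting the "new protocol" is then a recompile, not a hardware respin.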

  9. Extra Dataplane Primitives • Transactional memory (SRAM) + stateful ALUs: stateful operations across multiple packets; simple computations (add, subtract, approximate multiply/divide); see the sketch after this list • Queuing telemetry information: enqueue/dequeue queue depth, time spent in the queue • High-resolution timestamping: nanosecond-scale timestamps; ingress/egress MAC timestamps, ingress/egress pipeline timestamps, etc. • Packet cloning/replication: flexible mirroring or conditional multicast (at run time)
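As a concrete illustration of the stateful-memory primitive, the fragment below (P4_16 on v1model; the register name, width, and size are made up for the example) keeps a per-flow packet counter in SRAM that survives across packets, with a single read-modify-write per packet pass. The trailing comments list the v1model standard_metadata fields that carry the queuing telemetry mentioned above.

    // Fragment (not a full program): per-flow counter in register state.
    register<bit<32>>(65536) flow_pkts;      // SRAM, persists across packets

    action count_flow(bit<16> idx) {
        bit<32> n;
        flow_pkts.read(n, (bit<32>)idx);     // read current count
        n = n + 1;                           // stateful ALU op: add
        flow_pkts.write((bit<32>)idx, n);    // write back in the same pass
    }

    // Queuing telemetry exposed via v1model standard_metadata:
    //   enq_qdepth    : queue depth when the packet was enqueued
    //   deq_qdepth    : queue depth when the packet was dequeued
    //   deq_timedelta : time the packet spent in the queue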

  10. Limitations • High-level constraint: all processing MUST sustain line rate • No "loop" constructs • No floating-point computation; only approximate computation is possible (a shift-based workaround is sketched below) • Single-ported, per-stage SRAM: a single memory entry can be read/updated only once per packet pass
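A common workaround for the no-floating-point limitation, sketched under the same v1model assumptions (the register name and the choice of alpha = 1/8 are illustrative): an exponentially weighted moving average computed with only shifts and adds, in one register pass.

    // Fragment: EWMA with alpha = 1/8, no floating point, no loops.
    register<bit<32>>(1) ewma_reg;

    action update_ewma(bit<32> sample) {
        bit<32> old;
        ewma_reg.read(old, 0);
        // updated = old + (sample - old)/8, approximated with shifts:
        bit<32> updated = old - (old >> 3) + (sample >> 3);
        ewma_reg.write(0, updated);
    }

Because the per-stage SRAM is single-ported, the read, the arithmetic, and the write must all happen in this one access; touching ewma_reg a second time in the same packet's pass is not possible on the hardware targets discussed here.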

  11. BurstRadar: Practical Real-time Microburst Monitoring for Datacenter Networks (APSys 2018) Raj Joshi¹, Ting Qu², Mun Choon Chan¹, Ben Leong¹, Boon Thau Loo³

  12. Microbursts (µbursts) • Events of intermittent congestion lasting 10s or 100s of µs • Common causes: TCP incast, bursty UDP traffic, TCP segmentation offload • Effects: intermittent increases in latency → latency variability; network jitter and packet loss

  13. Detecting & characterizing µbursts is hard • A measurement study from Facebook's datacenters found that µbursts last less than 200 µs and occur unpredictably • Traditional sampling-based techniques cannot even detect microbursts • Commercial solutions can detect the occurrence of microbursts but provide no information about the cause

  14. Solution: Egress Port Queues • Key insight: µbursts are localized to a switch's egress port queue, i.e., to the switch's queuing engine • Key idea: we can detect the microburst directly on the switch where it happens

  15.-19. BurstRadar Overview [Figure, built up across five slides: in the switch's egress processing pipeline, queuing telemetry and a mark bit are carried as packet metadata into a snapshot algorithm; for marked packets, the telemetry info (packet 5-tuple + queuing telemetry data) is recorded in a ring buffer; a courier packet generator drains the ring buffer into courier packets, which are emitted through a dedicated mirror port queue after the egress deparser.] A sketch of the marking step follows.
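The fragment below is a sketch of the marking step only (P4_16/v1model again; the threshold value and mirror session ID are illustrative assumptions, and the actual BurstRadar pipeline, about 550 lines of P4, does considerably more, including the snapshot algorithm and ring-buffer bookkeeping): packets dequeued from a queue deeper than a threshold are treated as part of a µburst and cloned toward the courier path.

    // Fragment: egress-side µburst marking (illustrative constants).
    const bit<19> QDEPTH_THRESHOLD = 100;   // queue-depth threshold
    const bit<32> MIRROR_SESSION   = 250;   // session toward the mirror port

    apply {
        if (std_meta.deq_qdepth > QDEPTH_THRESHOLD) {
            // This packet left a congested queue: part of a microburst.
            // Clone it egress-to-egress so its 5-tuple and queuing
            // telemetry can be recorded and shipped in courier packets.
            clone(CloneType.E2E, MIRROR_SESSION);
        }
    }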

  20. Evaluation Setup • Hardware testbed: BurstRadar prototype in about 550 lines of P4 code [Figure: testbed servers send/receive µburst traffic through the prototype switch] • Generated µburst traffic traces: µburst data for "web" and "cache" traffic [IMC '17] • Compared BurstRadar against: In-band Network Telemetry (INT), a dataplane-based solution, and an "Oracle" algorithm giving the ground truth (the exact packets in µbursts)

  21. Efficiency At a µburst detection threshold of 5% RTT, BurstRadar generates 10 times fewer packets than INT. Note: 5% RTT ≈ 1.25 µs of queuing @ 10 Gbps in our testbed.
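To unpack that note with back-of-envelope arithmetic derived only from the numbers on the slide: if 5% of the RTT is 1.25 µs, the testbed RTT is 1.25 / 0.05 = 25 µs; and at 10 Gbps a queue drains 10×10⁹ / 8 = 1.25 GB/s, so 1.25 µs of queuing corresponds to roughly 1.25×10⁻⁶ × 1.25×10⁹ ≈ 1,563 bytes, i.e., about one MTU-sized packet.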

  22. Precise Time-synchronization using Programmable Switching ASICs (DPTP), ACM SOSR 2019 (Best Paper) Pravein Govindan Kannan, Raj Joshi & Mun Choon Chan

  23. Time Synchronization in the Data Center NTP achieves millisecond accuracy; PTP achieves 10s of ns to µs. [Figure: the server-to-server path crosses CPUs, NICs, PHYs, and switch queues.] Network delays & jitter affect accuracy!! Clocks drift by up to 30 µs/sec [HUYGENS '18]

  24. Portable Switch Architecture High-precision hardware timestamps are available in the processing pipeline (the two-way exchange sketched below shows how timestamps become synchronization).
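As background (this is the classic two-way time transfer used by NTP/PTP, sketched as an assumption; it is not DPTP's exact computation): the client stamps a request at T1, the reference receives it at T2 and replies at T3, and the client receives the reply at T4. Then

    offset = ((T2 - T1) + (T3 - T4)) / 2
    delay  = (T4 - T1) - (T3 - T2)

The offset estimate is only as good as the symmetry of the two path halves and the proximity of T1..T4 to the wire; capturing them as MAC timestamps in the switch pipeline, rather than in host software, removes the CPU, NIC, and queuing jitter shown on the previous slide.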

  25. [Figure: behavior with line-rate traffic along the direction of the response packet.]

  26. Conclusion • Two applications that exploit data-plane programmability demonstrate the potential of modern programmable ASICs • BurstRadar: characterizes microbursts at multi-gigabit line rates in high-speed datacenter networks • DPTP: a precise time-synchronization protocol running in the network data-plane • Future work: enable new monitoring frameworks, control paradigms, virtualization strategies, and the speedup of large-scale distributed computations
