
Prototype Deployment of Internet Scale Overlay Hosting


Presentation Transcript


1. Prototype Deployment of Internet Scale Overlay Hosting
Patrick Crowley and Jon Turner, with John DeHart, Mart Haitjema, Fred Kuhns, Jyoti Parwatikar, Ritun Patney, Charlie Wiseman, Mike Wilson, Ken Wong, and Dave Zar
Computer Science & Engineering, Washington University
www.arl.wustl.edu

2. Project Objectives
• Deploy five experimental overlay hosting platforms
  • located at Internet 2 PoPs
  • compatible with PlanetLab, moving to the GENI control framework
  • performance characteristics suitable for service deployment
  • integrated system architecture with multiple server blades
  • shared NP-based server blades for fast-path packet processing
• Demonstrate multiple applications
[Diagram: control processor (CP), general purpose processing engines (GPEs), network processing engine (NPE), netFPGA, line cards, chassis switch, external switch, 10x1 GbE]

3. Role in Cluster B
• Overlay hosting platform
  • deployed at 5 locations
  • serves as backbone linking cluster partners (and others)
• Demonstrate high quality service delivery on overlays
• Managed using experiment support tools

4. Target Internet 2 Deployment
[Map of the target Internet 2 deployment sites]

5. Hosting Platform Details
[Diagram: GPEs hosting VMs over PLOS, NPE fast path with parse, lookup, header format, filters, and queues, CP, netFPGA, line cards, chassis switch, external switch, 10x1 GbE]

6. Application Framework
• Fastpath/slowpath
  • fastpath mapped onto NPE
  • slice manager in vServer on GPE
• Configurable elements
  • code option – determines how packets are processed by parse and header format
  • logical interfaces
    • may be public or tunnel
    • guaranteed bandwidth
  • TCAM filters
  • queues
    • length, bandwidth
• Slice manager can configure fastpath using provided library
  • or manually, using command line interface (a data-model sketch follows below)
[Diagram: slice manager (in vServer) on GPE with remote login and control interfaces; fast path with input interfaces, parse, lookup, header format, filters, queue manager, output interfaces; exception packets & in-band control]
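
The configurable elements listed above can be pictured as a small data model. The sketch below is illustrative only and does not reflect the actual SPP library API; the class and field names (FastPathConfig, LogicalInterface, QueueConfig, Filter) are hypothetical.

```python
# Hypothetical data model for the fast-path elements a slice manager configures.
# Names and fields are illustrative; the real SPP library API may differ.
from dataclasses import dataclass, field
from typing import List

@dataclass
class LogicalInterface:
    ifn: int                  # logical interface number within the fast path
    kind: str                 # "public" or "tunnel"
    bandwidth_kbps: int       # guaranteed bandwidth on this interface

@dataclass
class QueueConfig:
    qid: int
    length_pkts: int          # queue length
    bandwidth_kbps: int       # service rate
    bound_interface: int      # logical interface this queue feeds

@dataclass
class Filter:
    key: bytes                # opaque lookup key matched in the TCAM
    mask: bytes               # wildcard mask for the key
    qid: int                  # queue to send matching packets to

@dataclass
class FastPathConfig:
    code_option: str          # selects the parse/header-format code, e.g. "IPv4"
    interfaces: List[LogicalInterface] = field(default_factory=list)
    queues: List[QueueConfig] = field(default_factory=list)
    filters: List[Filter] = field(default_factory=list)
```

A slice manager running in its vServer would build such a description and push it to the NPE through the control interface, either programmatically via the library or piecemeal from the command line.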

7. Creating a Slice
[Diagram: PLC, CP, SRM, GPEs, NPE, netFPGA, line card, chassis switch, external switch]

8. Starting up a Slice
• Logging into slice requires Network Address Translation
  • datapath detects new connection, LC control processor adds filters (see the sketch below)
• SFTP connection for downloading code
[Diagram: CP, GPEs, SLM, NPE, netFPGA, line card, chassis switch, external switch]
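
The NAT step can be sketched as follows: when the line-card datapath sees the first packet of a new outbound connection, the LC control processor installs filters translating between the slice's internal address/port and the node's external address/port. This is a simplification; the names (FlowKey, NatTable, on_new_connection) are hypothetical, and the real filter installation happens in the line card's control software, not in Python.

```python
# Simplified sketch of the NAT bookkeeping done when the datapath reports a
# new outbound connection. All names here are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class FlowKey:
    proto: str        # "tcp" or "udp"
    src_ip: str       # slice-internal source address
    src_port: int
    dst_ip: str
    dst_port: int

class NatTable:
    def __init__(self, external_ip: str, port_base: int = 20000):
        self.external_ip = external_ip
        self.next_port = port_base
        self.bindings = {}                    # FlowKey -> external port

    def on_new_connection(self, key: FlowKey) -> int:
        """Called when the datapath detects a connection with no filter yet."""
        if key not in self.bindings:
            ext_port = self.next_port
            self.next_port += 1
            self.bindings[key] = ext_port
            # In the real system the LC control processor would now add an
            # outbound filter (rewrite source to external_ip:ext_port) and a
            # matching inbound filter for the return traffic.
        return self.bindings[key]
```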

9. Configuring an External Port
• Request external port number on specific interface thru Resource Manager Proxy (RMP)
• RMP relays request to System Resource Manager (SRM)
• SRM configures LC filters for interface
• Arriving packets directed to slice, which is listening on socket (see the sketch below)
[Diagram: CP (SRM), GPEs, RMP, NPE, netFPGA, line card (LC), chassis switch, external switch]
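
From the slice's point of view the sequence is: ask the RMP for a port on a given interface, then listen on that port. The sketch below assumes a hypothetical RMP client object and a hypothetical request_external_port method; only the socket calls are standard Python.

```python
# Hypothetical illustration of a slice obtaining and using an external port.
import socket

def open_external_port(rmp, interface: int, proto: str = "udp"):
    """Ask the Resource Manager Proxy for an external port, then listen on it.

    `rmp` and `request_external_port` are stand-ins for the real RMP interface;
    the RMP forwards the request to the SRM, which installs the line-card
    filters that steer arriving packets to this slice.
    """
    port = rmp.request_external_port(interface=interface, proto=proto)

    sock_type = socket.SOCK_DGRAM if proto == "udp" else socket.SOCK_STREAM
    sock = socket.socket(socket.AF_INET, sock_type)
    sock.bind(("0.0.0.0", port))          # slice now listens on the granted port
    if proto == "tcp":
        sock.listen(5)
    return sock
```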

10. Setting Up a Fast Path
• Request fastpath through RMP
• SRM allocates fastpath
• Specify logical interfaces and interface bandwidths
• Specify # of filters, queues, binding of queues to interfaces, queue lengths and bandwidths
• Configure fastpath filters (a sketch of the full sequence follows below)
[Diagram: CP (SRM), GPEs, RMP, NPE fast path (parse, lookup, header format, queues), netFPGA, line card (LC), chassis switch, external switch]
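
Putting the steps above together, a slice manager's fast-path setup might look like the following. The method names (alloc_fastpath, set_interface, add_queue, add_filter) are invented for illustration and stand in for whatever the provided library actually exposes.

```python
# Hypothetical end-to-end fast-path setup through the RMP; method names are
# placeholders, not the real SPP library API.
def setup_fastpath(rmp):
    # 1. Request a fast path; the SRM allocates NPE resources for it.
    fp = rmp.alloc_fastpath(code_option="IPv4", filters=64, queues=8)

    # 2. Declare logical interfaces and their guaranteed bandwidths.
    fp.set_interface(ifn=0, kind="tunnel", bandwidth_kbps=50000)
    fp.set_interface(ifn=1, kind="public", bandwidth_kbps=10000)

    # 3. Create queues, bind them to interfaces, and set lengths and rates.
    fp.add_queue(qid=0, interface=0, length_pkts=256, bandwidth_kbps=25000)
    fp.add_queue(qid=1, interface=1, length_pkts=128, bandwidth_kbps=10000)

    # 4. Install TCAM filters directing matching packets to queues.
    fp.add_filter(key=b"\x0a\x00\x00\x01", mask=b"\xff\xff\xff\xff", qid=0)

    return fp
```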

11. Planned Wide-Area OpenFlow
[Diagram: planned wide-area OpenFlow deployment with NOX controllers and SPP fast-path stages (parse, lookup, hdrFmt, queue); sites include Princeton, WashU, Stanford, GaTech, and Texas]

12. NP Blade
[Diagram: NP blade with two network processors (NP A, NP B); each ME has local memory, thread contexts (TCs), and an ALU; xScale control processor, SRAM/SDRAM memory interfaces, TCAM interface, RTM external interfaces, SPI switch, and FIC switch interface]

13. Line Card Datapath
• Filter/route and rate-control traffic
• Network Address Translation for outgoing flows
• Record traffic statistics for all outgoing flows (see the sketch below)
[Diagram: ingress side – RxIn (2 ME), Key Extract (2 ME), Lookup (2 ME), Hdr Format (1 ME), Queue Manager (4 ME), TxIn (2 ME); egress side – RxEg (2 ME), Key Extract (1 ME), Lookup (2 ME), Hdr Format (1 ME), Queue Manager (4 ME), Flow Stats (2 ME), TxEg (2 ME); TCAM, SRAM/DRAM, external interfaces, switch interface]
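
The flow-statistics bullet can be made concrete with a small sketch: per outgoing flow, keyed by the usual 5-tuple, the Flow Stats stage accumulates packet and byte counts. This is only an illustration of the bookkeeping, not the microengine code itself; the class and method names are hypothetical.

```python
# Illustrative per-flow statistics, as the Flow Stats stage might keep them.
# The real implementation runs on IXP microengines with SRAM counters.
from collections import defaultdict

class FlowStats:
    def __init__(self):
        # key: (proto, src_ip, src_port, dst_ip, dst_port) -> [packets, bytes]
        self.counters = defaultdict(lambda: [0, 0])

    def record(self, proto, src_ip, src_port, dst_ip, dst_port, pkt_len):
        entry = self.counters[(proto, src_ip, src_port, dst_ip, dst_port)]
        entry[0] += 1          # packet count
        entry[1] += pkt_len    # byte count

    def dump(self):
        """Return a snapshot suitable for export to the control processor."""
        return {k: tuple(v) for k, v in self.counters.items()}
```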

14. NPE Datapath (Version 1)
• Parse and Header Format include slice-specific code
  • Parse extracts header fields to form lookup key
  • Hdr Format does any required post-lookup processing
• Lookup uses opaque key for TCAM lookup (see the sketch below)
• Multiple static code options can be supported
  • multiple slices per code option
  • each has own interfaces, filters, queues and private memory
[Diagram: Rx (2 ME), Substrate Decap (1 ME), Parse (1 ME), Lookup (1 ME), Hdr Format (1 ME), Queue Manager (4 ME), Tx (2 ME); TCAM; SRAM and DRAM banks]
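
The parse / lookup / header-format contract can be sketched as an interface: a code option supplies the two slice-specific stages, while the substrate's lookup stage treats the key as opaque bits and matches it against the slice's TCAM filters. Everything below is an illustrative abstraction, not the actual microengine code; the class and method names (and the `tcam` object) are hypothetical.

```python
# Illustrative shape of a "code option": slice-specific parse and header
# format around a substrate TCAM lookup that treats the key as opaque.
class CodeOption:
    def parse(self, pkt: bytes) -> bytes:
        """Extract header fields and pack them into an opaque lookup key."""
        raise NotImplementedError

    def header_format(self, pkt: bytes, result) -> bytes:
        """Do any required post-lookup rewriting before the packet is queued."""
        raise NotImplementedError

class Ipv4CodeOption(CodeOption):
    def parse(self, pkt: bytes) -> bytes:
        # e.g. use the IPv4 destination address (header bytes 16-19) as the key
        return pkt[16:20]

    def header_format(self, pkt: bytes, result) -> bytes:
        # e.g. decrement the TTL (byte 8); `result` could carry a tunnel header
        return pkt[:8] + bytes([max(pkt[8] - 1, 0)]) + pkt[9:]

def fast_path(pkt: bytes, code: CodeOption, tcam):
    key = code.parse(pkt)          # slice-specific parse builds an opaque key
    result = tcam.lookup(key)      # substrate lookup; key is not interpreted
    return code.header_format(pkt, result)
```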

15. NPE Datapath (Version 2)
• Use both NPs, enabling 10 Gb/s throughput
• Integrated Decap, Parse, Lookup uses MEs more efficiently
• Multicast supported by substrate
[Diagram: two NP pipelines – Rx (2 ME), Decap/Parse/Lookup/AddShim (8 MEs), Tx (2 ME) with TCAM and SPI switch; Rx (2 ME), Lookup & Copy (2 ME), HdrFmt (4 MEs), Queue Manager (4 MEs), Tx (2 ME); SRAM banks; from/to switch]

16. Project Plan Highlights
• Initial deployment
  • first two (maybe three) nodes deployed by mid 2009
  • shooting for v2 of NPE software
  • PlanetLab control with local resource allocation
• GENI-compatible control software
  • implement component manager interface
  • resource allocation using rspecs/tickets
• Working with users
  • online and hands-on tutorials
  • collaborating with users on new code options
• Completing deployment
  • final nodes deployed in late 2010
  • complete support for netFPGA

17. Looking Ahead
• Bad news
  • slow market for ATCA means high cost, limited support
  • Intel dropped the IXP and Radisys is discontinuing IXP blades
• Good news
  • ATCA market now projected to grow rapidly and become more cost-competitive (10x growth over 3 years)
  • new NPs and NP blades
    • Netronome 3200 – IXP successor with 40 microengines
    • Cavium Octeon, RMI XLR732 – MIPS-based, use caches
  • can also assemble systems from commodity components
    • 10 GbE switches now at $400-500 per port
    • conventional rack-mount servers with 8-16 processor cores
    • NPs and FPGAs on lower cost PCI Express cards
