Supercharged Planetlab Platform – GENI Experimenters' Workshop, 6/2010


Presentation Transcript


  1. Supercharged Planetlab Platform – GENI Experimenters' Workshop, 6/2010
  Jon Turner, Applied Research Lab, Computer Science & Engineering, Washington University
  www.arl.wustl.edu

  2. Shameless Plug
  • This talk is just a teaser
  • To really learn how to use the SPPs, come to our tutorial at GEC-8 (Thursday, 7/22)
    • more detailed presentations
    • live demonstrations
    • supervised hands-on session
  • If you can't do that
    • read the online tutorial at wiki.arl.wustl.edu/spp
    • get an account at spp_plc.arl.wustl.edu
    • start playing

  3. SPP Nodes
  • SPP is a high-performance overlay hosting platform
  • Designed to be largely compatible with PlanetLab
  • How it's different from PlanetLab
    • multiple processors, including an NP blade for slice fastpaths
    • multiple 1 GbE interfaces
    • support for advance reservation of interface bandwidth and NP resources
  (Diagram: CP, GPEs, NPE, and netFPGA blades joined by the chassis switch; Line Card with 10x1 GbE ports; external switch)

  4. SPP Deployment in Internet2
  (Map: current SPP node sites in Internet2; two more nodes, Houston and Atlanta, later this year)

  5. Washington DC Installation

  6. Details
  (Diagram: SPP nodes at WASH, SALT, and KANS, each attached to an I2 router over I2 optical connections, with ProtoGENI at each site; SPP_PLC at spp_plc.arl.wustl.edu reached via I2 internet service)

  7. Hosting Platform Details
  (Diagram: General Purpose Processing Engines run PLOS with per-slice VMs; the Network Processing Engine implements parse, lookup with filters, header format, and queues; CP, netFPGA, external switch, and a Line Card with filters and 10x1 GbE ports, all joined by the chassis switch)

  8. Key Control Software Components
  • System Resource Manager (SRM)
    • runs on Control Processor
    • retrieves slice definitions from SPP-PLC
    • manages all system-level resources and reservations
  • Resource Management Proxy (RMP)
    • runs on GPEs (in root vServer)
    • provides API for user slices to access resources
  • Slice Configuration tool (scfg)
    • command-line interface to RMP (example below)
  • Substrate Control Daemon (SCD)
    • runs on NPE management processor
    • supports NPE resource configuration, statistics reporting
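
  For example, a slice can query its interfaces through the RMP with a single scfg call (the command appears in the slide 17 list; its output format is not shown in this talk):

    # ask the RMP which interfaces are visible to this slice
    scfg --info get_ifaces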

  9. Application Framework
  • Fastpath/slowpath
    • fastpath mapped onto NPE
    • control daemon in vServer on GPE
  • Configurable elements
    • code option – determines how packets are processed by the parse and header format blocks
    • fastpath interfaces
      • map to physical interface
      • provisioned bandwidth
    • TCAM filters
    • queues
      • length, bandwidth
  • Control daemon can configure fastpath through RMP
    • or users can configure manually with scfg
  (Diagram: fast path with input interfaces, parse, lookup with filters, header format, queue manager, and output interfaces; a control daemon in a vServer on the GPE handles exception packets and in-band control via the control interface, plus a remote login interface)

  10. Working with SPPs
  • Define new slice using SPP-PLC
    • just like PlanetLab
  • Log in to the SPPs in your slice to install application code
  • Reserve resources needed for your experiment
    • includes interface bandwidth on external ports and NPE fastpath resources
  • To run experiment during reserved period (sketched below)
    • "claim" reserved resources
    • set up slowpath endpoints
    • configure fastpath (if applicable)
    • set up real-time monitoring
    • run application and start traffic generators
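
  A condensed sketch of that start-of-experiment sequence, using the scfg flags shown on slides 17 and 27; the addresses and ports here are placeholders taken from the demo, not requirements:

    #!/bin/bash
    # claim the resources reserved earlier
    scfg --cmd claim_resources
    # set up a UDP (proto 17) slowpath endpoint on one interface
    scfg --cmd setup_sp_endpoint --bw 10000 --ipaddr 10.1.1.1 --proto 17 --port 30123
    # set up monitoring: a TCP (proto 6) endpoint, then the sliced daemon
    scfg --cmd setup_sp_endpoint --bw 2000 --ipaddr 64.57.23.182 --proto 6 --port 3551
    sliced --ip 64.57.23.182 --port 3551 &
    # ...then start the application and traffic generators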

  11. Web Site and SPP_PLC
  • http://wiki.arl.wustl.edu/index.php/Internet_Scale_Overlay_Hosting
  • http://wiki.arl.wustl.edu/index.php/The_SPP_Tutorial
  • spp_plc.arl.wustl.edu

  12. Creating a Slice
  (Diagram: the SRM on the CP retrieves the slice definition from SPP_PLC and instantiates the slice on the GPEs)

  13. Preparing a Slice
  • Logging into the slice requires Network Address Translation
    • datapath detects new connection, LC control processor adds filters
  • SFTP connection for downloading code
  (Diagram: login and SFTP sessions enter through the Line Card to a GPE; SLM on the CP)

  14. Configuring a Slowpath Endpoint
  • Request endpoint with desired port number on a specific interface thru Resource Manager Proxy (RMP)
  • RMP relays request to System Resource Manager (SRM)
  • SRM configures LC filters for interface
  • Arriving packets directed to slice, which is listening on socket (example below)
  (Diagram: the request flows from the slice through the RMP on the GPE to the SRM on the CP, which installs filters on the LC)
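
  A minimal example of requesting a slowpath endpoint from inside a vServer; the flags and values are taken verbatim from the setup script on slide 27:

    # ask the RMP for a UDP (proto 17) endpoint; the SRM then installs
    # the matching line-card filter for this interface and port
    scfg --cmd setup_sp_endpoint --bw 10000 --ipaddr 64.57.23.186 --proto 17 --port 30123
    # the slice application can now bind a UDP socket to port 30123 and
    # receive packets arriving on that interface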

  15. Setting Up a Fast Path
  • Request fastpath through RMP (see the hedged sketch below)
  • SRM allocates fastpath
  • Specify logical interfaces and interface bandwidths
  • Specify # of filters, queues, binding of queues to interfaces, queue lengths and bandwidths
  • Configure fastpath filters
  (Diagram: fastpath on the NPE with parse, lookup, header format, and queues; the request flows through the RMP on the GPE to the SRM on the CP)
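
  A rough sketch of the corresponding commands. setup_fp_tunnel appears in the slide 17 command list, but its flags are not shown in this talk; the ones used here simply mirror setup_sp_endpoint and are assumptions:

    # request a fastpath tunnel (flag names are assumptions, not from the talk)
    scfg --cmd setup_fp_tunnel --bw 10000 --ipaddr 10.1.3.1 --proto 17 --port 30124
    # filters for the IPv4 fastpath are then installed with ip_fpc (slide 17);
    # its argument syntax is likewise not shown in this talk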

  16. Displaying Real-Time Data
  • Fastpath maintains traffic counters and queue lengths
  • To display traffic data (full sequence below)
    • configure an external TCP port
    • run sliced within your vServer, using the configured port
      • sliced --ip 64.57.23.194 --port 3552 &
    • on remote machine, run SPPmon.jar
    • use provided GUI to set up monitoring displays
  • SPPmon configures sliced, which polls the NPE running the fastpath
    • SCD-N reads and returns counter values to sliced
  • Can also display data written to a file within your vServer
  (Diagram: SPPmon on a remote machine talks to sliced on the GPE; sliced polls the SCD on the NPE through the Line Card)
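
  The monitoring pieces assembled in one place, using the TCP endpoint pattern from slide 27 with the address and port shown on this slide:

    # open a TCP (proto 6) endpoint for monitoring, then start sliced on it
    scfg --cmd setup_sp_endpoint --bw 2000 --ipaddr 64.57.23.194 --proto 6 --port 3552
    sliced --ip 64.57.23.194 --port 3552 &
    # on the remote machine, launch the GUI (standard jar invocation assumed):
    # java -jar SPPmon.jar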

  17. Command Line Tools
  • scfg – general slice configuration tool (lifecycle example below)
    • scfg --cmd make_resrv (cancel_resrv, get_resrvs, ...)
    • scfg --cmd claim_resources
    • scfg --info get_ifaces (...)
    • scfg --cmd setup_sp_endpoint (setup_fp_tunnel, ...)
    • same features (and more) available thru C/C++ API
  • sliced – monitor traffic and display remotely
  • ip_fpd – fastpath daemon for the IPv4 fastpath
    • handles exception packets (e.g. ICMP ping)
    • can extend to add features (e.g. bw reservation option)
  • ip_fpc – fastpath configuration tool for IPv4 fastpath
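
  Putting the reservation commands in order; res_file.xml is built as on slide 26, and cancel_resrv's arguments are not shown in this talk:

    scfg --cmd make_resrv --xfile res_file.xml   # request the reservation
    scfg --cmd get_resrvs                        # confirm it was recorded
    scfg --cmd claim_resources                   # at the reserved start time, claim it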

  18. Forest Overlay Network
  • Overlay for real-time distributed applications
    • large online virtual worlds
    • distributed cyber-physical systems
  • Large distributed sessions
    • endpoints issue periodic status reports
    • and subscribe to dynamically changing sets of reports
    • requires real-time, non-stop data delivery, even as communication pattern changes
  • Per-session provisioned channels (comtrees)
    • unicast data delivery with route learning
    • dynamic multicast subscriptions using multicast core
  (Diagram: clients reach overlay routers over access connections; a comtree spans a subset of routers; a server attaches to the overlay)

  19. Unicast Addressing and Forwarding
  • Every comtree has its own topology and routes
  • Two-level unicast addresses
    • zip code and endpoint number
  • Nodes with same zip code form a subtree
    • all nodes in "foreign" zips reached through same branch
  • Unicast routing (sketched below)
    • table entry for each foreign zip code and local endpoint
    • if no route table entry, broadcast and set route-request flag
    • first router with a route responds
  (Diagram: example comtree with zips 1–5; endpoint 3.1 inside zip 3)
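
  A conceptual sketch of that forwarding decision (illustration only, not Forest's actual implementation); addresses are written zip.endpoint:

    #!/bin/bash
    declare -A route           # stand-in route table: foreign zip -> next hop
    route[2]="link-to-zip-2"

    route_table_has() { [[ -n ${route[$1]} ]]; }

    forward() {
      local dest=$1 myzip=$2
      local zip=${dest%%.*}    # the zip-code half of the address
      if [[ $zip == "$myzip" ]]; then
        echo "$dest: local zip, use the per-endpoint route"
      elif route_table_has "$zip"; then
        echo "$dest: foreign zip, forward via ${route[$zip]}"
      else
        echo "$dest: no entry, broadcast with the route-request flag set"
      fi
    }

    forward 1.3 1   # local endpoint
    forward 2.1 1   # known foreign zip
    forward 3.1 1   # unknown zip: flooded, first router with a route replies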

  20. Multicast Routing
  • Flat addresses
    • on a per-comtree basis
  • Hosts subscribe to multicast addresses to receive packets
  • Each comtree has a "core" subtree
    • multicast packets go to all core routers
    • subscriptions propagate packets outside the core
  • Subscription processing (sketched below)
    • propagate requests towards first core router
    • stop if intermediate router already subscribed
    • can subscribe/unsubscribe to many multicasts at once
  • Core size can be configured to suit application
  (Diagram: core subtree; subscriptions propagate towards the core)
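
  A conceptual sketch of subscription processing (illustration only, not Forest's actual implementation): the request walks towards the core and stops at the first router already subscribed:

    #!/bin/bash
    # path[i] is 1 if router i (ordered from the leaf towards the core) is
    # already subscribed to the multicast in question
    path=(0 0 1 1)

    subscribe() {
      for i in "${!path[@]}"; do
        if (( path[i] )); then
          echo "router $i already subscribed: stop propagating"
          return
        fi
        path[i]=1
        echo "router $i subscribes and forwards the request towards the core"
      done
    }
    subscribe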

  21. Forest Packet Format
  • Encapsulated in IP/UDP packet (see the test below)
    • IP (addr, port#) identify logical interface to Forest router
  • Types include
    • user data packets
    • multicast subscribe/unsubscribe
    • route reply
  • Flags include
    • unicast route request
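
  Since Forest packets ride as ordinary UDP payloads, any UDP sender can reach a router's logical interface. A throwaway test using bash's /dev/udp redirection; the address and port come from the demo setup, and the payload is a placeholder since the exact header layout is not given in this talk:

    # send one UDP datagram to a Forest router's logical interface
    printf 'placeholder-forest-packet' > /dev/udp/64.57.23.218/30123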

  22. Basic Demo Setup
  (Diagram: three Forest routers, 1.100, 2.100, and 3.100, on the SPPs at SALT, KANS, and WASH, with public addresses in 64.57.23.* (.204, .218, .186) and internal subnets 10.1.1.*, 10.1.3.*, and 10.1.7.*; Forest hosts 1.1–1.3, 2.1–2.3, and 3.1–3.3 run on PlanetLab nodes, plus a laptop)

  23. Simple Unicast Demo
  • Uses one host at each router
  • Single comtree with root at KANS
  • Host 1.1 sends to 2.1 and 3.1
    • packet to 3.1 flagged with route-request and "flooded"
    • router 2.100 responds with route-reply
  • Host 2.1 sends to 1.1 and 3.1
    • already has routes to both so no routing updates needed
  • Host 3.1 sends to 1.1 and 2.1
    • packet to 1.1 flagged with route-request and "flooded"
    • router 2.100 responds with route-reply

  24. (Demo screenshots: run script to launch Forest routers and hosts; UserData displays data from stats files on the GPEs; 2.100 receives a flagged packet from 1.100 to 3.1, forwards it to 3.1, and sends a route-reply; new route from 1.100 (at salt) to 3.0 (at wash); new route from 3.100 (at salt) to 1.0 (at wash))

  25. Setting up & Running the Demo
  • Prepare Forest router configuration files
    • config info for Forest links, comtrees, routes, statistics
  • Prepare/save SPPmon config – specify charts
  • Reserve SPP resources
    • bandwidth on four external interfaces (one for sliced)
  • Start session
    • claim reserved resources
    • set up communication endpoints for router logical interfaces and for sliced to report statistics
    • start sliced on SPP and SPPmon on laptop
  • Start Forest routers & hosts, then observe traffic
    • done remotely from laptop, using shell script

  26. Reservation Script
  • On the SPP, execute: reservation 03151000 03152200
    • arguments are reservation start and end times (mmddhhmm, GMT); the script wraps them as 2010${1}00 and 2010${2}00, i.e. full yyyymmddhhmmss timestamps

#!/bin/bash
# copy the reservation to a file
cat >res_file.xml <<foobar
<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<spp>
  <rsvRecord>
    <rDate start="2010${1}00" end="2010${2}00" />
    <plRSpec>
      <ifParams>
        <!-- reserve interface bandwidth: 4 of these ifRec entries -->
        <ifRec bw="10000" ip="64.57.23.186" />
        ...
      </ifParams>
    </plRSpec>
  </rsvRecord>
</spp>
foobar
# invoke scfg on the reservation file
scfg --cmd make_resrv --xfile res_file.xml

  27. Setup Script
  • On the SPP, execute: setup

#!/bin/bash
# claim reserved resources
scfg --cmd claim_resources
# configure interfaces, binding port numbers
# (first two are the interfaces to the other SPPs)
scfg --cmd setup_sp_endpoint --bw 10000 --ipaddr 10.1.1.1 --proto 17 --port 30123
scfg --cmd setup_sp_endpoint --bw 10000 --ipaddr 10.1.3.1 --proto 17 --port 30123
# the "public" interface
scfg --cmd setup_sp_endpoint --bw 10000 --ipaddr 64.57.23.186 --proto 17 --port 30123
# the interface for traffic monitoring
scfg --cmd setup_sp_endpoint --bw 2000 --ipaddr 64.57.23.182 --proto 6 --port 3551
# run monitoring daemon
cat </dev/null >stats
sliced --ip 64.57.23.182 &

  28. Run Demo
  • On the laptop, run script: fdemo1

#!/bin/sh
tlim=50      # time limit for hosts and routers (sec)
dir=fdemo1   # directory in which code is executed
...
# public ip addresses used by forest routers
r1ip=64.57.23.218
...
# names and addresses of planetlab nodes used as forest hosts
h11host=planetlab6.flux.utah.edu
h11ip=155.98.35.7
...
# launch a Forest router on each SPP, then the hosts on PlanetLab
ssh ${sppSlice}@${salt} fRscript ${dir} 1.100 ${tlim} &
...
sleep 2
ssh ${plabSlice}@${h11host} fHscript ${dir} ${h11ip} ${r1ip} 1.1 ${rflg} ${minp} ${tlim} &
...

  29. Basic Multicast Demo
  • Uses four hosts per router
    • one source, three receivers
  • Comtree centered at each Forest router
    • each source sends to a multicast group on each comtree
  • Phase 1 – uses comtree 1
    • each receiver subscribes to multicast 1, then 2 and 3; then unsubscribes – 3 second delay between changes
    • receivers all offset from each other
  • Phases 2 and 3 are similar
    • use comtrees 2 and 3, so different topology
  (Diagram: host groups 1.*, 2.*, and 3.* around their routers)

  30. (Demo screenshots: run script to launch Forest routers and hosts; charts of salt-to-wash and salt-to-kans traffic as all hosts subscribe in turn to three multicasts on comtree 1)
