
Lecture 8: Testbeds



  1. Lecture 8: Testbeds
     Anish Arora
     CIS788.11J Introduction to Wireless Sensor Networks
     Material uses slides from Larry Peterson, Jay Lepreau, and GENI.net

  2. References
     • Emulab: artifact-free, auto-configured, fully controlled
       • A configurable Internet emulator
       • 2001: 200 nodes, 500 wires, 2x BFS (switch)
       • 2006: 350 PCs, 7 IXPs, 40 WANodes, 27+ 802.11 nodes
     • PlanetLab: real environment
       • 670 machines spanning 325 sites and 35 countries
       • nodes within a LAN hop of > 3M users
       • Supports distributed virtualization: each of 600+ network services runs in its own slice
     • GENI

  3. Emulab philosophy
     • Live-network experimentation
       • Achieves realism
       • Surrenders repeatability
       • e.g., MIT "RON" testbed, PlanetLab
     • Pure emulation
       • Introduces controlled packet loss and delay
       • Requires tedious manual configuration
     • Emulab approach
       • Brings simulation's efficiency and automation to emulation
       • Artifact-free environment
       • Arbitrary workload: any OS, any "router" code, any program, for any user
       • So the default resource allocation policy is conservative: allocate a full real node and link (no multiplexing; assume maximum possible traffic)

  4. Emulab
     • Allows the experimenter complete control, i.e., bare hardware with lots of tools for common cases
       • OSes, disk loading, state management tools, IP, traffic generation, batch, ...
     • Virtualization of all experimenter-visible resources
       • topology, links, software, node names, network interface names, network addresses
     • Allows swap-in/swap-out
     • Remotely accessible
     • Persistent state maintenance (in a database)
     • Separate control network
     • Configuration language: ns

  5. Experiment Life Cycle
     [Figure: an experiment's specification (an ns script, e.g. "$ns duplex-link $A $B 1.5Mbps 20ms" for a two-node topology of nodes A and B) is parsed into the database; global resource allocation and swap-in follow, then node self-configuration and experiment control, and finally swap-out. A full specification is sketched below.]
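     To make the specification step concrete, a minimal Emulab-style ns file for this two-node topology might look like the following sketch; the OS image name and the static-routing choice are illustrative assumptions:

        set ns [new Simulator]
        source tb_compat.tcl             ;# Emulab's testbed extensions to ns

        set A [$ns node]
        set B [$ns node]
        tb-set-node-os $A FBSD-STD       ;# image names here are placeholders
        tb-set-node-os $B FBSD-STD

        # The duplex link from the figure: 1.5 Mbps bandwidth, 20 ms delay
        set link0 [$ns duplex-link $A $B 1.5Mb 20ms DropTail]

        $ns rtproto Static
        $ns run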

  6. assign: Mapping Local Cluster Resources
     • Maps virtual resources to local nodes and VLANs
     • General combinatorial-optimization approach to an NP-complete problem
       • Based on simulated annealing
       • Minimizes inter-switch links, number of switches, and other constraints ...
     • All experiments mapped in less than 3 seconds [100 nodes]
     • WANassign for mapping global resources (uses a genetic algorithm)
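     For intuition, the simulated-annealing core of such a mapper can be sketched in a few lines of Tcl; the cost and perturb procedures (which would score inter-switch links and remap a single virtual node) are illustrative stand-ins, not assign's actual code:

        # Anneal a candidate mapping; occasionally accept a worse mapping so
        # the search can escape local minima, cooling the temperature
        # geometrically each iteration.
        proc anneal {mapping temp cooling} {
            set best $mapping
            while {$temp > 0.01} {
                set cand [perturb $mapping]    ;# e.g., remap one virtual node
                set delta [expr {[cost $cand] - [cost $mapping]}]
                if {$delta < 0 || rand() < exp(-$delta / $temp)} {
                    set mapping $cand
                    if {[cost $mapping] < [cost $best]} { set best $mapping }
                }
                set temp [expr {$temp * $cooling}]
            }
            return $best
        }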

  7. Frisbee: Disk Loading
     • Loads full disk images (bulk download)
     • Performance techniques:
       • Overlaps block decompression and device I/O
       • Uses a domain-specific algorithm to skip unused blocks
       • Delivers images via a custom reliable multicast protocol
     • On 13 GB generic IDE 7200 rpm drives:
       • was 20 minutes for a 6 GB image
       • now 88 seconds
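     For scale, 88 seconds for a 6 GB image is an effective write rate of roughly 70 MB/s, about a 13x improvement over the earlier 20-minute (1,200-second) load.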

  8. IDE planned for Emulab
     • Evolve Emulab to be the network-device-independent control and integration center for experimentation, research, development, debugging, measurement, data management, and archiving
     • Collaboratory: Emulab's project abstraction
     • Workbench: Emulab's experiment abstraction
     • Device-independent: Emulab's built-in abstractions for all things network-related

  9. Collaboratory Subsystems
     • Source repository: SourceForge, CVS, Subversion
     • Datapository
     • "My Wikis"
     • Mailing list(s)
     • Bug database
     • Chat/IM, chatroom management
     • Moodle?
     • Approach
       • Transparently do authentication, authorization, and membership management: "single sign-on"
       • Use a separate server for information and resource security and management
       • Support flexible access policies: the default is project-private, but the project leader can change this per subsystem
         • Private, public read-only, public read/write

  10. Experimentation Workbench
      • Four types:
        • Workflow management (processes), including
          • measurement and feedback steps
          • mandatory pipelines
        • Experiment management
        • Data management
        • Analyses

  11. Workbench: "Time Travel" and Stateful Swapout
      • Time travel of distributed systems for debugging
        • Generalize disk-image format and handling
        • Periodic disk checkpointing
        • Full state save on swapout
        • Xen-based virtual machines
        • Challenge: network state (packets in flight)
          • Pragmatic approach: quiesce senders, flush buffers
      • Stateful swapout/swapin [easier]
        • Allows transparent pre-emption of an experiment
      • Related to workbench: history, tree traversal
        • Can share some mechanisms, some UI

  12. PlanetLab: Requirements
      • Must provide a global platform that supports both short-term experiments and long-running services
        • services must be isolated from each other
        • multiple services must run concurrently
        • must support real client workloads
      • Key ideas
        • Slices
        • Virtualization: multiple architectures on a shared infrastructure
        • Programmable: virtually no limit on new designs
        • Opt-in on a per-user / per-application basis
          • attracts real users
          • demand drives deployment/adoption

  13. PlanetLab: Slices

  14. Slices

  15. Slices

  16. User Opt-in
      [Figure: a client opts in to a PlanetLab-hosted server, traversing a NAT.]

  17. Virtualization: Per-Node View
      [Figure: each node runs a Virtual Machine Monitor (VMM): a Linux kernel (Fedora Core) + Vservers (namespace isolation) + schedulers (performance isolation) + VNET (network virtualization). Above the VMM sit the Node Manager, an Owner VM, and slice VMs VM1, VM2, ..., VMn; auditing, monitoring, brokerage, and provisioning services run alongside them.]

  18. Global View
      [Figure: PLC coordinates slices that span virtual machines on nodes across many sites.]

  19. Requirements
      • Must be available now, even though no one knows for sure what "it" is
        • deploy what we have today, and evolve over time
        • make the system as familiar as possible (e.g., Linux)
        • accommodate third-party management services

  20. Brokerage Service
      [Figure: a user calls BuyResources() on a brokerage service; the broker contacts the relevant nodes, and each node manager (NM) executes Bind(slice, pool) to bind the user's slice to the acquired resource pool. PLC acts as the slice authority (SA). The flow is sketched below.]
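      The figure's control flow, written out as an illustrative sketch; these procedures stand in for the RPCs named in the figure and are not a real PlanetLab API:

         # A user asks a broker for resources; the broker contacts the node
         # manager on each relevant node to bind the slice to a resource pool.
         proc BuyResources {broker slice spec} {
             # hypothetical helper: the broker locates resources matching spec
             set pool [broker-find-resources $broker $spec]
             foreach nm [pool-node-managers $pool] {
                 Bind $nm $slice $pool   ;# the figure's Bind(slice, pool), per node
             }
         }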

  21. Requirements
      • Convince sites to host nodes running code written by unknown researchers from other organizations
        • protect the Internet from PlanetLab traffic
        • must get the trust relationships right
        • trusted intermediary: PLC
      [Figure: trust relationships among the Service Developer (User), PLC, and the Node Owner]
      1) PLC expresses trust in a user by issuing it credentials to access a slice
      2) Users trust PLC to create slices on their behalf and inspect credentials
      3) The owner trusts PLC to vet users and map network activity to the right user
      4) PLC trusts the owner to keep nodes physically secure

  22. Requirements
      • Sustaining growth depends on support for site autonomy and decentralized control
        • sites have final say over the nodes they host
        • must minimize (eliminate) centralized control
      • Owner autonomy
        • owners allocate resources to favored slices
        • owners selectively disallow unfavored slices
      • Delegation
        • PLC grants tickets that are redeemed at nodes
        • enables third-party management services
      • Federation
        • create "private" PlanetLabs using MyPLC
        • establish peering agreements

  23. Requirements
      • Must scale to support many users with minimal resources available
        • expect the under-provisioned state to be the norm
        • shortage of logical resources too (e.g., IP addresses)
      • Decouple slice creation and resource allocation
        • a slice is given a "fair share" by default when created
        • it can acquire additional resources, including guarantees
      • Fair share with protection against thrashing
        • 1/Nth of CPU
        • 1/Nth of link bandwidth
          • the owner limits the peak rate
          • upper bound on the average rate (protects campus bandwidth)
        • disk quota
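      As an illustrative example, with 50 active slices on a node whose owner caps the uplink at 100 Mbps, each slice's default fair share is 2% of the CPU and roughly 2 Mbps of bandwidth; the upper bound on a slice's average rate then protects the hosting campus even when the node is otherwise idle.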

  24. GENI Design
      • Key idea: slices embedded in a substrate of networking resources
      • Two central pieces
        • Physical network substrate
          • an expandable collection of building-block components: nodes / links / subnets
        • Software management framework
          • knits building blocks together into a coherent facility
          • embeds slices in the physical substrate

  25. National Fiber Facility

  26. + Programmable Routers

  27. + Clusters at Edge Sites

  28. + Wireless Subnets

  29. + ISP Peers
      [Figure: peering points such as MAE-West and MAE-East]

  30. Closer Look
      [Figure: a backbone wavelength connects dynamically configurable backbone switches; a customizable router at each switch attaches tail circuits to the Internet, an edge site, a wireless subnet, and a sensor network.]

  31. Summary of Substrate
      • Node components
        • edge devices
        • customizable routers
        • optical switches
      • Bandwidth
        • national fiber facility
        • tail circuits (including tunnels)
      • Wireless subnets
        • urban 802.11
        • wide-area 3G/WiMAX
        • cognitive radio
        • sensor net
        • emulation

  32. Management Framework
      [Figure: management services sit above the GENI Management Core (GMC), which in turn manages the substrate components.]
      • Management services
        • name space for users, slices, & components
        • set of interfaces ("plug in" new components)
        • support for federation ("plug in" new partners)

  33. GENI Management Core (GMC)
      [Figure: each substrate component stacks substrate hardware, virtualization software, and a component manager (CM). The GMC's slice manager and resource controller exercise node control over the CMs, and sensor data flows back to an auditing archive.]
