
PlanetLab Architecture



  1. PlanetLab Architecture Larry Peterson Princeton University

  2. Roadmap
     • Yesterday
       • Code + Design Principles
     • Today
       • Defined Architecture + Standardization Process
     • Tomorrow
       • Clusters
       • Federation
       • ISP (overlays on layer 2 networks)

  3. Meta-Issue
     • Reference Model
       • describes PlanetLab-like systems
     • Architecture
       • narrow waist (universal agreement)
       • by convention
     • Implementation
       • what we happen to run today
       • alternatives possible tomorrow

  4. Principals
     [diagram: node owners (Owner 1 … Owner N), service providers, and users of the PlanetLab nodes, together with a Slice Authority and a Management Authority; labeled interactions include "request a slice", "create slices", "new slice ID", "identify slice users (resolve abuse)", "learn about nodes", "auditing data", "software updates", and "access slice"]

  5. Architectural Elements
     [diagram: a node, belonging to a node owner, runs a VMM plus node manager (NM), an owner VM, a slice creation service (SCS), and VMs for service providers; the management authority (MA) maintains a node database and the slice authority (SA) maintains a slice database; a data-structure sketch of these elements follows]
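A minimal sketch of the elements in the diagram above, expressed as Python data structures. All class and field names (VM, Node, ManagementAuthority, SliceAuthority, node_db, slice_db) are hypothetical labels chosen for illustration, not PlanetLab's actual schema or code.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class VM:
    slice_name: str      # slice the VM belongs to (owner VM, SCS, or a service VM)
    vm_type: str         # e.g. a base VM type that all nodes are known to support

@dataclass
class Node:
    node_id: str
    owner: str                                   # node owner keeps root allocation rights
    vmm: str                                     # virtual machine monitor (NM sits on top)
    vms: List[VM] = field(default_factory=list)  # VMs the NM has created on this node

@dataclass
class ManagementAuthority:
    node_db: Dict[str, Node] = field(default_factory=dict)        # MA's node database

@dataclass
class SliceAuthority:
    slice_db: Dict[str, List[str]] = field(default_factory=dict)  # slice name -> slice users

# Example: one MA-managed node running an owner VM, the slice creation service,
# and a service provider's VM.
ma = ManagementAuthority()
node = Node("planetlab1.example.edu", owner="example.edu", vmm="linux-vserver",
            vms=[VM("owner_vm", "base"), VM("pl_conf", "base"), VM("some_service", "base")])
ma.node_db[node.node_id] = node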

  6. Architecture vs Implementation
     • Linux: implementation
       • well-defined VM types (default on all nodes?)
       • VM template (keys, bootscript)
     • Node Manager: narrow waist (see the interface sketch after this slide)
       • VMM-specific implementation of common interface (rspec)
       • stacked vs flat?
     • pl_conf: architecture by convention
       • supports remote interface/protocol (ticket)
       • depends on name space for SAs (narrow waist)
     • PLC-as-MA: implementation
       • independent MAs real soon now
       • share fate for foreseeable future
     • PLC-as-SA: implementation
       • advantage of common slice authority
       • decouple naming from slice creation
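To make the "narrow waist" concrete, here is a rough sketch of a common Node Manager interface with one VMM-specific implementation behind it. The class names, method signatures, and the rspec dictionary format are assumptions made for this sketch; they do not reproduce the real NM API.

from abc import ABC, abstractmethod
from typing import Dict

RSpec = Dict[str, int]  # resource specification, e.g. {"cpu_share": 1, "disk_mb": 512}

class NodeManager(ABC):
    """Common NM interface; each VMM supplies its own implementation behind it."""

    @abstractmethod
    def create_vm(self, slice_name: str, rspec: RSpec) -> str:
        ...

    @abstractmethod
    def destroy_vm(self, slice_name: str) -> None:
        ...

class VserverNodeManager(NodeManager):
    """One illustrative VMM-specific implementation (flat rather than stacked)."""

    def __init__(self) -> None:
        self.vms: Dict[str, RSpec] = {}

    def create_vm(self, slice_name: str, rspec: RSpec) -> str:
        self.vms[slice_name] = rspec          # a real NM would instantiate a vserver here
        return slice_name

    def destroy_vm(self, slice_name: str) -> None:
        self.vms.pop(slice_name, None)

nm = VserverNodeManager()
nm.create_vm("princeton_demo", {"cpu_share": 1, "disk_mb": 512})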

  7. PlanetLab Compliant
     • Node
       • support node manager interface
       • pl_conf (accepts PLC-as-SA tickets; a ticket-check sketch follows this slide)
       • at least one known VM type (base type?)
       • owner VM to make root allocation decision (ops on NM)
       • audit service
     • Management Authority
       • secure boot
       • audit service
       • responsive support team
     • Slice Authority
       • creates slices and/or returns tickets
       • auditing capability
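One way to picture "pl_conf accepts PLC-as-SA tickets" is verifying a ticket against a known set of slice authorities before asking the node manager to create a VM. The ticket layout, the HMAC-based check, and the authority names below are assumptions of this sketch, not the actual PlanetLab ticket format or protocol.

import hashlib
import hmac
import json

TRUSTED_SAS = {"plc": b"shared-secret-with-plc"}   # name space of known slice authorities

def verify_ticket(ticket: dict) -> bool:
    """Accept a ticket only if it carries a valid MAC from a known slice authority."""
    key = TRUSTED_SAS.get(ticket["sa"])
    if key is None:
        return False
    payload = json.dumps(ticket["body"], sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, ticket["mac"])

def redeem(ticket: dict, node_manager) -> str:
    """Hand a verified ticket's slice name and rspec to the node manager
    (node_manager is e.g. the NodeManager sketch after slide 6)."""
    if not verify_ticket(ticket):
        raise PermissionError("ticket not signed by a known slice authority")
    body = ticket["body"]
    return node_manager.create_vm(body["slice"], body["rspec"])

# Usage: a slice authority signs a ticket body; pl_conf checks it on the node.
body = {"slice": "princeton_demo", "rspec": {"cpu_share": 1}}
mac = hmac.new(TRUSTED_SAS["plc"], json.dumps(body, sort_keys=True).encode(),
               hashlib.sha256).hexdigest()
print(verify_ticket({"sa": "plc", "body": body, "mac": mac}))   # True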

  8. Breakout Session
     • Questions
     • Group C’s notes

  9. Questions
     • What challenges do we face in extending PlanetLab to support:
       • clusters
       • autonomous regions (e.g., EU, Japan)
       • private PlanetLabs
     • What is the solution space for these problems?
     • How do these solutions affect the PlanetLab architecture?
     • What roadmap gets us to where we want to be without breaking anything?

  10. Challenges (1)
      • Requires IP address per node (in the DB)
        • Have to NAT on shared machines
        • Mobility
      • Node Owner - what’s the interface? (a sketch of one possible policy object follows this slide)
        • Keep resources private
        • Fine-grain control for exceptional cases
        • Enable select services the owner wants to run
        • Enable “side agreements”
      • Opportunistically exploit available capacity
        • Desktops
        • Unused cluster nodes
        • Perhaps largely motivated by private PLs (Condor)
        • Value is providing a consistent base level (the VM)
      • Users may want to name sites, not nodes
      • Provide incentives to make more nodes available to the public PL
        • Dedicated machines
        • Opportunistic nodes (from cluster)
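As a rough illustration of what a node-owner interface might look like (keep resources private, enable selected services, record side agreements), here is a hypothetical policy object. The field names, the "plc" authority label, and the admission rule are assumptions for this sketch, not an existing PlanetLab interface.

from dataclasses import dataclass, field
from typing import Set

@dataclass
class OwnerPolicy:
    reserved_cpu_share: float = 0.25       # fraction of the node kept private for the owner
    allowed_services: Set[str] = field(default_factory=set)   # services the owner opts into
    side_agreements: Set[str] = field(default_factory=set)    # SAs granted by side agreement

    def admits(self, slice_name: str, sa: str) -> bool:
        """Fine-grain control for exceptional cases: admit a slice if its SA is the
        public one, a side-agreement partner, or the service is explicitly allowed."""
        return sa == "plc" or sa in self.side_agreements or slice_name in self.allowed_services

policy = OwnerPolicy(reserved_cpu_share=0.5, side_agreements={"eu_sa"})
print(policy.admits("princeton_codeen", "plc"))      # True: public PL slice
print(policy.admits("local_backup", "campus_sa"))    # False unless added as a side agreement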

  11. Challenges (2)
      • When private and public PL meet…
        • Private VMs come and go on short notice
        • Policy/usage problems (PL currently in DMZ)
        • Clusters easier than desktops
      • Sometimes public and sometimes private
        • Runs slices from both local and public SA
        • Owner has to be able to specify how much to commit to each (see the sketch after this slide)
        • Need incentives to provide resources to public PL
      • Private vs Regional
        • One public PL and many Private PLs, all federating
      • Who manages a site’s public nodes?
        • Public MA if it dedicates nodes (e.g., PLC continues to manage)
          • Owner retains right to make root resource allocation decision
          • Sites may be happy to let PLC manage their nodes (business model!)
        • Private MA if exploiting a dynamic setting (e.g., cluster)
          • In the limit PLC manages no nodes (just a public research SA)
          • PLC needs to learn set of available nodes (MA has an interface to export)
        • Some ISP-like entity manages the nodes on a set of sites’ behalf
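A toy sketch of the "how much to commit to each" decision: the owner splits a node's capacity between the public and local slice authorities and admits requests against each share. The class name, numbers, and authority labels below are purely illustrative assumptions.

from typing import Dict

class SplitNode:
    def __init__(self, total_cpu_shares: int, public_fraction: float):
        public = int(total_cpu_shares * public_fraction)
        self.caps = {"public": public, "local": total_cpu_shares - public}
        self.used: Dict[str, int] = {"public": 0, "local": 0}

    def admit(self, sa_kind: str, cpu_shares: int) -> bool:
        """Admit a slice only if its SA's share of the node still has room."""
        if self.used[sa_kind] + cpu_shares > self.caps[sa_kind]:
            return False
        self.used[sa_kind] += cpu_shares
        return True

node = SplitNode(total_cpu_shares=100, public_fraction=0.3)   # 30% committed to public PL
print(node.admit("public", 20))   # True
print(node.admit("public", 20))   # False: public share exhausted
print(node.admit("local", 50))    # True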

  12. Challenges (3)
      • Incentives
        • Markets
        • Policies (short-term fixes)
          • Change node-to-slice ratio
          • Measure aggregate cpu/net usage (coarse-grained; a toy accounting sketch follows this slide)
            • Account for benefit provided to the local site (true cost)
            • Account for free bandwidth (e.g., Internet2)
          • Contribute additional nodes and additional bandwidth
          • Risk: reining in heavy users causes light users to contribute less
          • May need exchange rate between bw and cpu… markets
          • Mandate use of admission control at crunch time
            • Could be an exception to the
      • Allow site-specific “side” agreements
        • Public PL is just one “side” agreement
        • PLC can’t mediate all side agreements (implementation limit)
        • A new SA could mediate side agreements between a set of sites
        • Other consortiums form (virtual organizations)
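The coarse-grained accounting and the bandwidth/CPU exchange rate mentioned above could, in the simplest form, fold a site's usage and contribution into one balance. The exchange rate and the example figures below are invented solely for illustration.

CPU_PER_GB = 2.0   # assumed exchange rate: 1 GB transferred "costs" 2 CPU-hours

def net_balance(cpu_hours_used: float, gb_transferred: float,
                cpu_hours_contributed: float, gb_contributed: float,
                free_gb: float = 0.0) -> float:
    """Positive balance means the site consumes more than it contributes.
    Bandwidth that is free to the site (e.g. Internet2) is not charged."""
    charged_gb = max(gb_transferred - free_gb, 0.0)
    used = cpu_hours_used + CPU_PER_GB * charged_gb
    contributed = cpu_hours_contributed + CPU_PER_GB * gb_contributed
    return used - contributed

# A site that uses 500 CPU-hours and 100 GB (40 GB of it over Internet2)
# while contributing roughly two nodes' worth of capacity:
print(net_balance(500, 100, cpu_hours_contributed=400, gb_contributed=80, free_gb=40))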

  13. Challenges (4)
      • Wireless
        • Multiple interfaces (not unique to wireless)
        • Schedule in time since not virtualizable at MAC level (a toy scheduler sketch follows this slide)
        • Incentives to provide access to unique capability (e.g., WiMax)
        • Side agreements with other wireless sites
          • Fold into the incentive mechanism
          • Flea market for virtual organizations
      • Management as system scales
        • Help sites better manage their nodes (be more responsive)
        • Generally, need better management tools (at PLC too)
      • ckong@cs.princeton.edu
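Since a wireless interface cannot be virtualized at the MAC level, "schedule in time" could mean rotating whole-interface ownership among slices in fixed slots. The slot length and slice names below are assumptions of this toy sketch, not a proposed PlanetLab mechanism.

from itertools import cycle

def schedule(slices, slot_seconds=60, total_seconds=300):
    """Yield (start_time, slice) pairs, handing the interface around round-robin."""
    holders = cycle(slices)
    for start in range(0, total_seconds, slot_seconds):
        yield start, next(holders)

for start, owner in schedule(["wifi_measure", "mesh_routing", "wimax_probe"]):
    print(f"t={start:>3}s  interface held by {owner}")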
