
PlanetLab: A Distributed Test Lab for Planetary Scale Network Services


Presentation Transcript


Overall Timeline (Q2 ‘02 through 2H ‘04)
• Phase 0: Phase 0 Design Spec; 100 Nodes online; PlanetLab Summit Aug-20; Mission Control System Rev 0
• Phase 1: 300 Nodes online; Silk: Safe Raw Sockets; thinix API in Linux; Native thinix Mgmt Services; Centralized Mgmt Service
• Phase 2: Broader consortium to take to 1000 Nodes; Secure thinix; Transition Ops Mgmt to the consortium; Continued development of self-serviced infrastructure

Target Phase 0 Sites
• 33 sites, ~3 nodes each
• Located with friends and family
• Running First Design code by 9/01/2002
• 10 of 100 nodes located in managed colo centers in Phase 0 (40 of 300 in Phase 1)
• Disciplined roll-out
[World map of target sites: Uppsala, UBC, Copenhagen, UW, Cambridge, WI, Chicago, UPenn, Amsterdam, Harvard, Utah, Intel Seattle, Karlsruhe, Tokyo, Intel MIT, Intel OR, Beijing, Barcelona, Intel Berkeley, Cornell, CMU, ICIR, Princeton, UCB, St. Louis, Columbia, Duke, UCSB, WashU, KY, UCLA, GIT, Rice, UCSD, UT, ISI, Melbourne]

People and Organization
• Project Owner: Timothy Roscoe (acting)
• Hans Mulder (sponsor)
• David Culler
• Larry Peterson
• Tom Anderson
• Milan Milenkovic
• Earl Hines
[Organization diagram: Core Engineering & Design Team, Distributed Design Team, Seed Research & Design Community, and the larger academic, industry, government, and non-profit community; participating sites include Utah, Pittsburgh, MIT, Berkeley, Princeton, IR, DSL, CMU, Washington, Rice, Duke, and UCSD]

PlanetLab: A Distributed Test Lab for Planetary Scale Network Services

Opportunities
• Emerging “Killer Apps”:
  • CDNs and P2P networks are the first examples
  • The application spreads itself over the Internet
• Vibrant research community:
  • Distributed Hash Tables: Chord, CAN, Tapestry, Pastry
  • Distributed Storage: OceanStore, Mnemosyne, PAST
• Lack of a viable testbed for these ideas

Synopsis
• Open, planetary-scale research and experimental facility
• Dual role: research testbed AND deployment platform
• >1000 viewpoints (compute nodes) on the Internet
• 10-100 resource-rich sites at network crossroads
• Analysis of infrastructure enhancements
• Experimentation with new applications and services
• Typical use involves a slice across a substantial subset of nodes

The Hard Problems
• “Slice-ability”: multiple experimental services sharing many nodes (sketched below)
• Security and Integrity
• Management
• Building Blocks and Primitives
• Instrumentation
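The slice abstraction above is the central primitive: one experimental service gets an isolated virtual machine and a resource share on each of many nodes, and the experimenter drives all of those instances together. Below is a minimal sketch of what starting a service across a slice could look like; the slice name, node list, ssh-based launcher, and service binary are illustrative assumptions, not actual PlanetLab interfaces.

    # Hypothetical sketch: start one service instance on every node of a slice.
    # Slice name, node names, and the service command are placeholders.
    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    SLICE = "princeton_cdn"                      # assumed slice (login) name
    NODES = [
        "node1.cs.princeton.edu",                # assumed subset of nodes
        "node2.cs.washington.edu",
        "node3.intel-research.net",
    ]

    def start_service(node):
        # Each slice maps to an isolated account/VM on the node; here we simply
        # ssh in under that identity and launch the experimental service.
        cmd = ["ssh", f"{SLICE}@{node}", "./my_service --port 8000"]
        result = subprocess.run(cmd, capture_output=True)
        return node, result.returncode

    with ThreadPoolExecutor(max_workers=len(NODES)) as pool:
        for node, status in pool.map(start_service, NODES):
            print(f"{node}: {'started' if status == 0 else 'failed'}")

The point of slice-ability is that many such services can run over the same physical nodes at once without interfering with one another.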
What will PlanetLab enable?
• The open infrastructure that enables the next generation of planetary-scale services to be invented
• Post-cluster, post-Yahoo, post-CDN, post-P2P, ...
• Potentially, the foundation on which the next Internet can emerge
• A different kind of testbed
• Focus and mobilize the network / systems research community
• Position Intel to lead in the emerging Internet

Project Strategy
• Minimal VM / authentication requirements
• Applications mostly self-sufficient
• Core team manages the platform

New Ideas / Opportunities
• Service-centric Virtualization
  • Re-emergence of Virtual Machine technology (VMware, ...)
  • Sandboxing to provide virtual servers (Ensim, UML, Vservers)
  • Network services require fundamentally simpler virtual machines, making them more scalable (more VMs per physical machine) and focused on service requirements
  • Instrumentation and Management become further virtualized “slices”
• Restricted API => Simple Machine Monitor
  • A very simple monitor pushes complexity up to where it can be managed
  • Ultimately, only a very tiny machine monitor can be made truly secure
  • The SILK effort (Princeton) captures the most valuable part of the ANets nodeOS in Linux kernel modules (safe raw sockets are sketched after the roadmap below)
• The API should self-virtualize: deploy the next PlanetLab within the current one
  • Host the V.1 planetSILK within a V.2 thinix VM
• Planned Obsolescence of Building Block Services
  • Community-driven service definition and development
  • Service components on a node run in just another VM
  • Team develops bootstrap ‘global’ services

Roadmap (2003, 2004, 2005)
• V.0 / V.1: seed 100 nodes; get the API & interfaces right; grow to 300 nodes
  • Linux variant with constrained API
  • Bootstrap services hosted in VMs
  • Outsource operational support
• V.2: get the underlying architecture and implementation right; grow to 1000+ nodes
  • Thin, event-driven monitor
  • Host new services directly
  • Host Phase 1 as a virtual OS
  • Replace bootstrap services
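The "Silk: Safe Raw Sockets" items above refer to giving an unprivileged slice direct send/receive access to its own packets while the node confines it to the ports it has been allocated. The sketch below imitates only that policy check, in user space and with an ordinary UDP socket; the real SILK work enforces this inside Linux kernel modules, and the allocated port set here is an assumption.

    # Rough illustration of the "safe raw socket" policy: a slice may only bind,
    # send, and receive on ports that have been allocated to it.
    import socket

    ALLOCATED_PORTS = {8000, 8001}      # ports assumed to be granted to this slice

    class SafeSliceSocket:
        def __init__(self, port):
            if port not in ALLOCATED_PORTS:
                raise PermissionError(f"port {port} is not allocated to this slice")
            self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            self.sock.bind(("", port))  # traffic for other slices' ports never arrives here

        def sendto(self, data, addr):
            # Outgoing datagrams carry our allocated source port, so one slice
            # cannot spoof traffic on behalf of another.
            return self.sock.sendto(data, addr)

        def recvfrom(self, size=1500):
            return self.sock.recvfrom(size)

    s = SafeSliceSocket(8000)
    s.sendto(b"hello", ("127.0.0.1", 8000))  # loop a test datagram back to ourselves
    print(s.recvfrom())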
