
A blueprint for introducing disruptive technology into the Internet



Presentation Transcript


  1. A blueprint for introducing disruptive technology into the Internet by L. Peterson (Princeton), T. Anderson (UW), D. Culler (UC Berkeley, Intel), T. Roscoe (Intel) HotNets-I (Infrastructure panel), 2002 Presenter: Shobana Padmanabhan Discussion leader: Michael Wilson Mar 3, 2005 CS7702 Research seminar

  2. Outline • Introduction • Architecture • PlanetLab • Conclusion

  3. Introduction Recently: • Widely-distributed applications make their own forwarding decisions • Network-embedded storage, peer-to-peer file sharing, content distribution networks, robust routing overlays, scalable object location, scalable event propagation • Network elements (layer-7 switches & transparent caches) do application-specific processing • But the Internet itself is ossified.. Figures courtesy planet-lab.org

  4. This paper proposes using overlay networks to introduce such disruptive technology into the Internet..

  5. Overlay network • A virtual network of nodes & logical links, built atop the existing network, to implement a new service • Provides an opportunity for innovation, as it requires no changes to the Internet • Eventually, the ‘weight’ of these overlays will cause a new architecture to emerge • Similar to the Internet itself (originally an overlay) driving the evolution of the underlying telephony network This paper speculates on what this new architecture will look like.. Figure courtesy planet-lab.org
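To ground the idea, here is a minimal sketch (all class and method names invented for illustration, not taken from the paper) of what a "logical link" means in practice: the overlay nodes themselves pick each next hop, one application-level hop at a time, while each hop would ride over an ordinary Internet path.

```python
# A toy overlay: logical links are entries in a per-node table, and
# "forwarding" is an application-level hop over the existing network.

class OverlayNode:
    def __init__(self, name):
        self.name = name
        self.links = {}  # neighbor name -> OverlayNode (a logical link)

    def connect(self, other):
        # A logical link; underneath it would be an ordinary TCP/UDP path.
        self.links[other.name] = other
        other.links[self.name] = self

    def forward(self, dest, payload, hops=()):
        # The overlay, not the IP routers, decides the next hop.
        if self.name == dest:
            return hops + (self.name,)
        for neighbor in self.links.values():
            if neighbor.name not in hops:       # avoid loops
                path = neighbor.forward(dest, payload, hops + (self.name,))
                if path:
                    return path
        return None

a, b, c = OverlayNode("A"), OverlayNode("B"), OverlayNode("C")
a.connect(b); b.connect(c)          # logical topology A-B-C
print(a.forward("C", "hello"))      # ('A', 'B', 'C'): application-level routing
```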

  6. Outline • Introduction • Architecture • PlanetLab • Conclusion

  7. Goals • Short-term: Support experimentation with new services • Testbed • Experiment at scale (1000s of sites) • Experiment under real-world conditions • diverse bandwidth/latency/loss • wide-spread geographic coverage • Potential for real workloads & users • Low cost of entry • Medium-term: Support continuously running services that serve clients • Deployment platform • supports seamless migration of an application from prototype to service, through design iterations, as it continues to evolve • Long-term: Microcosm for the next-generation Internet!

  8. Architecture Design principles • Slice-ability • Distributed control of resources • Unbundled (overlay) management • Application-centric interfaces

  9. Slice-ability • A slice is a horizontal cut of global resources across nodes • Processing, memory, storage.. • Each service runs in a slice • A service is a set of programs delivering some functionality • Node slicing must • be secure • use a resource control mechanism • be scalable Slice ~ a network of VMs Figure courtesy planet-lab.org
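A toy data model may help, assuming (purely for illustration) that a slice is just one VM's worth of capped resources on each of several nodes; none of these class names come from the paper.

```python
# A sketch of the slice abstraction: a "horizontal cut" is one VM per
# node, each with a small cap on the node's resources.

from dataclasses import dataclass, field

@dataclass
class VM:
    slice_name: str
    cpu_share: float   # fraction of the node's CPU
    memory_mb: int

@dataclass
class Node:
    hostname: str
    vms: list = field(default_factory=list)

    def create_vm(self, slice_name, cpu_share, memory_mb):
        vm = VM(slice_name, cpu_share, memory_mb)
        self.vms.append(vm)
        return vm

@dataclass
class Slice:
    name: str
    vms: dict = field(default_factory=dict)   # hostname -> VM

    def grow(self, node, cpu_share=0.1, memory_mb=128):
        # One VM per node: the slice cuts "horizontally" across nodes.
        self.vms[node.hostname] = node.create_vm(self.name, cpu_share, memory_mb)

nodes = [Node(f"node{i}.example.org") for i in range(3)]
s = Slice("codeen")
for n in nodes:
    s.grow(n)
print(sorted(s.vms))   # the service now spans all three nodes
```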

  10. Virtual Machine • A VM is the environment where a program implementing some aspect of the service runs • Each VM runs on a single node & uses some of the node’s resources • A VM must make it no harder to write programs, protect services from one another, share resources fairly & restrict traffic generation • Multiple VMs run on each node, with • a VMM (Virtual Machine Monitor) arbitrating the node’s resources

  11. Virtual Machine Monitor (VMM) • A kernel-mode driver running in the host operating system • Has access to the physical processor & manages resources between the host OS & the VMs • Prevents malicious or poorly designed applications running in a virtual server from requesting excessive hardware resources from the host OS • With virtualization, there are now two interfaces • an API for typical services & • a protection interface used by the VMM • The VMM used here is Linux VServer..
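As a rough illustration of the two interfaces, the sketch below assumes the protection interface amounts to admission control over per-VM resource caps; this is a toy model, not how Linux VServer actually works.

```python
# Toy VMM: the protection interface admits or refuses resource
# requests, and the service API only works for admitted VMs.

class VMM:
    def __init__(self, total_cpu=1.0):
        self.total_cpu = total_cpu
        self.allocated = {}          # vm_id -> admitted CPU share

    # --- protection interface (enforced by the VMM) ---
    def admit(self, vm_id, cpu_share):
        # Refuse requests that would oversubscribe the node, so one
        # greedy VM cannot starve the host OS or its neighbors.
        if sum(self.allocated.values()) + cpu_share > self.total_cpu:
            raise RuntimeError(f"{vm_id}: request exceeds node capacity")
        self.allocated[vm_id] = cpu_share

    # --- service API (used by programs inside a VM) ---
    def run(self, vm_id, task):
        if vm_id not in self.allocated:
            raise PermissionError(f"{vm_id}: no resources admitted")
        return task()

vmm = VMM()
vmm.admit("vm1", 0.5)
print(vmm.run("vm1", lambda: "service output"))
try:
    vmm.admit("vm2", 0.6)            # would oversubscribe the node
except RuntimeError as e:
    print(e)
```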

  12. A node.. Figure courtesy planet-lab.org

  13. Across nodes (i.e. across the network) • Node manager (one per node; part of the VMM) • When service managers present valid tickets • Allocates resources, creates VMs & returns a lease • Resource monitor (one per node) • Tracks the node’s available resources (using the VM interface) • Tells agents about available resources • Agents (centralized) • Collect resource monitor reports • Advertise tickets • Issue tickets to resource brokers • Resource broker (per service) • Obtains tickets from agents on behalf of service managers • Service managers (per service) • Obtain tickets from a broker • Redeem tickets with node managers to create VMs • Start the service (see the sketch after slide 23)

  14.-23. Obtaining a Slice (animation sequence) • Resource monitors report each node’s available resources to the agent • The agent issues tickets to the broker • The service manager obtains tickets from the broker • The service manager redeems the tickets with the node managers, which create the VMs that make up the slice Courtesy Jason Waddle’s presentation material
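Putting slides 13-23 together, here is a minimal sketch of the ticket flow; the class names, the message shapes & the use of a CPU share as the only resource are all invented for illustration.

```python
# Toy ticket flow: monitors report capacity to an agent, the agent
# issues tickets (here, on the broker's behalf), and node managers
# redeem tickets for VM leases.

import itertools

class Agent:
    def __init__(self):
        self.available = {}            # hostname -> free CPU share
        self._serial = itertools.count()

    def report(self, hostname, free_cpu):       # from a resource monitor
        self.available[hostname] = free_cpu

    def issue_tickets(self, cpu_share, n):      # to a resource broker
        # A real agent would also debit the advertised capacity.
        nodes = [h for h, free in self.available.items() if free >= cpu_share]
        return [{"id": next(self._serial), "node": h, "cpu": cpu_share}
                for h in nodes[:n]]

class NodeManager:
    def __init__(self, hostname):
        self.hostname = hostname

    def redeem(self, ticket):                   # from a service manager
        assert ticket["node"] == self.hostname, "ticket for another node"
        return {"lease": f"vm-on-{self.hostname}", "cpu": ticket["cpu"]}

# Wire it together: one agent, three nodes, one service.
agent = Agent()
managers = {}
for i in range(3):
    host = f"node{i}"
    agent.report(host, free_cpu=0.8)            # resource monitors report in
    managers[host] = NodeManager(host)

tickets = agent.issue_tickets(cpu_share=0.2, n=2)           # broker obtains tickets
leases = [managers[t["node"]].redeem(t) for t in tickets]   # service manager redeems
print(leases)   # two VMs, one per node -- a two-node slice
```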

  24. Architecture Design principles • Slice-ability • Distributed control of resources • Unbundled (overlay) management • Application-centric interfaces

  25. Distributed control of resources • Because of the testbed’s dual role, there are two types of users • Researchers • Likely to dictate how services are deployed & node properties • Node owners/clients • Likely to restrict what services run on their nodes & how resources are allocated to them • Control is decentralized between the two • A central authority provides credentials to service developers • Each node independently grants or denies a request, based on local policy
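One way to picture this split, with the signature scheme faked by an HMAC and every name invented for illustration: the central authority only vouches for who a researcher is, while each node's owner decides locally what that researcher may run.

```python
# Toy decentralized control: centrally issued credentials, locally
# enforced policy.

import hmac, hashlib

AUTHORITY_KEY = b"shared-with-all-nodes"   # stands in for a real PKI

def issue_credential(researcher):          # central authority
    tag = hmac.new(AUTHORITY_KEY, researcher.encode(), hashlib.sha256).hexdigest()
    return {"researcher": researcher, "tag": tag}

class Node:
    def __init__(self, hostname, banned_services=()):
        self.hostname = hostname
        self.banned = set(banned_services)   # the owner's local policy

    def grant(self, credential, service):
        expected = hmac.new(AUTHORITY_KEY, credential["researcher"].encode(),
                            hashlib.sha256).hexdigest()
        if not hmac.compare_digest(credential["tag"], expected):
            return False                     # unknown researcher
        return service not in self.banned    # owner's decision, not the authority's

cred = issue_credential("alice")
n = Node("node0.example.org", banned_services={"packet-capture"})
print(n.grant(cred, "cdn"))             # True: valid credential, allowed service
print(n.grant(cred, "packet-capture"))  # False: local policy denies it
```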

  26. Architecture Design principles • Slice-ability • Distributed control of resources • Unbundled (overlay) management • Application-centric interfaces

  27. Unbundled (overlay) management • Independent sub-services, each running in its own slice • discover the set of nodes in the overlay & learn their capabilities • monitor the health & instrument the behavior of these nodes • establish a default topology • manage user accounts & credentials • keep the software running on each node up-to-date & • extract tracing & debugging info from a running node • Some are part of the core system (user accounts..) • Single, agreed-upon version • Others can have alternatives, with a default, replaceable over time • Unbundling requires appropriate interfaces, e.g. hooks in the VMM interface to get the status of each node’s resources • Sub-services may depend on each other, e.g. a resource discovery service may depend on the node monitor service
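The dependency point can be made concrete with a small interface sketch; assuming (for illustration only) a monitor interface that a discovery sub-service programs against, any conforming implementation can replace the default without touching its dependents.

```python
# Toy unbundled management: sub-services depend on interfaces, and the
# default implementation behind an interface is replaceable.

from typing import Protocol

class NodeMonitor(Protocol):
    def healthy_nodes(self) -> list: ...

class DefaultMonitor:
    def __init__(self, nodes):
        self.nodes = nodes
    def healthy_nodes(self):
        return list(self.nodes)             # trivially optimistic default

class DiscoveryService:
    # Depends on *some* monitor, not a particular one: slide 27's point
    # about sub-services depending on each other through interfaces.
    def __init__(self, monitor: NodeMonitor):
        self.monitor = monitor
    def discover(self):
        return self.monitor.healthy_nodes()

discovery = DiscoveryService(DefaultMonitor(["node0", "node1"]))
print(discovery.discover())   # swap in a smarter monitor later, core unchanged
```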

  28. Architecture Design principles • Slice-ability • Distributed control of resources • Unbundled (overlay) management • Application-centric interfaces

  29. Application-centric interfaces • Promote application development by letting applications run continuously (deployment platform) • Problem: it is difficult to simultaneously create a testbed & use it for writing applications • The API should remain largely unchanged while the underlying implementation changes • If an alternative API emerges, new applications must be written to it, but the original should be maintained for legacy applications
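A minimal sketch of that stability contract, with hypothetical function names: applications are written against one call whose implementation is swapped across design iterations.

```python
# Toy stable API: the call applications use never changes, while the
# implementation behind it evolves from prototype to deployed service.

def lookup_v1(key):
    return {"where": "central-server", "key": key}          # prototype

def lookup_v2(key):
    return {"where": f"node{hash(key) % 100}", "key": key}  # later: distributed

_impl = lookup_v1

def lookup(key):
    # The only call applications are written against.
    return _impl(key)

print(lookup("object-42"))   # applications keep working...
_impl = lookup_v2            # ...while the platform evolves underneath
print(lookup("object-42"))
```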

  30. Outline • Introduction • Architecture • PlanetLab • Conclusion

  31. PlanetLab Phases of evolution • Seed phase • 100 centrally managed machines • Pure testbed (no client workload) • Researchers as clients • Next, scale the testbed to 1000 sites • Continuously running services • Finally, attract real clients • Non-researchers as clients

  32. PlanetLab today Services • Berkeley’s OceanStore – RAID distributed over the Internet • Intel’s Netbait – Detects & tracks worms globally • UW’s ScriptRoute – Internet measurement tool • Princeton’s CoDeeN – Open content distribution network Courtesy planet-lab.org

  33. Related work • Internet2 (Abilene backbone) • Closed commercial routers -> no new functionality in the middle of the network • Emulab • Not a deployment platform • Grid (Globus) • Glues together a modest number of large computing assets with high-bandwidth pipes, but • PlanetLab emphasizes scaling lower-bandwidth applications across a wider collection of nodes • ABONE (from active networks) • Focuses on supporting extensibility of the forwarding function, but • PlanetLab is more inclusive, i.e. supports apps throughout the network, including those involving a storage component • XBONE • Supports IP-in-IP tunneling, with a GUI for specific overlay configurations • Alternative: package as a desktop application, e.g. Napster, KaZaA • Needs to be immediately & widely popular • Difficult to modify the system once deployed, unless there are compelling applications • Not secure • KaZaA exposed all files on the local system

  34. Conclusion • An open, global network testbed for pioneering novel planetary-scale services (deployment) • A model for introducing innovations (a service-oriented network architecture) into the Internet through overlays • Whether a single winner emerges & gets subsumed into the Internet, or services continue to define their own routing, remains a subject of speculation..

  35. References • B. Chun et al., “PlanetLab: An Overlay Testbed for Broad-Coverage Services,” Jan 2003

  36. Backup slides

  37. Overlay construction problems • Dynamic changes in group membership – Members may join and leave dynamically – Members may die • Dynamic changes in network conditions and topology – Delay between members may vary over time due to congestion or routing changes • Knowledge of network conditions is member-specific – Each member must determine network conditions for itself
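A sketch of the last point, with simulated delays standing in for real measurements and every name invented for illustration: each member probes the current group for itself and re-probes as membership changes.

```python
# Toy per-member measurement: knowledge of network conditions is
# member-specific, so each member keeps its own delay table.

import random

class Member:
    def __init__(self, name):
        self.name = name
        self.rtts = {}                      # peer name -> latest measured delay

    def probe(self, peers):
        # Each member measures its own delay to every *current* peer,
        # discarding entries for members that have left.
        self.rtts = {p.name: random.uniform(10, 200)   # fake RTT in ms
                     for p in peers if p is not self}

members = [Member(f"m{i}") for i in range(4)]
members[0].probe(members)
members.pop()                               # a member leaves dynamically
members[0].probe(members)                   # re-measure against current group
nearest = min(members[0].rtts, key=members[0].rtts.get)
print(nearest, members[0].rtts[nearest])    # this member's own best neighbor
```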

  38. Testbed’s mode of operation as deployment platform
