
Tapestry: A Resilient Global-Scale Overlay for Service Deployment




  1. Tapestry: A Resilient Global-Scale Overlay for Service Deployment Ben Y. Zhao, Ling Huang, Jeremy Stribling, Sean C. Rhea, Anthony D. Joseph Presented by Yong Song ( ysong@sslab.kaist.ac.kr )

  2. Contents • Introduction • Tapestry • Routing and Object-Locating Scheme • Node Insertion • Node Deletion • Architecture of infrastructure • Experimental Result • Conclusion

  3. Introduction • Peer-to-peer system • A system for sharing resources among participating nodes • Centralized system • Central server (e.g., Napster) • Decentralized system • No central server (e.g., Gnutella, CAN, Chord, Pastry) • Each node can act as a server, a client, and a router • OceanStore project • A global persistent data store designed to scale to billions of users • Provides a consistent, highly available, and durable storage utility atop an infrastructure of untrusted servers • Needs locality-aware routing • Tapestry is well suited to this requirement

  4. Tapestry • A peer-to-peer overlay routing infrastructure offering efficient, scalable, location-independent routing of messages directly to nearby copies of an object or service using only localized resources • An extensible infrastructure that provides decentralized object location and routing • Tapestry exploits locality in routing messages • Provides an API to P2P application developers • Maintains network integrity under dynamic network conditions

  5. Tapestry – DOLR API • PublishObject(OG,Aid) : Publishes Object O on the local node • UnpublishObject(OG,Aid) : Removes location mappings for O • RouteToObject(OG,Aid) : Routes message to location of O with GUID • RouteToNode(N,Aid,Exact) : Routes messages to application Aid on node N Nid : NodeID (uniformly assigned at random from a large identifier space) OG : Object GUID (selected from the same identifier space) Aid : application-specific identifier * Multiple applications can share a single large Tapestry Overlay Network
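The four DOLR calls above can be mimicked in a single-process toy to make their contract concrete. This is an illustrative sketch only: the class and method names are made up, and a local dictionary stands in for the distributed overlay that real Tapestry routes through.

```python
# Toy in-memory stand-in for the DOLR interface on slide 5.
# All names here are illustrative; real Tapestry resolves these calls
# by prefix routing across the overlay, not by a local lookup.

class ToyDOLR:
    def __init__(self):
        self.locations = {}   # object GUID -> set of publishing node IDs

    def publish_object(self, og, node_id):
        # PublishObject(OG, Aid): record that node_id hosts object og
        self.locations.setdefault(og, set()).add(node_id)

    def unpublish_object(self, og, node_id):
        # UnpublishObject(OG, Aid): remove the location mapping for og
        self.locations.get(og, set()).discard(node_id)

    def route_to_object(self, og, msg):
        # RouteToObject(OG, Aid): deliver msg to some node hosting og
        nodes = self.locations.get(og)
        if not nodes:
            return None
        target = min(nodes)   # stand-in for "nearest replica" selection
        return (target, msg)

    def route_to_node(self, node_id, msg):
        # RouteToNode(N, Aid, Exact): deliver msg to node_id directly
        return (node_id, msg)
```

Note how multiple applications could share one such instance, matching the slide's point that many applications can share a single Tapestry overlay.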

  6. Tapestry – a Node • Maintains a routing table consisting of the nodeIDs and IP addresses of neighbors of the local node • Forwards each message to a node whose nodeID shares a longer matching prefix with the object's identifier • NodeIDs are 40 hex digits produced by a hashing algorithm [Figure: node state – routing table levels L1–L4, object location pointers <ObjID, NodeID>, object store, and back pointers; routing examples for objects 435A and 1A3B]

  7. Tapestry – Mesh and Routing • Mesh & Routing example • From 5230 to 42AD across Tapestry

  8. Tapestry – Publication and Location • Each node along the publication path stores a pointer mapping <OG,S>, from the object's GUID to its server node
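The pointer mechanism above can be sketched as follows. A publish walks from the server toward the object's root, leaving a <OG, S> pointer at every hop; a later query routes the same way and short-cuts to the server at the first node that already holds a pointer. The paths and node IDs below are illustrative, not taken from the paper; in Tapestry both paths would be produced by prefix routing toward the object's GUID.

```python
# Sketch: publishing object O_G from server S leaves a pointer <O_G, S>
# at every node on the path from S to O_G's root. Node IDs are made up.

pointers = {}   # node ID -> {object GUID: server node ID}

def publish(og, server, publish_path):
    for node in publish_path:
        pointers.setdefault(node, {})[og] = server

def locate(og, query_path):
    # A query routes toward the root and stops at the first node that
    # already holds a pointer for og, redirecting to the server.
    for node in query_path:
        if og in pointers.get(node, {}):
            return pointers[node][og]
    return None

publish("4378", "4228", ["4228", "4361", "437A", "4378"])
print(locate("4378", ["4B4F", "4361", "437A", "4378"]))   # -> 4228
```

Because the query path and the publish path converge as they approach the root, nearby queries meet a pointer early, which is what gives Tapestry its locality.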

  9. Tapestry – Node Insertion • Inserting a new node N into the Tapestry network • Notify need-to-know nodes of N; N fills null entries in their routing tables • Move locally rooted object references to N • Acknowledged multicast (rooted at node 42A3 in the figure) • Nodes add N to their routing tables • Transfer references of locally rooted pointers • Construct a locally optimal routing table for N • Notify nodes near N for optimization

  10. Tapestry – Deletion • Voluntary node deletion • Using back pointers, the nodes that point to N update their routing tables with a replacement node • The replacement node republishes the affected objects • Node N routes references to its locally rooted objects to their new roots • Involuntary node deletion • The Internet is a failure-prone network • Periodic beacons detect outgoing link and node failures • Object references are republished

  11. Tapestry Architecture • Layered over TCP/IP and UDP/IP transports • Router: examines the destination GUID of each message and determines its next hop from the routing table and local object pointers • API calls: DELIVER (Gid, Aid, Msg), FORWARD (Gid, Aid, Msg), ROUTE (Gid, Aid, Msg, NextHopNode) • Continuous link monitoring: fault detection, latency and loss-rate estimation • As neighbors arrive or depart, object pointers are added or removed and routing tables are updated

  12. Experiment • Environment • Local cluster, PlanetLab, simulator (SOSS) • Micro-benchmarks on the local cluster • Message-processing overhead • Proportional to processor speed – can benefit from Moore's Law • Message throughput • Optimal message size is 4 KB • Implemented in Java

  13. Experiment Result (1) • Efficiency (routing overhead) • RDP • The ratio of the distance traveled via Tapestry location and routing to the distance traveled via direct routing to the object
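The RDP metric defined above is a simple ratio; the numbers in the example below are hypothetical, chosen only to show the computation.

```python
# RDP as defined on this slide: overlay distance divided by the
# direct network distance to the object. An RDP of 1.0 means the
# overlay path is as short as the direct path.

def rdp(tapestry_distance, direct_distance):
    return tapestry_distance / direct_distance

# Hypothetical numbers: a lookup traveling 150 ms via the overlay
# to an object 100 ms away has RDP 1.5.
print(rdp(150.0, 100.0))   # -> 1.5
```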

  14. Experiment Result (2) • Object location optimization with additional pointers • k backup nodes of the next hop on the publish path • The l nearest neighbors of the current hop • Applied along the first m hops of the path • 1,092 nodes, 25 objects

  15. Experiment Result (3) • Scalability • Repeated insertion and deletion of a single node, 20 times • Integration latency and bandwidth

  16. Experiment Result (4) • Resilience against network dynamics (measurements repeated 20 times)

  17. Experiment Result (5) • Resilience against network dynamics under churn • Churn 1: Poisson arrival process with an average interarrival time of 20 s and an average node lifetime of 4 min • Churn 2: average interarrival time of 10 s and average lifetime of 2 min
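The two churn workloads above can be generated with exponential interarrival times (a Poisson arrival process) and exponentially distributed session lifetimes. This is a sketch of how such a workload might be produced; the function name, event count, and fixed seed are illustrative, and the paper does not specify the lifetime distribution in this excerpt.

```python
import random

# Sketch of the churn workloads: Poisson node arrivals (exponential
# interarrival times) with exponentially distributed session lifetimes.
# Assumed detail: exponential lifetimes; the slide gives only averages.

def churn_events(n, mean_interarrival, mean_lifetime, seed=1):
    rng = random.Random(seed)
    t, events = 0.0, []
    for _ in range(n):
        t += rng.expovariate(1.0 / mean_interarrival)   # next arrival
        life = rng.expovariate(1.0 / mean_lifetime)     # session length
        events.append((t, t + life))                    # (join, leave)
    return events

# Churn 1: interarrival 20 s, lifetime 4 min; Churn 2: 10 s, 2 min.
churn1 = churn_events(1000, 20.0, 240.0)
churn2 = churn_events(1000, 10.0, 120.0)
```

Feeding such join/leave traces to the overlay is one way to reproduce the "network dynamics" these result slides measure resilience against.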

  18. Conclusion • Tapestry • An overlay routing network for decentralized P2P systems • Provides an API infrastructure for application developers • Provides efficient, scalable routing of messages directly to nodes in a large, sparse address space • Resilient under dynamic network conditions

  19. Thanks for your attention! Any questions?

  20. Router
