
*Towards A Common API for Structured Peer-to-Peer Overlays


Presentation Transcript


  1. *Towards A Common API for Structured Peer-to-Peer Overlays
     Frank Dabek, Ben Y. Zhao, Peter Druschel, John Kubiatowicz, Ion Stoica
     MIT, U. C. Berkeley, and Rice
     * Conducted under the IRIS project (NSF)

  2. State of the Art
     • Lots and lots of peer-to-peer applications
       • Decentralized file systems, archival backup
       • Group communication / coordination
       • Routing layers for anonymity, attack resilience
       • Scalable content distribution
     • Built on scalable, self-organizing overlays
       • E.g. CAN, Chord, Pastry, Tapestry, Kademlia, etc.
     • Semantic differences
       • Store/get data, locate objects, multicast / anycast
     • How do these functional layers relate?
     • What is the smallest common denominator?

  3. Some Abstractions
     • Distributed Hash Tables (DHT)
       • Simple store and retrieve of values with a key
       • Values can be of any type
     • Decentralized Object Location and Routing (DOLR)
       • Decentralized directory service for endpoints/objects
       • Route messages to the nearest available endpoint
     • Multicast / Anycast (CAST)
       • Scalable group communication
       • Decentralized membership management

  4. Tier 1 Interfaces (figure)

  5. Structured P2P Overlays (figure): Tier 2 applications (CFS, PAST, i3, SplitStream, Bayeux, OceanStore) are layered over Tier 1 abstractions (DHT, CAST, DOLR), which sit on the Tier 0 Key-based Routing layer.

  6. The Common Denominator
     • Key-based Routing layer (Tier 0)
       • Large sparse ID space N (160 bits: 0 – 2^160, represented in base b)
       • Nodes in the overlay network have nodeIds ∈ N
       • Given k ∈ N, the overlay deterministically maps k to its root node (a live node in the network)
     • Goal: standardize the API at this layer
     • Main routing call
       • route (key, msg, [node])
       • Route the message to the node currently responsible for key
     • Supplementary calls
       • Flexible upcall interface for customized routing
       • Accessing and managing the ID space
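     As a rough sketch, the routing call and its relationship to the upcalls could look as follows; KBRNode, next_hop(), and the application attribute are illustrative names under assumed semantics, not part of the proposed API:

        # Sketch of the Tier 0 routing call. Only route(key, msg, [node]) comes from
        # the proposed API; class and helper names are assumed for illustration.
        class KBRNode:
            def __init__(self, node_id, application):
                self.node_id = node_id          # nodeId drawn from the sparse ID space N
                self.application = application  # Tier 1/2 code that receives the upcalls

            def route(self, key, msg, hint=None):
                """Forward msg one hop toward the root node for key; hint, if given,
                overrides the protocol's choice of first hop."""
                next_hop = hint if hint is not None else self.next_hop(key)
                if next_hop is None:
                    # This node is the root for key: hand the message to the application.
                    self.application.deliver(key, msg)
                else:
                    # Give the application a chance to modify key/msg or the next hop.
                    key, msg, next_hop = self.application.forward(key, msg, next_hop)
                    if next_hop is not None:
                        next_hop.route(key, msg)    # continue toward the root

            def next_hop(self, key):
                """Protocol-specific lookup (Chord, Pastry, Tapestry, ...); returns the
                next node handle toward key's root, or None if this node is the root."""
                raise NotImplementedError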

  7. Flexible Routing via Upcalls
     (figure: a message passes from application to application through the KBR layer, triggering forward and deliver upcalls)
     • Deliver(key, msg)
       • Delivers a message to the application at the destination
     • Forward(&key, &msg, &nextHopNode)
       • Synchronous upcall with the normal next-hop node
       • Applications can override messages
     • Update(node, boolean joined)
       • Upcall invoked to inform the application of a node joining or leaving the local node's neighborSet
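     The shape of this upcall interface, as a hypothetical do-nothing application might implement it (Python used only for illustration):

        # Hypothetical application showing the shape of the three upcalls.
        class Application:
            def deliver(self, key, msg):
                """Invoked on key's root node when a message arrives."""
                print("delivered", key, msg)

            def forward(self, key, msg, next_hop):
                """Invoked synchronously at each node the message traverses; the
                application may rewrite key/msg, change the next hop, or return
                next_hop=None to stop routing."""
                return key, msg, next_hop

            def update(self, node, joined):
                """Invoked when node joins (joined=True) or leaves (joined=False)
                the local node's neighbor set."""
                pass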

  8. KBR API (managing the ID space)
     • Expose the local routing table
       • nextHopSet = local_lookup (key, num, safe)
     • Query the ID space
       • nodehandle[ ] = neighborSet (max_rank)
       • nodehandle[ ] = replicaSet (key, num)
       • boolean = range (node, rank, lkey, rkey)
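     For example, a storage layer might use these calls roughly as sketched below to decide where replicas belong and to pick several candidate next hops; the kbr object and function names are assumptions for illustration, not part of the proposal:

        # Illustrative uses of the ID-space calls; kbr is assumed to implement the API above.
        def should_store_replica(kbr, local_node, key, num_replicas):
            """True if the overlay would place one of key's replicas on local_node."""
            return local_node in kbr.replica_set(key, num_replicas)

        def candidate_next_hops(kbr, key, fanout=3):
            """Use the exposed routing table to pick several next hops toward key,
            e.g. for redundant routing (safe=True requests the vetted table entries)."""
            return kbr.local_lookup(key, fanout, safe=True)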

  9. Caching DHT Illustrated (figure)

  10. Caching DHT Implementation
     • Interface
       • put (key, value)
       • value = get (key)
     • Implementation (source S, client C, root R)
       • Put: route(key, [PUT, value, S])
         • Forward upcall: store value
         • Deliver upcall: store value
       • Get: route(key, [GET, C])
         • Forward upcall: if cached, route(C, [value]), FIN
         • Deliver upcall: if found, route(C, [value])
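     The same logic as a code sketch layered on route() and the upcalls from the earlier slides; the CachingDHT class is illustrative, and the message tuples loosely follow the slide's [PUT, value, S] / [GET, C] formats:

        # Sketch of the caching DHT expressed through the upcall interface (assumed names).
        class CachingDHT:
            def __init__(self, kbr):
                self.kbr = kbr      # underlying KBR layer providing route()
                self.store = {}     # local cache / root storage: key -> value

            # Tier 1 interface
            def put(self, key, value, source):
                self.kbr.route(key, ("PUT", value, source))

            def get(self, key, client):
                self.kbr.route(key, ("GET", client))

            # KBR upcalls
            def forward(self, key, msg, next_hop):
                if msg[0] == "PUT":                       # cache the value along the path
                    self.store[key] = msg[1]
                elif msg[0] == "GET" and key in self.store:
                    self.kbr.route(msg[1], ("VALUE", self.store[key]))
                    return key, msg, None                 # answered from cache: stop (FIN)
                return key, msg, next_hop

            def deliver(self, key, msg):
                if msg[0] == "PUT":                       # root stores the value
                    self.store[key] = msg[1]
                elif msg[0] == "GET" and key in self.store:
                    self.kbr.route(msg[1], ("VALUE", self.store[key]))

            def update(self, node, joined):
                pass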

  11. Ongoing Work
     • What's next
       • Better understanding of DOLR vs. CAST
         • Decentralized endpoint management
         • Policies in placement of indirection points
       • APIs and semantics for Tier 1 (DHT/DOLR/CAST)
       • KBR API implementation in current protocols
     • See paper for additional details
       • Implementation of Tier 1 interfaces on KBR
       • KBR API support on selected P2P systems

  12. Backup Slides Follow…

  13. Our Goals
     • Protocol comparison
       • Identify basic commonalities between systems
       • Isolate and clarify interfaces by functionality
     • Towards a common API
       • Easily supportable by old and new protocols
       • Enable application portability between protocols
       • Enable common benchmarks
       • Provide a framework for reusable components

  14. Key-based Routing API
     • Invoking routing functionality
       • route (key, msg, [node])
     • Accessing the routing layer
       • nextHopSet = local_lookup(key, num, safe)
       • nodehandle[ ] = neighborSet(max_rank)
       • nodehandle[ ] = replicaSet(key, num)
       • boolean = range(node, rank, lkey, rkey)
     • Flexible upcalls:
       • Deliver (key, msg)
       • Forward (&key, &msg, &nextHopNode)
       • Update (node, boolean joined)

  15. Observations
     • Compare and contrast
       • Issues: locality, naming, caching, replication, …
       • Common: general algorithmic approach
       • Contrast: instantiation, policy
     • Revising abstraction definitions
       • Pure abstraction vs. instantiated prototype
         • E.g. DHT abstraction vs. DHash
       • Abstractions “colored” by their initial application
         • E.g. Object location → DOLR → Endpoint Location and Routing
     • Ongoing understanding of Tier 1 interfaces

  16. Flexible API for Routing
     • Goal
       • Consistent API for leveraging the routing mesh
       • Flexible enough to build higher abstractions
       • Openness promotes new abstractions
       • Allow competitive selection to determine the right abstractions
     • Three main components
       • Invoking routing functionality
       • Accessing namespace mapping properties
       • Open, flexible upcall interface

  17. DOLR on Routing API (figure)

  18. DOLR Implementation
     • Endpoint E, client C, key K
     • Publish: route(objectId, [“publish”, K, E])
       • Forward upcall: store [K, E] in local storage
     • sendToObj: route(nodeId, [n, msg])
       • Forward upcall:
         • e = lookup(K) in local storage
         • For i ≤ min(n, |e|): route(e_i, msg)
         • If n > |e|: route(nodeId, [n − |e|, msg])
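     A sketch of this publish / sendToObj logic expressed through the upcall interface; the DOLR class, its pointer table, and the message tuples are illustrative assumptions, not the paper's reference code:

        # Sketch of the DOLR logic from this slide on top of the KBR upcalls (assumed names).
        class DOLR:
            def __init__(self, kbr):
                self.kbr = kbr
                self.pointers = {}   # object key K -> endpoints E recorded along publish paths

            def publish(self, key, endpoint):
                self.kbr.route(key, ("PUBLISH", key, endpoint))

            def send_to_obj(self, key, n, msg):
                self.kbr.route(key, ("SEND", key, n, msg))

            def forward(self, key, msg, next_hop):
                if msg[0] == "PUBLISH":
                    # Leave a location pointer (K -> E) at every node on the publish path.
                    self.pointers.setdefault(msg[1], []).append(msg[2])
                elif msg[0] == "SEND":
                    _, k, n, payload = msg
                    endpoints = self.pointers.get(k, [])[:n]
                    for e in endpoints:
                        self.kbr.route(e, payload)          # deliver toward each known endpoint
                    if n > len(endpoints):
                        # Not enough local pointers: forward the remainder toward the root.
                        return key, ("SEND", k, n - len(endpoints), payload), next_hop
                    return key, msg, None                   # all n copies dispatched: stop here
                return key, msg, next_hop

            def deliver(self, key, msg):
                self.forward(key, msg, None)                # the root handles messages the same way

            def update(self, node, joined):
                pass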

  19. Storage API: Overview
     • linsert(key, value);
     • value = lget(key);

  20. Storage API
     • linsert(key, value): stores the tuple <key, value> in local storage. If a tuple with key already exists, it is replaced. The insertion is atomic with respect to failures of the local node.
     • value = lget(key): retrieves the value associated with key from local storage. Returns null if no tuple with key exists.
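     A minimal in-memory sketch of the two local-storage calls; a real implementation would also make linsert atomic with respect to local-node failures (e.g. via an atomic rename or a write-ahead log), which this version does not attempt:

        # Minimal sketch of the local storage calls (in-memory only; crash atomicity omitted).
        class LocalStore:
            def __init__(self):
                self.tuples = {}

            def linsert(self, key, value):
                """Store <key, value> locally, replacing any existing tuple with key."""
                self.tuples[key] = value

            def lget(self, key):
                """Return the value stored under key, or None if no such tuple exists."""
                return self.tuples.get(key)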

  21. Basic DHT API: Overview
     • insert(key, value, lease);
     • value = get(key);
     • release(key);
     • Upcalls:
       • insertData(key, value, lease);

  22. Basic DHT API
     • insert(key, value, lease): inserts the tuple <key, value> into the DHT. The tuple is guaranteed to be stored in the DHT only for “lease” time. “value” also includes the type of operation to be performed on insertion. Default operation types include:
       • REPLACE: replace the value associated with the same key
       • APPEND: append the value to the existing key
       • UPCALL: generate an upcall to the application before inserting
       • …

  23. Basic DHT API
     • value = get(key): retrieves the value associated with key. Returns null if no tuple with key exists in the DHT.

  24. Basic DHT API
     • release(key): releases any tuples with key from the DHT. After this operation completes, tuples with key are no longer guaranteed to exist in the DHT.
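     A minimal sketch of how a key's root node might honour insert/get/release with leases, using local wall-clock time for expiry (the open-questions slide below flags exactly this dependence on time sync) and omitting the UPCALL operation type; class and helper names are illustrative:

        # Sketch of a root node's view of the basic DHT API (assumed names and data layout).
        import time

        class BasicDHTRoot:
            def __init__(self):
                self.tuples = {}    # key -> (value, expiry_time)

            def insert(self, key, value, lease, op="REPLACE"):
                expiry = time.time() + lease
                if op == "APPEND" and key in self.tuples and self._alive(key):
                    old, _ = self.tuples[key]
                    value = old + value             # append to the existing value
                self.tuples[key] = (value, expiry)  # REPLACE (and first insert) fall through

            def get(self, key):
                return self.tuples[key][0] if self._alive(key) else None

            def release(self, key):
                # After release, tuples with key are no longer guaranteed to exist.
                self.tuples.pop(key, None)

            def _alive(self, key):
                entry = self.tuples.get(key)
                if entry is None or entry[1] < time.time():
                    self.tuples.pop(key, None)      # lease expired: drop the tuple
                    return False
                return True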

  25. Basic DHT API: Open Questions
     • Semantics?
     • Verification / access control / multiple DHTs?
     • Caching?
     • Replication?
     • Should we have leases? It makes us dependent on secure time sync.

  26. Replicating DHT API
     • insert(key, value, numReplicas): adds a numReplicas argument to insert. Ensures resilience of the tuple to up to numReplicas − 1 “simultaneous” node failures.
     • Open questions:
       • Consistency
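     One plausible (not prescribed) realization on the KBR layer: the key's root pushes copies of the inserted tuple to the nodes returned by replicaSet. The message format, the node_id attribute, and the hint argument are assumptions:

        # Sketch only: replicate a stored tuple onto the overlay's replica set for the key.
        def replicate_on_insert(kbr, key, value, num_replicas):
            for node in kbr.replica_set(key, num_replicas):
                # Route directly to each replica node, using it as the routing hint.
                kbr.route(node.node_id, ("STORE_REPLICA", key, value), hint=node)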

  27. Caching DHT API
     • Same as the basic DHT API. The implementation uses dynamic caching to balance query load.

  28. Resilient DHT API
     • Same as the replicating DHT API. The implementation uses dynamic caching to balance query load.

  29. Publish API: Overview
     • Publish(key, object);
     • object = Lookup(key);
     • Remove(key, object);

  30. Publish API
     • Publish(key, object): ensures that the locally stored object can be located using the key. Multiple instances of the object may be published under the same key from different locations.
     • object = Lookup(key): locates the nearest instance of the object associated with key. Returns null if no such object exists.

  31. Publish API
     • Remove(key, object): after this operation completes, the local instance of object can no longer be located using key.
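     A short usage sketch, assuming a dolr object that exposes the three calls above (lower-cased here) and using strings to stand in for locally stored objects; in practice the two instances would be published from different nodes:

        # Usage sketch; 'dolr' is assumed to expose the Publish API above.
        def example(dolr):
            dolr.publish("doc:readme", "instance stored at node A")   # two instances published
            dolr.publish("doc:readme", "instance stored at node B")   # under the same key
            nearest = dolr.lookup("doc:readme")        # returns the nearest published instance
            dolr.remove("doc:readme", "instance stored at node A")    # A's copy no longer locatable
            return nearest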
