
Chord: A Scalable Peer-to-peer Lookup Service for Internet Applications



Presentation Transcript


  1. Chord: A Scalable Peer-to-peer Lookup Service for Internet Applications Robert Morris Ion Stoica, David Karger, M. Frans Kaashoek, Hari Balakrishnan MIT and Berkeley

  2. A peer-to-peer storage problem • 1000 scattered music enthusiasts • Willing to store and serve replicas • How do you find the data?

  3. The lookup problem • [Diagram: a publisher inserts (key=“title”, value=MP3 data…) at one of the nodes N1–N6 scattered across the Internet; a client issues Lookup(“title”) and must somehow find the node holding the value.]

  4. Centralized lookup (Napster) • [Diagram: the publisher registers its location with a central database via SetLoc(“title”, N4); the client asks the database with Lookup(“title”) and is directed to N4, which holds key=“title”, value=MP3 data…] • Simple, but O(N) state and a single point of failure

  5. Flooded queries (Gnutella) • [Diagram: the client’s Lookup(“title”) is flooded from neighbor to neighbor until it reaches the publisher’s node, which holds key=“title”, value=MP3 data…] • Robust, but worst case O(N) messages per lookup

  6. Routed queries (Freenet, Chord, etc.) • [Diagram: the client’s Lookup(“title”) is forwarded hop by hop along routing tables to the node holding key=“title”, value=MP3 data…]

  7. Routing challenges • Define a useful key nearness metric • Keep the hop count small • Keep the tables small • Stay robust despite rapid change • Freenet: emphasizes anonymity • Chord: emphasizes efficiency and simplicity

  8. Chord properties • Efficient: O(log(N)) messages per lookup • N is the total number of servers • Scalable: O(log(N)) state per node • Robust: survives massive failures • Proofs are in paper / tech report • Assuming no malicious participants

  9. Chord overview • Provides peer-to-peer hash lookup: • Lookup(key)  IP address • Chord does not store the data • How does Chord route lookups? • How does Chord maintain routing tables?
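
A sketch of the shape of that interface (hypothetical names): Chord resolves a key to the responsible node's address, and the application stores and fetches data itself.

    class ChordLookup:
        # Hypothetical client-side view; the routing behind it is the subject
        # of the following slides.
        def lookup(self, key: bytes) -> str:
            """Return the IP address of the node responsible for `key`."""
            raise NotImplementedError

    # An application layers storage on top, e.g.:
    #   ip = node.lookup(b"title")
    #   send_block(ip, b"title", mp3_bytes)   # application-level, not part of Chord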

  10. Chord IDs • Key identifier = SHA-1(key) • Node identifier = SHA-1(IP address) • Both are uniformly distributed • Both exist in the same ID space • How to map key IDs to node IDs?
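
A minimal sketch of how both identifiers can be computed, assuming IDs are reduced to an m-bit ring (the example ring on the next slide is 7-bit; real Chord keeps the full 160-bit SHA-1 output). The IP address below is a made-up example.

    import hashlib

    M = 7  # bits in the ID space; the example ring on the next slide is 7-bit

    def chord_id(data: bytes, m: int = M) -> int:
        # Hash a key (or a node's IP address) with SHA-1 and reduce it into
        # the circular m-bit identifier space.
        return int.from_bytes(hashlib.sha1(data).digest(), "big") % (2 ** m)

    key_id = chord_id(b"title")        # key identifier  = SHA-1(key)
    node_id = chord_id(b"18.26.4.9")   # node identifier = SHA-1(IP address)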

  11. Consistent hashing [Karger 97] • [Diagram: circular 7-bit ID space with nodes N32, N90, N105 and keys K5, K20, K80.] • A key is stored at its successor: the node with the next-higher ID (so K5 and K20 are stored at N32, and K80 at N90).
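
A small sketch of the successor rule on the same assumed 7-bit ring, checked against the slide's example (K5 and K20 map to N32, K80 to N90, and IDs past N105 wrap around).

    import bisect

    def successor(key_id: int, node_ids: list[int]) -> int:
        # A key is stored at its successor: the first node at or after the
        # key's ID, wrapping past the top of the circular ID space.
        nodes = sorted(node_ids)
        i = bisect.bisect_left(nodes, key_id)
        return nodes[i % len(nodes)]

    ring = [32, 90, 105]                 # N32, N90, N105 from the slide
    assert successor(5, ring) == 32      # K5  -> N32
    assert successor(20, ring) == 32     # K20 -> N32
    assert successor(80, ring) == 90     # K80 -> N90
    assert successor(110, ring) == 32    # wraps around to the lowest node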

  12. Basic lookup • [Diagram: ring with N10, N32, N60, N90, N105, N120. The query “Where is key 80?” is passed around the ring via successor pointers until the node preceding K80 answers “N90 has K80”.]

  13. Simple lookup algorithm
      Lookup(my-id, key-id)
        n = my successor
        if my-id < n < key-id
          call Lookup(n, key-id) on node n   // next hop
        else
          return my successor                // done
      • Correctness depends only on successors
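
The pseudocode above glosses over the circular comparison; the sketch below makes the wraparound explicit, using an in-process Node object as a stand-in for a remote peer (a hypothetical structure, not Chord's actual RPC interface). Each hop advances only one node, so this takes O(N) hops; the finger table on the next slides cuts that to O(log N).

    RING = 2 ** 7  # 7-bit ID space, as in the earlier example

    def between(x: int, a: int, b: int) -> bool:
        # True if x lies in the circular half-open interval (a, b].
        if a == b:                        # a single-node ring owns every key
            return True
        return 0 < (x - a) % RING <= (b - a) % RING

    class Node:
        def __init__(self, node_id: int):
            self.id = node_id
            self.successor: "Node" = self

        def lookup(self, key_id: int) -> "Node":
            # Follow successor pointers until the key falls between this node
            # and its successor; the successor then owns the key.
            if between(key_id, self.id, self.successor.id):
                return self.successor               # done
            return self.successor.lookup(key_id)    # next hop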

  14. “Finger table” allows log(N)-time lookups • [Diagram: node N80 keeps fingers pointing ½, ¼, 1/8, 1/16, 1/32, 1/64, and 1/128 of the way around the ring.]

  15. Finger i points to successor of n + 2^i • [Diagram: node N80’s fingers; for example the finger targeting ID 112 points to 112’s successor, N120.]
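
A sketch of building the table from this rule, reusing the successor() helper from the consistent-hashing sketch above; the example ring is hypothetical.

    def build_fingers(n: int, node_ids: list[int], m: int = 7) -> list[int]:
        # Finger i of node n points to successor(n + 2^i), so the fingers cover
        # distances 1, 2, 4, ..., 2^(m-1) (half the ring) ahead of n.
        return [successor((n + 2 ** i) % (2 ** m), node_ids) for i in range(m)]

    ring = [10, 32, 60, 80, 90, 105, 120]
    assert build_fingers(80, ring) == [90, 90, 90, 90, 105, 120, 32]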

  16. Lookup with fingers
      Lookup(my-id, key-id)
        look in local finger table for
          highest node n s.t. my-id < n < key-id
        if n exists
          call Lookup(n, key-id) on node n   // next hop
        else
          return my successor                // done
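
A sketch of the finger-based hop, extending the Node and between() helpers from the simple-lookup sketch; fingers are ordered by increasing distance, with finger 0 being the immediate successor.

    class FingerNode(Node):
        def __init__(self, node_id: int):
            super().__init__(node_id)
            self.fingers: list["FingerNode"] = []   # finger i -> successor(id + 2^i)

        def lookup(self, key_id: int) -> "Node":
            # Jump to the highest finger that still precedes the key; each hop
            # roughly halves the remaining ring distance, giving O(log N) hops.
            for f in reversed(self.fingers):
                if between(f.id, self.id, key_id) and f.id != key_id:
                    return f.lookup(key_id)          # next hop
            # Not even the immediate successor precedes the key, so the key
            # lies in (my id, successor]: my successor owns it.
            return self.successor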

  17. Lookups take O(log(N)) hops • [Diagram: ring with N5, N10, N20, N32, N60, N80, N99, N110. Lookup(K19) is resolved in a few finger hops, each roughly halving the remaining distance, ending at N20, the successor of K19.]

  18. Joining: linked list insert • [Diagram: N36 joins a ring containing N25 and N40; N40 stores K30 and K38.] • 1. N36 does Lookup(36), which finds its successor, N40.

  19. Join (2) • 2. N36 sets its own successor pointer to N40.

  20. Join (3) • 3. Copy keys 26..36 from N40 to N36 (K30 now also lives at N36; K38 stays at N40).

  21. Join (4) • 4. Set N25’s successor pointer to N36. • Update finger pointers in the background • Correct successors produce correct lookups
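
Slides 18–21 condensed into one sketch, reusing Node and between() from the earlier lookup sketch; `keys` is a hypothetical map from node ID to the set of key IDs stored at that node.

    def join(new: Node, bootstrap: Node, keys: dict[int, set[int]]) -> None:
        succ = bootstrap.lookup(new.id)     # 1. Lookup(new.id) finds the successor
        new.successor = succ                # 2. new node sets its successor pointer
        # 3. copy the keys the new node is now responsible for: everything held
        #    by the successor that no longer falls in (new.id, succ.id]
        copied = {k for k in keys.get(succ.id, set())
                  if not between(k, new.id, succ.id)}
        keys.setdefault(new.id, set()).update(copied)
        # 4. the predecessor's successor pointer (and everyone's fingers) get
        #    fixed in the background; correct successors keep lookups correct.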

  22. Failures might cause incorrect lookup • [Diagram: Lookup(90) reaches N80, but N80’s immediate successor has failed.] • N80 doesn’t know the correct successor, so the lookup is incorrect.

  23. Solution: successor lists • Each node knows r immediate successors • After failure, will know first live successor • Correct successors guarantee correct lookups • Guarantee is with some probability

  24. Choosing the successor list length • Assume 1/2 of nodes fail • P(successor list all dead) = (1/2)^r • I.e. P(this node breaks the Chord ring) • Depends on independent failure • P(no broken nodes) = (1 – (1/2)^r)^N • r = 2 log2(N) makes prob. ≈ 1 – 1/N
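
A quick check of that arithmetic for the configuration used in the failure experiment below (1,000 nodes, 20-entry successor lists):

    import math

    N = 1000
    r = round(2 * math.log2(N))    # 2*log2(N), roughly 20 successor-list entries
    p_cut = 0.5 ** r               # P(this node's whole list is dead), about 1/N^2
    p_ok = (1 - p_cut) ** N        # P(no node breaks the ring)
    print(r, p_cut, p_ok)          # 20, ~9.5e-07, ~0.999, i.e. about 1 - 1/N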

  25. Lookup with fault tolerance
      Lookup(my-id, key-id)
        look in local finger table and successor-list
          for highest node n s.t. my-id < n < key-id
        if n exists
          call Lookup(n, key-id) on node n   // next hop
          if call failed,
            remove n from finger table
            return Lookup(my-id, key-id)     // retry
        else
          return my successor                // done
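
A sketch of the retry behaviour, extending the FingerNode sketch with a successor list; ConnectionError stands in for however "the remote node is dead" is detected in a real deployment.

    class FaultTolerantNode(FingerNode):
        def __init__(self, node_id: int):
            super().__init__(node_id)
            self.successor_list: list["FaultTolerantNode"] = []  # r nearest successors

        def lookup(self, key_id: int) -> "Node":
            # Candidate next hops: fingers and successor-list entries that
            # precede the key, tried closest-preceding (highest) first.
            candidates = sorted(
                {n for n in self.fingers + self.successor_list
                 if between(n.id, self.id, key_id) and n.id != key_id},
                key=lambda n: (n.id - self.id) % RING,
                reverse=True)
            for n in candidates:
                try:
                    return n.lookup(key_id)          # next hop
                except ConnectionError:
                    if n in self.fingers:
                        self.fingers.remove(n)       # drop the dead entry, retry
            return self.successor                    # no closer live node: done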

  26. Chord status • Working implementation as part of CFS • Chord library: 3,000 lines of C++ • Deployed in small Internet testbed • Includes: • Correct concurrent join/fail • Proximity-based routing for low delay • Load control for heterogeneous nodes • Resistance to spoofed node IDs

  27. Experimental overview • Quick lookup in large systems • Low variation in lookup costs • Robust despite massive failure • See paper for more results Experiments confirm theoretical results

  28. Chord lookup cost is O(log N) • [Plot: average messages per lookup vs. number of nodes; the constant factor is 1/2, i.e. about ½ log2(N) messages per lookup.]

  29. Failure experimental setup • Start 1,000 CFS/Chord servers • Successor list has 20 entries • Wait until they stabilize • Insert 1,000 key/value pairs • Five replicas of each • Stop X% of the servers • Immediately perform 1,000 lookups

  30. Massive failures have little impact • [Plot: failed lookups (percent) vs. failed nodes (percent); annotated “(1/2)^6 is 1.6%”.]

  31. Related Work • CAN (Ratnasamy, Francis, Handley, Karp, Shenker) • Pastry (Rowstron, Druschel) • Tapestry (Zhao, Kubiatowicz, Joseph) • Chord emphasizes simplicity

  32. Chord Summary • Chord provides peer-to-peer hash lookup • Efficient: O(log(n)) messages per lookup • Robust as nodes fail and join • Good primitive for peer-to-peer systems http://www.pdos.lcs.mit.edu/chord

  33. Join: lazy finger update is OK • [Diagram: ring with N2, N25, N36 (newly joined), N40; K30 now belongs to N36.] • N2’s finger should now point to N36, not N40 • Lookup(K30) visits only nodes < 30, so it undershoots to N25, whose correct successor pointer still leads to N36.

  34. CFS: a peer-to-peer storage system • Inspired by Napster, Gnutella, Freenet • Separates publishing from serving • Uses spare disk space, net capacity • Avoids centralized mechanisms • Delete this slide? • Mention “distributed hash lookup”

  35. CFS architecture (move later?) • [Diagram: layered stack. Upper layers handle block storage, availability/replication, authentication, caching, consistency, server selection, and keyword search; the DHash distributed block store sits on top of Chord, which provides lookup.] • Powerful lookup simplifies other mechanisms
