P2P = “Structured Overlay Networks for Peer-to-Peer systems”

Presentation Transcript


  1. P2P = “Structured Overlay Networks for Peer-to-Peer systems” • Luigi Liquori • 97 Ph.D., Università degli studi di Torino • 07 H.d.R. “Habilitation à diriger les recherches”, École Nat Sup des Mines de Nancy • 08 Qualified for full-professor positions (“Habilité aux fonctions de professeur des universités”), 2008-2012 • 10 Research Director, Équipe LogNet, INRIA Sophia Antipolis - Méditerranée

  2. Course setting • COURSE = P2P : “Future Internet and Overlay Networks” • HH = around 40h (20h by me and 20h by Prof. Legout) • MODULE = “Structured Overlay Networks for P2P systems” • EXAM = software project • PRE = general notions of systems and networks • OPT = “Computability”, “Databases”, “Logics”, “Security” • POST = “Design, analyze and implement p2p networks and overlay-based applications”

  3. Topics : structured overlay networks for P2P • CHORD (Stoica et al.), lessons • Academic and pedagogical : a sort of “PASCAL/BASIC” of overlay networks • KADEMLIA (Maymounkov and Mazières), lessons • Academic, with free implementations, widely used (eMule) • BITTORRENT (Cohen), ½ of the course, taught by A. Legout • Non academic, with free implementations, widely used • SKYPE (Zennström & Friis), lesson • Non academic : proprietary, uses Kademlia-like techniques, very widely used for VoIP • INTERCONNECTING OVERLAY NETWORKS (Liquori), lesson • Academic but practical : allows different overlay protocols to communicate • NAT TRAVERSAL, lessons • How to establish and maintain TCP and UDP connections traversing gateways

  4. Other issues (micro survey) in the course • Publish/subscribe paradigm • Content-centric routing (Jacobson et al., CoNEXT 09) • Ontologies for internet computability • Coordination languages to deal with algorithmic aspects • Trust and reputation issues • Denial-of-service attacks • “Inter-netting” overlay networks • Principles of “Internet computability”

  5. “Spot on” • A quick window of the module (lesson 1) • Preludio : some internet facts • Course vision : “Computer scale-up to internet” • Step 1 : Reference model of future internet • Step 2 : Reference model of internet computer • Inside submodule “Structured overlay networks for p2p” (lessons 3-4) • Chord, lecture • Topology, routing, and churn • Kademlia, lecture • Topology • Routing (put, get), • Churn (join, leave) • Inter-netting structured overlay networks (lesson 7) • Dealing with network partitions

  6. Preludio : some internet facts • Internet traffic : ~80% is P2P and ~20% is Web • Some leading p2p protocols : • Some leading p2p classes of applications : file exchange • In progress : VOIP, TVIP, STREAMIP, CLOUD • General p2p anarchy : no coordination, no cooperation • Total p2p heterogeneity : protocols, topology, security, devices, users … • P2P “inter-routing” is almost impossible • Often with the same purpose but ≠ routing and topologies • Actor 1 : resource discovery • Actor 2 : resource coordination • “… the ingredients for … a model of computation for the Internet!” • Actor 3 : peer organization • Actor 4 : peer reputation

  7. Course vision : “Computer scale-up to internet” • 1946. von Neumann, “Principle of large scale computing machines” • “Large scale” in 1946 means ENIAC • 1946-2010. From ENIAC to the Cray XT5 Jaguar and G5K, via the iPhone • 20XX. “Large scale” means “Internet scale” • The von Neumann architecture does not scale up to the Internet

  8. Step 1 : reference model of future internet • “Inter-netting” heterogeneous overlay networks • The “Cerf & Kahn ’77” approach cannot lead to a standardized p2p communication layer • Backward compatibility with all existing p2p protocols • P2P inventors are often next-door computer scientists or users’ communities • Competition vs. collaboration • Interconnecting heterogeneous ONs • Exhaustive routing is almost achieved • Content-based routing (Jacobson) • Logical payload • Hybrid topologies and underlay networks • Peer organization via social-based & reputation primitives • Genericity : add many services on top of the ON

  9. Step 2 : reference model of an internet computer • Internet Computer (IC) : abstraction on top of an overlay network • Peers are physically connected via IP/ad-hoc/MANET • Peers are logically organized in a Virtual Organization • IC reference model • Bus = Internet and routing • Memory = Σk∈K DHTk (distributed hash tables) • CPU = Σk∈K CPUk (distributed central units) • IC programming model • Language = Protocol • Word = Packet • Pointer = Address • Type = Port • Universality, genericity, polymorphism, “Turing completeness” • Virtual intermittence • Resource discovery • Reputation • Orchestration

  10. “Spot on” • A quick window of the module (lesson 1) • Preludio : some internet facts • Course vision : “Computer scale-up to internet” • Step 1 : Reference model of future internet • Step 2 : Reference model of internet computer • Inside submodule “Structured overlay networks for p2p” (lessons 3-4) • Key figures (reminder) • Chord, lecture • Topology, routing, and churn • Kademlia, lecture • Topology • Routing (put, get), • Churn (join, leave) • Inter-netting structured overlay networks (lesson 7)

  11. General picture of overlay networks • Treat n hops through the IP network as m (m less than n) hops in an overlay network • (figure: overlay network with nodes A, B, C on top of the physical network)

  12. Key figures in SON (reminder) • Data discovery is deterministic : a.k.a. 2nd-generation overlays • Distributed Hash Table (DHT) : stores (key, value) pairs on nodes • Key-based routing : N.lookup(K) routes from the node N that issues the lookup to the node M that owns the key K, via a routing path of “closer and closer” nodes (according to a given distance metric in a logical key space) • Routing table : local table that maintains links to other nodes • Churn : rate of node joins and leaves in a p2p network

  13. Key figures in SON (cont’d) • Overlay topologies • Exhaustive lookup with logarithmic complexity • Uniformity vs. proximity of key storage • Consistent hashing of keys and IPs via SHA-1 • Peer join • Getting a logic ID • Positioning into the overlay structure • Stabilize the overlay (maintenance) • Opportunistic vs. Active maintenance of routing tables • Bootstrapping of an overlay network • Peer leave • Faulty routing tables • Fair play vs. non fair play leaving

  14. Chord 1 : Consistent Hashing • SHA-1 : {IPs} ∪ {KEYS} → Nat (the m-bit identifier space) • SHA-1(IP) = NIP • SHA-1(fookey) = Kfoo • Node Nx stores all keys Ky such that pred(Nx) < Ky ≤ Nx (i.e. Nx = succ(Ky))
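
A minimal sketch (Python, with invented IP addresses and a small m = 8 instead of 160, purely for illustration) of how consistent hashing places keys on nodes: both IPs and keys are hashed into the same identifier circle, and each key K lands on its successor succ(K):

```python
import hashlib

M = 8  # toy identifier space of 2**M ids; Chord uses m = 160 with SHA-1

def sha1_id(s: str) -> int:
    """SHA-1 hash truncated into the 2**M identifier circle."""
    return int(hashlib.sha1(s.encode()).hexdigest(), 16) % (2 ** M)

# Node identifiers obtained by hashing (invented) IP addresses.
nodes = sorted(sha1_id(ip) for ip in ["10.0.0.1", "10.0.0.2", "10.0.0.3"])

def successor(key_id: int) -> int:
    """Node responsible for key_id: first node id >= key_id, wrapping around."""
    for n in nodes:
        if n >= key_id:
            return n
    return nodes[0]  # wrap around the circle

k_foo = sha1_id("foo")          # K_foo
print(k_foo, "is stored on node", successor(k_foo))
```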

  15. Chord 2 : (Local) Finger Tables • On every node N • finger : array[1..m] • 2^m is the size of the logical identifier space • finger[k] = succ((N + 2^(k-1)) mod 2^m), with 1 ≤ k ≤ m
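
The finger formula can be computed directly; a minimal sketch, assuming the example ring of the Chord paper (m = 6, node IDs 1, 8, 14, …) and an invented `successor` helper:

```python
M = 6                                             # toy ring of 2**M = 64 ids
nodes = [1, 8, 14, 21, 32, 38, 42, 48, 51, 56]    # example node IDs

def successor(i):
    """First node whose id is >= i on the circle (wrapping around)."""
    return min((n for n in nodes if n >= i), default=nodes[0])

def finger_table(n):
    """finger[k] = succ((n + 2**(k-1)) mod 2**M), for k = 1..M."""
    return [successor((n + 2 ** (k - 1)) % (2 ** M)) for k in range(1, M + 1)]

print(finger_table(8))   # node 8's fingers: [14, 14, 14, 21, 32, 42]
```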

  16. Chord 3 : Recursive routing • (figure: node 8 looks up key 54; since 8 < finger[6] = 42 ≤ 54, node 8 forwards the query to node 42)
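
To make the routing example concrete, here is a hedged sketch of Chord-style lookup on the same toy ring (global knowledge of all nodes is assumed purely to keep the code short; a real Chord node only knows its finger table and successor, and the helper names below are invented):

```python
M = 6
NODES = [1, 8, 14, 21, 32, 38, 42, 48, 51, 56]    # toy ring, 2**M = 64 ids

def succ(i):
    """First node id >= i on the circle (wrapping around)."""
    return min((n for n in NODES if n >= i), default=NODES[0])

def fingers(n):
    """fingers(n)[k] = succ((n + 2**k) mod 2**M), i.e. finger[k+1] on the slide."""
    return [succ((n + 2 ** k) % 2 ** M) for k in range(M)]

def in_open_interval(x, a, b):
    """True if x lies in the circular open interval (a, b)."""
    return (a < x < b) if a < b else (x > a or x < b)

def lookup(n, key):
    """Route from node n towards the node responsible for `key`."""
    print("at node", n)
    s = succ((n + 1) % 2 ** M)                     # n's immediate successor
    if in_open_interval(key, n, s) or key == s:    # key in (n, succ(n)]
        return s
    for f in reversed(fingers(n)):                 # closest preceding finger
        if in_open_interval(f, n, key):
            return lookup(f, key)
    return lookup(s, key)

print("key 54 is stored on node", lookup(8, 54))   # visits 8, 42, 51 -> node 56
```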

  17. Chord 4 : Churn

  18. Chord 5 : Bootstrapping

  19. Chord 6 : Stabilization

  20. Kademlia 1 • Peer-to-peer (key, value) storage and lookup system • A number of desirable features not simultaneously offered by any previous peer-to-peer system • It minimizes the number of messages nodes must send to learn the topology • Stabilization information spreads automatically during key lookups • Nodes can route queries through low-latency paths • Parallel, asynchronous messages avoid timeout delays from failed nodes • Basic mechanisms to resist certain basic denial-of-service attacks

  21. Kademlia 2 • Keys are “opaque”, 160-bit quantities • Participating computers each have a node ID in the 160-bit key space. (key, value) pairs are stored on nodes with IDs “close” to the key for some notion of closeness • A node-ID-based routing algorithm lets anyone locate servers near a destination • XOR metric for distance between points in the key space • XOR is symmetric, allowing Kademlia participants to receive lookup queries from precisely the same distribution of nodes contained in their routing tables

  22. XOR metric 1 • Given two 160-bit identifiers x and y, Kademlia defines the distance between them as their bitwise exclusive or (XOR) interpreted as an integer, i.e. d(x, y) = x ⊕ y • d(x, x) = 0 • d(x, y) > 0 if x ≠ y • For all x, y : d(x, y) = d(y, x) • d(x, y) + d(y, z) ≥ d(x, z) • d(x, y) ⊕ d(y, z) = d(x, z) • For all a ≥ 0, b ≥ 0 : a + b ≥ a ⊕ b
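
The listed properties can be checked mechanically; this small Python fragment (toy 4-bit identifiers instead of 160-bit ones, purely for readability) asserts symmetry, the triangle inequality, the XOR relation d(x, y) ⊕ d(y, z) = d(x, z), and the unidirectionality used on the next slide:

```python
import itertools

def d(x: int, y: int) -> int:
    """Kademlia distance: bitwise XOR interpreted as an integer."""
    return x ^ y

ids = [0b0000, 0b0011, 0b0101, 0b1110]             # toy 4-bit identifiers

for x, y, z in itertools.product(ids, repeat=3):
    assert d(x, x) == 0
    assert d(x, y) == d(y, x)                      # symmetry
    assert (d(x, y) > 0) == (x != y)
    assert d(x, y) ^ d(y, z) == d(x, z)            # XOR "triangle equality"
    assert d(x, y) + d(y, z) >= d(x, z)            # since a + b >= a XOR b

# Unidirectionality: for a fixed x, every distance is reached by exactly one y.
for x in range(16):
    assert len({d(x, y) for y in range(16)}) == 16
```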

  23. XOR metric 2 • XOR is unidirectional • For any given point x and distance Δ > 0, there is exactly one point y such that d(x, y) = Δ • Unidirectionality ensures that all lookups for the same key converge along the same path, regardless of the originating node • Caching (key, value) pairs along the lookup path alleviates hot spots • The XOR topology is also symmetric • d(x, y) = d(y, x) for all x and y

  24. XOR : do it ….

  25. Topology : do it ….

  26. Node state 1 • For each 0 ≤ i < 160, every node keeps a list of (IP address, UDP port, Node ID) triples for nodes at distance between 2^i and 2^(i+1) from itself • We call these lists “k-buckets” • The size k is not fixed a priori but is chosen such that any given k nodes are very unlikely to fail within an hour of each other (for example k = 20) • (figure: the i-th bucket)
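
The condition “distance between 2^i and 2^(i+1)” is equivalent to saying that a contact falls into the bucket indexed by the position of the highest bit in which its ID differs from ours; a small sketch (Python, function name invented for illustration) makes this concrete:

```python
def bucket_index(node_id: int, contact_id: int) -> int:
    """Index i of the k-bucket where contact_id belongs, i.e. the i with
    2**i <= (node_id XOR contact_id) < 2**(i+1)."""
    distance = node_id ^ contact_id
    assert distance != 0, "a node does not store itself in its own buckets"
    return distance.bit_length() - 1

# A contact differing from us only in the lowest bit lands in bucket 0;
# a contact differing in the top bit of a 160-bit ID lands in bucket 159.
me = 0b1010 << 156
print(bucket_index(me, me ^ 1))            # -> 0
print(bucket_index(me, me ^ (1 << 159)))   # -> 159
```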

  27. Node state 2 • Each k-bucket is kept sorted by time last seen • Least-recently seen node at the head • Most-recently seen at the tail

  28. Build buckets : do it ….

  29. Node state 3 • When a Kademlia node receives any message (request or reply) from another node, it updates the appropriate k-bucket for the sender’s node ID • If the sending node already exists in the recipient’s k-bucket, the recipient moves it to the tail of the list • If the node is not already in the appropriate k-bucket and the bucket has fewer than k entries, then the recipient just inserts the new sender at the tail of the list • If the appropriate k-bucket is full, then the recipient pings the k-bucket’s least-recently seen node to decide what to do • If the least-recently seen node fails to respond, it is evicted from the k-bucket and the new sender inserted at the tail • If the least-recently seen node responds, it is moved to the tail of the list, and the new sender’s contact is discarded
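
A minimal sketch of the update rule just described, assuming a bucket capacity k = 20 as in the paper and a `ping(node) -> bool` callback standing in for a real UDP RPC (both names are placeholders, not an eMule/Kad API):

```python
from collections import deque

K = 20  # bucket capacity

def update_bucket(bucket: deque, sender: str, ping) -> None:
    """Apply Kademlia's least-recently-seen policy to one k-bucket.

    `bucket` is ordered from least-recently seen (head/left) to
    most-recently seen (tail/right); `ping(node)` returns True if alive.
    """
    if sender in bucket:                 # already known: move to the tail
        bucket.remove(sender)
        bucket.append(sender)
    elif len(bucket) < K:                # room left: insert at the tail
        bucket.append(sender)
    else:                                # full: probe the least-recently seen
        oldest = bucket[0]
        if ping(oldest):
            bucket.remove(oldest)        # still alive: keep it, refresh it,
            bucket.append(oldest)        # and discard the new sender
        else:
            bucket.popleft()             # dead: evict it, add the sender
            bucket.append(sender)
```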

  30. Routing and upgrade buckets : do it ….

  31. Kademlia in the P2P system 1 : Keywords table • In eMule, a P2P file-exchange application, the Kademlia network has two tables : the keywords table and the data index table • Example — to find “Wii tips and tricks.pdf” : keywords = Wii, tricks ; each keyword is hashed with SHA-1 (160 bit), e.g. Hash(Wii), Hash(tricks)

  32. Kademlia in the P2P system 2 : Data index table • The data index table maps a file hash (e.g. 1011…001) to the peers (sources) that store the file • (figure: lookup of the file hash 1011…001 in the data index table)
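
A rough, hypothetical sketch (Python) of how the two tables could cooperate in an eMule-style lookup: the data layout, helper names, and the string hashed in place of real file contents are all invented for illustration, and real eMule/Kad messages are considerably more involved:

```python
import hashlib

def h(s: str) -> int:
    """SHA-1, 160-bit, used here for both keywords and file hashes."""
    return int(hashlib.sha1(s.lower().encode()).hexdigest(), 16)

# Keywords table: hash(keyword) -> set of (file name, file hash)
keywords_table = {
    h("wii"):    {("Wii tips and tricks.pdf", h("file-contents-1"))},
    h("tricks"): {("Wii tips and tricks.pdf", h("file-contents-1"))},
}

# Data index table: file hash -> peers (sources) holding the file
data_index_table = {
    h("file-contents-1"): {("10.0.0.7", 4662), ("10.0.0.9", 4662)},
}

def find(query: str):
    """Two-step lookup: keywords -> file hash -> sources."""
    hits = set.intersection(*(keywords_table.get(h(w), set())
                              for w in query.split()))
    return {name: data_index_table.get(fhash, set()) for name, fhash in hits}

print(find("wii tricks"))
```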

  33. Nodes in Kademlia

  34. Files in Kademlia

  35. Keywords in Kademlia Hash of “break” only !

  36. “Spot on” • A quick window of the module (lesson 1) • Preludio : some internet facts • Course vision : “Computer scale-up to internet” • Step 1 : Reference model of future internet • Step 2 : Reference model of internet computer • Inside submodule “Structured overlay networks for p2p” (lessons 3-4) • Chord (previous lecture) • Topology, routing, and churn • Kademlia (this lecture) • Topology • Routing (put, get), • Churn (join, leave) • Inter-netting structured overlay networks (lesson 7) • Dealing with network partitions

  37. Inter-netting structured overlay networks • Example 1 : two DHT-based overlay networks storing (key, value) pairs • A pair stored in DHT1 can be searched for in DHT2 • Many pairs stored in both DHTs can be found • Two companies wishing to share/aggregate information • Better fault tolerance and data availability • Example 2 : an overlay network whose nodes get isolated • So-called “network partitions” • Alternative physical routing via ON inter-routing

  38. (Techie) Inside the protocols • Example payload : [RunW=Intel, Time ≥ 10m] AND [ProgW=LINUX, Distro=DEB] OR [RunW=Intel, Time ≥ 10m] AND [ProgW=LINUX, Distro=OSX] OR [RunW=AMD, Time ≥ 10m] AND [ProgW=VISTA, Distro=BUG] • VIP: SREG (Id, Mode, FromCard, ToCard, Payload) • VIP: SUPD (Id, Mode, FromCard, ToCard, Payload) • RDP: SREQ (Id, Mode, FromCard, ToCard, Payload) • RDP: SRESP (Id, Mode, FromCard, ToCard, Payload) • RDP: SNOTIF (Id, Mode, FromCard, ToCard) • Mode ∈ {LOGIN, LOGOUT, ACCEPT, REJECT, LOOP, ☺, ☠, …} • Card = (IP, PORT, PKI) • Service ::= HumW | RunW | StockW | ProgW | DataW | LinkW • Payload ::= OR_{i=1..m} (AND_{j=1..n} Atom_j)_i, where Atom ::= (Service, Constraints*) | NOT(Atom) • The payload looks like a first-order-logic language… pattern-matching algorithms and Constraint Logic Programming for routing in content-based networks
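
Since the payload grammar above is a disjunction of conjunctions of (Service, Constraints) atoms, matching a request against a peer's advertisement can be sketched as a tiny evaluator. Everything in the fragment below (the nonterminal name “Atom”, the dictionary layout, the lambda constraints, the function names) is invented for illustration, negation is omitted, and the real protocols rely on pattern matching and Constraint Logic Programming as noted on the slide:

```python
# A payload is an OR of ANDs of atoms, as in the grammar on the slide;
# each atom is (service, predicate over that service's attributes).
payload = [   # OR over the outer list, AND over each inner list
    [("RunW",  lambda a: a.get("cpu") == "Intel" and a.get("time_min", 0) >= 10),
     ("ProgW", lambda a: a.get("os") == "LINUX" and a.get("distro") == "DEB")],
    [("RunW",  lambda a: a.get("cpu") == "AMD" and a.get("time_min", 0) >= 10),
     ("ProgW", lambda a: a.get("os") == "VISTA")],
]

# What one peer advertises for each service class.
peer_offer = {
    "RunW":  {"cpu": "Intel", "time_min": 30},
    "ProgW": {"os": "LINUX", "distro": "DEB"},
}

def matches(payload, offer) -> bool:
    """True if some conjunction is entirely satisfied by the peer's offer."""
    return any(all(service in offer and constraint(offer[service])
                   for service, constraint in conjunction)
               for conjunction in payload)

print(matches(payload, peer_offer))   # -> True
```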
