
Peer-to-Peer Systems (cntd.)


Presentation Transcript


  1. INF5070 – Media Servers and Distribution Systems: Peer-to-Peer Systems (cntd.), 15/11 – 2004

  2. CFS – Cooperative File System

  3. CFS – Cooperative File System
  [Figure: CFS block structure. The root block (named by the publisher's public key and carrying a signature) refers to a directory block D via its hash H(D); the directory block refers to an inode block F via H(F); the inode block refers to the data blocks B1, B2, … via H(B1), H(B2).]
  • Design challenges
  • Distribute files in a peer-to-peer manner
  • Load balancing: spread the storage burden evenly
  • Fault tolerance: tolerate unreliable participants
  • Speed: comparable to whole-file FTP
  • Scalability: avoid O(#participants) algorithms
  • CFS approach based on Chord
  • Blocks of files are indexed using Chord
  • The root block is found by file name and refers to the first directory block
  • Directory blocks contain the hashes of the blocks they refer to (see the sketch below)
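The content-hash naming above can be illustrated with a small, self-contained Python sketch. The dht dictionary and the put_block and H helpers are invented for illustration only, and the root-block signature is omitted; real CFS stores each block at its Chord successor and signs the root block with the publisher's private key.

```python
import hashlib

def H(data: bytes) -> str:
    """Content hash used as a block's key (CFS uses SHA-1)."""
    return hashlib.sha1(data).hexdigest()

dht = {}                                   # toy "DHT": key -> block

def put_block(data: bytes) -> str:
    key = H(data)
    dht[key] = data
    return key

# Store a file as data blocks, then an inode block listing their hashes.
file_data = b"some media content split into fixed-size blocks for CFS..."
block_size = 16
block_keys = [put_block(file_data[i:i + block_size])
              for i in range(0, len(file_data), block_size)]
inode_key = put_block("\n".join(block_keys).encode())   # inode: list of H(Bi)

# Root block: in CFS it is named by the publisher's public key and signed;
# here we only mimic the structure (no real key, no real signature).
dht["publisher-public-key"] = f"inode={inode_key}".encode()

# Fetch: follow the hashes from the root down and verify each block.
root = dht["publisher-public-key"].decode()
fetched = b"".join(dht[k] for k in dht[root.split("=")[1]].decode().split("\n"))
assert fetched == file_data and all(H(dht[k]) == k for k in block_keys)
```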

  4. CFS Replication Mechanism
  [Figure: Chord ring with nodes N5, N10, N20, N40, N50, N60, N68, N80, N99, N110; block 17 is replicated at the r nodes succeeding its identifier.]
  • Replicate blocks at the r successors of the block's identifier
  • Increases availability (fault tolerance)
  • Increases efficiency (server selection)
  • The predecessor of the data block returns its successors and their latencies
  • The client can choose the successor with the smallest latency
  • Successor placement gives independent replica failures (see the sketch below)
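A minimal sketch of this placement and latency-based server selection, assuming a tiny identifier space; the node ids and latencies are random and the helper names are invented:

```python
import hashlib
import random
from bisect import bisect_right

RING_BITS = 8          # tiny identifier space for the example
R = 3                  # replication degree: block stored at r successors

def chord_id(name: str) -> int:
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (2 ** RING_BITS)

nodes = sorted(random.sample(range(2 ** RING_BITS), 12))   # toy node ids

def successors(key: int, r: int) -> list[int]:
    """The r nodes that follow `key` clockwise on the ring."""
    i = bisect_right(nodes, key)
    return [nodes[(i + k) % len(nodes)] for k in range(r)]

block_key = chord_id("block-17")
replicas = successors(block_key, R)          # the block is stored on all of these

# Server selection: the predecessor returns the successor list together with
# measured latencies; the client fetches from the closest replica.
latencies = {n: random.uniform(5, 100) for n in replicas}   # ms, made up
best = min(replicas, key=latencies.get)
print(f"block {block_key} replicated on {replicas}, fetch from node {best}")
```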

  5. CFS Cache Mechanism
  • CFS copies blocks to caches along the lookup path
  [Figure: Lookup(BlockID=45) on a Chord ring with nodes N5, N10, N20, N40, N50, N60, N68, N80, N99, N110. RPCs: 1. Chord lookup, 2. Chord lookup, 3. Block fetch, 4. Send to cache.]
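A rough sketch of the idea, assuming we already know the list of nodes a Chord lookup visited; the Node class and the fetch_with_caching helper are invented for illustration:

```python
class Node:
    def __init__(self, node_id: int):
        self.node_id = node_id
        self.store = {}      # authoritative blocks (this node is a successor)
        self.cache = {}      # blocks cached because lookups passed through here

def fetch_with_caching(path: list[Node], block_id: int) -> bytes:
    """`path` is the sequence of nodes a Chord lookup for block_id visited."""
    # A node early on the path may already have the block cached.
    for node in path:
        hit = node.cache.get(block_id) or node.store.get(block_id)
        if hit is not None:
            data = hit
            break
    else:
        raise KeyError(block_id)
    # Copy the block into the caches of the nodes we travelled through,
    # so the next lookup for it can terminate earlier.
    for node in path:
        node.cache.setdefault(block_id, data)
    return data

# Toy usage: node N80 stores block 45; a lookup from N5 passes N5 -> N20 -> N80.
n5, n20, n80 = Node(5), Node(20), Node(80)
n80.store[45] = b"block 45 contents"
fetch_with_caching([n5, n20, n80], 45)
assert 45 in n20.cache        # subsequent lookups can stop at N20
```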

  6. Pastry

  7. Pastry
  • "Competitor" to Chord and Tapestry (and CAN and …)
  • Approach taken
  • Only concerned with efficient indexing
  • Distributed index
  • Decentralized lookup service for key/value pairs
  • Uses DHTs
  • Content handling is an entirely external problem
  • No relation to content
  • No included replication or caching
  • P2P aspects
  • Every node must maintain keys
  • Adaptive to membership changes
  • Leaf nodes: client nodes also act as file servers

  8. Pastry
  • DHT approach
  • Each node has a unique nodeId
  • Assigned when the node joins
  • Used for routing
  • Each message has a key
  • NodeIds and keys are written as digits in base 2^b
  • b is a configuration parameter, typically 4 (hexadecimal digits)
  • A Pastry node routes a message to the node whose nodeId is numerically closest to the key
  • The number of routing steps is O(log N)
  • Pastry takes network locality into account
  • Each node maintains
  • a routing table, organized into ⌈log_2^b N⌉ rows with 2^b - 1 entries each
  • a neighborhood set M: nodeIds and IP addresses of the |M| nodes closest to the local node, used to maintain locality properties
  • a leaf set L: the |L| nodes with nodeIds numerically closest to the local node
  (The routing state is sketched in code below.)
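A minimal sketch of this per-node state, using b = 2 and short base-4 nodeIds as in the routing example two slides below. The class and its slot-filling rule are simplified for illustration; real Pastry keeps the topologically closest suitable node in each routing-table slot and fills the leaf and neighborhood sets during join and maintenance.

```python
B = 2                          # digits are base 2^b
BASE = 2 ** B                  # 4
DIGITS = 4                     # nodeId length in digits for this toy example

def shared_prefix_len(a: str, b: str) -> int:
    n = 0
    while n < min(len(a), len(b)) and a[n] == b[n]:
        n += 1
    return n

class PastryState:
    def __init__(self, node_id: str):
        self.node_id = node_id
        # One row per digit here; in practice about ceil(log_{2^b} N) rows
        # are populated. Row r holds nodes sharing the first r digits with
        # us; the column is their next digit, so 2^b - 1 slots are usable.
        self.routing_table = [[None] * BASE for _ in range(DIGITS)]
        self.leaf_set = []          # the |L| numerically closest nodeIds
        self.neighborhood = []      # the |M| topologically closest nodes

    def consider(self, other: str):
        """Place `other` into the routing-table slot it belongs to."""
        r = shared_prefix_len(self.node_id, other)
        if r == DIGITS:
            return                          # it's our own id
        col = int(other[r], BASE)
        if self.routing_table[r][col] is None:
            self.routing_table[r][col] = other

state = PastryState("2331")
for other in ["0130", "1331", "3001", "2011", "2321"]:
    state.consider(other)
print(state.routing_table)
```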

  9. Pastry Routing
  • Example: b = 2, so nodeIds are written in base 4 (16 bits)
  • Routing table: ⌈log_2^b N⌉ rows, 2^b - 1 entries per row
  • Entries in row n share their first n digits with the current node (common prefix – next digit – rest)
  • Within a row, the entry in column m has m as its next digit
  • Slots with no suitable nodeId are left empty
  • Leaf set L: contains the nodes that are numerically closest to the local node
  • Neighborhood set M: contains the nodes that are closest to the local node according to the proximity metric

  10. Pastry Routing
  [Figure: routing example with b = 2. A message with key 1223 is routed from source node 2331 towards destination node 1223; nodes 1331, 1211 and 1221 appear along the path. Routing-table rows shown: X0: 0-130 | 1-331 | 2-331 | 3-001; X1: 1-0-30 | 1-1-23 | 1-2-11 | 1-3-31; X2: 12-0-1 | 12-1-1 | 12-2-3 | 12-3-3; leaf set L: 1232 | 1221 | 1300 | 1301.]
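A hedged sketch of Pastry's prefix-routing rule applied to this example, assuming X0, X1 and X2 are the rows consulted at nodes 2331, 1331 and 1211 respectively. Only the entries visible in the figure are modelled, and the leaf-set fallback of real Pastry is omitted.

```python
BASE = 4

def shared_prefix_len(a: str, b: str) -> int:
    n = 0
    while n < min(len(a), len(b)) and a[n] == b[n]:
        n += 1
    return n

# routing_tables[node][row][next digit] -> nodeId (only the figure's entries)
routing_tables = {
    "2331": {0: {0: "0130", 1: "1331", 3: "3001"}},
    "1331": {1: {0: "1030", 1: "1123", 2: "1211"}},
    "1211": {2: {0: "1201", 2: "1223", 3: "1233"}},
}

def next_hop(current: str, key: str) -> str:
    """Forward to the routing-table entry that extends the shared prefix."""
    if current == key:
        return current                         # delivered
    row = shared_prefix_len(current, key)
    col = int(key[row], BASE)
    entry = routing_tables.get(current, {}).get(row, {}).get(col)
    # Real Pastry falls back to the leaf set / any numerically closer node
    # when the slot is empty; this example never needs that fallback.
    return entry if entry else key

node, key, path = "2331", "1223", ["2331"]
while node != key:
    node = next_hop(node, key)
    path.append(node)
print(" -> ".join(path))     # 2331 -> 1331 -> 1211 -> 1223
```

Each hop fixes at least one more digit of the key, which is where the O(log N) step count on the previous slide comes from.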

  11. Pastry
  • Assessment
  • Scalability, fairness, load balancing
  • Distributed index of arbitrary size
  • Support for physical locality and locality by hash value
  • Stochastically logarithmic search effort
  • Network topology is partially accounted for, given an additional metric for physical locality
  • Stochastically logarithmic lookup in large systems, but variable lookup costs
  • Content location
  • Search by hash key: limited ways to formulate queries
  • All indexed files are reachable
  • Not restricted to file location
  • Failure resilience
  • No single point of failure
  • Several possibilities for backup routes

  12. Streaming in Peer-to-peer networks

  13. Peer-to-peer network

  14. Promise

  15. Promise
  • Video streaming in peer-to-peer systems
  • Video is segmented into many small segments
  • Pull operation
  • Pulls from several sources at once
  • Based on Pastry and CollectCast
  • CollectCast
  • Adds rate/data assignment
  • Evaluates node capabilities and overlay route capabilities
  • Uses topology inference
  • Detects shared path segments, using ICMP probes similar to traceroute
  • Tries to avoid shared path segments (see the sketch below)
  • Labels path segments with a quality ("goodness") value
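A hedged sketch of the shared-path avoidance idea: given inferred router-level paths from each candidate sender to the receiver and a per-candidate goodness value (all values below are made up), pick high-goodness senders while penalising overlap between their paths. The greedy rule and the 0.2 penalty weight are illustrative, not CollectCast's actual selection algorithm.

```python
# Inferred router-level paths from each candidate sender to the receiver.
paths = {
    "peerA": ["r1", "r2", "r5"],
    "peerB": ["r3", "r2", "r5"],     # shares r2 and r5 with peerA
    "peerC": ["r4", "r6", "r5"],
    "peerD": ["r4", "r6", "r5"],     # identical path to peerC
}
goodness = {"peerA": 0.9, "peerB": 0.8, "peerC": 0.7, "peerD": 0.6}

def shared_links(a: str, b: str) -> int:
    return len(set(paths[a]) & set(paths[b]))

def select_senders(candidates, k: int):
    """Greedy: pick high-goodness senders, penalising shared path segments."""
    chosen = []
    for _ in range(k):
        remaining = [c for c in candidates if c not in chosen]
        if not remaining:
            break
        best = max(remaining,
                   key=lambda c: goodness[c]
                   - 0.2 * sum(shared_links(c, s) for s in chosen))
        chosen.append(best)
    return chosen

print(select_senders(list(paths), k=2))   # ['peerA', 'peerC']
```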

  16. Promise [Hefeeda et al. 03]
  [Figure: one receiver pulling from a set of active senders, with further standby senders available.]
  The receiver:
  • sends a lookup request using the DHT
  • selects some active senders and sends each a control packet
  • receives data as long as no errors/changes occur
  • if a change or error is detected, new active senders may be selected
  Each active sender:
  • receives a control packet specifying which data segments to send, at what data rate, etc.
  • pushes data to the receiver as long as no new control packet is received
  Thus, Promise is a multiple-sender to single-receiver P2P media streaming system which 1) accounts for the different capabilities of the peers, 2) matches senders to achieve the best quality, and 3) dynamically adapts to network fluctuations and peer failure. (A control-loop sketch follows below.)
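A minimal, self-contained sketch of this receiver-driven control, with invented Peer and ControlPacket classes and a goodness value standing in for CollectCast's sender evaluation; the segment/rate split is a plain round-robin, not the paper's assignment algorithm.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ControlPacket:
    segments: list[int]      # which data segments this sender should push
    rate_kbps: int           # at which rate

@dataclass
class Peer:
    name: str
    goodness: float                          # quality label for this candidate
    ctrl: Optional[ControlPacket] = None

    def send_control(self, packet: ControlPacket):
        self.ctrl = packet   # the sender keeps pushing data per this packet

def select_active(candidates: list[Peer], n: int) -> list[Peer]:
    return sorted(candidates, key=lambda p: p.goodness, reverse=True)[:n]

def assign(active: list[Peer], segments: list[int], rate_budget: int):
    """Split the segment list and the rate budget across the active senders."""
    for i, peer in enumerate(active):
        peer.send_control(ControlPacket(segments[i::len(active)],
                                        rate_budget // len(active)))

# Toy run: 4 candidates found via the DHT lookup, 2 of them become active.
candidates = [Peer("A", 0.9), Peer("B", 0.5), Peer("C", 0.8), Peer("D", 0.4)]
active = select_active(candidates, n=2)
standby = [p for p in candidates if p not in active]
assign(active, segments=list(range(20)), rate_budget=1000)

# If an active sender fails or its rate drops, promote a standby sender and
# reassign the not-yet-received segments.
failed = active.pop(0)
active.append(standby.pop(0))
assign(active, segments=list(range(10, 20)), rate_budget=1000)
print([(p.name, p.ctrl.segments, p.ctrl.rate_kbps) for p in active])
```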

  17. SplitStream

  18. SplitStream
  • Video streaming in peer-to-peer systems
  • Uses layered video
  • Uses overlay multicast
  • Push operation
  • Builds disjoint overlay multicast trees
  • Based on Pastry

  19. SplitStream [Castro et al. 03]
  [Figure: the source holds the full-quality movie; stripe 1 and stripe 2 are distributed over separate multicast trees.]
  Each movie is split into K stripes and each stripe is multicast over a separate tree.
  Each node:
  • joins as many multicast trees as there are stripes (K)
  • may specify the number of stripes it is willing to act as a router for, i.e., according to the amount of resources available
  Thus, SplitStream is a multiple-sender to multiple-receiver P2P system which distributes the forwarding load while respecting each node's resource limitations, but some effort is required to build the forest of multicast trees. (A striping sketch follows below.)
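A small sketch of the striping idea, with a round-robin packet split and toy parent maps standing in for the Pastry/Scribe trees. All names and tree shapes are made up; real SplitStream builds the trees so that their interior nodes are disjoint, as discussed on the next slide.

```python
K = 4
packets = [f"pkt{i:03d}" for i in range(16)]          # toy "full quality movie"

# Stripe k gets every K-th packet, starting at offset k.
stripes = {k: packets[k::K] for k in range(K)}

# Each stripe has its own multicast tree; a tree here is just a parent map.
trees = {
    0: {"n1": "src", "n2": "src", "n3": "n1"},
    1: {"n2": "src", "n3": "n2", "n1": "n3"},
    2: {"n3": "src", "n1": "n3", "n2": "n1"},
    3: {"n1": "src", "n3": "n1", "n2": "n3"},
}

def children(tree: dict, node: str) -> list:
    return [c for c, parent in tree.items() if parent == node]

def push(tree: dict, node: str, payload: list, delivered: dict):
    """Forward one stripe's packets down its multicast tree."""
    for child in children(tree, node):
        delivered.setdefault(child, []).extend(payload)
        push(tree, child, payload, delivered)

delivered: dict[str, list] = {}
for k, tree in trees.items():
    push(tree, "src", stripes[k], delivered)

# Every node receives all K stripes and can reassemble the full stream; with
# layered video, a node that skips one tree still gets K-1 usable stripes.
assert all(len(pkts) == len(packets) for pkts in delivered.values())
```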

  20. SplitStream
  • Disjoint trees are easy to build
  • Disjointness is guaranteed in the overlay
  • In contrast, path disjointness in Promise is best-effort only, and at the IP level
  • Example: source nodes 02132310 and 33132101, degree 2
  • Inner nodes from 33132102: 33-3-02312, 333-3-1203, 333-2-0201, 3-2-1133203, 32-3-32202, 32-2-13232
  • Inner nodes from 021323310: 0-0-321001, 00-0-20231, 003-0-2102, 0-1-223102, 01-0-02100, 01-1-20321
  (A sketch of the disjointness argument follows below.)
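A tiny sketch of why the trees can be interior-node-disjoint in the overlay: with 2^b stripes whose identifiers differ in the first digit, a node forwards traffic (acts as an interior node) only for the stripe whose first digit matches its own nodeId, so partitioning the nodes by first digit yields disjoint interior sets. The node ids other than the two source nodes above are made up.

```python
# Partition toy nodeIds by their first base-4 digit: each group is the set of
# potential interior nodes for the stripe rooted at an id with that digit.
nodes = ["02132310", "01020321", "33132101", "32332202", "12301032", "21033210"]
stripes = "0123"                      # b = 2: one stripe id per leading digit

interior = {s: [n for n in nodes if n[0] == s] for s in stripes}
print(interior)

# No node appears as an interior node of two different stripes.
for a in stripes:
    for b in stripes:
        if a != b:
            assert not set(interior[a]) & set(interior[b])
```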

  21. Some References
  • M. Castro, P. Druschel, A.-M. Kermarrec, A. Nandi, A. Rowstron, A. Singh: "SplitStream: High-Bandwidth Multicast in a Cooperative Environment", SOSP '03, Bolton Landing, New York, October 2003
  • M. Hefeeda, A. Habib, B. Botev, D. Xu, B. Bhargava: "PROMISE: Peer-to-Peer Media Streaming Using CollectCast", ACM Multimedia '03, Berkeley, CA, November 2003
  • D. Sitaram, A. Dan: "Multimedia Servers – Applications, Environments, and Design", Morgan Kaufmann Publishers, 2000
  • J. M. Tendler, S. Dodson, S. Fields: "IBM e-server: POWER4 System Microarchitecture", technical white paper, 2001
  • W. Tetzlaff, R. Flynn: "Elements of Scalable Video Servers", IBM Research Report 19871 (87884), 1994
  • Intel, http://www.intel.com
  • MPEG.org, http://www.mpeg.org/MPEG/DVD
  • nCUBE, http://ncube.com
