
Pseudo-DHT: Distributed Search Algorithm for P2P Video Streaming



Presentation Transcript


  1. Pseudo-DHT: Distributed Search Algorithm for P2P Video Streaming • December 15, 2008 • Jeonghun Noh, Stanford University • Sachin Deshpande, Sharp Laboratories of America

  2. P2P Live Video Streaming • Peers relay video from the source using their uplink bandwidth • Live video: no need to locate video chunks

  3. P2P Time-Shifted Streaming • Peers use local storage to hold fractions of the video • To locate video at an arbitrary position, a query server may be used (e.g., a peer seeking position 5m among peers caching 0~4m, 3~6m, and 7~11m) • The server can become a bottleneck as the peer population increases • No dedicated server may be available • A scalable, distributed content search is desired

  4. Outline • P2TSS system • Pseudo-DHT: Distributed Search • Performance evaluation • Numerical analysis • Simulation study

  5. Introduction to P2TSS • P2TSS (P2P Time-shifted Streaming System) serves both live streaming and time-shifted streaming • A time-shifted stream (TSS) is the same as the original stream, only delayed in time • Peers store a fraction of the video • Cached video chunks are later served to other peers [Deshpande et al., 2008] • Example: Peer 1 stores video [0m, 4m) from 9:00am from the source peer; Peer 2 watches from 0m at 9:10am

  6. Caching Live Stream • Peers cache the live stream via an additional video connection • [Figure: video position vs. time, showing the live stream, the playback trajectories of Peers 1, 2, 3, and their cached portions x1, x2, x3 starting at times t1, t2, t3]

  7. Chord: Distributed Lookup • Chord: a distributed lookup overlay [Stoica et al., 2001] • Nodes (peers) are connected in a circle • Keys are mapped to their successor node • Fast lookup via a finger table; lookup latency: O(log(Np)) • Example key/node space: nodes and keys sorted by IDs, ID length 6 bits, can be normalized to [0,1) (N: node, K: key); see the sketch below
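A minimal, non-distributed sketch of this mapping, assuming the 6-bit ID space from the example; the sorted list of node IDs stands in for the real overlay, and the node IDs themselves are illustrative:

```python
from bisect import bisect_left

M_BITS = 6                # ID length from the example
SPACE = 1 << M_BITS       # 64 possible IDs

def successor(node_ids, key):
    """Return the first node clockwise from `key`: keys map to their successor node."""
    ids = sorted(node_ids)
    i = bisect_left(ids, key % SPACE)
    return ids[i % len(ids)]              # wrap around past the largest ID

def finger_table(node_id, node_ids):
    """finger[i] = successor(node_id + 2^i); following fingers gives O(log Np) lookup."""
    return [successor(node_ids, node_id + (1 << i)) for i in range(M_BITS)]

nodes = [1, 8, 14, 21, 32, 38, 42, 48, 51, 56]   # illustrative node IDs
print(successor(nodes, 54))     # key 54 is stored at node 56
print(finger_table(8, nodes))   # node 8's fingers point progressively farther away
```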

  8. Building Lookup Overlay • A node is added to the overlay as its peer joins • Node ID: uniformly drawn from [0,1) • Nodes are placed in a distributed manner • A (Key, Value) pair is entered as a peer registers its buffer status • Key (K): hashed video chunk ID (0 ≤ K < 1) • Value (V): peer network address • (K, V) is mapped to the successor node (see the sketch below)
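A rough sketch of this mapping on the normalized [0,1) ring; SHA-1 is assumed as the chunk-ID hash and the peer address is made up, since neither is specified in the transcript:

```python
import hashlib
import random
from bisect import bisect_left

def chunk_key(chunk_id):
    """Hash a video chunk ID to a key K in [0, 1) (SHA-1 assumed for illustration)."""
    digest = hashlib.sha1(str(chunk_id).encode()).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

def successor_node(node_ids, key):
    """(K, V) is stored at the first node whose ID follows K on the [0, 1) ring."""
    i = bisect_left(node_ids, key)
    return node_ids[i % len(node_ids)]     # wrap around past 1.0

# Node IDs are drawn uniformly from [0, 1) as peers join.
node_ids = sorted(random.random() for _ in range(8))
K = chunk_key(42)                          # key for video chunk 42
V = "10.0.0.5:4000"                        # peer network address (made up)
print(f"({K:.3f}, {V}) -> node {successor_node(node_ids, K):.3f}")
```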

  9. Pseudo-DHT: Registration • Register(K, V) may result in a “collision” when the chunk ID is already registered (e.g., Peer 2 tries to register chunk i after (i, Peer 1) is stored) • Repeat Register(K’, V) until there is no collision, with K’ = K + (n-1)Δ (n = # attempts, Δ = offset base), so a later attempt lands on an unoccupied ID • Unlike the original DHT, mapping a single key to multiple values is discouraged • This leads to better load balancing and low-latency retrieval (see the sketch below)
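A minimal Register sketch under obvious simplifications: a plain dict keyed by chunk ID stands in for the overlay lookup, and delta / max_attempts are illustrative values rather than the system's actual settings.

```python
def register(directory, chunk_id, peer_addr, delta=1, max_attempts=4):
    """Pseudo-DHT registration: on a collision, retry with K' = K + (n-1)*delta
    so that each key keeps exactly one value."""
    for n in range(1, max_attempts + 1):
        key = chunk_id + (n - 1) * delta      # forward key change
        if key not in directory:              # no collision: store (K', V)
            directory[key] = peer_addr
            return key
    return None                               # gave up after max_attempts

directory = {}
print(register(directory, 10, "peer1:4000"))  # -> 10
print(register(directory, 10, "peer2:4000"))  # -> 11 (first attempt collides)
```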

  10. Pseudo-DHT: Retrieval • Seeking video chunk i • Retrieve(i) may return a “miss” • Repeat Retrieve(K’) with different keys, K’ = i - (n-1)Δ (n = # attempts, Δ = offset base), until a hit occurs • Best-effort search, different from the original DHT • A peer found at an earlier chunk (e.g., a hit at i-2 after misses at i and i-1) delivers the chunks from there up to the consumer peer’s position (see the sketch below)
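A matching Retrieve sketch with the same dict stand-in; it probes backwards so that any peer it finds caches video at or before the wanted position and can serve the consumer from there.

```python
def retrieve(directory, chunk_id, delta=1, max_attempts=4):
    """Pseudo-DHT retrieval: probe K' = i - (n-1)*delta until a hit, or report
    a miss after max_attempts lookups (best-effort, unlike the original DHT)."""
    for n in range(1, max_attempts + 1):
        key = chunk_id - (n - 1) * delta      # backward key change
        if key in directory:                  # hit
            return key, directory[key]
    return None                               # best-effort miss

directory = {8: "peer1:4000"}
print(retrieve(directory, 10))                # misses 10 and 9, hits 8
```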

  11. Outline • P2TSS system • Pseudo-DHT: Distributed Search • Performance evaluation • Numerical analysis • Simulation study

  12. Preliminaries for Analysis • Symbols • Np: number of peers (or nodes) • L: video length (in seconds) • Q: video chunk size (in seconds) • The ratio between Np and the number of available slots, M (= L/Q)

  13. Registration Latency • Independence model for the number of collisions, C • A more sophisticated (dependence) model, where A denotes the number of peer arrivals in Q seconds
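As one plausible form of the independence model (an assumption, not necessarily the slide's exact expression), suppose each attempt collides independently with probability equal to the slot occupancy ratio Np/M; the number of collisions C is then geometric:

```latex
\Pr(C = c) \approx \rho^{c}(1-\rho), \qquad
\mathbb{E}[\text{attempts}] = \mathbb{E}[C] + 1 \approx \frac{1}{1-\rho},
\qquad \rho = \frac{N_p}{M}
```

The dependence model refines this by conditioning on A, the number of peer arrivals within the same Q-second slot.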

  14. Experimental Setup • Pseudo-DHT implemented in PlanetSim [Garcia et al., 2004] • Simulation setup • Maximum number of successive lookups: 4 • Video length L: 7200s • Buffer size D: 240s • Chunk size Q: 5, 10, 15, 30s

  15. Registration Latency • When the ratio Np/M is small, the models match the simulation results • Registration latency with forward and backward key changes is identical • Video chunk size Q: 5s

  16. Retrieval Latency • Empirical statistics • Xk: 1 if video chunk Bk is occupied, 0 otherwise • Conditional hit probability Pr(Xi-j = 1 | Xi = 0) (j: offset base) • With a larger offset base Δ, the correlation between successive retrievals becomes weaker • Modeling the retrieval latency, i.e., the number of lookups N (see the sketch below)
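A hedged sketch of a model of this shape (an assumed form, not the slide's exact expression), writing p_n for the conditional probability that the n-th probed chunk is occupied given that the earlier probes missed:

```latex
\mathbb{E}[N] \;\approx\; \sum_{n=1}^{n_{\max}} n \, p_n \prod_{m=1}^{n-1} (1 - p_m),
\qquad
p_n = \Pr\!\left(X_{i-(n-1)\Delta} = 1 \,\middle|\, \text{previous probes missed}\right)
```

with the sum truncated at the maximum number of successive lookups (4 in the experiments), matching the best-effort search.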

  17. Offset Base and Retrieval Latency • Retrieval latency decreases as the offset base increases • Video chunk size Q: 5s

  18. Video Chunk Size and Retrieval Latency • As chunk size Q increases, retrieval latency decreases

  19. Overhead of Key Storage • Balls and bins problem: how many nodes (bins) hold k keys (balls)? • Bins are created by the nodes’ positions on the [0,1) key/node space • Random keys fall into [0,1) uniformly • Statistically, larger bins receive more balls

  20. Overhead of Key Storage • One node’s probability of storing k keys • Observations • Low overhead: keys are spread out over the overlay • 50% of nodes store no keys when N = K (here N = 300, K = 300); see the sketch below
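A standard consistent-hashing approximation that agrees with these observations (offered as an assumption, not necessarily the slide's exact formula): with N node IDs and K keys placed independently and uniformly on the ring, the number of keys a given node stores is approximately geometric,

```latex
\Pr(\text{a node stores } k \text{ keys}) \;\approx\;
\frac{N}{N+K}\left(\frac{K}{N+K}\right)^{k},
\qquad
N = K \;\Rightarrow\; \Pr(0 \text{ keys}) \approx \tfrac{1}{2}
```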

  21. Conclusion and Future Work • Proposed Pseudo-DHT • Allows peers to register/retrieve video chunks in a scalable way • Slightly different from the original DHT due to video continuity • Spreads (key, value) items out over the overlay • P2TSS and Pseudo-DHT • Application to a P2P system • Thorough evaluation with analysis and simulations • Future research topics • Changing Q dynamically according to the peer population size • Considering heterogeneous peer uplink capacities for registration

  22. Thank you!

  23. Backup Slides

  24. Preliminaries for Analysis • [Figure: video chunk ID space with slots 0 to M-1 and occupancy indicators X0, X1, X2, ..., XM-1]

  25. Sample Key/Node Space • The interval between nodes is not constant • [Figure: sample key/node space showing nodes (peers) and keys (DSB info)]

  26. Chord: Distributed Lookup • Chord: a distributed lookup overlay [Stoica et al., 2001] • Nodes (peers) and keys are mapped to a common space • Fast lookup via a finger table; lookup time: O(log(Np)) • Example key/node space (N: node, K: key)

  27. Caching Video • Peers locally determine which portion to cache • Distributed Stream Buffer (DSB) • A peer’s local buffer holding a fraction of the video • A cache of finite size (e.g., 2 to 4 minutes of video) • Independent of the playback buffer • Static contents in the cache • No content change once the cache is full • Provides a baseline performance

  28. A Lookup on the Chord Overlay • Fast search using a finger table • Each node has more detailed knowledge about the nodes closer to it

  29. Registration • Approach 2: Simplified dependence model • Ai: number of insertions for key i (the number of peer arrivals during slot i); see the note below
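One way to make this concrete, assuming peer arrivals form a Poisson process with rate Np/L (an assumption): the number of arrivals that fall into the Q-second slot for key i is then

```latex
A_i \sim \text{Poisson}\!\left(\frac{N_p\,Q}{L}\right) = \text{Poisson}\!\left(\frac{N_p}{M}\right)
```

which ties the registration latency at key i to how many peers arrive during that slot rather than treating attempts as independent.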

  30. Simultaneous Retrieval

  31. Retrieval Analysis • With larger offsets: (+) the number of lookups decreases slightly; (-) peers have to switch to another peer earlier • The simulation results match the model • Parameters • Np = 500 • L = 7200s • D = 480s • Q = 5s • Number of slots: 1441 (M = 1440)

  32. Analysis: Overhead for Key Storage • Poisson process property: given N(t) = N, the N arrival times are independently and uniformly distributed • In a Poisson process, the interarrival time between events is exponentially distributed • The converse is also true • Finally, combining these properties yields the key-count distribution summarized on slide 20
