
Study of Replication Approach for Video Streaming in P2P Networks


Presentation Transcript


  1. Study of Replication Approach for Video Streaming in P2P Networks Wilson, W.F. Poon The Chinese University of Hong Kong

  2. Content • Introduction • Related Works • Full Replication Approach • Replication Profile • Placement Policy • Selection Scheme • Experimental Results • Erasure Code Approach • Conclusion

  3. Introduction (1) • Providing video streaming services has long been a research topic • parallel server designs such as RAID • multicast/broadcast transmission schemes • distributed VoD systems (scalability and reliability) • Tremendous growth in the computing power of personal computers • peer-to-peer (p2p) systems • Peers contribute storage, content and bandwidth

  4. Introduction (2) • Two types of p2p systems • Unstructured, e.g. KaZaA and Gnutella • Peers are not organized into highly-structured overlays • Content is randomly assigned to peers • Structured, e.g. CAN, Chord • Peers are organized into highly-structured overlays • Distributed hash table (DHT) substrates are used • Keys are deterministically assigned to peers • However, most of these p2p systems target file sharing/web caching services

  5. Introduction (3) • Previous work mainly focused on • search mechanisms • storage management • p2p video streaming has not been thoroughly studied • Investigate whether such a p2p system is suitable for supporting video streaming applications

  6. Introduction (4) • (Figure: full replication stores whole-video replicas on serving peers, while erasure-code replication spreads fragments across serving peers and free riders) • One of the major challenges of a p2p system • Peer machines may be turned on and off in an unpredictable manner • The system experiences very poor availability • Two strategies are considered • Full Replication (whole file) • Erasure Code Replication (data blocks)

  7. Related Works (1) • Unlike the traditional distributed VoD systems, the components of a p2p system experience very poor availability • Prior works dealt mainly with system/file availability • Most Frequently Requested (MFR) [KANG02] was proposed to maximize the file hit rate of a structured p2p system • [LV02] and [COHE02] studied optimal replication in unstructured peer-to-peer networks to reduce random search times • Lee et al. [LEE02] proposed a server-less architecture for video streaming • It uses erasure-code replication to distribute the data among peers

  8. Peer’s Behavior • The majority of peers had availability rates under 20 percent (S. Saroiu, P. K. Gummadi, and S. D. Gribble, “A measurement study of peer-to-peer file sharing systems,” MMCN, 2002)

  9. p2p Streaming System • Similar to traditional VoD systems • Replication strategy • Full Replication vs. Erasure Code • Replication Profile • Full Replication: number of replicas of each video • Erasure Code: ratio of original to redundant data (redundancy overhead) • Data Placement • Full Replication: distribute the replicas of the videos • Erasure Code: distribute the data blocks and redundant blocks • Peer Selection Policy • Select the peers (servers) that stream the requested video

  10. Full Replication Approach (1) • A network has G peers, of which I peers (serving peers) store a set of J different videos • The other peers (free riders) just make requests but do not contribute their resources • Assume • the peer “up” probability is Tup/(Tup + Tdown) • Tup is the mean up-time duration • Tdown is the mean down-time duration • Assume • Ni is the amount of shared storage in peer i • bj is the size of video j • qj is the request probability for video j • Cj is the bit rate of video j
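
The relationships among these parameters can be captured in a small data structure. Below is a minimal sketch, assuming the "up" probability equals Tup/(Tup + Tdown) (consistent with the Tup = 3600s, Tdown = 32400s and 0.1 figures used later); the class and field names are illustrative, not taken from the original work.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class FullReplicationModel:
    """Illustrative container for the model parameters on this slide."""
    G: int                      # total number of peers in the network
    I: int                      # number of serving peers
    J: int                      # number of distinct videos
    T_up: float                 # mean up-time duration (seconds)
    T_down: float               # mean down-time duration (seconds)
    N: List[float] = field(default_factory=list)   # N[i]: shared storage of peer i
    b: List[float] = field(default_factory=list)   # b[j]: size of video j
    q: List[float] = field(default_factory=list)   # q[j]: request probability of video j
    C: List[float] = field(default_factory=list)   # C[j]: bit rate of video j

    @property
    def up_probability(self) -> float:
        # Fraction of time a peer is available, e.g. 3600 / (3600 + 32400) = 0.1
        return self.T_up / (self.T_up + self.T_down)
```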

  11. Full Replication Approach (2) • sj is the number of replicas for video j • The request rate seen by a serving peer that stores video vj falls as sj grows, since requests for vj are shared among its replicas • System storage constraint: the total size of all replicas cannot exceed the total shared storage, i.e. Σj sj·bj ≤ Σi Ni

  12. Full Replication Approach (3) • How to determine the number of replicas, sj, for each video • Replication Profile • Random • Maximize the hit rate (MaxHit) • Minimize the request rate per peer (MinReq) • Random • In a p2p system, the simplest way is to randomly determine the number of replicas for each video • e.g. a user may need to manually copy the videos into the shared storage

  13. Full Replication Approach (4) • MaxHit [KANG02] • The optimal number of replicas is determined by maximizing the system hit rate, subject to the system storage constraint • This optimization problem can be solved efficiently by resource-allocation algorithms such as dynamic programming • The result is the optimal replication profile {sj}
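
A minimal sketch of how such a replication profile could be computed, assuming the hit rate of video j with sj replicas and peer up probability alpha is qj·(1 - (1 - alpha)^sj) (independent peer availability; the slide's exact objective is not reproduced here). Greedy marginal allocation is used in place of the dynamic-programming solver mentioned above; all names are illustrative.

```python
import heapq

def maxhit_profile(q, b, alpha, total_storage, max_replicas):
    """Greedy marginal allocation for a MaxHit-style objective:
    repeatedly add the replica giving the largest gain in expected
    hit rate per unit of storage, until the shared storage runs out."""
    J = len(q)
    s = [0] * J                      # replica counts s_j
    used = 0.0

    def marginal_gain(j):
        # Adding one replica raises q_j * (1 - (1 - alpha)**s_j) by this amount.
        return q[j] * ((1 - alpha) ** s[j]) * alpha

    heap = [(-marginal_gain(j) / b[j], j) for j in range(J)]
    heapq.heapify(heap)
    while heap:
        _, j = heapq.heappop(heap)
        if s[j] >= max_replicas or used + b[j] > total_storage:
            continue                 # video j can take no more replicas
        s[j] += 1
        used += b[j]
        heapq.heappush(heap, (-marginal_gain(j) / b[j], j))
    return s
```

For equal-sized videos this greedy marginal allocation matches the optimal resource-allocation solution, since the assumed objective is separable and concave in sj; with unequal video sizes it is only a heuristic.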

  14. Full Replication Approach (5) • MinReq • For video streaming, a video request can be served only if: • the requested video is available in the system • the serving peers have the required bandwidth • The number of replicas should be determined by minimizing the request load on the serving peers, subject to the system storage constraint
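
A sketch of a MinReq-style allocation, under the assumption that the per-replica request load of video j is proportional to qj/sj (requests for a video are split evenly across its replicas); the slide's exact formulation is not reproduced, and the helper names are illustrative.

```python
import heapq

def minreq_profile(q, b, total_storage):
    """Start with one replica per video, then repeatedly give an extra
    replica to the video whose per-replica load q_j / s_j is currently
    the largest, while shared storage remains."""
    J = len(q)
    s = [1] * J
    used = float(sum(b))
    if used > total_storage:
        raise ValueError("not enough storage for one replica of every video")

    heap = [(-q[j] / s[j], j) for j in range(J)]   # max-heap on per-replica load
    heapq.heapify(heap)
    while heap:
        _, j = heapq.heappop(heap)
        if used + b[j] > total_storage:
            continue                               # video j can no longer fit
        s[j] += 1
        used += b[j]
        heapq.heappush(heap, (-q[j] / s[j], j))
    return s
```

Starting from one replica per video guarantees availability of every video; each additional replica then relieves the currently busiest serving peers. A cap of one copy per serving peer could be added the same way as in the MaxHit sketch.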

  15. Number of Replicas • Assume: • qj follows a Zipf distribution, Zipf factor = 0.271 • All the peers have the same storage and streaming capacity • The length (L) and bit rate (C) of all the videos are the same • I=1500, J=100, up probability = 0.1 (Tup=3600s and Tdown=32400s) • (Figure: number of replicas per video for (a) storage S = 2 and (b) storage S = 10)

  16. Full Replication Approach (6) • How to store the replicas of the videos among the peers • Placement Policy • Random • Smallest Load First (SLF) [ZHOU02] • Random • The replicas of the videos are randomly stored among the peers • The system load is unbalanced

  17. Placement Policy • Smallest Load First (SLF) • Sort the replicas of the videos in non-increasing order of request weight, wj • In each iteration • select the I replicas with the greatest request weight • distribute these I copies to the I peers • (Rules: the replica with the greatest request weight is placed on the peer with the smallest load, AND a peer cannot store more than one copy of the same video)
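
A sketch of the SLF rules above, assuming peer load is the sum of the request weights already placed on the peer; per-peer storage-capacity checks are omitted for brevity and all names are illustrative.

```python
def slf_place(replicas, num_peers):
    """Smallest Load First placement.

    `replicas` is a list of (video_id, weight) pairs, one entry per replica,
    e.g. weight = q_j / s_j.  Returns peer_contents[i]: the set of video ids
    stored at peer i."""
    # Sort all replicas by request weight, largest first.
    ordered = sorted(replicas, key=lambda r: r[1], reverse=True)
    load = [0.0] * num_peers
    peer_contents = [set() for _ in range(num_peers)]

    # Each iteration hands out one batch of `num_peers` replicas.
    for start in range(0, len(ordered), num_peers):
        batch = ordered[start:start + num_peers]
        assigned = set()                     # each peer takes one replica per round
        for video, weight in batch:
            # Heaviest remaining replica goes to the least-loaded eligible peer
            # that does not already hold a copy of the same video.
            candidates = sorted((i for i in range(num_peers) if i not in assigned),
                                key=lambda i: load[i])
            for i in candidates:
                if video not in peer_contents[i]:
                    peer_contents[i].add(video)
                    load[i] += weight
                    assigned.add(i)
                    break
    return peer_contents
```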

  18. Smallest Load First • Example • 4 peers, each can store 2 copies of the videos • 3 videos • (Figure: placement after Iteration 1 and Iteration 2 across peers P1–P4)

  19. Requests per Peer (1) • MinReq • I=1500, J=100, up probability = 0.1 (Tup=3600s and Tdown=32400s)

  20. Requests per Peer (2) • MaxHit • I=1500, J=100, up probability = 0.1 (Tup=3600s and Tdown=32400s)

  21. Selection Scheme • How to choose a peer to stream a video • If inappropriate serving peers are selected, more requests will be rejected by the system • When a peer requests a video • it gets a list of serving peers storing the requested video • Random • Randomly select one peer in the list as the server • Least Load First (LLF) • Use the current available uplink bandwidth, Bup • Choose the peer in the list with the maximum Bup • Randomly choose a peer if more than one peer has the same Bup • What if a peer leaves the system while serving other peers? • If the serving peer disconnects, the peer being served should find another peer using the same algorithm • If no serving peer is available, the playback is stopped
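
A minimal sketch of the LLF choice, assuming each candidate serving peer reports its currently available uplink bandwidth Bup; ties are broken at random, as stated above. The names are illustrative.

```python
import random

def llf_select(candidates):
    """Least Load First selection.

    `candidates` is a list of (peer_id, available_uplink_bandwidth) pairs for
    the peers that store the requested video and are currently up.  Returns
    the chosen peer_id, or None if no serving peer is available."""
    if not candidates:
        return None                          # playback cannot start / must stop
    best_bup = max(bup for _, bup in candidates)
    best = [pid for pid, bup in candidates if bup == best_bup]
    return random.choice(best)               # random tie-break among equals
```

The same function can be re-invoked with a fresh candidate list when a serving peer disconnects mid-stream, matching the recovery rule above.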

  22. Simulation • Run simulation experiments • Request probabilities follow a Zipf distribution with parameter 0.271 • Tup and Tdown follow exponential distributions • Measure the successful-playback rate of the system • A playback is successful only if the peer can play the video through to the end
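
A sketch of how such a workload could be generated, assuming the common VoD convention that a Zipf factor of 0.271 means video j is requested with probability proportional to 1/j^(1 - 0.271), and that up/down durations are exponential with means Tup and Tdown; both conventions are assumptions about the setup, not statements from the slides.

```python
import random

def zipf_probabilities(num_videos, skew=0.271):
    # q_j proportional to 1 / j**(1 - skew); skew = 0 gives a pure Zipf law.
    weights = [1.0 / (j ** (1.0 - skew)) for j in range(1, num_videos + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def sample_request(q):
    # Pick a video index (0-based) according to the request probabilities q.
    return random.choices(range(len(q)), weights=q, k=1)[0]

def sample_up_down(t_up=3600.0, t_down=32400.0):
    # Exponentially distributed up/down durations with the given means.
    return random.expovariate(1.0 / t_up), random.expovariate(1.0 / t_down)
```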

  23. Results – Arrival Rate • Number of peers = 1500 • Number of videos = 100 • Up-time probability = 0.1 • Peer storage = 10 videos • Video length = 7200s • The success rates of “MaxHit_SLF_LLF” and “Random” are similar

  24. Results – Peer Availability • Arrival rate = 0.04/s • Number of peers = 1200 • Number of videos = 100 • Peer storage = 10 videos • Video length = 7200s • The system provides reliable services when • more peers are willing to share their resources • peer availability is increased

  25. Is Erasure Code Better? • Erasure codes such as Reed-Solomon Erasure (RSE) • A video, vj, is divided into I blocks • (I-h) data blocks and h redundant blocks • The blocks are evenly distributed to all the I peers • To recover the video, the user must receive any (I-h) out of the I blocks • Using erasure-code replication • System gains • requires less bandwidth and storage • maintains load balance • Overheads • a larger number of messages than in a replicated system • requires real-time scheduling to reassemble data blocks • maintaining system consistency (e.g. updating redundant data)
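
The recovery condition translates directly into an availability figure. A minimal sketch, assuming the peers holding blocks are up independently with probability alpha, so the video is recoverable exactly when at least (I-h) of the I block holders are up (a binomial tail probability); this is an illustration, not a formula taken from the slides.

```python
from math import comb

def recovery_probability(I, h, alpha):
    """Probability that at least (I - h) of the I peers holding blocks are up,
    i.e. that enough blocks can be fetched to decode the video."""
    return sum(comb(I, k) * alpha ** k * (1 - alpha) ** (I - k)
               for k in range(I - h, I + 1))
```

With I = 1500 and alpha = 0.1, only about 150 block holders are up on average, so (I-h) must sit comfortably below 150 for recovery to be likely; pushing h that high is what inflates the storage overhead discussed on the next slide.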

  26. Erasure Code • Assume the system maintains the same reliability level for all the videos • The storage overhead is the same for all the videos • If any (I-h) out of I blocks can recover the video, the storage overhead factor is I/(I-h) • If the storage overhead of erasure-code replication is the same as that of full replication, a peer who requests a video is served by (I-h) serving peers simultaneously

  27. Full Replication vs. Erasure Code • A portion of each peer’s uplink bandwidth is consumed as overhead • (Figure: comparison for (a) storage S = 2 and (b) storage S = 10)

  28. Conclusion • System availability and bandwidth capacity are important elements in the design of p2p streaming systems • Can system performance be further improved by the erasure-coding scheme? • The full-replication system turns out to be better • The overhead of erasure coding is too high • Develop a model in order to closely examine different parameters • system size • overhead • bandwidth capacity
