
Push-to-Peer Video-on-Demand System






Presentation Transcript


  1. Push-to-Peer Video-on-Demand System

  2. Abstract • Content is proactively pushed to peers and persistently stored before the actual peer-to-peer transfers. • Content placement and associated pull policies allow optimal use of uplink bandwidth. • Performance analysis of such policies in controlled environments, such as DSL networks under ISP control. • A distributed load-balancing strategy for the selection of serving peers.

  3. Outline • Introduction • Network Setting and Push-to-Peer Operation • Data Placement and Pull Policies • Randomized Job Placement • Conclusion

  4. Introduction

  5. Introduction • Pull-based system design: a peer pulls content only if the content is of interest to it. • Our objective is to design a Push-to-Peer VoD system. • Video is first pushed to peers. • This first step is performed under provider or content-owner control and can be performed during times of low network utilization. • A peer may store content that it itself has no interest in, unlike in a traditional pull-only P2P system.

  6. Introduction • This system is applicable to cases in which peers are long-lived and willing to have content proactively pushed to them. • We consider the design in a network of long-lived peers where: • upstream bandwidth and peer storage are the primary limiting resources, • the bandwidth available among the peers is constant.

  7. Network Setting and Push-to-Peer Operation

  8. Network Setting and Push-to-Peer Operation • Describe the network setting for this system. • Give an overview of the push and pull phases of operation. • Describe our video playback model.

  9. Three Components • The Push-to-Peer system comprises a content server, a control server, and boxes at the user premises (STBs). • User premises: STBs, customers. • Content server: • located on the content provider's premises, • pushes content to the boxes. • Control server: • located on the content provider's premises, • provides a directory service to boxes, • provides management and control functionalities.

  10. Two Phases • Content distribution proceeds in two phases: • Push phase: • The content server pushes content to each box. • This occurs periodically: • when bandwidth is plentiful, • in a background, low-priority mode. • After pushing content, the content server disconnects and does not provide additional content. • What portions of which videos should be placed on which boxes?

  11. Two Phases • Pull phase: • Boxes respond to user commands to play content. • Boxes do not have all of the needed content at the end of the push phase. • We do not consider the possibility of boxes proactively pushing content among themselves.

  12. Assumptions • Assumptions about the DSL network and the boxes: • Upstream and downstream bandwidth • Peer storage • Peer homogeneity

  13. Upstream and downstream bandwidth • The upstream bandwidth is a constrained resource, most likely smaller than the video encoding/playback rate. • When a peer uploads video to N different peers, the upstream bandwidth is shared equally among those peers. • Video is transferred reliably. • Downstream bandwidth is sufficiently large that it is never the bottleneck when downloading video. • The downstream bandwidth is larger than the video encoding/playback rate.
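A minimal numeric sketch of these bandwidth assumptions (the values of Bup, Renc, and N below are illustrative, not taken from the paper):

```python
# Sketch of the fair-sharing assumption: when a box uploads to N peers,
# its upstream bandwidth B_up is split equally among them.
# (All numeric values here are illustrative assumptions.)

B_UP = 0.5e6    # upstream bandwidth per box, bits/s
R_ENC = 1.0e6   # video encoding / playback rate, bits/s

def fair_upstream_share(n_peers: int) -> float:
    """Equal share of the box's upstream bandwidth per served peer."""
    return B_UP / n_peers

# The upstream link alone (0.5 Mb/s) is smaller than the playback rate
# (1 Mb/s), which is why a viewer must pull from several boxes in parallel.
print(fair_upstream_share(1) < R_ENC)  # True: one box cannot feed playback
```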

  14. Peer storage • Boxes have a hard disk that can store content. • The disk may also store movie prefixes, which are used to decrease startup delay. • We do not consider the play-out buffer.

  15. Peer homogeneity • All peers have the same upstream bandwidth and the same amount of hard disk storage.

  16. Video Playback Model • Each movie is chopped into windows of contiguous data of size W. • A full window needs to be available to the user before it can be played. • A user can play such a window once it is available, without waiting for subsequent data. • The window size is a tunable parameter. • The window is the unit of random access to a video. • Windows allow us to support VCR operations. • Each window is further divided into smaller data blocks.
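A minimal sketch of this chopping, assuming the window size and the number of blocks per window are free parameters (the function name and values below are illustrative):

```python
# Sketch: chop a movie into windows of W bytes, then each window into blocks.

def chop_movie(movie_size: int, window_size: int, blocks_per_window: int):
    """Return a list of windows, each a list of (offset, length) blocks."""
    block_size = window_size // blocks_per_window
    windows = []
    for w_start in range(0, movie_size, window_size):
        w_len = min(window_size, movie_size - w_start)
        blocks = [(w_start + b, min(block_size, w_len - b))
                  for b in range(0, w_len, block_size)]
        windows.append(blocks)
    return windows

# A window is the unit of random access (seek / VCR operations land on a
# window boundary); blocks are the unit of transfer between boxes.
windows = chop_movie(movie_size=10_000_000, window_size=1_000_000,
                     blocks_per_window=4)
print(len(windows), len(windows[0]))  # 10 windows, 4 blocks each
```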

  17. Video Playback Model • Blocking Model: when a new request cannot be served, the request is dropped. • Waiting Model: the request is enqueued.

  18. Data Placement and Pull Policies

  19. Data Placement and Pull Policies • Full-Striping Data Placement • Code-Based Data Placement • We assume that there are M boxes. • Each window of a video has size W.

  20. Full-Striping Data Placement • Stripes each window of a movie over all M boxes. • Every window is divided into M blocks, each of size W/M. • Each block is pushed to only one box. • Each box stores a distinct block of a window. • A full window is reconstructed at a box by concurrently downloading M-1 distinct blocks from the other M-1 boxes. • A download request generates M-1 sub-requests.

  21. Full-Striping Data Placement
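A minimal sketch of the full-striping layout described above, under the simplifying assumption that block j of every window is pushed to box j (the mapping and function names are illustrative):

```python
# Sketch: full striping of a movie's windows over M boxes.
# Block j of every window is pushed to box j, so each box stores W/M per window.

def full_striping_placement(num_windows: int, M: int):
    """Map (window, block) -> box under full striping."""
    placement = {}
    for w in range(num_windows):
        for j in range(M):
            placement[(w, j)] = j          # block j always lives on box j
    return placement

def serving_boxes(requesting_box: int, M: int):
    """A box reconstructs a window from the other M-1 boxes (M-1 sub-requests)."""
    return [b for b in range(M) if b != requesting_box]

placement = full_striping_placement(num_windows=3, M=4)
print(serving_boxes(requesting_box=0, M=4))  # [1, 2, 3]
```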

  22. Sub-Request Limit • The number of sub-requests that a box can serve simultaneously is limited. • Renc: video encoding/playback rate. • Renc/M: the rate at which blocks are received from each of the M-1 target boxes. • With upstream bandwidth Bup, we should limit a box to Kmax distinct sub-requests: • Kmax = Bup · M / Renc
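A quick worked instance of this bound, using illustrative numbers that are not taken from the paper (taking the floor is an assumption, to keep the count whole):

```python
import math

# Worked example of the sub-request bound K_max = B_up * M / R_enc
# (numbers below are illustrative, not from the paper).

B_UP = 0.5e6   # upstream bandwidth per box, bits/s
R_ENC = 1.0e6  # video encoding / playback rate, bits/s
M = 10         # number of boxes a window is striped over

# Each sub-request is served at R_ENC / M, so a box with upstream B_UP
# can sustain at most B_UP / (R_ENC / M) = B_UP * M / R_ENC of them.
k_max = math.floor(B_UP * M / R_ENC)
print(k_max)  # 5 concurrent sub-requests for these numbers
```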

  23. Code-Based Data Placement • The number of sub-requests that a box can serve simultaneously is bounded by y, with y < M-1. • This scheme applies rateless coding. • This scheme divides each window into k source symbols and generates C·k coded symbols, where C = M(1 + ε)/(y + 1) is the expansion ratio and C > 1. • For each window, the C·k symbols are evenly distributed to all M boxes such that each box keeps C·k/M distinct symbols. • A viewer can reconstruct a window of a movie by concurrently downloading any C·k·y/M distinct symbols from an arbitrary set of y boxes. • Unlike full striping, only y boxes are needed to download the video.
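A minimal numeric sketch of the symbol bookkeeping above (parameter values are illustrative, the rateless code itself is not implemented, and it is assumed that the requesting box also uses the coded symbols it stores locally):

```python
# Sketch of the code-based placement arithmetic (illustrative values only;
# the rateless/fountain code itself is not implemented here).

M = 10        # number of boxes
y = 4         # boxes serving a window concurrently, y < M - 1
k = 100       # source symbols per window
eps = 0.05    # decoding overhead epsilon of the rateless code (assumed)

C = M * (1 + eps) / (y + 1)       # expansion ratio, C > 1
coded_symbols = C * k             # coded symbols generated per window
per_box = coded_symbols / M       # distinct symbols stored on each box

# Downloading from any y boxes yields C*k*y/M symbols; together with the
# per_box symbols assumed to be held locally, this gives (1 + eps)*k,
# enough for the rateless decoder to reconstruct the window.
downloaded = per_box * y
print(round(downloaded + per_box), round((1 + eps) * k))  # both 105
```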

  24. Randomized Job Placement

  25. Randomized Job Placement • The decision of where to place and serve the sub-requests of a job. • We propose a distributed load-balancing strategy for the selection of serving peers. • The strategy we consider for initial job placement is as follows:

  26. Randomized Job Placement • When a download request is generated, d distinct boxes are randomly chosen from the overall M boxes. • The load, measured in terms of the fair bandwidth share that a new job would get, is measured on all d probed boxes. • Finally, sub-requests are placed on the y least-loaded boxes among the d probed boxes, provided that each of the y sub-requests gets a sufficiently large fair bandwidth share. • If any of the selected boxes cannot guarantee such a fair share, the request is dropped.
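A minimal sketch of this probe-and-select rule under the blocking model, assuming a simple load metric (the number of active sub-requests per box); the function names and numeric values are illustrative:

```python
import random

# Sketch of the randomized job placement rule: probe d random boxes,
# place y sub-requests on the least loaded ones, drop the request if the
# fair bandwidth share on any chosen box would be too small.
# (B_UP, R_ENC and the load metric are illustrative assumptions.)

B_UP = 0.5e6   # upstream bandwidth per box, bits/s
R_ENC = 1.0e6  # playback rate, bits/s

def place_request(loads, d, y, M, required_share):
    """loads[b] = number of active sub-requests currently served by box b."""
    probed = random.sample(range(M), d)
    # Least loaded first: the fair share a new job would get is B_UP / (load + 1).
    chosen = sorted(probed, key=lambda b: loads[b])[:y]
    if any(B_UP / (loads[b] + 1) < required_share for b in chosen):
        return None                      # blocking model: drop the request
    for b in chosen:
        loads[b] += 1                    # admit the y sub-requests
    return chosen

loads = [0, 3, 1, 5, 2, 0, 4, 1]         # example load on M = 8 boxes
print(place_request(loads, d=4, y=2, M=8, required_share=R_ENC / 8))
```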

  27. Conclusion

  28. Conclusion • We proposed a P2P approach and showed the theoretical upper performance bounds that are achieved if all resources of all peers are perfectly pooled. • However, perfect pooling is not feasible in practice. • Therefore, we proposed a randomized job placement algorithm. • We plan to do a more systematic analysis of placement schemes that take movie popularity into account.
