
Using P2P Technologies for Video on Demand (VoD)

Using P2P Technologies for Video on Demand (VoD). Limor Gavish (limorgav at tau.ac.il), Yuval Meir (wil at tau.ac.il), Tel-Aviv University. Based on: Cheng Huang, Jin Li, Keith W. Ross, "Can Internet Video-on-Demand Be Profitable?", in Proc. of ACM SIGCOMM, August 2007




  1. Using P2P Technologies for Video on Demand (VoD) Limor Gavish limorgav at tau.ac.il Yuval Meir wil at tau.ac.il Tel-Aviv University Based on: Cheng Huang, Jin Li, Keith W. Ross, "Can Internet Video-on-Demand Be Profitable?", in Proc. of ACM SIGCOMM, August 2007; N. Parvez, C. Williamson, Anirban Mahanti, Niklas Carlsson, "Analysis of BitTorrent-like Protocols for On-Demand Stored Media Streaming," SIGMETRICS 2008

  2. VoD - Introduction • VoD providers enable users to watch videos online without waiting for the entire file to download. • Examples: YouTube, MSN Video, Flickr, Yahoo Video.

  3. Traditional VoD System design • All users download the content directly from the server (or a content distribution network).

  4. The Problem • Bandwidth is a significant expense for VoD providers. • For example, by one estimate, YouTube was paying about $1 million/month for bandwidth alone as of 2007. • Demand is growing. • Providers want to increase video quality (and therefore bandwidth).

  5. Suggested Solution • Peer assisted VoD

  6. Peer assisted VoD • There is still a server. • The peers that are viewing the video help in redistributing it. • Each MB uploaded from one peer to another means 1 MB less that the server has to upload.

  7. Comparing Users' Demand and Upload Resources • All information was gathered from a large-scale trace of the MSN Video service from April to December 2006. • Measurements of download bandwidths characterized the peers' available resources. • Providers may give users incentives for bandwidth contribution, especially at peak hours.

  8. Peer Assistance May Help • Based on measurements, the day with the maximum traffic in April had bandwidth requirements from the server and total upload resources as described in the following figure: • Conclusion: peer-assisted VoD may perform well.

  9. Three possible modes of the system • The figure on the previous slide exhibits significantly more upload resources than peer demand. This is called surplus mode. • There are 3 possible modes: • Surplus Mode – total upload resources of peers greater than total demand. • Deficit Mode – total upload resources of peers less than total demand. • Balanced Mode – total upload resources of peers approximately equal to total demand.

  10. No-Prefetching policy (1) • Each user downloads at the playback rate. • No pre-fetching for future needs. • Assume n users in the system, ordered by arrival. • The user that arrived first can only download from the server. • User k can download from users 1,…,k−1, and from the server if peer supply falls short.

  11. No-Prefetching policy (2) • Let ui be the upload bandwidth of the ith user. • (u1,…,un) is the state of the system. • s(u1,…,un) is the rate required from the server. • According to the previous slide we get: s(u1,…,un) = max over 1 ≤ k ≤ n of ( k·r − u1 − … − uk−1 ) • Where r is the video playback rate.
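The reasoning on this slide (user k plays at rate r and can be served only by users 1,…,k−1, with the server covering the worst shortfall) can be sketched in a few lines. The function name and the max-shortfall form are a reconstruction from the slide's argument, not code from the paper:

```python
def server_rate(uploads, r):
    """No-prefetching server rate s(u1,...,un) (sketch).

    uploads[k-1] is the upload bandwidth of the k-th arriving user;
    user k plays at rate r and can draw only on the upload bandwidth
    of users 1..k-1, so the server covers the largest shortfall.
    """
    best, cum = 0.0, 0.0
    for k in range(1, len(uploads) + 1):
        best = max(best, k * r - cum)  # demand of users 1..k minus supply of users 1..k-1
        cum += uploads[k - 1]
    return best

# Surplus mode: every peer uploads faster than the playback rate,
# so only the first user needs the server.
print(server_rate([2.0, 2.0, 2.0], r=1.0))  # 1.0
# Deficit mode: server rate approaches total demand minus peer supply.
print(server_rate([0.5, 0.5, 0.5], r=1.0))  # 2.0
```

The two calls illustrate the conclusions of the next two slides: in surplus mode the server rate stays near r, and in deficit mode it tracks D − S.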

  12. No-Prefetching policy (3) • In surplus mode, we conclude that: • The server upload rate is close to the playback rate (i.e., negligible). • Additional users may be added without increasing server bandwidth. • Video quality may be increased without increasing server bandwidth, until reaching balanced mode.

  13. No-Prefetching policy (4) • In deficit mode: • When supply S is substantially less than demand D, the server rate almost equals D − S. • Server resources increase dramatically in the move from balanced mode to deficit mode. • In all cases, the no-prefetching policy reduces the required server bandwidth.

  14. No-prefetching is not Optimal • Balanced mode is actually a dynamic equilibrium • The system fluctuates between deficit and surplus • No-prefetching is not optimal under these conditions – peer BW sits idle in the surplus phase, and server BW is consumed in the deficit phase.

  15. Prefetching • The server never sends prefetched content. • The server is only used to fulfill current demand. • A user may drain its reservoir before requesting new data.

  16. Two Prefetching Schemes • Water-leveling • Try to distribute surplus capacity equally. • All peers channel their surplus bandwidth to the peer with the least prefetched content. • Greedy • Each user dedicates its surplus upload bandwidth to the next user. • Both policies are nearly optimal in terms of average server bandwidth (the greedy policy is slightly better).
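The water-leveling idea can be illustrated with a small sketch. This is a hypothetical helper (`water_level` is not from the paper), assuming that in one round the surplus bandwidth translates into a divisible amount of prefetched content poured into the peers' reservoirs, always raising the lowest reservoirs first:

```python
def water_level(reservoirs, surplus, eps=1e-9):
    """Water-leveling sketch: distribute `surplus` units of prefetched
    content across reservoirs, always raising the currently lowest
    reservoir(s), so prefetched content ends up as even as possible."""
    out = list(reservoirs)
    remaining = surplus
    while remaining > eps:
        lo = min(out)
        low_idx = [i for i, v in enumerate(out) if v <= lo + eps]
        above = [v for v in out if v > lo + eps]
        # raise the lowest reservoirs together, at most up to the next level
        step = remaining / len(low_idx)
        if above:
            step = min(step, min(above) - lo)
        for i in low_idx:
            out[i] += step
        remaining -= step * len(low_idx)
    return out

# Peer 0 is far behind, so it is filled first; the rest is split evenly.
print(water_level([0, 2], 4))  # [3.0, 3.0]
```

The greedy policy differs only in where the surplus is aimed: instead of the lowest reservoir, each user channels its surplus to the next user in arrival order.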

  17. Simulation of the Greedy Policy • Based on data from the MSN video trace • We consider three cases: • All users watch the entire video without interactivity • Users may depart early, no interactivity • Both early departures and interactivity

  18. No Departures, No Interactivity (1) • The figure below compares performance of VoD with P2P for the most popular videos in April 2006:

  19. No Departures, No Interactivity (2) • The table below presents the results in the context of the 95th-percentile rule (defined on the next slide) • We observe that the greedy policy is close to the lower bound on server resources • N.P. stands for no prefetching

  20. 95th-Percentile Rule • The average server upload bandwidth is measured every 5 minutes throughout the month. • The ISP charges according to the 95th percentile of these values. • We will use this to measure the bandwidth cost for the service provider.
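The billing rule above is easy to compute with the nearest-rank method; `percentile95_charge` is an illustrative name, and the exact rounding convention an ISP uses is an assumption:

```python
import math

def percentile95_charge(five_min_rates):
    """Billed rate under the 95th-percentile rule: sort the month's
    5-minute average upload rates, discard the top 5%, and charge
    for the highest remaining sample (nearest-rank method)."""
    ranked = sorted(five_min_rates)
    k = math.ceil(0.95 * len(ranked)) - 1  # 0-based index of the 95th percentile
    return ranked[k]

# A 30-day month has 30*24*12 = 8640 five-minute samples; with rates
# 1..8640 Mbps the billed value is the 8208th-ranked sample.
rates = list(range(1, 8641))
print(percentile95_charge(rates))  # 8208
```

In effect the provider gets the busiest 5% of each month (about 36 hours) for free, which is why trimming peaks with peer assistance maps directly to a lower bill.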

  21. No Departures, No Interactivity (3) • We observe that: • P2P reduces the server rate dramatically • Server resources are barely needed, and only when the number of peers is small • We can offer much higher quality without significantly more server resources • Peer assistance is beneficial both for flash crowds (gold stream) and for long-lasting demand (silver stream)

  22. Early departures (1) • Session durations vary, especially when viewing longer videos. • The table below compares server rates for different system modes, for the silver stream • Bitrate scaling refers to scaling the video playback rate

  23. Early departures (2) • We conclude that: • Even with early departures, peer assistance provides dramatic improvement • Prefetching provides improvement over no-prefetching, particularly in balanced mode.

  24. User Interactivity (1) • Popular among long videos • According to the trace, 40% of videos longer than 30 minutes involved interactivity • A user may therefore have holes in its buffer • Two possible approaches for analysis: • Conservative: user bandwidth is zero after interactivity – a lower bound • Optimistic: assume there are no holes – an upper bound

  25. User Interactivity (2) • The plot below compares the two approaches for the traffic on April 18th, 2006 • We see that the loss of bandwidth due to interactivity is insignificant • Thus the results for early departures are also representative

  26. Summary of simulation • The savings under the 95th-percentile rule: • Server bandwidth may be reduced by 97% at current quality • Alternatively, triple the quality and still trim server bandwidth by 37.6%

  27. P2P is good for popular videos • 12,000 videos were available on MSN Video in April 2006 • Rank by popularity and classify into 4 groups • Compare the 95th percentile of each group • With P2P, popular videos account for a smaller fraction of server bandwidth • Conclusion: P2P is especially beneficial for popular videos

  28. ISP Friendly P2P (1) • We maximized server bandwidth savings using P2P • This approach is costly for ISPs • Observations have shown that most P2P traffic crosses ISP/entity boundaries

  29. ISP Friendly P2P (2) • Extreme approach: constrain P2P traffic to stay within entities • This increases server bandwidth, but is still better than client-server VoD • A balance between the two approaches is needed

  30. Summary • We have shown the potential of P2P for saving server bandwidth costs • With / without pre-fetching • The implications of user interactivity • We have discussed the implications for ISPs

  31. BitTorrent Protocols for VoD

  32. Introduction • As noted earlier, P2P may be extremely beneficial for VoD • We would like to analyze the performance of P2P VoD in a server-less setting • We will try to adapt the BitTorrent protocol to the constraints of VoD

  33. Piece Selection Policies (1) • As in BitTorrent, we assume that a file is obtained in pieces • In the usual BitTorrent protocol, peers use a Rarest-First policy to ensure high piece diversity • Downloaders prefer pieces that are rare among the peers in the swarm
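The rarest-first rule described in these bullets can be sketched as follows; `rarest_first_pick` and its arguments are illustrative, not BitTorrent's actual implementation:

```python
import random
from collections import Counter

def rarest_first_pick(have, neighbor_sets, rng=random):
    """Rarest-first piece selection (sketch): among pieces some neighbor
    can provide that we do not yet hold, pick one with the fewest
    copies in the neighborhood, breaking ties at random."""
    copies = Counter()
    for pieces in neighbor_sets:
        copies.update(pieces)          # count copies of each piece
    wanted = set(copies) - set(have)   # available pieces we still lack
    if not wanted:
        return None
    fewest = min(copies[p] for p in wanted)
    return rng.choice([p for p in wanted if copies[p] == fewest])

# Piece 2 has only one copy in the neighborhood, so it is chosen.
print(rarest_first_pick({0}, [{0, 1}, {0, 1}, {1, 2}]))  # 2
```

The random tie-break is what keeps piece diversity high across the swarm: different downloaders replicate different rare pieces.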

  34. Piece Selection Policies (2) • Is the rarest-first policy also efficient for on-demand streaming? • We will analyze the performance of the Rarest-First policy and compare it to two strict in-order piece selection policies: • Strict in-order piece selection • Strict in-order piece selection (FCFS)

  35. Mathematical Model (1) • To measure the performance of the different piece selection policies, we construct a mathematical model • The system has Poisson behavior: • Peers enter the system at rate λ • download the entire file • become seeds (also at rate λ, in steady state) • and depart after a constant time 1/μ

  36. Mathematical Model (2) • Model Notations: • Target file divided into M pieces • File playback rate is r • Each peer has: • U upload connections • D download connections • x is the number of downloaders in the system • y is the number of seeds in the system • Downloaders enter the system at rate λ • Download latency is T

  37. Mathematical Model (3) • Model Notations (continued): • Startup delay is τ • Seeds reside in the system for time 1/μ • C is the throughput per connection • Model Assumptions: • D > U • Demand is greater than supply: xD > (x+y)U • All peers are identical • Steady state – the numbers of downloaders and seeds stay constant • Peers are cooperative • Peers download the entire file
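The steady-state assumption above can be sanity-checked with a toy simulation of the arrival/departure dynamics; the function name and parameter values are hypothetical, not taken from the paper. Peers arrive Poisson(λ), spend the download latency T as downloaders, then 1/μ as seeds, so Little's law predicts x = λT downloaders and y = λ/μ seeds on average:

```python
import random

def simulate_swarm(lam, mu, T, horizon, seed=7):
    """Toy check of the model's steady state: measure the time-average
    number of downloaders and seeds over [0, horizon]."""
    rng = random.Random(seed)
    t, dl_time, seed_time = 0.0, 0.0, 0.0
    while True:
        t += rng.expovariate(lam)        # next Poisson arrival
        if t >= horizon:
            break
        end_dl = min(t + T, horizon)
        dl_time += end_dl - t            # time this peer spends downloading
        seed_time += max(0.0, min(t + T + 1.0 / mu, horizon) - end_dl)
    return dl_time / horizon, seed_time / horizon

x, y = simulate_swarm(lam=5.0, mu=2.0, T=1.0, horizon=2000.0)
# x should be close to lam*T = 5.0, and y close to lam/mu = 2.5
```

This supports treating x and y as constants in the analysis that follows.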

  38. Rarest First Policy (1) • As explained, under rarest first peers prefer pieces that are rare in the swarm • The analysis derives the probability for a peer to obtain a successful connection

  39. Rarest First Policy (2) • Calculations show that the download latency under the Rarest-First model is: • independent of the peer arrival rate • near-optimal – upload bandwidth is highly utilized

  40. Rarest First Policy and sequential progress (1) • Sequential progress = acquiring the initial pieces from the beginning of the file • Sequential progress is independent of download progress (the total number of pieces held) • Strict in-order policies retrieve the pieces in order – ideal sequential progress

  41. Rarest First Policy and sequential progress (2) • Rarest first is like random piece selection – provides poor sequential progress

  42. Rarest First Policy and sequential progress (3) • Modeling rarest first as uniformly random selection, the probability of holding pieces 1 through j after having k of the M pieces is C(M−j, k−j) / C(M, k) • Thus the expected value of j is E[j] = Σ over j=1..k of C(M−j, k−j) / C(M, k) • This is what is plotted in the figure on the previous slide
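Under the random-selection model of rarest first, the prefix probability and E[j] can be computed directly; the helper names are illustrative:

```python
from math import comb

def prefix_prob(M, k, j):
    """P(pieces 1..j all held | k of M pieces chosen uniformly at random),
    i.e. C(M-j, k-j) / C(M, k)."""
    return comb(M - j, k - j) / comb(M, k)

def expected_prefix(M, k):
    """E[j | k]: expected length of the in-order prefix after k random pieces."""
    return sum(prefix_prob(M, k, j) for j in range(1, k + 1))

# Even after all but one of M=10 pieces, the expected prefix is (M-1)/2:
# the missing piece sits at a uniformly random position.
print(round(expected_prefix(10, 9), 6))  # 4.5
```

This reproduces the next slide's conclusions numerically: for k = M/2 the expected prefix is only about one piece, and at k = M−1 it is still only about half the file.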

  43. Rarest First Policy and sequential progress (4) • We conclude that sequential progress is bad: • E[j] ≈ 1 only after retrieving half of the file • Even after retrieving M−1 pieces, j is expected to be at most half of the file • Startup delay: • If the playback rate is r, the startup delay follows from this expected sequential progress • The startup delay gets worse as M increases

  44. Strict in Order Policy (1) • Peers request pieces in numerical order from connected peers • In each round a peer issues D concurrent requests to "older" peers • A subset of these requests is satisfied in the round; unsatisfied requests are purged • Relationships are asymmetric • An uploader that receives more than U requests chooses among them at random

  45. Strict in Order Policy (2) • For a peer that has been in the system for time t, the analysis derives the probability of obtaining a successful upload connection • For ease of presentation, the formula is rewritten in terms of a new variable a • Numerical experiments show that a typically lies in the range [1.09, 1.25]

  46. Strict in Order Policy (3) • Further calculations yield the average download latency • Conclusion: the average download latency is almost double that of rarest first

  47. Strict in Order policy – startup delay • Let ε be the fraction of data that is allowed to arrive late • The startup delay then follows as a function of the peer's age t • It reaches its maximum at t = T • We can do better

  48. Strict in Order Policy (FCFS) (1) • Peers' requests are queued until they are serviced • Fair progress, no starvation • Each peer is allowed D outstanding requests at any time • The probability for a peer to obtain a successful connection is independent of age • Exactly like rarest first
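The queueing behavior described above can be sketched from the uploader's side; `FCFSUploader` is an illustrative toy, not the protocol from the paper:

```python
from collections import deque

class FCFSUploader:
    """Sketch of the in-order FCFS policy at one uploader: piece requests
    are queued and served strictly in arrival order, U per round, so a
    request's wait depends only on queue position, never on peer age."""
    def __init__(self, U):
        self.U = U
        self.pending = deque()

    def request(self, peer, piece):
        self.pending.append((peer, piece))

    def serve_round(self):
        n = min(self.U, len(self.pending))
        return [self.pending.popleft() for _ in range(n)]

up = FCFSUploader(U=2)
for peer in ("A", "B", "C"):
    up.request(peer, piece=0)
print(up.serve_round())  # [('A', 0), ('B', 0)]
print(up.serve_round())  # [('C', 0)]
```

Because excess requests wait in the queue instead of being purged and retried, no downloader starves, which is what makes the success probability age-independent.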

  49. Strict in Order Policy (FCFS) (2) • Calculations show that the download latency matches that of rarest first – near-optimal

  50. Strict in Order policy (FCFS) – startup delay (1) • Calculations yield the startup delay • It reaches its maximum at t = T • We conclude that In-Order (FCFS) achieves the lowest startup delay of the three policies
