P2PMoD Peer-to-Peer Movie-on-Demand



  1. P2PMoD: Peer-to-Peer Movie-on-Demand GCH1 • Group members: Cheung Chui Ying, Lui Cheuk Pan, Wong Long Sing • Supervised by Professor Gary Chan

  2. Presentation flow • Introduction • System Design • Results • Conclusion • Q&A and Demo

  3. Technical Challenges • Asynchronous Play Times • Movie-on-demand is not TV program broadcast: viewers start watching at different times • Peer Dynamics • The network topology might change over time • Viewers might go on and off • Interactivity • Support for pause and jump

  4. Related Work • Traditional Server-to-Client • Server load grows linearly → not scalable • Multicasting • Special network support needed • Interactivity is not supported • BitTorrent • Unpredictable download order → cannot start playback before the download finishes • Interactivity is not supported

  5. What is P2PMoD? It is a peer-to-peer (P2P) based interactive movie streaming system that brings movies to your home • Scalable • Low server bandwidth requirement • Decentralized control • Support for user interactivity • Resilience to node/link failure • Short playback delay

  6. Why is P2PMoD important? • Overcomes the limitations of the server-to-client movie streaming architecture • Shapes the future of the movie-watching experience • Commercial deployment: helps fight illegal movie downloading over BitTorrent

  7. System Architecture: PRIME [Architecture diagram: a GUI controls the Director; an off-the-shelf media player issues RTSP commands to the Director's RTSP server and receives the stream over RTP; the Director's internal logic ties together statistics, DHT communication, and buffering around a shared buffer.]

  8. Director [Diagram: any RTSP-compatible, off-the-shelf media player sends RTSP protocol commands to the Director's built-in RTSP server (RFC 2326); the Director's internal logic supplies the movie data, delivered to the player over RTP.] • Can use any RTSP-compatible media player

  9. Packetized Movie Stream [Diagram: a movie packetizer splits a movie in a compatible format into frames 0, 1, 2, …; an RTP packetizer (RFC 2250) turns the frames into RTP packets; an index file maps playback times to byte offsets, e.g. 0 ms → 0, 1000 ms → 164452, 2000 ms → 299501.] • Can play on any RTP-compatible media player • Abstraction: no change is needed in PRIME to support different movie formats (an index-lookup sketch follows)
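
A minimal sketch of how the index file above supports seeking: to start playback at an arbitrary time, the RTSP server looks up the byte offset of the nearest earlier indexed frame. Class and method names here are illustrative, not PRIME's actual code:

```java
import java.util.Map;
import java.util.TreeMap;

// Sketch of the time-to-offset index file shown on this slide.
public class MovieIndex {
    // Playback time in ms -> byte offset into the packetized stream.
    private final TreeMap<Long, Long> index = new TreeMap<>();

    public void addEntry(long timeMs, long byteOffset) {
        index.put(timeMs, byteOffset);
    }

    /** Byte offset to start streaming from when the user seeks to timeMs. */
    public long offsetFor(long timeMs) {
        // Latest indexed time at or before the seek target.
        Map.Entry<Long, Long> e = index.floorEntry(timeMs);
        return e == null ? 0L : e.getValue();
    }

    public static void main(String[] args) {
        MovieIndex idx = new MovieIndex();
        idx.addEntry(0, 0);          // entries from the slide's example
        idx.addEntry(1000, 164452);
        idx.addEntry(2000, 299501);
        System.out.println(idx.offsetFor(1500)); // prints 164452
    }
}
```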

  10. Director Backend • Responsible for the actual movie data retrieval process • Provides a programming interface for stream management and interactivity control • Implementation Goals • Scalable and fast collaboration between peers • Efficient: minimize control communication overhead

  11. Director Backend Implementation • Use the concept of virtual time slot to find potential parents • Use a DHT to achieve decentralized control communication

  12. Moving Virtual Time Slot [Diagram: a movie running from 00:00:00 (start) to 00:42:39 (end), divided into 3-minute virtual time slots (slot 1, slot 2, …, slot 7); the slot boundaries shift forward as the time since publishing grows.] • The time boundaries keep advancing along with real time. • A peer stays in the same slot once it starts playing, unless the user seeks to another position. • Peers in the same or an earlier virtual time slot can help us in streaming. • How do we identify these potential parents? The DHT comes into play (a slot-arithmetic sketch follows).
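
A sketch of the slot arithmetic this slide implies, using the 3-minute slot length from the diagram; the formula and the numbering direction are my assumptions, not PRIME's code:

```java
// Sketch of the moving-virtual-time-slot arithmetic implied by this slide.
public class VirtualSlot {
    static final long SLOT_MS = 3 * 60 * 1000L; // 3-minute slots, as in the diagram

    /**
     * A peer's lag behind the publish time stays constant while it plays at
     * normal speed (playback position and slot boundaries both advance with
     * real time), which is why a peer keeps its slot until the user seeks.
     */
    static int slotOf(long msSincePublish, long playbackPosMs) {
        long lagMs = msSincePublish - playbackPosMs;
        return (int) (lagMs / SLOT_MS) + 1; // 1-based; numbering direction assumed
    }

    public static void main(String[] args) {
        // A peer 4 minutes into the movie when it has been published for 10
        // minutes lags by 6 minutes: slot 3 now, and still slot 3 later.
        System.out.println(slotOf(10 * 60_000L, 4 * 60_000L)); // 3
        System.out.println(slotOf(15 * 60_000L, 9 * 60_000L)); // 3
    }
}
```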

  13. DHT Key • We construct <movie hash, virtual time slot, random number> as the DHT key [Diagram: example keys such as <titanic, 1, 91>, <titanic, 2, 34>, <mi3, 6, 99>, and <matrix, 3, 82> spread around the DHT ring.] (A construction sketch follows.)
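
A hedged sketch of constructing such a key. Hashing the triple with SHA-1 to match Pastry's 160-bit ID space is an assumption; PRIME's actual encoding may differ:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HexFormat;
import java.util.Random;

// Sketch of building the <movie hash, virtual time slot, random number> key.
public class DhtKey {
    static byte[] keyFor(String movieHash, int slot, Random rng)
            throws NoSuchAlgorithmException {
        int nonce = rng.nextInt(100); // random part, e.g. the 91 in <titanic, 1, 91>
        String triple = movieHash + "|" + slot + "|" + nonce;
        return MessageDigest.getInstance("SHA-1") // 160 bits, Pastry's ID width (assumed)
                .digest(triple.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) throws NoSuchAlgorithmException {
        System.out.println(HexFormat.of().formatHex(keyFor("titanic", 2, new Random())));
    }
}
```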

  14. How to retrieve the data? • Implemented 2 versions of the Director • Both use FreePastry as the DHT • Initial version • Movie data is carried over Scribe • Scribe: an application-level multicast infrastructure built on top of FreePastry • Revised version • Out-of-band transfer • Employs a multiple-parents scheme to transfer movie data

  15. Director: Initial Version [Diagram: a publisher feeding the Scribe multicast trees for Slot 6 (00:15:00) and Slot 7 (00:18:00), each rooted at a topic-root node on the ring.] • Clients subscribe to the slots they are interested in, i.e. the slots covered by their pre-buffer range. • For each slot, one node becomes the topic root, determined by its ID; by the nature of the DHT, slot roots are uniformly distributed around the ring. • Also by the nature of the DHT, it usually takes several hops for node A to reach node B, which means data sometimes has to pass through off-topic nodes. (A subscription sketch follows.)
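
A hedged sketch of the per-slot subscription, written against the FreePastry Scribe tutorial API (ScribeImpl, Topic, subscribe); the topic naming scheme and the instance string are assumptions about PRIME, not taken from its source:

```java
import rice.environment.Environment;
import rice.p2p.commonapi.Node;
import rice.p2p.commonapi.NodeHandle;
import rice.p2p.scribe.Scribe;
import rice.p2p.scribe.ScribeClient;
import rice.p2p.scribe.ScribeContent;
import rice.p2p.scribe.ScribeImpl;
import rice.p2p.scribe.Topic;
import rice.pastry.commonapi.PastryIdFactory;

// Sketch: subscribe to the Scribe topic for one <movie, slot> pair.
public class SlotSubscriber implements ScribeClient {
    private final Scribe scribe;
    private final Topic topic;

    public SlotSubscriber(Node node, Environment env, String movie, int slot) {
        this.scribe = new ScribeImpl(node, "P2PMoD");       // instance name assumed
        // One multicast topic per <movie, slot>; the topic root is the node
        // whose ID is closest to the topic ID, so roots spread over the ring.
        this.topic = new Topic(new PastryIdFactory(env), movie + "-slot" + slot);
        scribe.subscribe(topic, this);
    }

    /** Movie data arriving down the multicast tree for this slot. */
    public void deliver(Topic topic, ScribeContent content) {
        // Hand the carried frames to the buffering layer here.
    }

    public boolean anycast(Topic topic, ScribeContent content) { return false; }
    public void childAdded(Topic topic, NodeHandle child) {}
    public void childRemoved(Topic topic, NodeHandle child) {}
    public void subscribeFailed(Topic topic) {}
}
```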

  16. Director: Revised Version [Diagram: the DHT hands me the IPs of potential parents; movie data flows directly from Parent 1, Parent 2, and Parent 3 to me.] • Direct data connections, in contrast to the multi-hop transfer overlay in Scribe • Less likely to suffer problems induced by link failure • Faster, due to reduced IP and processing overhead • If a parent jumps to another position, the child can still stream smoothly from the other parents, unaffected • A peer can schedule frame requests intelligently to achieve load balancing (see the scheduler sketch after slide 18)

  17. Finding Parents • Recall that each node carries an IP list of its N immediate neighbors (its leaf set). • By routing a message toward the <movie, slot> key, the responsible node can return a list of potential parents (a lookup sketch follows).
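
A hedged sketch of the lookup over FreePastry's common API: route a request toward the <movie, slot> key and let the node responsible for it answer. The message class and the reply path are assumptions about PRIME's protocol:

```java
import rice.p2p.commonapi.*;

// Sketch: route a parent-lookup request toward the <movie, slot> key.
public class ParentFinder implements Application {
    private Endpoint endpoint;

    public void init(Node node) {
        endpoint = node.buildEndpoint(this, "P2PMoD-lookup"); // instance name assumed
        endpoint.register();
    }

    /** Ask the node responsible for slotKey for potential parents. */
    public void findParents(Id slotKey) {
        endpoint.route(slotKey, new ParentRequest(), null);
    }

    public void deliver(Id id, Message msg) {
        // On the responsible node: reply with the <IP, buffer range> pairs it
        // has collected. On the requester: msg would carry the parent list.
    }

    public boolean forward(RouteMessage msg) { return true; }
    public void update(NodeHandle handle, boolean joined) {}

    /** Hypothetical request message. */
    static class ParentRequest implements Message {
        public int getPriority() { return Message.LOW_PRIORITY; }
    }
}
```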

  18. Director: Scheduling • Using buffer maps, which show the frame availability at each node… • Continuity: fetch the frames with the closest playback deadline first, so the stream stays smooth • Load sharing: fetch the frames possessed by the fewest nodes first, to obtain rare pieces for redistribution and to relieve the peers holding them • Efficiency: stream from multiple parents at the same time (a scheduler sketch follows)
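
One way to combine the three goals above into a single scheduler: sort by deadline, break ties by rarity, then assign each frame to the least-loaded parent holding it. The types and the exact tie-breaking are illustrative, not PRIME's code:

```java
import java.util.*;

// Sketch: deadline-first for continuity, rarest-first as tie-break for load
// sharing, requests spread across parents for efficiency.
public class FrameScheduler {
    record Frame(int seq, long deadlineMs, int holders) {}

    /** Assign each wanted frame to one of the parents whose buffer map has it. */
    static Map<Integer, List<Frame>> schedule(List<Frame> wanted,
                                              Map<Integer, Set<Integer>> parentBufferMaps) {
        wanted.sort(Comparator.comparingLong(Frame::deadlineMs)   // continuity
                              .thenComparingInt(Frame::holders)); // then rarity
        Map<Integer, List<Frame>> plan = new HashMap<>();
        Map<Integer, Integer> load = new HashMap<>();
        for (Frame f : wanted) {
            Integer best = null; // least-loaded parent that has this frame
            for (Map.Entry<Integer, Set<Integer>> p : parentBufferMaps.entrySet()) {
                if (!p.getValue().contains(f.seq())) continue;
                if (best == null
                        || load.getOrDefault(p.getKey(), 0) < load.getOrDefault(best, 0)) {
                    best = p.getKey();
                }
            }
            if (best != null) {
                plan.computeIfAbsent(best, k -> new ArrayList<>()).add(f);
                load.merge(best, 1, Integer::sum);
            }
        }
        return plan;
    }
}
```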

  19. Graphical User Interface

  20. Results • Deployment of P2PMoD on 71 nodes in PlanetLab • Configuration: 1 server and 70 peers • 40KBps stream for 10 minutes • Measurement Metrics: • User Experience • Efficiency

  21. Results – User Experience • Measures of continuity: • Playback Delay: time required to start the stream • Stall Occurrences: number of times the stream pauses to buffer more data • Stall Ratio: ratio of paused time to streaming time

  22. Results – User Experience

  23. Results – User Experience

  24. Results – User Experience

  25. Results – User Experience • Playback Delay: over 90% of peers see < 6 seconds of delay • Stall Occurrences: over 90% see < 2 occurrences • Stall Ratio: over 90% stall for < 3% of total time

  26. Results – Efficiency • Peer • Overhead caused by control messages • Server • Bandwidth required

  27. Results – Efficiency

  28. Results – Efficiency • Peer • Ratio of stream data to all incoming data: 90% (i.e. control overhead is about 10%) • Server • Data output rate: 275 KBps • Output bandwidth equivalent to 7 streams (275 KBps ÷ 40 KBps ≈ 7) • Uses about 10% of the bandwidth of the traditional server-to-client model, which would need 70 × 40 KBps = 2800 KBps for 70 peers

  29. Practical Issues • Network Traversal • Routers and NATs are common • Until IPv6 lands… • Universal Plug and Play (UPnP), hole punching • RTSP and RTP compatibility • Compatibility glitches are common and expected

  30. Network Positioning • GNP and Vivaldi could potentially be used • Map network latency to a coordinate in R^n • Even as n → ∞, never perfect, due to triangle-inequality violations • GNP: landmark selection and reselection • Vivaldi: no fixed reference; coordinates are updated continuously (and can spin) — a sketch follows • Ping time does not reflect transfer rate
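
A minimal sketch of a Vivaldi coordinate update (Dabek et al., SIGCOMM '04), the algorithm named on this slide. The constants and the 2-D embedding are illustrative choices, not part of PRIME:

```java
// Sketch: one Vivaldi update step after measuring an RTT to a peer.
public class Vivaldi {
    double[] coord = new double[2];   // our position in R^n (n = 2 here)
    static final double DELTA = 0.25; // step size; adaptive in the full algorithm

    /** Nudge our coordinate after measuring rttMs to a peer at 'remote'. */
    void update(double rttMs, double[] remote) {
        double dx = coord[0] - remote[0], dy = coord[1] - remote[1];
        double dist = Math.hypot(dx, dy);
        if (dist == 0) { // co-located: pick a random direction to separate
            dx = Math.random(); dy = Math.random(); dist = Math.hypot(dx, dy);
        }
        double error = rttMs - dist; // measured minus predicted latency
        // Move along the unit vector away from (or toward) the peer,
        // proportionally to the prediction error.
        coord[0] += DELTA * error * (dx / dist);
        coord[1] += DELTA * error * (dy / dist);
    }
}
```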

  31. Future Work • Fixed data caches instead of moving slots • A parent's interactivity would then not affect availability • Searching for / refreshing next-slot parents can be slow • Frame popularity • Support for more movie formats and handheld devices • Error-correcting codes

  32. Conclusion • Peer-to-peer is the way to go: it taps users' increasing bandwidth while reducing server resources • PRIME: a working P2P MoD implementation • Development workload reduced by adopting open standards and using an off-the-shelf player

  33. Thank You • Questions? • Demonstration

  34. Pastry: Ring [Diagram: nodes with 16-bit hex IDs — 0x0002, 0x22AF, 0x3529, 0x591A, 0x62C8, 0x7F52, 0x8392, 0x9A92, 0xA125, 0xCB95, 0xDF41 — arranged in ID order around a ring.]

  35. Pastry: Routing Knowledge [Diagram: each node's routing knowledge consists of a leaf set, its N immediate neighbors in ID space, plus a routing table covering the rest of the ring by ID prefix.] (A prefix-routing sketch follows.)
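
A sketch of the textbook Pastry routing rule these slides illustrate: forward toward a node whose ID shares a longer hex-digit prefix with the key, falling back to the leaf set when no such node is known. This is the published rule, not FreePastry's actual implementation:

```java
// Sketch of Pastry prefix routing over the 16-bit example IDs on slide 34.
public class PastryRouting {
    /** Number of leading hex digits shared by two 16-bit IDs. */
    static int sharedPrefix(int a, int b) {
        for (int d = 0; d < 4; d++) {
            int shift = 12 - 4 * d;
            if (((a >> shift) & 0xF) != ((b >> shift) & 0xF)) return d;
        }
        return 4;
    }

    /** Choose a next hop: any known node improving the shared prefix. */
    static Integer nextHop(int self, int key, int[] knownNodes) {
        int have = sharedPrefix(self, key);
        for (int n : knownNodes) {
            if (sharedPrefix(n, key) > have) return n;
        }
        return null; // fall back to leaf set: numerically closest node
    }

    public static void main(String[] args) {
        int[] ring = {0x0002, 0x22AF, 0x3529, 0x591A, 0x62C8,
                      0x7F52, 0x8392, 0x9A92, 0xA125, 0xCB95, 0xDF41};
        // Routing key 0x35A1 from 0x9A92: 0x3529 shares the leading digit "3".
        System.out.printf("next hop: 0x%04X%n", nextHop(0x9A92, 0x35A1, ring));
    }
}
```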

  36. Pastry: Object Storage [Diagram: an object with ID 0x3530 is stored at the node with the numerically closest ID, 0x3529, and duplicated to its N immediate neighboring nodes.]

  37. PRIME? • PRIME stands for Peer-to-peer Interactive Media-on-demand
