
CoolStreaming/DONet: A Data-Driven Overlay Network for Peer-to-Peer Live Media Streaming

CoolStreaming/DONet: A Data-Driven Overlay Network for Peer-to-Peer Live Media Streaming. Jiangchuan Liu with Xinyan Zhang, Bo Li, and T. S. P. Yum. Infocom 2005. Some facts: DONet – Data-driven Overlay Network; CoolStreaming – Cooperative Overlay Streaming.


Presentation Transcript


  1. CoolStreaming/DONet: A Data-Driven Overlay Network for Peer-to-Peer Live Media Streaming Jiangchuan Liu with Xinyan Zhang, Bo Li, and T. S. P. Yum Infocom 2005

  2. Some Facts DONet – Data-driven Overlay Network CoolStreaming – Cooperative Overlay Streaming First release (CoolStreaming v0.9) • May 2004 As of March 2005 • Downloads: >100,000 • Average online users: 6,000 • Peak-time online users: 14,000 • Google entries (CoolStreaming): 5,130

  3. Outline • Motivation • Background and related work • Design of DONet/CoolStreaming • Implementation and empirical study • Future work

  4. Motivation • Enable large-scale live broadcasting in the Internet environment • Capacity limitation • Streaming rate: 500 Kbps, server outbound bandwidth: 100 Mbps • Only 200 concurrent users • Network heterogeneity • No QoS guarantee
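[Editor's note: the 200-user ceiling on the slide follows directly from dividing server outbound capacity by the per-client streaming rate; a one-line check with the slide's figures:]

```python
# Server capacity limit for client/server streaming (figures from slide 4).
stream_rate_kbps = 500           # streaming rate per client
server_outbound_kbps = 100_000   # 100 Mbps server outbound link

max_users = server_outbound_kbps // stream_rate_kbps
print(max_users)  # 200
```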

  5. Client/Server: Poor scalability

  6. IP multicast: Limited deployment

  7. Collaborative Communications

  8. Outline • Motivation • Background and related work • Design of DONet/CoolStreaming • Implementation and empirical study • Future work

  9. Related Solutions • Content distribution networks • Expensive • Not scalable to very large audiences • Self-organized overlay networks • Application layer multicast • Peer-to-peer communications

  10. Related Solutions • Content distribution networks • Expensive • Live streaming (?) • Self-organized overlay networks • Application layer multicast • Peer-to-peer communications

  11. Application Layer Multicast • Issue: Structure construction • Tree • NICE, CoopNet, SpreadIt, ZIGZAG • Mesh • Narada and its extension • Multi-tree • SplitStream

  12. Application Layer Multicast (cont’d) • Issue: Node dynamics • Structure maintenance • Passive/proactive repairing algorithms • Advanced coding • PALS (layered coding) • CoopNet (multiple description coding)

  13. Gossip-based Dissemination • Gossip • Iteration • Each node sends a new message to a random set of nodes • Each receiving node does the same in the next round • Pros: Simple, robust • Cons: Redundancy, delay • Related • Peer-to-peer on-demand streaming
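[Editor's note: a minimal simulation of the gossip iteration on this slide. Each round, every informed node forwards to a random set of peers; hits on already-informed nodes illustrate the redundancy cost, and the round count the delay. The parameters (1000 nodes, fanout 4, 6 rounds, a fixed seed) are illustrative, not from the paper.]

```python
import random

def gossip(num_nodes, fanout, rounds, seed=1):
    """Simulate basic gossip dissemination from a single source node."""
    rng = random.Random(seed)
    informed = {0}                       # node 0 originates the message
    for _ in range(rounds):
        newly = set()
        for node in informed:
            # Random targets; hitting an already-informed node is redundancy.
            newly.update(rng.sample(range(num_nodes), fanout))
        informed |= newly
    return informed

covered = len(gossip(num_nodes=1000, fanout=4, rounds=6))
print(f"{covered}/1000 nodes informed after 6 rounds")
```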

  14. Outline • Motivation • Background and related work • Design of DONet/CoolStreaming • Implementation and empirical study • Future work

  15. Data-driven Overlay (DONet) • Target • Live media broadcasting • No IP multicast support • Core operations • Every node periodically exchanges data availability information with a set of partners • Then retrieves unavailable data from one or more partners, or supplies available data to partners
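[Editor's note: the two core operations on this slide can be sketched as a toy exchange-then-fetch loop. The class and method names below are mine, and actual DONet nodes exchange maps over the network and fetch asynchronously; this only shows the data-driven idea: availability in, missing segments out.]

```python
# Hypothetical sketch of DONet's core loop: each node periodically exchanges
# a window-based buffer map (BM) with its partners, then pulls segments it
# lacks from partners that hold them.

class Node:
    def __init__(self, window=8):
        self.window = window
        self.buffer = set()            # segment ids currently buffered
        self.partners = []

    def buffer_map(self, base):
        """Bitmap over the sliding window [base, base + window)."""
        return [int(base + i in self.buffer) for i in range(self.window)]

    def exchange_and_fetch(self, base):
        for partner in self.partners:
            bm = partner.buffer_map(base)          # availability exchange
            for i, has in enumerate(bm):
                if has and (base + i) not in self.buffer:
                    self.buffer.add(base + i)      # retrieve missing segment

a, b = Node(), Node()
a.partners = [b]
b.buffer = {0, 1, 2, 3}
a.exchange_and_fetch(base=0)
print(sorted(a.buffer))  # [0, 1, 2, 3]
```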

  16. Features of DONet • Easy to implement • no need to construct and maintain a complex global structure • Efficient • data forwarding is dynamically determined according to data availability, not restricted by specific directions • Robust and resilient • adaptive and quick switching among multi-suppliers

  17. Key Modules • Membership manager • mCache – partial overlay view • Update by gossip • Partnership manager • Random selection • Partner refinement • Transmission Scheduler
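[Editor's note: a sketch of how the membership and partnership managers above might fit together. The mCache here is a dict of node id to last-seen timestamp, merged with entries learned via gossip and bounded in size, with partners drawn at random from it; the eviction policy and all names are my assumptions, not the paper's.]

```python
import random

def merge_mcache(mcache, gossiped, max_size=30):
    """Merge gossiped (node -> last_seen) entries into the partial view."""
    merged = dict(mcache)
    for node, seen in gossiped.items():
        if node not in merged or seen > merged[node]:
            merged[node] = seen        # newer sighting wins
    # Keep only the most recently seen entries if over capacity.
    newest = sorted(merged.items(), key=lambda kv: kv[1], reverse=True)
    return dict(newest[:max_size])

def select_partners(mcache, self_id, count=4):
    """Random partner selection from the partial view (excluding ourselves)."""
    candidates = [n for n in mcache if n != self_id]
    return random.sample(candidates, min(count, len(candidates)))

view = merge_mcache({"a": 10, "b": 5}, {"b": 12, "c": 7}, max_size=2)
print(view)  # {'b': 12, 'a': 10}
partners = select_partners(view, "a", count=1)
```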

  18. Transmission Scheduling Problem: From which partner to fetch which data segment? • Constraints • Data availability • Playback deadline • Heterogeneous partner bandwidth

  19. Scheduling algorithm • Variation of parallel machine scheduling • NP-hard • Heuristic • Messages exchanged • Window-based buffer map (BM): Data availability • Segment request (piggybacked on the BM) • Fewer suppliers first • Multi-supplier: Highest bandwidth within deadline first • Simpler algorithm in current implementation • Network coding ?

  20. Analysis on DONet • Coverage ratio at distance k • e.g., 95% of nodes are covered within 6 hops for M = 4 partners • Average distance O(log N) • DONet vs. tree-based overlay • Much lower outage probability

  21. Outline • Motivation • Background and related work • Design of DONet/CoolStreaming • Implementation and empirical study • Future work

  22. PlanetLab Experiments • Distributed experimental system • DONet module • Console and automation • Command dispatching and report collection • Caveats • Scalability • Reproducibility • Representativeness

  23. Geographical Node Distribution May 24, 2004 # of Active Nodes: 200-300

  24. PlanetLab Results • Data continuity, 200 nodes, 500 Kbps streaming

  25. Control overhead

  26. Implementation: CoolStreaming • First release: May 30, 2004 • Source code: ~2,000 lines of Python • Programming time: • PlanetLab prototype: 2 weeks • Porting from the prototype: 2 weeks • Supported formats: • Real Video/Windows Media • Platform/media independent • Scale and capacity • Total downloads: • Peak time: 14,000 concurrent users • Streaming rate: 450-700 Kbps

  27. User Distribution (June 2004) • Heterogeneous network environment • LAN, DSL, CABLE...

  28. Online Statistics (Jun 21, 2004) Average packet loss: around 1%-5%

  29. Observations • Current Internet has enough available bandwidth to support TV-quality streaming (>450 Kbps) • Bottleneck: server, end-to-end bandwidth • Larger data-driven overlay → better streaming quality • Capacity amplification

  30. Outline • Motivation • Background and related work • Design of DONet/CoolStreaming • Implementation and empirical study • Future work

  31. Future of DONet/CoolStreaming • Content • Solution: DONet/CoolStreaming as a capacity amplifier between content provider and clients • Virtually part of the network infrastructure • Enhancement • Scheduling algorithm • Simplified version • Network coding • Transport protocol • TCP (?)

  32. Future of DONet/CoolStreaming • Enhancement (cont’d) • User interface • Combined with caching • Combined with CDN • Provide world-wide reliable media streaming service • On-demand streaming

  33. Q & A Thanks
