

  1. Periodic Broadcast and Patching Services - Implementation, Measurement, and Analysis in an Internet Streaming Video Testbed Michael K. Bradshaw, Bing Wang, Subhabrata Sen, Lixin Gao, Jim Kurose, Prashant Shenoy, and Don Towsley ACM Multimedia 2001

  2. Introduction • Multimedia streaming places significant load on both server and network resources. • Multicast-based approaches: • Batching • Periodic Broadcast • Patching • Issues: control/signaling overhead, the interaction between disk and CPU scheduling, multicast join/leave times

  3. Batching • The server batches requests that arrive close together in time and multicasts the stream to the set of batched clients. • A drawback is that client playback latency increases with an increasing amount of client request aggregation.
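
A minimal sketch of this latency/aggregation trade-off, assuming a fixed batching window and hypothetical function names (not the paper's implementation):

    def batch(requests, window=5.0):
        """requests: sorted arrival times (s) for one video; window: seconds a
        batch stays open before its multicast stream starts.
        Returns (stream_start_times, per_client_startup_latencies)."""
        starts, latencies = [], []
        batch_open = None
        for t in requests:
            if batch_open is None or t >= batch_open + window:
                batch_open = t                    # first request opens a new batch
                starts.append(batch_open + window)
            latencies.append(starts[-1] - t)      # wait until the batch's stream starts
        return starts, latencies

    # Requests at 0, 2, and 9 s -> two multicast streams; the client at t=0
    # waits the full 5 s window, showing how latency grows with aggregation.
    print(batch([0.0, 2.0, 9.0]))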

  4. Periodic Broadcast • Server divides a video object into multiple segments, and continuously broadcasts segments over a set of multicast addresses. • Earlier portions are broadcast more frequently than later ones to limit playback startup latency. • Clients simultaneously listen to multiple addresses, storing future segments for later playback.
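
A back-of-the-envelope sketch of why periodic broadcast decouples server load from the request rate; the doubling segment sizes anticipate the l-GDB scheme described later, and the 1.5 Mb/s playback rate is an illustrative assumption:

    def pb_cost(L, l, playback_rate_mbps=1.5):
        """Channels needed, total server bandwidth, and worst-case startup delay
        for a video of L seconds whose first segment is l seconds long."""
        segments, covered = 0, 0.0
        while covered < L:
            covered += (2 ** segments) * l        # segment i has length 2^(i-1) * l
            segments += 1
        # Each segment loops on its own multicast address, so server bandwidth is
        # fixed regardless of how many clients arrive; a client waits at most l
        # seconds for the next start of the first segment.
        return segments, segments * playback_rate_mbps, l

    # A 3600 s video with l = 10 s needs 9 channels (~13.5 Mb/s at 1.5 Mb/s
    # playback) and starts playback within 10 s, independent of the arrival rate.
    print(pb_cost(3600, 10))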

  5. Patching (stream tapping) • The server streams the entire video sequentially to the very first client. • Client-side workahead buffering allows a later-arriving client to receive its future playback data by listening to an existing ongoing transmission of the same video. • The server then need only transmit the earlier frames that the later-arriving client missed.
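
A small illustrative sketch (hypothetical names, not the testbed's API) of how much data a late-arriving client receives as a patch versus from the shared multicast:

    def patch_plan(arrival_offset, video_length):
        """arrival_offset: seconds since the ongoing full-video multicast began.
        Returns (patch_seconds, shared_seconds): how much the late client gets
        via its own patch stream vs. the existing multicast."""
        patch = arrival_offset                    # the frames it already missed
        shared = video_length - arrival_offset    # the rest comes from the ongoing stream
        return patch, shared

    # A client arriving 30 s into a 3600 s stream needs only a 30 s patch while
    # it buffers the ongoing multicast (workahead buffering) for the other 3570 s.
    print(patch_plan(30, 3600))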

  6. Server and Client Architecture

  7. Server Architecture • Server Control Engine (SCE) • One listener thread • A pool of free scheduler threads • One transmission schedule per video • Server Data Engine (SDE) • A global buffer cache manager • Disk thread (DT): round length δ • Network thread (NT): round length τ
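
A highly simplified sketch of the SDE pipeline implied above: a disk thread fills the shared buffer cache in rounds of length δ while a network thread drains it in rounds of length τ. The queue-based hand-off and the round lengths are assumptions for illustration, not the testbed's code:

    import queue, threading, time

    DELTA = 0.05   # disk-thread round length (s), illustrative only
    TAU = 0.02     # network-thread round length (s), illustrative only
    buffer_cache = queue.Queue(maxsize=8)         # stands in for the global buffer cache

    def disk_thread(blocks):
        for b in blocks:
            buffer_cache.put(b)                   # "read" a block into the cache
            time.sleep(DELTA)                     # one disk round per block (simplified)
        buffer_cache.put(None)                    # end-of-video marker

    def network_thread():
        while True:
            b = buffer_cache.get()
            if b is None:
                break
            # here the real NT would packetize b and send it on a multicast address
            time.sleep(TAU)

    dt = threading.Thread(target=disk_thread, args=(range(4),))
    nt = threading.Thread(target=network_thread)
    dt.start(); nt.start(); dt.join(); nt.join()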

  8. Schedule Data Structure

  9. Signaling between Server and Client

  10. Testbed (1) • 100 Mbps switched Ethernet LAN • Three machines (server, workload generator, and client), each with a Pentium-II 400 MHz CPU and 400 MB RAM, running Linux • The workload generator produces a background load of client requests according to a Poisson process and logs timing information for each request
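
A sketch of a Poisson workload generator of the kind described above; the rate, duration, and output format are assumptions:

    import random

    def poisson_arrivals(rate_per_min, duration_s, seed=0):
        """Request arrival times (s) over duration_s seconds, with exponential
        inter-arrival times, i.e. a Poisson arrival process."""
        random.seed(seed)
        rate_per_s = rate_per_min / 60.0
        t, arrivals = 0.0, []
        while True:
            t += random.expovariate(rate_per_s)
            if t > duration_s:
                return arrivals
            arrivals.append(t)

    # e.g. 600 requests/minute for one minute, the heaviest load used in the talk
    reqs = poisson_arrivals(600, 60)
    print(len(reqs), "requests; first three:", [round(x, 2) for x in reqs[:3]])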

  11. Testbed (2) • Periodic broadcast: • L. Gao, J. Kurose, and D. Towsley. Efficient schemes for broadcasting popular videos (Greedy Disk-conserving Broadcasting, GDB, segmentation scheme) • l-GDB: the initial segment is l seconds long • Subsequent segment i is of size 2^(i-1)·l seconds, where 1 < i ≤ ⌈log₂ L⌉
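
A sketch of the l-GDB segmentation as stated on this slide (first segment l seconds, segment i of length 2^(i-1)·l); how the final segment is truncated to fit the video length L is an assumption:

    def gdb_segments(L, l):
        """Segment lengths (s) for l-GDB: the first segment is l seconds and
        segment i is 2^(i-1) * l seconds, cut off so the total equals L."""
        segs, covered, i = [], 0.0, 1
        while covered < L:
            seg = min((2 ** (i - 1)) * l, L - covered)   # truncate the last segment
            segs.append(seg)
            covered += seg
            i += 1
        return segs

    # 10-GDB for a one-hour (3600 s) video: 10, 20, 40, ... second segments.
    print(gdb_segments(3600, 10))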

  12. Testbed (3)

  13. Testbed (4) • Patching algorithm: • L. Gao and D. Towsley. Supplying instantaneous video-on-demand services using controlled multicast (Threshold-based Controlled Multicast scheme) • When the client arrival rate for a video is Poisson with parameter λ and the length of the video is L seconds, the threshold is chosen to be (√(2Lλ + 1) − 1)/λ seconds.
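
A worked form of that threshold; the example arrival rate and video length are illustrative:

    import math

    def patching_threshold(L, lam):
        """Threshold (s) between successive full-video multicasts when arrivals
        are Poisson with rate lam (requests/s) and the video is L seconds long."""
        return (math.sqrt(2 * L * lam + 1) - 1) / lam

    # A 3600 s video at 600 requests/minute (lam = 10 requests/s) gives a
    # threshold of roughly 26.7 s.
    print(round(patching_threshold(3600, 10), 1))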

  14. Performance Metrics • Server side: • System Read Load (SRL) • Server Network Throughput (SNT) • Deadline Conformance Percentage (DCP) • Client side: • Client Frame Interarrival Time (CFIT) • Reception Schedule Latency (RSL)
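
As one concrete example, CFIT can be computed from a client-side log of frame arrival timestamps; the log format here is an assumption:

    def cfit(frame_arrival_times):
        """Client Frame Interarrival Times: gaps between consecutive frames."""
        return [b - a for a, b in zip(frame_arrival_times, frame_arrival_times[1:])]

    # A nominally 30 fps stream (33 ms spacing) with one delayed frame shows up
    # as an outlier bin in the CFIT histogram.
    times = [0.000, 0.033, 0.066, 0.140, 0.173]
    print([round(g, 3) for g in cfit(times)])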

  15. Caching Implications (1) PB:

  16. Caching Implications (2) Patching:

  17. Caching Implications (3) SRL for patching and 10-GDB with LFU caching

  18. Component Benchmarks

  19. End-to-End Performance (1) PB: Client Frame Interarrival Time (CFIT) histogram under 3-GDB, 10-GDB, and 30-GDB at 600 requests per minute.

  20. End-to-End Performance (2) Patching:

  21. Scheduling Among Videos

  22. Conclusions • Network bandwidth, rather than server resources, is likely to be the bottleneck. • PB: handles 600 requests per minute • Patching: can fully load a 100 Mb/s network • An initial client startup delay of less than 1.5 s is sufficient to handle startup signaling and absorb data jitter. • Dramatic reductions in server disk load can be gained via application-level data caching with an LFU replacement policy.
