
Delivering Streaming Content on the Internet





Presentation Transcript


  1. Delivering Streaming Content on the Internet Ramesh K. Sitaraman Principal Architect Akamai Technologies

  2. Differences for Streaming
     • Player rather than browser
     • Streaming server rather than web server
     • Feed from live event must be distributed to server
     • Sufficient bandwidth must be consistently available

  3. Typical Streaming Experience?
     • Poor resolution
     • Small size
     • Jerky video
     • Poor sound
     • Frequent disconnects

  4. Delivering Streams the Conventional Way
     [Diagram: content provider side — signal acquisition, production, encoder, streaming server with on-demand clips; stream crosses the Internet to the end-user's media player or client.]

  5. What FFSSM offers Content Providers
     • Reliability
       • No single point of failure. Automatic failover.
     • Scalability
     • Performance
       • Better stream quality.
     • Content Control
       • Authentication, content freshness.
     • Reporting
       • Viewing profiles, customized log analysis, monitoring, alerts.

  6. Stream Quality: What to Measure?
     [Diagram: media player or client connected to server via control, data, and feedback channels.]
     • Metrics should capture important elements of user experience.
     • Focused, simple, universal, and objective.

  7. Our Streaming Quality Metrics
     [Timeline: Start Up → Play → Freeze → Play → Freeze → Play]
     • Failure Rate: rate at which streams fail to play.
     • Startup Time: time to start playing after the user hits "play".
     • Thinning and Loss Rate: reduction in "playback bandwidth".
       • Playback Bwth (PB) = Useful Bits / Stream Time, where bits are useless if thinned, unrecoverably lost, or arriving late.
       • Thinning & Loss Rate = (Ideal PB − Actual PB) / Ideal PB.
     • Interruptions: rebuffers per min, rebuffer time per min.
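The metrics above follow directly from the definitions on the slide; a minimal sketch (the stream duration and bit counts below are illustrative, not from the measurements reported later):

```python
def playback_bandwidth(useful_bits, stream_time_s):
    """Playback bandwidth (b/s) = useful bits / stream time.
    Bits that were thinned, unrecoverably lost, or arrived late
    do not count as useful."""
    return useful_bits / stream_time_s

def thinning_and_loss_rate(ideal_pb, actual_pb):
    """(Ideal PB - Actual PB) / Ideal PB, per the slide's definition."""
    return (ideal_pb - actual_pb) / ideal_pb

# Illustrative values: a nominal 78,938 b/s stream played for 60 s,
# during which 77,763 b/s worth of bits arrived usably.
actual = playback_bandwidth(77_763 * 60, 60)
rate = thinning_and_loss_rate(78_938, actual)
print(f"{rate:.2%}")  # ≈ 1.49%
```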

  8. Akamai Streaming Agents
     • Simulated end-users in 100+ diverse internet locations playing streams and reporting quality.
     • Technical challenges:
       • Extracting accurate and relevant performance information from proprietary media players is hard.
       • The measurement system should not bias what is measured.

  9. Measurement Example

     Metric                        CP-Akamized   CP-NonAk
     Stream Failure Percent        0.2%          8.2%
     Startup Time (ms)             4,932         10,413
     Actual Playback Bwth (b/s)    77,763        60,101
     Ideal Playback Bwth (b/s)     78,938        78,938
     Thinning & Loss Percent       1.48%         23.8%
     Rebuffers per min             0.01          0.20
     Rebuffer Time (sec per min)   0.10          2.50
     Number of Tests               2046          2046

  10. CDN Technology: Three Subsystems
      • STORAGE of media
      • TRANSPORT of data
      • MAPPING end-users
      [Diagram: content provider → Akamai servers at the network edge (across NAPs) → end users.]

  11. Live Streaming: Architectural Overview
      [Diagram, steps 1–6: encoder → satellite uplink → satellite downlink → entry point → reflector regions; example stream URL: rtsp://a212.r.akareal.net/live/D/1/550/v1/reflector:21390]

  12. Mapping: Finding the Optimal Server
      • Very large scale.
        • Millions of end-users, tens of thousands of servers, thousands of networks, thousands of customers.
      • Fast reaction time.
        • Provide DNS resolution in milliseconds.
        • Monitor internet weather and react to internet problems in seconds.
        • Respond within seconds to rapid changes in load (local and global load balancing).
      • Highly fault tolerant.
        • Resilient to multiple component failures without ever disrupting service.

  13. Mapping Considerations
      Which server can provide the best stream quality for the end-user?
      • Liveness.
      • Network proximity to end-user.
        • Understand how measurable network parameters (latency, loss, oo-rate, …) affect stream quality metrics.
      • Current load with respect to capacity.
        • Model server load and capacity and use it to load balance.
      • Content locality.
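A toy illustration of how such considerations might be combined into a single server choice. The field names, the filtering thresholds, and the scoring formula are invented for illustration; this is not Akamai's actual mapping algorithm:

```python
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    alive: bool        # liveness check passed
    rtt_ms: float      # network proximity to the end-user
    load: float        # current load, as a fraction of modeled capacity
    has_content: bool  # content locality: stream already on this server

def pick_server(servers):
    """Choose the candidate expected to give the best stream quality.
    Hypothetical policy: dead or near-saturated servers are excluded,
    then lower RTT, lighter load, and a content-locality bonus decide."""
    candidates = [s for s in servers if s.alive and s.load < 0.9]
    if not candidates:
        return None
    return min(candidates,
               key=lambda s: s.rtt_ms * (1 + s.load) - (50 if s.has_content else 0))

servers = [
    Server("edge-a", True, 20, 0.5, True),
    Server("edge-b", True, 15, 0.95, True),   # overloaded: excluded
    Server("edge-c", False, 5, 0.1, True),    # failed liveness: excluded
    Server("edge-d", True, 40, 0.2, False),
]
print(pick_server(servers).name)  # edge-a
```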

  14. Modeling Server Load and Capacity
      Simple models such as raw machine bandwidth do not work!
      • Capacity = multi-dimensional polytope.
      • Dimensions include bandwidth, #conns, #distinct streams.
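One way to read "capacity is a polytope": a server is within capacity only if every dimension's inequality holds, including joint constraints that cut across dimensions. A hedged sketch with made-up limits (the numbers and the cross-term are illustrative, not a real server model):

```python
def within_capacity(bw_bps, conns, distinct_streams,
                    max_bw=1e9, max_conns=5000, max_streams=400):
    """Capacity as a polytope rather than a single number.
    Each inequality is one face of the polytope; the illustrative
    joint constraint says that serving many distinct streams reduces
    usable bandwidth (e.g. disk/cache pressure)."""
    return (bw_bps <= max_bw
            and conns <= max_conns
            and distinct_streams <= max_streams
            # a face cutting across two dimensions at once:
            and bw_bps / max_bw + distinct_streams / max_streams <= 1.5)

print(within_capacity(8e8, 3000, 100))  # True: inside every face
print(within_capacity(9e8, 3000, 350))  # False: 0.9 + 0.875 > 1.5
```

The point of the joint constraint is that a server can be under every per-dimension limit and still be over capacity, which is exactly why a single raw-bandwidth number fails.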

  15. Data Transport I
      [Diagram: encoder → entry point → reflectors → region; packets 1 2 3 4 carried over multiple paths with parity copies.]
      End-to-end metrics: minimize loss, "lateness", cost.
      • Multi-Path: detect loss, pull more copies, reconstruct clean copy.
      • Information dispersal across multiple paths to reduce overhead.
        • Example: Odd-Even-XOR, Reed-Solomon, etc.
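As a concrete instance of the XOR-style information dispersal mentioned above, a sketch: data packets travel on one path and a single XOR parity packet on another, so any one lost packet can be rebuilt from the rest. This is a generic single-parity scheme, not Akamai's exact Odd-Even-XOR encoding:

```python
from functools import reduce

def xor_parity(packets):
    """Parity packet = byte-wise XOR of all data packets (equal length)."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*packets))

def recover(received, parity):
    """Rebuild the single missing packet (marked None) by XOR-ing
    the surviving packets with the parity packet."""
    missing = received.index(None)
    present = [p for p in received if p is not None] + [parity]
    rebuilt = xor_parity(present)
    return received[:missing] + [rebuilt] + received[missing + 1:]

packets = [b"pkt1", b"pkt2", b"pkt3", b"pkt4"]
parity = xor_parity(packets)                 # sent on a second path
lossy = [b"pkt1", None, b"pkt3", b"pkt4"]    # pkt2 lost in transit
print(recover(lossy, parity) == packets)     # True
```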

  16. Data Transport II
      [Diagram: encoder → entry point → reflectors → region; packets rerouted around a failed path.]
      • Fast failover to alternate path without disrupting stream to the end-user.
      • Link-level recovery: retransmits and Forward Error Correction.

  17. Link-Level Recovery: FEC
      • Loss is bursty on the internet, so simple parity performs poorly: less than 50% loss recovery even with a 33% cost overhead! More sophisticated FEC schemes work better, e.g., interleaved parity.
      • (Measured at a link loss rate of 0.6%.)
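Interleaved parity tolerates bursts by spreading each parity group across the packet sequence, so a run of consecutive losses hits each group at most once. A sketch of the group layout (interleave depth and group sizes are illustrative):

```python
def parity_groups(n_packets, depth=3):
    """Interleaved group assignment: group g covers packets
    g, g+depth, g+2*depth, ...  A burst of up to `depth` consecutive
    losses costs each group at most one packet, which a single XOR
    parity per group can recover."""
    groups = [[] for _ in range(depth)]
    for i in range(n_packets):
        groups[i % depth].append(i)
    return groups

def recoverable(lost, groups):
    """Single-parity groups recover iff no group lost 2+ packets."""
    return all(sum(1 for i in g if i in lost) <= 1 for g in groups)

groups = parity_groups(9, depth=3)     # [[0,3,6], [1,4,7], [2,5,8]]
print(recoverable({3, 4, 5}, groups))  # burst of 3: one loss/group -> True
print(recoverable({3, 6}, groups))     # two losses in one group -> False
```

With simple (non-interleaved) parity the same burst of three would land entirely inside one group, which is exactly the failure mode the slide describes.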

  18. Link-Level Recovery: Retransmits
      • Good bang for the buck, esp. on low-latency links.
        • Low bandwidth overhead: it is proportional to the link loss rate.
        • High loss recovery.
        • Acceptable time to recovery.

  19. Fault-Tolerant Entrypoints
      No single point of failure!
      • Customer sends backup stream through another entrypoint (metafiles).
      • Intra-Region Failover: alternate machine picks up for failed machine.
      • Inter-Region Failover: stream automatically routed to different entrypoint.
      [Diagram: encoder → entry points → reflectors → region.]

  20. On-Demand Streaming: Architectural Overview
      [Diagram, steps 1–4: customer uploads to storage (Site A or Site B) or uses own web server; edge server fetches from storage if not cached and serves the end-user.]

  21. Storage for On-Demand Media
      • Multiple locations with huge storage capacity (10's of terabytes).
      • Customer uploads to the "optimal" storage site; content automatically replicated to multiple storage sites for fault-tolerance.
      [Diagram: replication between Storage Site A and Storage Site B.]

  22. Data Transport Issues
      Intelligent fetching and edge caching.
      • Cache segments rather than entire files.
      • Fetch data segments from "optimal" storage site.
      • Prefetch in advance of use.
      • Maximize cache hit rates using replacement policies optimized for stream usage characteristics.
      [Diagram: edge server fetching from Storage Site A or Storage Site B.]

  23. Data Caching
      • Intelligent edge caching.
        • Cache segments of media files rather than the whole.
        • Maximize cache hit rates using replacement policies optimized for stream usage characteristics.
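A minimal sketch of segment-level edge caching as described above: the cache keys on (file, segment) pairs rather than whole files, with LRU standing in for the stream-optimized replacement policy. The class and its eviction rule are illustrative, not Akamai's actual cache:

```python
from collections import OrderedDict

class SegmentCache:
    """Edge cache keyed on (media_file, segment_index) pairs.
    LRU eviction here is a stand-in for a replacement policy tuned
    to stream usage (e.g. favoring the heads of popular clips)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def get(self, media_file, segment):
        key = (media_file, segment)
        if key in self.store:
            self.store.move_to_end(key)       # cache hit: refresh recency
            return self.store[key]
        data = self._fetch_from_storage(key)  # miss: pull one segment only
        self.store[key] = data
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)    # evict least recently used
        return data

    def _fetch_from_storage(self, key):
        return f"bytes of {key[0]} segment {key[1]}"  # placeholder fetch

cache = SegmentCache(capacity=2)
cache.get("clip.mp4", 0)
cache.get("clip.mp4", 1)
cache.get("clip.mp4", 0)        # hit: segment 0 becomes most recent
cache.get("news.mp4", 0)        # evicts ("clip.mp4", 1), the LRU entry
print(("clip.mp4", 1) in cache.store)  # False
```

Caching per segment means a partially watched clip only occupies cache space for the segments actually viewed, which is what lets hit rates stay high under streaming access patterns.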

  24. Prebursting: Enhancing Stream Quality
      [Diagram: pre-burst data sent at each hop — encoder → entry point → reflector → region.]
      • Key Idea: each node keeps 20 seconds of history for every subscribed stream. If A subscribes to B for a stream, B initially bursts the stored history to A, and then streams at the required bitrate.
      • Drastically reduces startup time, thinning & loss, and rebuffering.
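The key idea above can be sketched as a relay node with a rolling 20-second history buffer; the class and method names are invented for illustration, not Akamai's implementation:

```python
import collections

class StreamNode:
    """Relay node keeping the last `history_s` seconds of a stream,
    per the slide's key idea (names are illustrative)."""
    def __init__(self, history_s=20):
        self.history_s = history_s
        self.history = collections.deque()  # (timestamp_s, chunk)

    def ingest(self, t, chunk):
        """Receive a live chunk; trim history older than history_s."""
        self.history.append((t, chunk))
        while self.history[0][0] < t - self.history_s:
            self.history.popleft()

    def subscribe(self):
        """A new subscriber first gets the stored history as one burst,
        then receives live chunks at the stream's required bitrate."""
        return [chunk for _, chunk in self.history]

node = StreamNode(history_s=20)
for t in range(60):                 # one chunk per second for a minute
    node.ingest(t, f"chunk-{t}")
burst = node.subscribe()
print(len(burst), burst[0])         # 21 chunk-39
```

Because the subscriber's buffer fills from the burst instead of at the live bitrate, playback can begin almost immediately, which is why startup time and rebuffering drop so sharply in the next slide's measurements.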

  25. Benefits of Prebursting

      Metric                        Without Preburst   With Preburst
      Stream Failure Percent        0.2%               0.1%
      Startup Time (ms)             9,824              5,467
      Actual Playback Bwth (b/s)    76,640             77,783
      Ideal Playback Bwth (b/s)     78,938             78,938
      Thinning & Loss Percent       2.9%               1.46%
      Rebuffers per min             0.227              0.006
      Rebuffer Time (sec per min)   1.60               0.06
      Number of Tests               2520               2520

      • Significant benefits! But stream- and format-dependent.
