
SABRE: A client-based technique for mitigating the buffer bloat effect of adaptive video flows
Ahmed Mansy, Mostafa Ammar (Georgia Tech), Bill Ver Steeg (Cisco)


Presentation Transcript


  1. SABRE: A client-based technique for mitigating the buffer bloat effect of adaptive video flows. Ahmed Mansy, Mostafa Ammar (Georgia Tech), Bill Ver Steeg (Cisco)

  2. What is buffer bloat? Large buffers increase queuing delays and also delay loss events, so TCP combined with large buffers produces significantly high queuing delays.
  • The TCP sender tries to fill the pipe by increasing the sender window (cwnd)
  • Ideally, cwnd should grow to the bandwidth-delay product, BDP = C x RTT
  • TCP uses packet loss to detect congestion, and then it reduces its rate
  [Diagram: client and server connected through a bottleneck link of capacity C bps with round-trip time RTT]
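A back-of-the-envelope sketch of these quantities, using the 6 Mbps / 100 ms RTT / 256-packet testbed numbers that appear later in the deck and an assumed 1500-byte packet size:

```python
# Rough bufferbloat arithmetic; the 1500-byte packet size is an assumption.
C = 6e6            # bottleneck capacity, bits per second
RTT = 0.100        # round-trip time, seconds
PKT = 1500 * 8     # packet size, bits

bdp_bits = C * RTT                  # bandwidth-delay product
queue_pkts = 256                    # tail-drop queue from the testbed
queue_delay = queue_pkts * PKT / C  # time to drain a completely full queue

print(f"BDP = {bdp_bits/8/1e3:.0f} KB (~{bdp_bits/PKT:.0f} packets)")
print(f"A full 256-packet queue adds ~{queue_delay*1000:.0f} ms of queuing delay")
```

With these numbers the BDP is about 75 KB (~50 packets), yet the queue can hold 256 packets, i.e. roughly 512 ms of extra delay when it fills.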

  3. DASH: Dynamic Adaptive Streaming over HTTP
  • Video is split into short segments, each available at multiple bitrates (e.g., 350, 600, 900, 1200 kbps) listed in a manifest
  • The DASH client fetches the manifest from the HTTP server and requests segments at a chosen bitrate
  • Playback begins with an initial buffering phase; once the playout buffer fills (100%), the player enters a steady state with an On/Off download pattern
  [Plot: download rate over time, showing the initial buffering phase followed by the On/Off steady state]
  S. Akhshabi et al., "An experimental evaluation of rate-adaptation algorithms in adaptive streaming over HTTP", MMSys '11

  4. Problem description: does DASH cause buffer bloat?
  • Will the quality of VoIP calls be affected by competing DASH flows?
  • And if yes, how can we solve this problem?
  [Diagram: DASH and VoIP flows sharing the same link]

  5. Our approach
  • To answer the first two questions, we perform experiments on a lab testbed to measure the buffer bloat effect of DASH flows
  • We developed SABRE (Smooth Adaptive BitRatE), a client-based scheme that mitigates this problem
  • We use the same testbed to evaluate our solution

  6. Measuring the buffer bloat effect: adaptive HTTP video flows have a significant effect on VoIP traffic
  • Testbed: an HTTP video server and a DASH client connected through a bottleneck emulator (1 Gbps in, 6 Mbps DSL-like out, RTT 100 ms, tail-drop queue of 256 packets)
  • Over-the-top (OTT) VoIP traffic is emulated with an iPerf client/server pair sending UDP at 80 kbps with 150-byte packets over the same bottleneck
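For concreteness, the VoIP-like load is simply a constant-bit-rate UDP stream. A minimal sender sketch matching the slide's numbers (80 kbps, 150-byte packets) is shown below; the destination address is hypothetical, and the authors used iPerf rather than custom code.

```python
import socket
import time

DEST = ("10.0.0.2", 5001)   # hypothetical receiver address/port
PKT_BYTES = 150             # packet size from the slide
RATE_BPS = 80_000           # 80 kbps VoIP-like load

interval = PKT_BYTES * 8 / RATE_BPS   # 0.015 s -> ~66.7 packets per second
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
payload = bytes(PKT_BYTES)

next_send = time.monotonic()
while True:
    sock.sendto(payload, DEST)        # constant-size, constant-rate probe packets
    next_send += interval
    time.sleep(max(0.0, next_send - time.monotonic()))
```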

  7. Understanding the problem – why do we get large bursts? TCP is bursty: the server sends a full window back to back on its 1 Gbps link, and that line-rate burst piles up in the queue of the 6 Mbps bottleneck.
  [Plot: packet arrival pattern on the 1 Gbps server link vs. the 6 Mbps bottleneck]

  8. Possible solutions
  • Middlebox techniques: Active Queue Management (AQM) – RED, BLUE, CoDel, etc. RED is on every router but hard to tune
  • Server techniques: rate limiting at the server to reduce burst size
  • Our solution: smooth download driven by the client

  9. Some hidden details: there are two data channels
  • Channel 1: the network delivers data (in response to HTTP GETs) into the OS socket buffer
  • Channel 2: the DASH player's recv() moves data from the socket buffer into the playout buffer
  • In traditional DASH players the download loop is "while(true) recv", so channels 1 and 2 are coupled
  [Diagram: server, OS socket buffer, DASH player and playout buffer, with HTTP GET and recv between them]
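In code, that traditional loop looks roughly like the sketch below (function name and chunk size are illustrative, not VLC's actual code):

```python
def greedy_recv(sock, content_length):
    """Traditional DASH player behavior: drain the OS socket buffer as fast
    as possible ("while(true) recv"). The socket buffer stays mostly empty,
    the advertised window (rwnd) stays large, and each segment arrives as
    one line-rate burst at the bottleneck."""
    data = bytearray()
    while len(data) < content_length:
        chunk = sock.recv(64 * 1024)
        if not chunk:          # connection closed early
            break
        data += chunk
    return bytes(data)
```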

  10. Idea: smooth the download to eliminate bursts
  • TCP can send a burst of at most min(rwnd, cwnd)
  • Since we cannot control cwnd, control rwnd instead
  • rwnd is a function of the empty space in the receiver's socket buffer
  • Two objectives: keep the socket buffer almost full all the time, and do not starve the playout buffer
  [Diagram: server, client socket buffer advertising rwnd, DASH player and playout buffer]

  11. Keeping the socket buffer full – controlling the recv rate
  • Traditional player: "while(1) recv" drains the socket buffer as fast as possible; each GET triggers a bursty download, and the socket buffer sits empty during the Off periods
  • SABRE: "while(timer) recv" drains the socket buffer at a paced rate, so the download rate stays smooth and the socket buffer stays almost full
  [Timing diagrams: GET S1 / S1 / GET S2 / S2 exchanges and the resulting bursty On/Off vs. smooth rate curves for the two strategies]
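A minimal sketch of the timer-paced read, assuming the drain rate is set to the video bitrate (the chunk size and default bitrate here are illustrative):

```python
import time

def paced_recv(sock, content_length, video_bitrate=900_000, chunk=16 * 1024):
    """SABRE-style "while(timer) recv": drain the socket buffer at roughly
    the video bitrate. Unread data keeps the socket buffer nearly full, so
    the advertised window (rwnd) stays small and the sender cannot burst."""
    interval = chunk * 8 / video_bitrate   # seconds per chunk at the target rate
    data = bytearray()
    while len(data) < content_length:
        piece = sock.recv(min(chunk, content_length - len(data)))
        if not piece:
            break
        data += piece
        time.sleep(interval)               # timer-driven pacing of the drain
    return bytes(data)
```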

  12. Keeping the socket buffer full – HTTP pipelining
  • With one outstanding request, the socket buffer drains while the next GET is in flight, so rwnd grows again between segments
  • SABRE pipelines GETs so that the socket buffer is always full and rwnd stays small
  • Number of requests to keep outstanding: #Segments = 1 + (Socket buffer size / Segment size)
  [Timing diagrams: sequential GETs with Off gaps vs. pipelined GET S1, S2 / GET S3 / GET S4 keeping the socket buffer full]
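A quick worked example with assumed numbers (a 256 KB socket buffer and 2-second segments at 900 kbps, i.e. about 225 KB per segment; neither value appears on the slide, and the division is rounded up):

```python
# Assumed values for illustration only.
socket_buffer = 256 * 1024                       # bytes
segment = 2 * 900_000 // 8                       # 2 s at 900 kbps = 225,000 bytes

outstanding = 1 + -(-socket_buffer // segment)   # 1 + ceil(buffer / segment)
print(outstanding)                               # -> 3 pipelined GETs in flight
```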

  13. Still one more problem
  • The socket buffer level drops temporarily when the available bandwidth drops below the video bitrate
  • This results in larger values of rwnd
  • That can lead to large bursts and hence delay spikes
  • Continuous monitoring of the socket buffer level can help
  [Plot: socket buffer level dipping when the available bandwidth falls below the video bitrate]
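One way to monitor the receive-side fill level from user space is the FIONREAD ioctl. This is a Linux-specific sketch; the slides do not say how the VLC implementation of SABRE obtains this value.

```python
import fcntl
import socket
import struct
import termios

def socket_buffer_level(sock):
    """Bytes currently queued, unread, in the receive socket buffer (Linux)."""
    raw = fcntl.ioctl(sock.fileno(), termios.FIONREAD, struct.pack("I", 0))
    return struct.unpack("I", raw)[0]

def buffer_headroom(sock):
    """Approximate empty space, i.e. roughly what rwnd can grow to."""
    capacity = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
    return capacity - socket_buffer_level(sock)
```

A monitor could slow the recv pacing further when the headroom grows, before rwnd becomes large enough to permit another big burst.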

  14. Experimental results
  • Same testbed as before: HTTP video server and DASH client behind a bottleneck emulator (1 Gbps in, 6 Mbps DSL-like out, RTT 100 ms, tail-drop queue of 256 packets), with OTT VoIP traffic emulated by iPerf (UDP, 80 kbps, 150-byte packets)
  • We implemented SABRE in the VLC DASH player

  15. Single DASH flow – constant available bandwidth
  • On/Off player: delay > 200 ms about 40% of the time
  • SABRE: delay < 50 ms 100% of the time
  [Plot: queuing delay for SABRE vs. the On/Off player]

  16. Video adaptation: how does SABRE react to variable bandwidth?
  • While the socket buffer is full, the player cannot estimate the available bandwidth, so it periodically tries to up-shift to a higher bitrate
  • If it cannot sustain the higher bitrate, the socket buffer gets drained; the player reduces its recv rate and down-shifts to a lower bitrate
  • If the player can support the current bitrate, it shoots for a higher one
  [Annotated plot: available bandwidth, video bitrate, and socket buffer level over time, with the up-shift and down-shift events marked]
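The narrative above suggests bitrate decisions driven by the socket-buffer level rather than by a measured download rate. The following is a hypothetical sketch of that kind of rule; the thresholds and structure are assumptions, not the paper's adaptation algorithm.

```python
def adapt_bitrate(level, capacity, current, bitrates):
    """Hypothetical buffer-level-driven adaptation. A full socket buffer hides
    the available bandwidth, so the player probes upward; a draining buffer
    means the current bitrate cannot be sustained."""
    i = bitrates.index(current)
    if level < 0.5 * capacity:                       # buffer draining: down-shift
        return bitrates[max(i - 1, 0)]
    if level > 0.95 * capacity:                      # buffer full: probe a higher bitrate
        return bitrates[min(i + 1, len(bitrates) - 1)]
    return current                                   # otherwise hold the current bitrate
```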

  17. Single DASH flow – variable available bandwidth
  [Plot: download rate over time for SABRE vs. On/Off; the available bandwidth switches between 6 Mbps and 3 Mbps at T=180 s and T=380 s]

  18. Two clients
  • Two clients, C1 and C2, share the bottleneck to the server
  • Compared configurations: two On/Off clients vs. two SABRE clients
  [Diagram: server connected to clients C1 and C2 through the shared bottleneck]

  19. Summary
  • The On/Off behavior of adaptive video players can have a significant buffer bloat effect
  • We designed and implemented a client-based technique to mitigate this problem
  • A single On/Off client significantly increases queuing delays
  • Future work: improve the SABRE adaptation logic for a mix of On/Off and SABRE clients, and investigate DASH-aware middlebox and server-based techniques

  20. Thank you! Questions?

  21. Backup slides

  22. How can we eliminate large bursts? Can RED (Random Early Detection) help?
  • RED drops packets with a probability that rises from P=0 to P=1 as the average queue size grows between a min and a max threshold
  • But once the burst is on the wire, not much can be done!
  [Plot: RED loss probability vs. average queue size]

  23. Single DASH flow – constant available bandwidth (SABRE) [plot]

  24. Single DASH flow – constant available bandwidth
  • On/Off player: delay > 200 ms about 40% of the time
  • SABRE: delay < 50 ms 100% of the time
  [Plot: queuing delay for SABRE vs. the On/Off player]

  25. Single DASH flow – variable available bandwidth
  [Plot: download rate over time for SABRE vs. On/Off; the available bandwidth switches between 6 Mbps and 3 Mbps at T=180 s and T=380 s]

  26. Single ABR flow – variable available bandwidth [plot: SABRE vs. On/Off]

  27. Two clients: at least one On/Off DASH client significantly increases queuing delays [plot]

  28. Two clients
