
Transport of Real-Time Traffic over the Internet

Presentation Transcript


  1. Transport of Real-Time Traffic over the Internet. Bernd Girod, Information Systems Laboratory, Stanford University

  2. THE MEANING OF FREE SPEECH "The acquisition by eBay of Skype is a helpful reminder to the world's trillion-dollar telecoms industry that all phone calls will eventually be free . . . Ultimately—perhaps by 2010—voice may become a free internet application, with operators making money from related internet applications like IPTV . . ." [Economist, September 2005]

  3. IPTV Rollout • Verizon: IPTV to 10M households by 2009 • SBC: IPTV to 18M households by 2007 [IEEE Spectrum, Jan. 2005]

  4. Why Is Real-Time Transport Hard? The Internet is a best-effort network . . . • Congestion: insufficient rate to communicate • Packet loss: impairs perceptual quality • Delay: impairs interactivity of services; telephony requires one-way delay < 150 ms [ITU-T Rec. G.114] • Delay jitter: obstructs continuous media playout

  5. Outline of the Talk • QoS vs. best effort • Resource allocation for IPTV • Rate-distortion optimized streaming • Multi-path routing • P2P multicasting of live video streams

  6. How 1B Users Share the Internet • TCP throughput (data rate) r ≈ 1.22 · MTU / (RTT · √p), where MTU is the maximum transfer unit, RTT the round-trip time, and p the packet loss rate • [Figure: rate r vs. packet loss rate p from 0.0001 to 0.1; growing congestion drives p up and r down] [Mahdavi, Floyd, 1997] [Floyd, Handley, Padhye, Widmer, 2000]
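
The relation on this slide can be tried out numerically. Below is a minimal Python sketch of the simplified TCP throughput model cited above (rate proportional to MTU/(RTT·√p), constant ≈ 1.22); the packet size and round-trip time are illustrative assumptions, not figures from the talk.

```python
import math

def tcp_throughput(mtu_bytes: float, rtt_s: float, loss_rate: float) -> float:
    """Simplified TCP-friendly throughput model (Mahdavi/Floyd form):
    rate ~ 1.22 * MTU / (RTT * sqrt(p)), returned in bits per second."""
    return 1.22 * mtu_bytes * 8 / (rtt_s * math.sqrt(loss_rate))

# Example: 1500-byte packets, 100 ms round-trip time, loss rates from 0.01% to 10%
for p in (0.0001, 0.001, 0.01, 0.1):
    r = tcp_throughput(1500, 0.1, p)
    print(f"p = {p:7.4f}  ->  r ~ {r / 1e3:8.1f} kbps")
```

Each factor-of-100 increase in the loss rate cuts the sustainable rate by a factor of 10, which is why congestion-responsive real-time flows must back off sharply under loss.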

  7. QoS vs. Best Effort • Reservation-ism: voice and video need guaranteed QoS (bandwidth, loss, delay); implement admission control ("busy tone" when the network is full); best effort is fine for data applications • Best Effort-ism: best effort is good enough for all applications; real-time applications can be made adaptive to cope with any level of service; overprovisioning always solves the problem, and it's cheaper than QoS guarantees

  8. Simple Model of a Shared Link • Link of capacity C is shared among k flows • Fair sharing: each flow uses data rate C/k • Homogeneous flows with the same utility function u(.) • Total utility U = k · u(C/k) [Breslau, Shenker, 1998]

  9. Rigid Applications • Utility u = 0 below a minimum bit-rate B, u = 1 above it • Maximum total utility U = k* is achieved by admitting at most k* flows (the largest number that can each receive at least B, k* = ⌊C/B⌋) • [Figure: step utility u vs. per-flow rate C/k, jumping from 0 to 1 at B] [Breslau, Shenker, 1998]

  10. Rigid Applications (cont.) • Expected loss in total utility ΔU without admission control • Gap ΔU is substantial when the number of admissible flows k* is small • Gap ΔU usually disappears with growing capacity C: overprovisioning can solve the problem! [Breslau, Shenker, 1998]

  11. Elastic Applications • Elastic applications: utility function u(.) such that total utility U(k) = k · u(C/k) increases with k • Example: u(C/k) = 1 - a^(C/k) with 0 < a < 1 • All flows should be admitted: best effort! • [Figure: concave, saturating utility u vs. per-flow rate C/k]
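
To make the contrast between slides 8-11 concrete, here is a small numerical sketch of the shared-link model. The step utility for rigid flows and the saturating utility u(r) = 1 - a^r for elastic flows follow the forms suggested on the slides; the capacity, minimum rate, and the value of a are made-up illustrative numbers.

```python
def total_utility_rigid(C: float, k: int, B: float) -> float:
    """k homogeneous rigid flows sharing capacity C fairly:
    each flow gets C/k and contributes utility 1 only if C/k >= B."""
    return k * (1.0 if C / k >= B else 0.0)

def total_utility_elastic(C: float, k: int, a: float = 0.5) -> float:
    """k elastic flows with saturating utility u(r) = 1 - a**r (0 < a < 1)."""
    return k * (1.0 - a ** (C / k))

C, B = 10.0, 1.0      # link capacity and minimum rate of a rigid flow (made-up units)
k_star = int(C // B)  # largest number of rigid flows the link can satisfy
print(f"k* = {k_star}")

for k in (5, 10, 20, 40):
    print(f"k={k:3d}  rigid U={total_utility_rigid(C, k, B):5.1f}"
          f"  elastic U={total_utility_elastic(C, k):6.2f}")
# Rigid utility collapses to zero once k exceeds k*: admission control (or ample
# overprovisioning) is needed. Elastic utility keeps growing with k: best effort suffices.
```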

  12. Video Compression • [Figure: rate-distortion curves of H.264 video coding for two different test sequences, Foreman and Mobile; picture quality from bad to good vs. bit-rate] • Video is an elastic application • Rate must be adapted to network throughput • How to achieve rate control for stored content or multicasting? • Utility function depends on content: should use unequal rate allocation

  13. Different Utility Functions • Example: u_k(r_k) = 1 - a_k^(r_k) • With r_k ≥ 0 → Karush-Kuhn-Tucker conditions ("reverse water-filling"): equal-slope "Pareto condition" • Better than utility-oblivious "fair" sharing • [Figure: utility curves u_k vs. rate r_k; portrait of Vilfredo Pareto, 1848-1923]
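
A minimal sketch of the equal-slope ("reverse water-filling") allocation, assuming the utility form u_k(r_k) = 1 - a_k^(r_k) from the slide. It finds the common slope λ by bisection on the total-rate constraint; the function name, the per-flow parameters a_k, and the rate budget are illustrative, and this is not the exact optimization code behind the results in the talk.

```python
import math

def rate_allocation(a: list[float], C: float, iters: int = 60) -> list[float]:
    """Equal-slope ("reverse water-filling") allocation for utilities
    u_k(r_k) = 1 - a_k**r_k with r_k >= 0 and sum_k r_k = C.
    KKT: u_k'(r_k) = lam wherever r_k > 0, else u_k'(0) <= lam."""
    def rates(lam: float) -> list[float]:
        r = []
        for ak in a:
            slope0 = -math.log(ak)             # u_k'(0), the slope at zero rate
            if lam >= slope0:
                r.append(0.0)                  # flow gets nothing at this slope
            else:                              # solve u_k'(r) = lam for r
                r.append((math.log(lam) - math.log(slope0)) / math.log(ak))
        return r

    lo, hi = 1e-9, max(-math.log(ak) for ak in a)
    for _ in range(iters):                     # bisection on the common slope
        lam = 0.5 * (lo + hi)
        if sum(rates(lam)) > C:
            lo = lam                           # too much rate handed out -> raise slope
        else:
            hi = lam
    return rates(0.5 * (lo + hi))

# Three flows with different content "difficulty"; total rate budget C = 10
print([round(r, 2) for r in rate_allocation([0.5, 0.7, 0.9], 10.0)])
```

Flows whose utility saturates slowly (a_k closer to 1, i.e., harder content) receive more rate, which is the unequal allocation the slide argues for.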

  14. Distribution of IPTV over WLAN • [Figure: home media gateway distributing video over a WLAN whose link rates differ per receiver, e.g. 11 Mbps, 5 Mbps, 2 Mbps] [courtesy: van Beek, 2004]

  15. Video Streaming over Shared Channel • [Figure: four transcoder/decoder pairs (streams 0-3) share a multi-channel receiver, coordinated by a controller] [Kalman, van Beek, Girod 2005]

  16. Tx Backlog for 4 Video Streams • [Figure: transmitter backlog over time for four video streams at 85% WLAN utilization] [Kalman, van Beek, Girod 2005]

  17. Streaming of Stored Content • Media files are already compressed: how can we nevertheless adapt to the network? • [Figure: server delivering 100s to 1000s of simultaneous streams across the network to clients over DSL, cable, and wireless access links]

  18. Not All Packets are Equally Important • [Figure: packet stream with I, P, and B video frames and audio packets (A); later frames in a group of pictures depend on the preceding I frame]

  19. Not All Packets are Equally Important • [Figure: the same packet stream, shown as a further animation step of the previous slide]

  20. Distortion-Aware Packet Dropping • [Figure: picture quality (bad to good) vs. percentage of packets retained [%], comparing distortion-aware packet dropping against oblivious dropping; QCIF Carphone, I-P-P-P-P-P- . . . coding structure, no retransmissions] [Chakareski, Girod, ICME 2004]
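
The idea of the slide can be illustrated with a toy greedy rule: when the rate budget is insufficient, retain the packets whose loss would hurt picture quality the most (per byte) and drop the rest, instead of dropping obliviously. The per-packet distortion values, packet sizes, and the greedy per-byte ranking below are assumptions for illustration, not the algorithm of [Chakareski, Girod, ICME 2004].

```python
from dataclasses import dataclass

@dataclass
class Packet:
    frame: int
    size_bytes: int
    distortion_if_lost: float   # estimated increase in distortion if this packet is dropped

def distortion_aware_drop(packets: list[Packet], budget_bytes: int) -> list[Packet]:
    """Greedy sketch: keep the packets with the highest distortion impact per byte
    until the rate budget is used up; oblivious dropping would ignore the impact."""
    ranked = sorted(packets, key=lambda p: p.distortion_if_lost / p.size_bytes,
                    reverse=True)
    kept, used = [], 0
    for p in ranked:
        if used + p.size_bytes <= budget_bytes:
            kept.append(p)
            used += p.size_bytes
    return sorted(kept, key=lambda p: p.frame)   # restore decoding order

# Example: the I-frame packet carries far more distortion impact than the P packets
stream = [Packet(0, 1200, 900.0), Packet(1, 500, 120.0),
          Packet(2, 400, 40.0), Packet(3, 500, 110.0)]
print([p.frame for p in distortion_aware_drop(stream, 2200)])
```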

  21. Rate-Distortion Optimized (RaDiO) Streaming • "Decide which packets to send (and when) to maximize picture quality while not exceeding an average rate" [2001] • [Figure: server, network, and client; the server holds a rate-distortion preamble and the video data, computes a packet schedule, and reacts to the client's request stream and repeat requests]

  22. A Brief History of Media Streaming • Media streaming w/o congestion avoidance: “reckless driving” [1995] • TCP-friendly rate control: “Limit average rate for fair sharing with TCP” [1997] • Rate-distortion optimized packet scheduling (RaDiO): “Decide which packets to send (and when) to maximize picture quality while not exceeding an average rate” [2001] • Congestion-distortion-optimized scheduling/routing (CoDiO): “Decide which packets to send (and when) to maximize picture quality while minimizing network congestion.” [2004]

  23. Congestion vs. Rate • Congestion: queuing delay that packets experience, weighted by the size of the packet and averaged over all packets in the network • Congestion increases nonlinearly with link bit-rate • [Figure: congestion D [seconds] vs. rate R, diverging as R approaches the link capacity Rmax]
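
The nonlinear growth of congestion with rate can be illustrated with a single-link queuing sketch. The M/M/1-style delay formula and the packet size below are modeling assumptions chosen for illustration; the talk's congestion measure is the size-weighted queuing delay averaged over all packets in the network, which this toy model only approximates for one link.

```python
def queuing_delay(rate_kbps: float, capacity_kbps: float,
                  packet_bits: float = 8000.0) -> float:
    """M/M/1-style average delay (queuing + transmission) in seconds on one link.
    Delay stays small at low utilization and blows up as rate approaches capacity."""
    if rate_kbps >= capacity_kbps:
        return float("inf")
    service_rate = capacity_kbps * 1e3 / packet_bits      # packets per second
    arrival_rate = rate_kbps * 1e3 / packet_bits
    return 1.0 / (service_rate - arrival_rate)

capacity = 1000.0   # illustrative 1 Mbps bottleneck
for r in (100, 500, 800, 900, 950, 990):
    print(f"R = {r:4d} kbps  ->  D ~ {queuing_delay(r, capacity) * 1e3:7.1f} ms")
```

Doubling the sending rate near capacity multiplies the delay many times over, which is the "self-congestion" effect discussed on the next slide.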

  24. Video Distortion with Self-Congestion • Self-congestion causes late loss • [Figure: picture quality (bad to good) vs. bit-rate [kbps]]

  25. Streaming with Last-Hop Bottleneck • [Figure: video traffic traverses high-bandwidth links with random cross traffic, then a low-bandwidth last hop to the client; acknowledgments flow back to the sender]

  26. Delay Distribution • Overall delay distribution • Queue length determines the delay of the last hop • [Figure: pdf of packet delay]

  27. Comparison RaDiO vs. CoDiO • [Figure: PSNR [dB] vs. rate [kbps] and PSNR [dB] vs. end-to-end delay [ms]] • Simulations using H.263+ • Frame rate: 10 fps • Sequence: Foreman (32 kbps, 32 kbps) • Sequence length: 60 s • Playout deadline: 600 ms

  28. How To Avoid Traffic Jams? • Avoid congested times . . . Congestion-aware packet scheduling • Avoid congested roads . . . Congestion-aware routing

  29. Multipath Routing for Minimum Congestion • Mesh network, fully connected • Streaming 100 kbps from Node 1 to Node 5 • Random cross traffic • [Figure: mesh topology with per-link cross-traffic rates in kbps]
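
A small sketch of the rate-splitting idea behind this slide: given the residual (bottleneck) capacity of a few candidate paths, spread the 100 kbps stream so that no single path is pushed toward congestion. The greedy increment rule, the path capacities, and the function name are illustrative assumptions, not the routing optimization used in the talk.

```python
def split_rate(paths_residual_kbps: list[float], stream_kbps: float,
               step_kbps: float = 1.0) -> list[float]:
    """Greedy sketch: assign the stream in small increments, always to the path
    whose bottleneck would remain least utilized, so no single path is driven
    into heavy congestion."""
    alloc = [0.0] * len(paths_residual_kbps)
    for _ in range(int(stream_kbps / step_kbps)):
        # bottleneck utilization of each path if it received one more increment
        util = [(alloc[i] + step_kbps) / paths_residual_kbps[i]
                for i in range(len(alloc))]
        best = min(range(len(alloc)), key=lambda i: util[i])
        alloc[best] += step_kbps
    return alloc

# Three candidate paths with different residual (bottleneck) capacities in kbps
print(split_rate([150.0, 60.0, 40.0], 100.0))
```

The result is roughly proportional to residual capacity, which equalizes bottleneck load across the paths.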

  30. Multipath Video Streaming • [Figure: picture quality (bad to good) vs. bit-rate [kbps]; multipath streaming gains about 6 dB] • Sequence: Foreman QCIF, 250 frames, 30 fps • Codec: H.26L TML 8.5 • Playout deadline: 500 ms • Packetization: 1 frame/packet • Traffic model: CBR • No. of realizations: 400

  31. Multipath Video Streaming • 3 paths: 187 kbps, PSNR 36.2 dB • 1 path: 80 kbps, PSNR 32.5 dB

  32. Distribution of Live Streams via "Pseudo-Multicast" • Example: AOL webcast of the Live 8 concert, July 2, 2005 • [Figure: media server feeding a content delivery network of splitter servers (1500 servers in 90 locations, 50 Gbps) that unicast to 175,000 simultaneous viewers; 8M unique viewers overall]

  33. Distribution of Live Streams via "Pseudo-Multicast" (cont.) • Example: AOL webcast of the Live 8 concert, July 2, 2005 • [Figure: the same CDN setup (1500 servers in 90 locations, 50 Gbps, 175,000 simultaneous viewers, 8M unique viewers), contrasted with P2P live multicast of the 300 kbps stream]

  34. P2P Multicast over 1 Tree

  35. P2P Multicast over 2 Trees
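
A brief sketch of the multi-tree idea behind slides 34-35: the stream is split into substreams, one per distribution tree, so the failure of a parent in one tree interrupts only a fraction of the packets. Whether the system in the talk interleaves packets exactly this way is not stated; the round-robin assignment below is an assumed, common realization.

```python
def assign_to_trees(packet_ids: list[int], num_trees: int) -> dict[int, list[int]]:
    """Round-robin sketch: packet i travels down tree (i mod num_trees), so an
    ungraceful parent leave in one tree affects only 1/num_trees of the stream."""
    trees: dict[int, list[int]] = {t: [] for t in range(num_trees)}
    for i in packet_ids:
        trees[i % num_trees].append(i)
    return trees

# 12 packets spread over 2 distribution trees
print(assign_to_trees(list(range(12)), 2))
```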

  36. P2P Ungraceful Parent Leave • [Figure: animation with 3 trees; the parent of the yellow tree goes down, the parent leave is detected ("Hello, Yellow Tree Parent?"), retransmissions are requested, a new parent is selected, and the yellow tree is recovered]

  37. Experimental Set-up • Network/protocol simulation in ns-2 • 1000 nodes • 300 active peers • Random peer arrival/departure: ON (5 min)/OFF (30 s) • Over-provisioned backbone • Typical access bandwidth distribution • Delay: 5 ms/link + congestion • Video streaming • Compression H.264 at 220 kbps • 15 minute live multicast [Setton, Noh, Girod, ACM MM 2005]

  38. Join and Rejoin Latencies [Setton, Noh, Girod, ACM MM 2005]

  39. Congestion-Distortion Optimized P2P Live Streaming • [Figure: percentage of peers connected to 4/4 trees over time, without CoDiO vs. with CoDiO] [Setton, Noh, Girod, ACM MM 2005]

  40. P2P Video Multicast: 64 out of 300 Peers • Congestion-distortion optimized (CoDiO) streaming vs. without CoDiO • H.264 @ 220 kbps, 2 sec latency for all streams

  41. Concluding Remarks • Over-provisioning makes QoS superfluous • Elastic applications don't need QoS • Joint rate control for access bottlenecks (e.g. IPTV, WLAN) • Media-aware congestion control (e.g. CoDiO) • Multipath routing to mitigate congestion • P2P is a viable alternative to content delivery networks: client-server → edge-based → P2P

  42. The End http://www.stanford.edu/~bgirod/publications.html
