
Measurement Study of Low-bitrate Internet Video Streaming

Dmitri Loguinov, Hayder Radha

City University of New York and Philips Research

Motivation
  • Market research by Telecommunications Reports International (TRI) from August 2001
  • The report estimates that out of 70.6 million households in the US with Internet access, an overwhelming majority use dialup modems (Q2 of 2001)
Motivation (cont’d)
  • Consider broadband (cable, DSL, and satellite) Internet access vs. narrowband (dialup and webTV)
  • 89% of households use modems and 11% broadband
Motivation (cont’d)
  • Customer growth in Q2 of 2001:
    • +5.2% dialup
    • +0.5% cable modem
    • +29.7% DSL
    • Even though DSL grew quickly in early 2001, the situation is now different, with many local providers going bankrupt and TELCOs raising prices
  • A report from Cahners In-Stat Group found that despite strong forecasted growth in broadband access, most households will still be using dial-up access in the year 2005 (similar forecast in Forbes ASAP, Fall 2001, by IDC market research)
  • Furthermore, 30% of US households state that they still have no need or desire for Internet access
Motivation (cont’d)
  • Our study was conducted in late 1999 – early 2000
  • At that time, even more Internet users connected through dialup modems, which sparked our interest in using modems as the primary access technology for our study
  • However, even today, our study could be considered up-to-date given the numbers from recent market research reports
  • In fact, the situation is unlikely to change until the year 2005
  • Our study examines the performance of the Internet from the angle of an average Internet user
Motivation (cont’d)
  • Note that previous end-to-end studies positioned end-points at universities or campus networks with typically high-speed (T1 or faster) Internet access
  • Hence, the performance of the Internet perceived by a regular home user remains largely undocumented
  • Rather than using TCP or ICMP probes, we employed real-time video traffic in our work, because performance studies of real-time streaming in the Internet are quite scarce
  • The choice of NACK-based flow control was suggested by RealPlayer and Windows MediaPlayer, which dominate the current Internet streaming market
Overview of the Talk
  • Experimental Setup
  • Overview of the Experiment
  • Packet Loss
  • Underflow Events
  • Round-trip Delay
  • Delay Jitter
  • Packet Reordering
  • Asymmetric Paths
  • Conclusion
Experimental Setup
  • MPEG-4 client-server real-time streaming architecture
  • NACK-based retransmission and fixed streaming bitrate (i.e., no congestion control)
  • Stream S1 at 14 kb/s (16.0 kb/s IP rate), Nov-Dec 1999
  • Stream S2 at 25 kb/s (27.4 kb/s IP rate), Jan-May 2000
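As a rough illustration of the NACK-based recovery in this setup, the sketch below shows a minimal UDP receiver that detects sequence-number gaps and requests retransmissions; the packet format, control-message type, and port are hypothetical, not the architecture's actual protocol.

# Minimal sketch of a NACK-based receiver (assumed packet format:
# 4-byte big-endian sequence number followed by the video payload).
import socket
import struct

NACK_TYPE = 1          # hypothetical control-message type

def run_receiver(sock: socket.socket) -> None:
    expected = 0       # next sequence number we expect to receive
    while True:
        data, server = sock.recvfrom(2048)
        (seq,) = struct.unpack("!I", data[:4])
        if seq > expected:
            # Gap detected: send one NACK per missing sequence number.
            for missing in range(expected, seq):
                sock.sendto(struct.pack("!BI", NACK_TYPE, missing), server)
        expected = max(expected, seq + 1)
        # data[4:] would be handed to the decoder buffer here

if __name__ == "__main__":
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("", 5004))  # hypothetical port
    run_receiver(s)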
Overview
  • Three ISPs (Earthlink, AT&T WorldNet, IBM Global Net)
  • Phone database included 1,813 dialup points in 1,188 cities
  • The experiment covered 1,003 points in 653 US cities
  • Over 34,000 long-distance phone calls
  • 85 million video packets, 27.1 GBytes of data
  • End-to-end paths with 5,266 Internet router interfaces
  • 51% of routers from dialup ISPs and 45% from UUnet
  • Sets D1p and D2p contain successful sessions with streams S1 and S2, respectively
Overview (cont’d)
  • Cities per state that participated in the experiment
Overview (cont’d)
  • Streaming success rate varied over the course of the day
  • It ranged from 80% during the night (midnight – 6 am) down to 40% during the day (9 am – noon)
Overview (cont’d)
  • Average end-to-end hop count 11.3 in D1p and 11.9 in D2p
  • The majority of paths contained between 11 and 13 hops
  • Shortest path – 6 hops, longest path – 22 hops
Packet Loss
  • Average packet loss was 0.5% in both datasets
  • 38% of 10-min sessions with no packet loss
  • 75% with loss below 0.3%
  • 91% with loss below 2%
  • During the day, average packet loss varied between 0.2% (3 am – 6 am) and 0.8% (9 am – 6 pm EDT)
  • Per-state packet loss varied from 0.2% (Idaho) to 1.4% (Oklahoma), but did not depend on the average RTT or the average number of end-to-end hops in the state (correlation –0.16 and –0.04, respectively)
Packet Loss (cont’d)
  • 207,384 loss bursts and 431,501 lost packets
  • Loss burst lengths (PDF and CDF):
Packet Loss (cont’d)
  • Most bursts contained no more than 5 packets (however, the tail extended to over 100 packets)
  • RED was disabled in the backbone; nevertheless, 74% of loss bursts contained only one packet, apparently dropped in FIFO (drop-tail) queues
  • Average burst length was 2.04 packets in D1p and 2.10 packets in D2p
  • Conditional probability of packet loss was 51% and 53%, respectively
  • Over 90% of loss burst durations were under 1 second (maximum 36 seconds)
  • The average distance between lost packets was 21 and 27 seconds, respectively
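These burst statistics follow directly from a per-packet loss trace; a minimal sketch, assuming a boolean loss indicator per sequence number rather than the study's actual trace format:

# Sketch: loss-burst lengths and the conditional loss probability
# P(packet i lost | packet i-1 lost) from an ordered loss trace.
from typing import List

def burst_lengths(lost: List[bool]) -> List[int]:
    bursts, run = [], 0
    for was_lost in lost:
        if was_lost:
            run += 1
        elif run:
            bursts.append(run)
            run = 0
    if run:
        bursts.append(run)
    return bursts

def conditional_loss_prob(lost: List[bool]) -> float:
    pairs = [(a, b) for a, b in zip(lost, lost[1:]) if a]
    return sum(b for _, b in pairs) / len(pairs) if pairs else 0.0

trace = [False, True, True, False, False, True, False]
print(burst_lengths(trace))          # [2, 1]
print(conditional_loss_prob(trace))  # 1/3: one of three packets following a loss was also lost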
Packet Loss (cont’d)
  • Apparently heavy-tailed distributions of loss burst lengths
  • Pareto with α = 1.34; however, note that the data was non-stationary (time-of-day or access-point non-stationarity)
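The shape parameter of such a Pareto tail can be estimated with the standard maximum-likelihood formula; a sketch on synthetic burst lengths (the value 1.34 above is the paper's estimate and is not reproduced by this toy data):

# Sketch: MLE of the Pareto shape parameter alpha for the tail of the
# loss-burst-length distribution, treating burst lengths as continuous.
import math
from typing import Iterable

def pareto_alpha_mle(samples: Iterable[float], x_min: float = 1.0) -> float:
    tail = [x for x in samples if x >= x_min]
    if not tail:
        raise ValueError("no samples at or above x_min")
    return len(tail) / sum(math.log(x / x_min) for x in tail)

bursts = [1, 1, 1, 2, 2, 3, 5, 8, 14, 40]   # synthetic burst lengths
print(round(pareto_alpha_mle(bursts, x_min=1.0), 2))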
Underflow Events
  • Missing packets (frames) at their decoding deadlines cause buffer underflows at the receiver
  • Startup delay used in the experiment was 2.7 seconds
  • 63% (271,788) of all lost packets were discovered to be missing before their deadlines
  • Out of these 63% of lost packets:
    • 94% were recovered in time
    • 3.3% were recovered late
    • 2.1% were never recovered
  • Retransmission appears quite effective in dealing with packet loss, even in the presence of large end-to-end delays
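A minimal sketch of the deadline bookkeeping behind these numbers, with a hypothetical per-packet record (the study's trace format may differ):

# Sketch: classify a lost packet by whether its retransmission arrived
# before the decoding deadline (playout time shifted by the startup delay).
from dataclasses import dataclass
from typing import Optional

STARTUP_DELAY = 2.7                      # seconds, as used in the experiment

@dataclass
class LostPacket:
    playout_time: float                  # when the decoder needs the packet
    retransmit_arrival: Optional[float]  # None if never recovered

def classify(p: LostPacket) -> str:
    deadline = p.playout_time + STARTUP_DELAY
    if p.retransmit_arrival is None:
        return "never recovered"
    return "recovered in time" if p.retransmit_arrival <= deadline else "recovered late"

print(classify(LostPacket(10.0, 11.5)))  # recovered in time
print(classify(LostPacket(10.0, 13.4)))  # recovered late (missed by 0.7 s)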
Underflow Events (cont’d)
  • 37% (159,713) of lost packets were discovered to be missing after their deadlines had passed
  • This effect was caused by large one-way delay jitter
  • Additionally, one-way delay jitter caused 1,167,979 data packets to be late for decoding
  • Overall, 1,342,415 packets were late (1.7% of all sent packets), out of which 98.9% were late due to large one-way delay jitter rather than due to packet loss combined with large RTT
  • All late packets caused the “freeze-frame” effect for 10.5 seconds on average in D1p and 8.5 seconds in D2p (recall that each session was 10 minutes long)
Underflow Events (cont’d)
  • 90% of late retransmissions missed the deadline by no more than 5 seconds, 99% by no more than 10 seconds
  • 90% of late data packets missed the deadline by no more than 13 seconds, 99% by no more than 27 seconds
Round-trip Delay
  • 660,439 RTT samples
  • 75% of samples below 600 ms and 90% below 1 second
  • Average RTT was 698 ms in D1p and 839 ms in D2p
  • Maximum RTT was over 120 seconds
  • Data-link retransmission combined with low-bitrate connections was responsible for the pathologically high RTTs
  • However, we found access points with 6-7 second IP-level buffering delays
  • Generally, RTTs over 10 seconds were considered pathological in our study
Round-trip Delay (cont’d)
  • Distributions of the RTT in both datasets (PDF) were similar and contained a very long tail
Round-trip Delay (cont’d)
  • Distribution tails closely matched hyperbolic distributions (Pareto with α between 1.16 and 1.58)
Round-trip Delay (cont’d)
  • The average RTT varied during the day between 574 ms (3 am – 6 am) and 847 ms (3 pm – 6 pm) in D1p
  • Between 723 ms and 951 ms in D2p
  • Relatively small increase in the RTT during the day (by only 30-45%) compared to that in packet loss (by up to 300%)
  • Per-state RTT varied between 539 ms (Maine) and 1,053 ms (Alaska); Hawaii and New Mexico also had average RTTs above 1 second
  • Little correlation between the RTT and geographical distance of the state from NY
  • However, much stronger positive correlation between the number of hops and the average state RTT: ρ = 0.52
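This is presumably a standard (Pearson) correlation coefficient over per-state averages; a minimal sketch with synthetic inputs (the study's actual per-state values are not listed here, and it reports 0.52):

# Sketch: Pearson correlation between per-state average hop count and
# per-state average RTT.
from math import sqrt
from typing import Sequence

def pearson(x: Sequence[float], y: Sequence[float]) -> float:
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

hops = [10.5, 11.2, 12.0, 12.8, 13.4]    # hypothetical per-state averages
rtts = [610, 700, 660, 820, 900]         # milliseconds
print(round(pearson(hops, rtts), 2))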
Delay Jitter
  • Delay jitter is the variation in one-way delay between consecutive packets
  • Positive values of delay jitter are packet expansion events
  • 97.5% of positive samples below 140 ms
  • 99.9% under 1 second
  • Highest sample 45 seconds
  • Even though large delay jitter was not frequent, once a single packet was delayed by the network, the following packets were delayed as well, creating a “snowball” of late packets
  • Large delay jitter was very harmful during the experiment
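A minimal sketch of how such jitter samples can be computed from send and receive timestamps; a constant clock offset between the two hosts cancels in the difference, so the sign of each sample is meaningful even without synchronized clocks:

# Sketch: one-way delay jitter as the difference between consecutive
# one-way delays; positive samples correspond to delay expansion.
from typing import List, Tuple

def jitter_samples(packets: List[Tuple[float, float]]) -> List[float]:
    # packets: (send_time, receive_time) pairs, in send order
    delays = [recv - send for send, recv in packets]
    return [d2 - d1 for d1, d2 in zip(delays, delays[1:])]

trace = [(0.00, 0.40), (0.10, 0.52), (0.20, 0.58)]
print([round(j, 2) for j in jitter_samples(trace)])   # [0.02, -0.04]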
Packet Reordering
  • Average reordering rates were low, but noticeable
  • 6.5% of missing packets (or 0.04% of sent) were reordered
  • Out of 16,852 sessions, 1,599 (9.5%) experienced at least one reordering event
  • The highest reordering rate per ISP occurred in AT&T WorldNet, where 35% of missing packets (0.2% of sent packets) were reordered
  • In the same set, almost half of the sessions (47%) experienced at least one reordering event
  • Earthlink had a session where 7.5% of sent packets were reordered
Packet Reordering (cont’d)
  • Reordering delay Dr is the time between detecting a missing packet and receiving the reordered packet
  • 90% of samples Dr below 150 ms, 97% below 300 ms, 99% below 500 ms, and the maximum sample was 20 seconds
Packet Reordering (cont’d)
  • Reordering distance is the number of packets received during the reordering delay (84.6% of the time a single packet, 6.5% exactly 2 packets, 4.5% exactly 3 packets)
  • TCP’s triple-ACK avoids 91.1% of redundant retransmits and quadruple-ACK avoids 95.7%
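A minimal sketch of how reordering distance can be computed from the arrival order of sequence numbers; a k-duplicate-ACK threshold triggers a spurious retransmit only when the distance reaches k (k = 3 for triple-ACK):

# Sketch: reordering distance = number of higher-sequence packets that
# arrived before the reordered packet did.
from typing import Dict, List

def reordering_distances(arrival_order: List[int]) -> Dict[int, int]:
    distances, max_seen = {}, -1
    for pos, seq in enumerate(arrival_order):
        if seq < max_seen:
            distances[seq] = sum(1 for s in arrival_order[:pos] if s > seq)
        max_seen = max(max_seen, seq)
    return distances

order = [0, 1, 3, 4, 2, 5]           # packet 2 arrives after packets 3 and 4
print(reordering_distances(order))   # {2: 2}: triple-ACK would not misfire here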
Path Asymmetry
  • Asymmetry was detected by analyzing the TTL of the packets returned during the initial traceroute
  • Each router reset the TTL to a default value (such as 255) when sending a “TTL expired” ICMP message
  • If the number of forward and reverse hops was different, the path was “definitely asymmetric”
  • Otherwise, the path was “possibly (or probably) symmetric”
  • No fail-proof way of establishing path symmetry using end-to-end measurements (even using two traceroutes in reverse directions)
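A minimal sketch of this TTL comparison, assuming the common initial-TTL defaults of 64, 128, and 255 and ignoring off-by-one conventions (which only need to be applied consistently in both directions):

# Sketch: infer the reverse-path hop count from the TTL of a router's
# "TTL expired" ICMP reply and compare it with the forward hop count.
DEFAULT_TTLS = (64, 128, 255)

def reverse_hop_count(received_ttl: int) -> int:
    initial = min(t for t in DEFAULT_TTLS if t >= received_ttl)
    return initial - received_ttl

def classify_path(forward_hops: int, received_ttl: int) -> str:
    if forward_hops != reverse_hop_count(received_ttl):
        return "definitely asymmetric"
    return "possibly symmetric"

print(classify_path(11, 244))   # reverse = 11 -> possibly symmetric
print(classify_path(11, 242))   # reverse = 13 -> definitely asymmetric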
Path Asymmetry (cont’d)
  • 72% of sessions operated over definitely asymmetric paths
  • Almost all paths with 14 or more end-to-end hops were asymmetric
  • Even the shortest paths (with as few as 6 hops) were prone to asymmetry
  • “Hot potato” routing is more likely to cause asymmetry in longer paths, because they are more likely to cross AS borders than shorter paths
  • Longer paths also exhibited a higher reordering probability than shorter paths
Conclusion
  • Dialing success rates were quite low during the day (as low as 40%)
  • Retransmission worked very well even for delay-sensitive traffic and high-latency end-to-end paths
  • Both RTT and packet-loss bursts appeared to be heavy-tailed
  • Our clients experienced huge end-to-end delays both due to large IP buffers as well as persistent data-link retransmission
  • Reordering was frequent even given our low bitrates
  • Most paths were in fact asymmetric, and longer paths were more likely to be identified as asymmetric