A Measurement Study of Internet Bottlenecks
N. Hu (CMU), L. Li (Bell Labs), Z. M. Mao (U. Michigan), P. Steenkiste (CMU), J. Wang (AT&T)
Infocom 2005
Presented by Mohammad Malli, PhD student seminar, Planete Project
Goals
Recently, many active probing tools have been developed for measuring and locating bandwidth bottlenecks, but:
• Q1. How persistent are Internet bottlenecks? → Important for measurement frequency
• Q2. Are bottlenecks shared by end users within the same prefix? → Useful for path bandwidth inference
• Q3. What relationship exists between bottlenecks and packet loss and queuing delay? → Useful for congestion identification
• Q4. What relationship exists between bottlenecks and router and link properties? → Important for traffic engineering
November 28, 2005
Related Work
• Persistence of Internet path properties
• Zhang [IMW-01], Paxson [TR-2000], Labovitz [TON-1998, Infocom-1999]
• Loss, delay, packet ordering, ...
• The persistence of the bottleneck location was not considered
• Congestion points sharing
• Katabi [TR-2001], Rubenstein [Sigmetrics-2000]
• Flow-based studies, not based on end-to-end paths
• Correlation among Internet path properties
• Paxson [1996]
• At the end-to-end level, not at the location level
• Correlation between router and link properties
• Agarwal [PAM 2004]
Data collection
• Probing
• Source: a CMU host
• Destinations: 960 IP addresses
• 10 continuous probings for each destination (1.5 minutes)
• Repeated for 38 days (for the persistence study)
(Figure: source S at CMU probing 960 Internet destinations, repeated on day 1, day 2, ..., day 38)
Pathneck
• An active probing tool that can detect the Internet bottleneck location
• For details, refer to "Locating Internet Bottlenecks: Algorithms, Measurements, and Implications" [SIGCOMM'04]
• Source code: www.cs.cmu.edu/~hnn/pathneck
• Pathneck characteristics
• Low overhead (on the order of tens to hundreds of KB)
• Single-end control (sender only)
• Pathneck output used in this work
• Bottleneck link location
• Route
Recursive Packet Train (RPT) in Pathneck
• Head: 30 measurement packets (60 B each), TTL 1, 2, ..., 30
• Middle: 60 load packets (500 B each), TTL 255, UDP
• Tail: 30 measurement packets (60 B each), TTL 30, ..., 2, 1
• Load packets are used to measure available bandwidth
• Measurement packets are used to obtain location information
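The train layout above can be sketched in a few lines; this is an illustrative reconstruction of the RPT structure described on the slide, not Pathneck's actual C code, and the field names are assumptions.

```python
# Sketch of the Recursive Packet Train (RPT) layout from the slide:
# 30 small measurement packets (60 B, TTL 1..30) at the head,
# 60 large load packets (500 B, TTL 255, UDP) in the middle,
# 30 measurement packets (60 B, TTL 30..1) at the tail.
# Field names ("ttl", "size", "role") are illustrative assumptions.

def build_rpt():
    head = [{"ttl": t, "size": 60, "role": "measurement"} for t in range(1, 31)]
    load = [{"ttl": 255, "size": 500, "role": "load"} for _ in range(60)]
    tail = [{"ttl": t, "size": 60, "role": "measurement"} for t in range(30, 0, -1)]
    return head + load + tail

train = build_rpt()
# The router at hop h expires the head packet with ttl == h and the tail
# packet with ttl == h, returning two ICMP time-exceeded messages.
```

The symmetric TTL layout is what lets a single router return two ICMP messages per train, one near the front and one near the back.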
Gap value
• The sender emits the packet train along the time axis; at each router, the head and tail measurement packets whose TTL expires are dropped, and the router sends an ICMP time-exceeded message for each
• The sender receives the two ICMP messages from each router; the time between them is the gap value for that hop
• RPT probing is repeated 10 times for each pair of nodes
(Figure: time diagram of the sender, routers dropping measurement packets, and the returned ICMP messages)
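The gap-value idea above can be sketched as follows. This is a minimal illustration under stated assumptions, not Pathneck's actual detection algorithm: the ICMP timestamps, the threshold value, and the "gap increase" heuristic are all simplifications for exposition.

```python
# Illustrative sketch (not Pathneck's algorithm): the gap value at hop h
# is the time between the two ICMP time-exceeded messages returned by
# router h, one triggered by a head measurement packet and one by the
# matching tail packet. A hop where the gap grows markedly relative to
# the previous hop is a candidate bottleneck.

def gap_values(icmp_arrivals):
    """icmp_arrivals: {hop: (t_head_icmp, t_tail_icmp)} in seconds."""
    return {hop: t_tail - t_head
            for hop, (t_head, t_tail) in icmp_arrivals.items()}

def candidate_bottlenecks(gaps, threshold=0.001):
    """Hops whose gap exceeds the previous hop's gap by `threshold`
    (the threshold is an assumption for this sketch)."""
    hops = sorted(gaps)
    return [h for prev, h in zip(hops, hops[1:])
            if gaps[h] - gaps[prev] > threshold]

gaps = gap_values({1: (0.010, 0.012), 2: (0.020, 0.030), 3: (0.031, 0.041)})
# Hop 2 shows a large gap increase, so it is flagged as a candidate.
```

The key point the slide animation makes is that the train, squeezed by a slow link, stays spread out downstream, so the gap stops growing after the bottleneck hop.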
Terminology
A persistent probing set is a probing set in which all n probings follow the same route
Route Persistence
• Route change is very common and must be considered in the bottleneck persistence analysis
• Consistent with the results of Zhang et al. [IMW-01] on route persistence
(Figure: route persistence at the AS level and at the location level over 9 days)
Bottleneck Persistence
• Persistence of a bottleneck R:
Persist(R) = (# of persistent probing sets where R is the bottleneck) / (# of persistent probing sets where R appears)
• Bottleneck persistence of a path: Max(Persist(R)) over all bottlenecks R
• Two views:
• End-to-end view ― per (src, dst) pair; includes the impact of route change
• Route-based view ― per route; removes the impact of route change
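The definitions above can be made concrete with a small sketch. This is an illustration of the slide's formula, not the paper's code; the rule that a router counts as "the bottleneck" of a set if it is detected in any of that set's probings is an assumption.

```python
# Sketch of the persistence definitions: a persistent probing set is one
# whose probings all follow the same route; Persist(R) is the fraction of
# persistent sets containing router R in which R is detected as the
# bottleneck; a path's persistence is the max over its bottlenecks.
# Each probing is a (route_tuple, bottleneck_router) pair.

def persistent_sets(probing_sets):
    """Keep only sets whose probings all followed the same route."""
    return [ps for ps in probing_sets
            if len({route for route, _ in ps}) == 1]

def persist(router, psets):
    appears = [ps for ps in psets if router in ps[0][0]]
    if not appears:
        return 0.0
    as_bottleneck = [ps for ps in appears
                     if any(bn == router for _, bn in ps)]
    return len(as_bottleneck) / len(appears)

def path_persistence(psets):
    """Max Persist(R) over all routers detected as bottlenecks."""
    routers = {bn for ps in psets for _, bn in ps if bn is not None}
    return max((persist(r, psets) for r in routers), default=0.0)

p1 = [(("A", "B", "C"), "B")] * 10                 # persistent, B is bottleneck
p2 = [(("A", "B", "D"), "D")] * 10                 # persistent, B appears, not bottleneck
p3 = [(("A", "B", "C"), "B")] * 9 + [(("A", "X", "C"), "X")]  # route changed
psets = persistent_sets([p1, p2, p3])
```

The route-change filter is why the route-based view on the next slide reports higher persistence than the end-to-end view: sets like `p3` are excluded rather than counted as non-persistent bottlenecks.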
Bottleneck Persistence
• Bottleneck persistence in the route-based view is higher than in the end-to-end view
• AS-level bottleneck persistence is very similar to location-level persistence
• 20% of bottlenecks have perfect persistence in the end-to-end view, and 30% in the route-based view
Results summary
• Only 20-30% of Internet bottlenecks have perfect persistence
• Applications should be ready for bottleneck location changes
• Bottleneck locations have a strong (60%) correlation with packet loss locations (within 2 hops)
• Bottleneck and loss detection should be used together for congestion detection
• Fewer than 10% of the destinations in a prefix cluster share a bottleneck more than half of the time
• End users cannot assume common bottlenecks
• Bottlenecks have no clear relationship with link capacity, router CPU load, or memory usage
• There is a clear correlation between bottlenecks and link loads
• Network engineers should focus on traffic load to eliminate bottlenecks
Limitations
An interesting study, but...
• How representative are the obtained statistics of the whole Internet?
• The few sources used for probing are a CMU node, 8 PlanetLab nodes, and 13 RON nodes
• The number of probed destinations is 960, far smaller than the number of Internet paths
• Pathneck limitations
• Load packets are larger than what some firewalls permit; those firewalls only forward the 60-byte UDP packets
• Pathneck cannot measure the packet train length on the last link due to ICMP rate limiting; theoretically, the destination should send a 'destination port unreachable' message for each packet
Thank you for listening
Backup
Bottleneck vs. loss | delay
• Possible congestion indications:
• Large queuing delay
• Packet loss
• Bottleneck
• They do not always occur together:
• Packet scheduling algorithm → large queuing delay
• Traffic burstiness or RED → packet loss
• Small link capacity → bottleneck
• Bottleneck →? link loss | large link delay
Trace
• Collected on the same set of 960 paths, but with independent measurements
• Detect bottleneck location using Pathneck
• Detect loss location using Tulip
• Only the forward-path results are used
• Detect link queuing delay using Tulip
• median RTT − min RTT
• [Tulip was developed at the University of Washington, SOSP'03]
• The analysis is based on the 382 paths for which both bottleneck location and packet loss are detected
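The queuing-delay estimate on this slide is a one-line computation; the sketch below shows it on hypothetical RTT samples (the sample values are made up for illustration, and `statistics.median` is Python's standard library).

```python
# Queuing-delay estimate as used on this slide: Tulip reports per-link
# RTT samples, and the queuing delay is taken as median(RTT) - min(RTT).
# The minimum RTT approximates the no-queue baseline, so the median's
# excess over it reflects typical queuing.
import statistics

def queuing_delay(rtt_samples_ms):
    return statistics.median(rtt_samples_ms) - min(rtt_samples_ms)

delay = queuing_delay([10.0, 12.0, 11.0, 30.0, 10.5])  # hypothetical samples
```

Using the median rather than the mean keeps a single outlier sample (like the 30 ms one above) from inflating the estimate.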
(Figure slide: Bottleneck vs. Packet Loss)
(Figure slide: Bottleneck vs. Queueing Delay)