  1. TBI: End-to-End Network Performance Testbed for Empirical Bottleneck Detection • Prasad Calyam, OARnet, A Division of The Ohio Supercomputer Center, The Ohio State University • Dima Krymskiy, Mukundan Sridharan, Paul Schopis • TRIDENTCOM, Trento, Italy, February 2005

  2. Topics of Discussion • Third Frontier Network Measurement Project • Basics of Network Measurement Infrastructures (NMIs) • “ActiveMon” NMI Software • TFN Beacon Infrastructure (TBI) Testbed • End-to-End Network Performance Bottleneck Detection Case Studies! • Work in progress… • Conclusion

  3. Third Frontier Network • The Third Frontier Network (TFN) is funded by the Ohio Board of Regents • A dedicated high-speed fiber-optic network linking Ohio colleges and universities with research facilities to promote research and economic development • Over 1,600 miles of fiber have been purchased to create a network backbone in Ohio that connects colleges and universities, K-12 schools, and communities

  4. Third Frontier Network (2)

  5. TFN Measurement Project • Started in early 2004 • Project funding from the Ohio Board of Regents • To ensure that university campuses can effectively use the advanced networking services the new network provides • Project Partners • OARnet (Project Lead and Coordination) • University of Cincinnati, Cincinnati State, The Ohio State University, Kent State University, Southern State Community College, University of Toledo, Wright State University

  6. Project Goals • Identify end-to-end performance bottlenecks in the TFN on an ongoing basis by building a comprehensive Network Measurement Infrastructure (NMI) • Test new and advanced technologies and equipment before wide-scale adoption in the TFN Higher Education communities • Technologies: H.323/SIP-based Voice and Videoconferencing, MPEG3, HDTV, Multicast, Bulk FTP • Equipment: Video streaming caches, Firewalls, Intrusion Detection Systems, Traffic shapers • Raise awareness among and train campus-networking professionals to make optimum use of the capabilities of TFN, so that their campus network infrastructures can be upgraded suitably


  8. What constitutes the “infrastructure” in an NMI? • Measurement Toolkit hosted on “Beacon” servers • Scheduler for measurement orchestration between multiple measurement servers • Large database for storing and archiving measurement data • Analysis Engine and web service for visualization of network performance data • AAA procedures for handling Security/Privacy Issues
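
To make the list concrete, here is a minimal runnable sketch of how these pieces could fit together in code; the class names, the SQLite schema, and the faked measurement result are our own illustration, not taken from any actual NMI implementation.

```python
# A toy wiring of the NMI components listed above: beacons host the
# toolkit, a scheduler orchestrates pairwise measurements, and a database
# archives the results. Names and schema are illustrative only.
import sqlite3
import time

class Beacon:
    """A measurement server hosting the toolkit."""
    def __init__(self, host):
        self.host = host

    def measure(self, tool, dst):
        # A real beacon would invoke ping/Iperf/OWAMP here; we fake a result.
        return {"tool": tool, "src": self.host, "dst": dst.host, "rtt_ms": 1.0}

class Archive:
    """Database for storing and archiving measurement data."""
    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute("CREATE TABLE IF NOT EXISTS results "
                        "(ts REAL, tool TEXT, src TEXT, dst TEXT, rtt_ms REAL)")

    def store(self, r):
        self.db.execute("INSERT INTO results VALUES (?, ?, ?, ?, ?)",
                        (time.time(), r["tool"], r["src"], r["dst"], r["rtt_ms"]))

def schedule(beacons, archive):
    """Scheduler: run full-mesh measurements, one pair at a time."""
    for src in beacons:
        for dst in beacons:
            if src is not dst:
                archive.store(src.measure("ping", dst))

schedule([Beacon("osu-beacon"), Beacon("uc-beacon")], Archive())
```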

  9. A Typical NMI

  10. Examples of NMI Software and Deployments… • NIMI (Developed by Vern Paxson, currently maintained by NLANR) • Surveyor (Developed by Advanced Network & Services, currently maintained by Univ. of Wisconsin) • AMP (NLANR) • IEPM-BW (SLAC) • Scriptroute (Univ. of Washington) • E2E piPES (Internet2) • Many, many more… (ETOMIC, D-ITG, …) • ActiveMon (Being developed by OARnet)

  11. “ActiveMon” NMI Software • An “application-specific” active measurement toolkit • To identify bottlenecks pertaining to specific applications; e.g., Voice and Video over IP (Delay, Jitter, Loss, Reordering), Bulk FTP flows (Throughput, Available/Bottleneck Bandwidth), Web server/Oracle server response times, etc. • An advanced active measurement scheduler, “OnTimeMeasure” • To orchestrate and regulate network-wide active measurements • An analysis engine with web interface • To correlate performance along multiple paths • To generate appropriate alerts for concerned personnel! • Support for federated measurements… • To support end-to-end performance debugging across multiple ISP domains

  12. ActiveMon Measurement Toolkit • Tools used for end-to-end network performance data collection • Ping (Delay), Open Source • Traceroute (Topology), Open Source • Iperf (TCP/UDP Bandwidth Performance), Open Source • Pathchar (Hop-by-Hop Performance), Open Source • Pathload (Available Bandwidth), Open Source • Pathrate (Bottleneck Bandwidth), Open Source • OWAMP (µs Precision Delay – CDMA+NTP), Open Source • H.323 Beacon (Voice/Video Performance), Open Source • appareNet (Hop-by-Hop Performance), Commercial • NetQoS (Application Response Times), Commercial
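
For a flavor of how such tools are typically wrapped for periodic data collection, here is a minimal Python sketch around the simplest of them, ping. The target host and the output-parsing regex are assumptions (the regex matches Linux iputils-style output), not part of the ActiveMon toolkit itself.

```python
# A sketch of driving one toolkit tool (ping) from a collection script.
# Assumes Linux iputils output with "time=1.23 ms" per reply; other
# platforms format ping output differently, so the regex is an assumption.
import re
import subprocess

def ping_rtts(host, count=5):
    """Run ping and return per-packet round-trip times in milliseconds."""
    out = subprocess.run(["ping", "-c", str(count), host],
                         capture_output=True, text=True, check=True).stdout
    return [float(t) for t in re.findall(r"time=([\d.]+) ms", out)]

rtts = ping_rtts("example.net")  # hypothetical target host
print("min/avg/max RTT: %.2f/%.2f/%.2f ms"
      % (min(rtts), sum(rtts) / len(rtts), max(rtts)))
```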

  13. “OnTimeMeasure” Scheduler • Active measurements can encroach on the network bandwidth required for actual application traffic • Active measurements need to be regulated • Running multiple simultaneous measurements on monitoring probes could result in misleading reports of network performance • Active measurements require dedicated system and network resources • Network Interface Card (NIC), CPU processing, Application ports, Multimedia codecs, Bandwidth, … • Active measurements between measurement beacons need to be orchestrated
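
The orchestration requirement in the last bullet amounts to mutual exclusion: a beacon's NIC, CPU, and ports should serve at most one measurement at a time. A toy sketch of that constraint using per-beacon locks follows; it illustrates the idea only and is not the OnTimeMeasure scheduler, whose design is more involved.

```python
# A toy orchestrator: each beacon participates in at most one active
# measurement at a time, so concurrent probes do not skew the results.
import threading

class BeaconLocks:
    """Grant each beacon to at most one measurement at a time."""
    def __init__(self, beacons):
        self.locks = {b: threading.Lock() for b in beacons}

    def run_measurement(self, src, dst, measure):
        # Lock both endpoints in a fixed order to avoid deadlock, so no
        # other measurement can contend for their NIC/CPU/ports meanwhile.
        first, second = sorted((src, dst))
        with self.locks[first], self.locks[second]:
            return measure(src, dst)

sched = BeaconLocks(["osu-beacon", "uc-beacon", "kent-beacon"])
print(sched.run_measurement("osu-beacon", "uc-beacon",
                            lambda s, d: f"measured {s} -> {d}"))
```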

  14. Results in a LAN with WAN Emulation!

  15. OnTimeMeasure Framework

  16. OnTimeMeasure Framework

  17. ActiveMon Analysis Engine • To retrieve relevant “summary views” of the large network-wide active measurement datasets • “Weather-map” functionality, queryable XML schemas, … • To effectively identify anomalies and alert relevant support and operations personnel • Target a low probability of false alarms • Cover anomalies that indicate better, poorer, or marginal changes • To perform multi-path data correlation to isolate performance problems involving multiple links • Observed end-to-end performance is a function of the performance of the individual intermediate links
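
As a concrete reading of the multi-path correlation bullet: when several end-to-end paths degrade together, the hops common to all of them are the natural suspects. A minimal sketch of that intersection idea follows; the hop names are invented, and real hop lists would come from traceroute on the beacons.

```python
# A minimal sketch of multi-path correlation: intersect the hop lists of
# paths flagged as degraded to isolate shared suspect links.
def shared_hops(poor_paths):
    """poor_paths: list of hop sequences for paths reporting poor performance."""
    suspects = set(poor_paths[0])
    for hops in poor_paths[1:]:
        suspects &= set(hops)
    return suspects

paths = [
    ["campus-gw", "tfn-core-1", "tfn-core-2", "uc-border"],
    ["lab-sw", "campus-gw", "tfn-core-2", "osu-border"],
]
print(shared_hops(paths))  # {'campus-gw', 'tfn-core-2'} -> shared suspects
```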

  18. Example: Alarm check to notify network health status • Watermarks for “Good”, “Acceptable” and “Poor” grades of audiovisual quality as experienced by the end-user • Delay: (0-150) ms, (150-300) ms, > 300 ms • Jitter: (0-20) ms, (20-50) ms, > 50 ms • Loss: (0-0.5)%, (0.5-1.5)%, > 1.5% Prasad Calyam, Mukundan Sridharan, Weiping Mandrawa, Paul Schopis, “Performance Measurement and Analysis of H.323 Traffic”, Proceedings of Passive and Active Measurement Workshop (PAM), 2004
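
These watermarks map directly onto a classification rule: grade each metric against its thresholds and let the worst metric set the overall grade. A minimal sketch using the thresholds from the slide; the function names are ours, not ActiveMon's.

```python
# A minimal sketch of the alarm check above: grade an audiovisual session
# from measured delay, jitter, and loss using the slide's watermarks.
def grade_session(delay_ms, jitter_ms, loss_pct):
    def grade(value, good_max, acceptable_max):
        if value <= good_max:
            return "Good"
        if value <= acceptable_max:
            return "Acceptable"
        return "Poor"

    grades = [grade(delay_ms, 150, 300),   # Delay watermarks (ms)
              grade(jitter_ms, 20, 50),    # Jitter watermarks (ms)
              grade(loss_pct, 0.5, 1.5)]   # Loss watermarks (%)
    # Overall quality is only as good as the worst metric.
    for g in ("Poor", "Acceptable", "Good"):
        if g in grades:
            return g

print(grade_session(delay_ms=120, jitter_ms=25, loss_pct=0.2))  # Acceptable
```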

  19. TFN Beacon Infrastructure (TBI) Testbed

  20. TFN Beacon Infrastructure (TBI) Testbed

  21. Paths being Measured

  22. Purpose of TBI Testbed • To understand network end-to-end performance characteristics • Via partial-path and intermediate bottleneck-hop analysis • To understand the network performance measurement data reported by various tools • How good are they at empirically correlating network events in a routine monitoring infrastructure? • To compare performance at campus, regional, national-academic and national-commercial backbone network levels • To quantify end-to-end network performance stability in the Internet

  23. End-to-End Network Performance Bottleneck Detection Case Studies! • Anomalies in measured paths… (2-month period) • Effects of route changes • Network device misconfiguration • Misrepresentation of network health by measurement tool misconfiguration • Performance comparison of campus, regional, national-academic and national-commercial backbone networks using H.323 Beacon

  24. Effects of route changes [Charts: RTT OSUB to OSUL; Jitter OSUB to OSUL (both directions); RTT OSUL to OSUB]

  25. Effects of route changes (2) [Charts: RTT OSUB to UOCB; RTT UOCB to OSUB]

  26. Network device misconfiguration [Chart: Available Bandwidth OSUB to UOCB]

  27. “Ocean Wave” Anomaly of OWAMP [Chart: RTT UOCB to OSUB]

  28. H.323 Beacon* • An application-specific measurement tool • To monitor and qualify the performance of H.323 Videoconferencing sessions at the host and in the network (end-to-end) • Useful to an end-user/conference operator/network engineer • Addresses problems due to H.323 protocol-specific idiosyncrasies • Can be generalized to RTP packet performance over the network • Many built-in tools that generate various kinds of measurement data for pre-/during-/post-Videoconference troubleshooting! • An “easy to install and use” tool that is open source * Project supported by Internet2, The Ohio Board of Regents, OARnet
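
Since the tool generalizes to RTP performance, the jitter figures involved can be understood via the standard RTP interarrival-jitter estimator of RFC 3550. A small illustrative sketch of that estimator follows; it is our own illustration, not the H.323 Beacon source.

```python
# A minimal sketch of the RFC 3550 interarrival jitter estimator that RTP
# tools commonly report; an illustration, not H.323 Beacon's actual code.
def interarrival_jitter(arrivals, timestamps):
    """arrivals and timestamps in the same time units (ms here)."""
    j = 0.0
    for i in range(1, len(arrivals)):
        # D = change in transit time between consecutive packets
        d = (arrivals[i] - arrivals[i - 1]) - (timestamps[i] - timestamps[i - 1])
        j += (abs(d) - j) / 16.0  # exponential smoothing per RFC 3550
    return j

# Packets sent every 20 ms but arriving with slight variation:
sent = [0, 20, 40, 60, 80]
recv = [5, 27, 44, 66, 85]
print("jitter = %.2f ms" % interarrival_jitter(recv, sent))
```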

  29. A few H.323 Beacon screenshots… http://www.itecohio.org/beacon

  30. Performance comparison of academic and commercial network backbones using H.323 Beacon • Academic networks are the most suitable for Voice and Video over IP applications • Bottlenecks in multi-domain/last-mile links mainly impact end-to-end performance [Chart: paths ranked from most stable to least stable]

  31. ActiveMon work in progress… • Extend testbed into a production-level NMI spanning TFN and major Ohio-based university campuses • Better multi-link data correlations in the Analysis Engine • DDoS anomaly detection using active measurement data signatures • OnTimeMeasure extensions using real-time scheduling principles for various active measurement specifications • Better visualization module… • Planning an ActiveMon 1.0 release on SourceForge (Summer 2005) http://sourceforge.net/projects/activemon/

  32. Thanks! • Scripts Development and Data Analysis • Mukundan Sridharan, Dima Krymskiy, Phani Kumar Arava • Project Management • Steve Gordon, Paul Schopis • OSU Border and Lab Deployment • Prof. David Lee, Dave Kneisly, Arif Khan, Weiping Mandrawa • UC Border and Lab Deployment • Prof. Jerry Paul, Prof. Fred Annexstein, Bruce Burton, Bill Bohmer, Tom Ridgeway, Michal Kouril, Diana Noelcke • NCSU Deployment • John Moore, Chintan Desai • Tools Deployment • Loki Jorgenson, Chris Norris (appareNet) • Jeff Boote (OWAMP) • Leandro Lustoza (H.323 Beacon E-Model implementation)

  33. Questions? TFN Measurement Project Reference: http://tfn.oar.net/measurement
