INCITE: Edge-based Traffic Processing and Service Inference for High-Performance Networks

Jiří Navrátil, SLAC

Presentation Transcript


  1. INCITE Introduction. Jiří Navrátil, SLAC

  2. Project Partners and Researchers
  INCITE: Edge-based Traffic Processing and Service Inference for High-Performance Networks
  Richard Baraniuk, Rice University; Les Cottrell, SLAC; Wu-chun Feng, LANL
  • Rice University: Richard Baraniuk, Edward Knightly, Robert Nowak, Rudolf Riedi, Xin Wang, Yolanda Tsang, Shriram Sarvotham, Vinay Ribeiro
  • Los Alamos National Lab (LANL): Wu-chun Feng, Mark Gardner, Eric Weigle
  • Stanford Linear Accelerator Center (SLAC): Les Cottrell, Warren Matthews, Jiri Navratil

  3. Project Goals
  • Objectives: scalable, edge-based tools for on-line network analysis, modeling, and measurement
  • Based on: advanced mathematical theory and methods
  • Designed to: support high-performance computing infrastructures such as computational grids, ESnet, Internet2, and other high-performance networking projects

  4. Project Elements
  • Advanced techniques from networking, supercomputing, statistical signal processing, and applied mathematics
  • Multiscale analysis and modeling: understand the causes of burstiness in network traffic; build realistic, yet analytically tractable, statistically robust, and computationally efficient models
  • On-line inference algorithms: characterize and map network performance as a function of space, time, application, and protocol
  • Data collection tools and validation experiments

  5. Scheduled Accomplishments
  • Multiscale traffic models and analysis techniques: based on multifractals, cascades, and wavelets; study how large flows interact and cause bursts; study the adverse modulation of application-level traffic by TCP/IP (a sketch of the wavelet analysis follows below)
  • Inference algorithms for paths, links, and routers: multiscale end-to-end path modeling and probing; network tomography (active and passive)
  • Data collection tools: add multiscale path and link inference to the PingER suite; integrate into the ESnet NIMI infrastructure; MAGNeT (Monitor for Application-Generated Network Traffic); TICKET (Traffic Information-Collecting Kernel with Exact Timing)
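
As an illustration of the wavelet-based multiscale analysis named above, here is a minimal Python sketch of an Abry-Veitch-style logscale diagram: the slope of log2 wavelet-coefficient energy versus scale reveals long-range dependence (burstiness) in a traffic count series. It uses the PyWavelets library and synthetic Poisson counts; a teaching sketch, not the project's analysis code.

```python
# Logscale diagram (Abry-Veitch style) for traffic burstiness analysis.
# Illustrative sketch on synthetic data -- not the INCITE project's code.
import numpy as np
import pywt

def logscale_diagram(counts, wavelet="haar", levels=10):
    """Return (scale j, log2 energy per scale), j = 1 (fine) .. levels."""
    coeffs = pywt.wavedec(counts, wavelet, level=levels)
    details = coeffs[1:][::-1]          # wavedec lists details coarse->fine
    energy = [np.log2(np.mean(d ** 2)) for d in details]
    return np.arange(1, levels + 1), np.array(energy)

# Synthetic example: per-interval packet counts.  Long-range-dependent
# traffic gives a linear diagram with slope 2H - 1 (Hurst parameter H);
# independent (Poisson) counts give slope ~ 0, i.e. H ~ 0.5.
rng = np.random.default_rng(0)
counts = rng.poisson(lam=50, size=2 ** 14).astype(float)
j, energy = logscale_diagram(counts)
slope, _ = np.polyfit(j[4:], energy[4:], 1)   # fit over coarser scales
print(f"slope {slope:.2f} -> estimated H ~ {(slope + 1) / 2:.2f}")
```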

  6. Future Research Plans
  • New, high-performance traffic models: guide R&D of next-generation protocols
  • Application-generated network traffic repository: enable grid and network researchers to test and evaluate new protocols with the actual traffic demands of applications rather than modulated demands
  • Multiclass service inference: enable network clients to assess a system's multi-class mechanisms and parameters using only passive, external observations
  • Predictable QoS via end-point control: ensure minimum QoS levels for traffic flows; exploit path and link inferences in real-time end-point admission control

  7. There is no vacuum: existing monitoring and management tools include Surveyor, NIMI, PingER, RIPE, Optivity, CiscoWorks, Spectrum, and HP OpenView.

  8. JNFLOW (based on Cisco NetFlow)

  9. FPP phase (From Papers to Practice): MWFS, TOMO, TOPO

  10. [Probing timing diagram: 20 ms and ~300 ms intervals, 40 probes; T for a new set of values = 12 sec]

  11. First results: BWe = 9.875 Mbps for 10 Mbps Ethernet (CT graph); a dispersion-based sketch follows below.
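
The BWe figure comes from packet-dispersion measurements (cf. slide 13): a back-to-back pair leaves the bottleneck link spaced by L/C seconds, so capacity C can be estimated as packet size over dispersion. A minimal sketch with made-up dispersion samples, not the measured SLAC data:

```python
# Packet-pair bottleneck-bandwidth estimation: the narrow link spaces
# back-to-back packets by L/C seconds, so C ~ packet size / dispersion.
# Dispersion samples below are invented for illustration.
def bottleneck_bw_bps(packet_bits, dispersions_s):
    """Median of per-pair estimates, to suppress cross-traffic noise."""
    estimates = sorted(packet_bits / d for d in dispersions_s if d > 0)
    return estimates[len(estimates) // 2]

dispersions = [0.00122, 0.00121, 0.00124, 0.00150, 0.00121]   # seconds
bw = bottleneck_bw_bps(1500 * 8, dispersions)                 # 1500-byte probes
print(f"BWe ~ {bw / 1e6:.3f} Mbps")   # ~9.9 Mbps, close to 10 Mbps Ethernet
```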

  12. What has been done
  • Phase 1: Remodeling
    - Code separation (BW and CT)
    - Find out how to call MATLAB from another program (see the sketch below)
    - Analyze results and data
    - Find optimal parameters for the model
  • Phase 2: Web integration of the BW estimate
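
For the "call MATLAB from another program" item, one present-day option is MathWorks' MATLAB Engine API for Python (this API postdates the talk; in its era the C Engine API, engOpen/engEvalString, was the usual route). A minimal sketch; the ct_code path and the estimate_ct function are hypothetical placeholders, not names from the project:

```python
# Driving MATLAB from Python via the MATLAB Engine API for Python.
# The ct_code directory and estimate_ct function are hypothetical.
import matlab.engine

eng = matlab.engine.start_matlab()
eng.addpath("/path/to/ct_code", nargout=0)             # hypothetical code dir
delays = matlab.double([0.020, 0.031, 0.025, 0.044])   # probe delays (s)
ct = eng.estimate_ct(delays, nargout=1)                # hypothetical estimator
print("cross-traffic estimate:", ct)
eng.quit()
```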

  13. Data Dispersions from sunstats.cern.ch

  14. Measured hosts: ccnsn07.in2p3.fr, sunstats.cern.ch, pcgiga.cern.ch, plato.cacr.caltech.edu

  15. pcgiga.cern.ch, default window size: BW ~ 70 Mbps; pcgiga.cern.ch, 512 KB window size: BW ~ 100 Mbps (see the window/RTT sketch below)
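
The improvement from enlarging the socket window is what the bandwidth-delay product predicts: TCP throughput is capped at roughly window/RTT until path capacity is reached. A small sketch of the arithmetic; the 40 ms RTT is an assumed illustrative value, since the slide does not give the path RTT:

```python
# TCP throughput is capped by window_size / RTT (bandwidth-delay product).
# The 40 ms RTT is an assumed value for illustration, not the measured path.
def tcp_cap_mbps(window_bytes, rtt_s):
    return window_bytes * 8 / rtt_s / 1e6

RTT = 0.040                                     # assumed RTT, seconds
for window in (64 * 1024, 512 * 1024):          # default vs. tuned window
    print(f"window {window // 1024:3d} KB -> cap ~ "
          f"{tcp_cap_mbps(window, RTT):5.1f} Mbps")
# With a 40 ms RTT, a 64 KB default window caps a transfer at ~13 Mbps,
# while 512 KB allows ~105 Mbps -- the same direction of effect as the slide.
```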

  16. Reaction to network problems

  17. Problems? Candidate causes: network, software, licence.

  18. After tuning: more optimistic results

  19. MF-CT Features and Benefits
  • No need for access to routers! Current traffic-load monitoring systems are based on SNMP or flow data, which require router access
  • Low cost: allows permanent monitoring (20 pkts/sec, ~10 KB/sec overhead)
  • Can be used as a data provider for ABW prediction (ABW = BW - CT; see the sketch below)
  • Weak point for common use: MATLAB code
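
A quick sketch of the two numeric claims on this slide: the probe-overhead arithmetic (20 pkts/sec at an assumed ~500 bytes per probe gives ~10 KB/sec) and the ABW = BW - CT relation. The cross-traffic value is hypothetical:

```python
# Two quick checks of the slide's numbers.  Probe size is an assumption
# chosen to match the quoted ~10 KB/s overhead; the CT value is hypothetical.
PROBE_RATE_PPS = 20                    # 20 pkts/sec, from the slide
PROBE_SIZE_BYTES = 500                 # assumed probe size
print(f"overhead ~ {PROBE_RATE_PPS * PROBE_SIZE_BYTES / 1000:.0f} KB/s")

def available_bw_mbps(bw_mbps, ct_mbps):
    """ABW = BW - CT, floored at zero."""
    return max(bw_mbps - ct_mbps, 0.0)

print(f"ABW ~ {available_bw_mbps(9.875, 3.2):.3f} Mbps")   # hypothetical CT
```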

  20. Future Work on CT
  • Verification model
    - Define and set up the verification model (S+R)
    - Measurements (S)
    - Analyze results (S+R)
  • On-line running on selected sites
    - Prepare code for automation and web integration (S)
    - CT code modification? (R)

  21. [MF-CT verification model diagram: SLAC, CERN, and IN2P3 connected across the Internet; UDP echo endpoints, SNMP counters along the path, and the MF-CT simulator]

  22. CT Re-engineering
  For practical monitoring, the code would need modification to run in different modes:
  • Continuous mode, for monitoring one site over a large time scale (hours)
  • Accumulation mode (1 min, 5 min, ?), for running on more sites in parallel
  • A solution without MATLAB?

  23. Two new tools coming soon (TM and TP)

  24. Rob Nowak (and the CAIDA people) say: "This is the Internet" (www.caida.org)

  25. Network Topology Identification
  • Ratnasamy & McCanne (99)
  • Duffield et al. (00, 01, 02)
  • Bestavros et al. (01)
  • Coates et al. (01)
  Pairwise delay measurements reveal topology (see the clustering sketch below).
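
The idea behind these topology-identification papers can be sketched with a toy example: receivers that share more of the path from the source show more correlated delays, so hierarchical clustering on a pairwise-similarity matrix recovers the logical tree. The similarity values below are invented for illustration; this is not the cited authors' actual algorithm:

```python
# Toy topology identification: receivers sharing more of the path have
# higher delay covariance; single-linkage clustering on pairwise similarity
# recovers the logical tree.  The similarity matrix is invented.
import numpy as np
from scipy.cluster.hierarchy import linkage

receivers = ["r1", "r2", "r3", "r4"]
similarity = np.array([                 # similarity[i][j] ~ shared-path
    [1.0, 0.8, 0.2, 0.2],               # delay covariance of i and j
    [0.8, 1.0, 0.2, 0.2],
    [0.2, 0.2, 1.0, 0.7],
    [0.2, 0.2, 0.7, 1.0],
])
distance = 1.0 - similarity
condensed = distance[np.triu_indices(len(receivers), k=1)]
print(linkage(condensed, method="single"))
# Merge order: (r1, r2) join first, then (r3, r4), then the two groups --
# i.e. a two-branch logical tree under the source.
```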

  26. Network Tomography
  [Tree diagram: source, routers/nodes, links, receivers]
  Measure end-to-end (source-to-receiver) losses/delays; infer link-level (internal router) loss rates and delay distributions.

  27. Unicast Network Tomography
  ['0' = loss, '1' = success]
  Measure end-to-end losses of packets; single-path measurements cannot isolate where losses occur!

  28. Packet-Pair Measurements [diagram: packet pair, cross-traffic, delay measurement]

  29. Delay Estimation
  Measure end-to-end delays of packet pairs. [Diagram: packets experience the same delay on link 1; extra delay on link 3]

  30. Packet-Pair Measurements
  Record occurrences of losses and delays.
  • Key assumptions:
    - fixed routes
    - i.i.d. pair measurements
    - losses & delays on each link are mutually independent
    - packet-pair losses & delays on shared links are nearly identical
  (See the estimator sketch below.)
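
Under exactly these assumptions, link-level loss rates on a small tree become identifiable from end-to-end pair outcomes. For a source feeding two receivers over one shared link (success rate a0) and two branch links (a1, a2), the observable frequencies satisfy p1 = a0*a1, p2 = a0*a2, and p12 = a0*a1*a2, which invert to a0 = p1*p2/p12, a1 = p12/p2, a2 = p12/p1. A method-of-moments sketch, not the INCITE inference code:

```python
# Method-of-moments link-loss estimator for a two-leaf tree under the
# assumptions above (pair packets share fate on the common link; links
# independent).  A teaching sketch, not the INCITE inference code.
import random

def estimate_links(outcomes):
    """outcomes: list of (ok1, ok2) success flags, one per packet pair."""
    n = len(outcomes)
    p1 = sum(o1 for o1, _ in outcomes) / n           # = a0 * a1
    p2 = sum(o2 for _, o2 in outcomes) / n           # = a0 * a2
    p12 = sum(o1 and o2 for o1, o2 in outcomes) / n  # = a0 * a1 * a2
    return p1 * p2 / p12, p12 / p2, p12 / p1         # a0, a1, a2

# Simulate 100k pairs with true link success rates a0, a1, a2:
a0, a1, a2 = 0.95, 0.90, 0.80
rng = random.Random(42)
pairs = []
for _ in range(100_000):
    shared = rng.random() < a0          # both packets share fate on link 0
    pairs.append((shared and rng.random() < a1,
                  shared and rng.random() < a2))
print([round(a, 3) for a in estimate_links(pairs)])   # ~ [0.95, 0.9, 0.8]
```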

  31. ns Simulation
  [Test network diagram showing link bandwidths (Mb/s): 2, 10, 10, 1, 0.5, 10, 2, 2, 5, 1, 0.5. Plot: cross-traffic on link 9 (KB/s) vs. time (s)]
  • 40-byte packet-pair probes every 50 ms
  • competing traffic comprised of on-off exponential sources (500-byte packets) and TCP connections (1000-byte packets); see the traffic-generator sketch below
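
For reference, the on-off exponential competing traffic described here is easy to reproduce outside ns. A minimal Python sketch with assumed rate and duty-cycle parameters (the slide gives only the 500-byte packet size):

```python
# On-off exponential cross-traffic generator, as in the slide's ns setup:
# exponentially distributed ON/OFF durations, fixed-size packets at a
# constant rate while ON.  Rate and duty-cycle values are assumptions.
import random

def on_off_source(duration_s, mean_on=0.5, mean_off=0.5,
                  rate_pps=200, pkt_bytes=500, seed=1):
    """Yield (timestamp, packet_size) events for one on-off source."""
    rng = random.Random(seed)
    t, on = 0.0, True
    while t < duration_s:
        burst = rng.expovariate(1.0 / (mean_on if on else mean_off))
        if on:
            for i in range(int(burst * rate_pps)):
                yield t + i / rate_pps, pkt_bytes
        t += burst
        on = not on

events = list(on_off_source(10.0))
total_kbytes = sum(size for _, size in events) / 1000
print(f"{len(events)} packets, avg load {total_kbytes / 10:.1f} KB/s")
```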

  32. Future Work on TM and TP
  • Model in the frame of the Internet (~100 sites)
    - Define the verification model (S+R)
    - Deploy and install code on sites (S)
    - First measurements (S+R)
    - Analyze results (form, speed, quantity) (S+R)
    - Code modification? (R)
  • Production model?
    - Compete with PingER, RIPE, Surveyor, NIMI?
    - How to unify the virtual structure with the real one?
