
Abilene Observatory


Presentation Transcript


  1. Abilene Observatory Presented by Chris Robb Indiana University APAN Meeting, Pusan 2003 Slides Prepared by Chris Small

  2. Abilene Observatory The Abilene Observatory is a program to support the collection and dissemination of network data associated with the Abilene Network. It provides network engineers with an operational view of the network and gives researchers a platform to conduct experiments and collect data from a high-performance network.

  3. Abilene Observatory Internet2 Page: http://abilene.internet2.edu/observatory • Overview of the project • Proposal Process • Data Views

  4. Components The Observatory consists of two components: • Data collected by equipment run by the Abilene NOC (Network Management Machines) • Data collected by separate research projects on co-located equipment at the Abilene Router Nodes

  5. How to get involved • Retrieve Existing Data • Deploy a Co-Located project • Make a suggestion

  6. Retrieve Existing Data A large amount of data has already been collected under the Abilene Observatory program. Some of it is publicly available through the links listed in this presentation. However, some data, because of its size or format (such as a stream of NetFlow data), is only available upon request. To gain access to this data please contact abilene@internet2.edu

  7. Current Data and Tools • Netflow • Owamp (One-Way Latency) • Iperf • SNMP Interface Statistics • Internet2 Detective • Multicast Beacon • NTP Stratum 2 Server • Ping/Traceroute V6 Destination

  8. Deploy a Co-Located Project The Abilene Observatory has reserved space for researchers to deploy equipment in the Abilene Observatory Rack. The first step in deploying a co-location project is to submit a proposal to abilene@internet2.edu

  9. Co-Location Proposal There is some information that will be needed for all co-location projects. The information includes: • Description of the project, including participants and duration • Space, network, and power requirements • System information • Security

  10. Co-Located Machines Participation is open to all members (university, corporate, or affiliate) of the Internet2 project and is based on competitive proposals. Proposal information at: http://abilene.internet2.edu/observatory/proposal-process.html

  11. Co-Location Caveats • Commodity Routes not available • 48V DC Power • 23” Racks • Address Space • Security • “Lights Out” Remote Operation

  12. Co-Location Example PlanetLab PlanetLab is a global overlay network for developing and accessing new network services. It is designed for short-term experiments and long-term services. It is currently deployed in three Abilene nodes with two machines in each node. Deployment to all other nodes will start in late August.

  13. Make a suggestion <insert your project here>

  14. Observatory Rack In each Abilene Router Node there is one rack dedicated to the Observatory project. The rack is used for: • Abilene NOC-administered Network Management Machines (NMS) • Co-located machines • DC power controllers for Observatory machines No routing equipment is in the Observatory rack

  15. Observatory Rack (cont) Each Observatory rack contains at least: • 4 NMS machines • One 8-port DC power controller Some racks contain additional co-located machines

  16. Rack Front View

  17. Rack Rear View

  18. NMS Machines NMS machine specs: • 2x 1.26 GHz Xeons • FreeBSD (Linux as an option) • 1 GB memory • 2x 18 GB SCSI disks • GigE fiber (NMS1 and 2) or FastE connected • DC powered

  19. Advanced Services Since the NMS machines sit directly on the Abilene backbone, they are well placed for testing "advanced" services: • Native V6 • Native V4 and V6 multicast • 1 Gb ports (NMS1) directly connected to the backbone • 9000-byte MTU (NMS1)

  20. NMS Infrastructure NMS Infrastructure Page: http://loadrunner.uits.iu.edu/~neteng/nms • Links to currently running services on each machine • Alerts related to NMS machines • System Performance statistics • Maps and Diagrams

  21. Monitoring The state of the NMS machines is closely monitored. Nagios/AlertMon monitors and displays alerts if any machine or service is down. The Ganglia Cluster Toolkit is used for system monitoring (load, memory, disk usage, etc.).

  22. Additional Machines In addition to the Observatory racks in the router nodes, there are machines used as central points to collect and store data. These include: • ndb1-blmt – Owamp, Iperf, and Traceroute database • www.itec.oar.net – NetFlow • stryper.uits.iu.edu – SNMP interface statistics • loadrunner.uits.iu.edu – Visual Backbone and multicast server

  23. Data and Tools In-Depth • Netflow • Owamp (One-Way Latency) • Iperf • Visual Backbone • SNMP Interface Statistics • Internet2 Detective • Multicast Beacon • NTP Stratum 2 Server • Ping/Traceroute V6 Destination

  24. NetFlow Sampled (100:1) NetFlow is sent from all Abilene routers to one of the local NMS machines. The flows are sent to researchers and also cached locally and retrieved to central storage using rsync. The NetFlow records are anonymized by masking the low-order 11 bits of the IP address; unanonymized data is not stored.
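
As a rough illustration of the masking step described above, here is a minimal Python sketch (not the NOC's actual tooling; the helper name and example address are made up) that zeroes the low-order 11 bits of an IPv4 address:

```python
import ipaddress

def anonymize_ipv4(addr: str, masked_bits: int = 11) -> str:
    """Zero the low-order `masked_bits` bits of an IPv4 address (illustrative sketch)."""
    ip = int(ipaddress.IPv4Address(addr))
    mask = (0xFFFFFFFF >> masked_bits) << masked_bits  # keep only the high-order bits
    return str(ipaddress.IPv4Address(ip & mask))

# Masking 11 bits wipes the host portion within a /21
print(anonymize_ipv4("192.168.37.205"))  # -> 192.168.32.0
```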

  25. Netflow Reports There are two widely available reports generated from the raw data • The Internet2 Weekly Netflow Report: http://netflow.internet2.edu/weekly • The Nightly reports at ITEC-Ohio: http://www.itec.oar.net/abilene-netflow

  26. Netflow Data NetFlow data is available either as a direct feed from the NMS machines or as a download from the centralized storage area at the Ohio ITEC. Please contact abilene@internet2.edu for more information if you want access to the raw data.

  27. NetFlow Users Some of the users of Netflow data • WAIL: The Wisconsin Advanced Internet Laboratory • Network Research Lab at Case Western Reserve • Kent State University Computer Science Dept • Boston University, Dept of Computer Science and Department of Mathematics and Statistics • MINDS Project, Univ. of Minnesota

  28. Owamp One-way latency measurements using a mesh of measurement nodes, one in each Abilene Router Node. Owamp: http://owamp.internet2.edu
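
The arithmetic behind a one-way latency measurement is just the difference between the receiver's and the sender's timestamps, which is meaningful only because the OWAMP hosts keep synchronized clocks. A minimal sketch with hypothetical timestamps (not the OWAMP protocol itself):

```python
from statistics import median

def one_way_delays_ms(samples):
    """samples: (send_ts, recv_ts) pairs in seconds from clock-synchronized hosts.
    Returns per-packet one-way delays in milliseconds."""
    return [(recv - send) * 1000.0 for send, recv in samples]

# Hypothetical timestamp pairs for three test packets
samples = [(0.000000, 0.012431), (0.100000, 0.112502), (0.200000, 0.212388)]
delays = one_way_delays_ms(samples)
print(f"min={min(delays):.3f} ms  median={median(delays):.3f} ms  max={max(delays):.3f} ms")
```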

  29. Iperf Gigabit Iperf tests are available to NMS1 and 2; v6 and v4 multicast testing is also available. This allows network engineers to test from the local campus to the first Abilene node, greatly increasing their ability to troubleshoot problems.
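
For scripted testing against the NMS machines, the Iperf client can be driven from a short wrapper such as the sketch below; it assumes the classic iperf client flags (-c, -t, -f) and uses a made-up host name in place of a real NMS machine:

```python
import subprocess

def run_iperf(server: str, seconds: int = 10) -> str:
    """Run a TCP throughput test against an iperf server and return its text report.
    Assumes classic iperf flags: -c <host>, -t <seconds>, -f m (report in Mbit/s)."""
    result = subprocess.run(
        ["iperf", "-c", server, "-t", str(seconds), "-f", "m"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# Hypothetical host name; the real targets are NMS1/NMS2 at the local Abilene node
print(run_iperf("nms1.example.net"))
```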

  30. Visual Backbone The Visual Backbone is a collection of data retrieved from the Abilene Juniper routers via XML. It polls the routers each hour using the JUNOScript tools, stores the results, and presents a processed view of the current configuration. It also saves the historical data and provides both raw and processed data.

  31. Visual Backbone There are three ways to access the data: • Viewing the processed data at: http://loadrunner.uits.iu.edu/~gcbrowni/Abilene • Using the HTML browsing interface • Using the programmatic SOAP/CGI interface More details are available at: http://loadrunner.uits.iu.edu/~gcbrowni/Abilene/raw-data.html
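
For illustration, raw data published over plain HTTP can be pulled with a few lines of Python; the resource name below is a placeholder, and the actual SOAP/CGI calls are documented on the raw-data page above:

```python
from urllib.request import urlopen

BASE = "http://loadrunner.uits.iu.edu/~gcbrowni/Abilene"

def fetch(resource: str) -> str:
    """Fetch one page or data file from the Visual Backbone site (sketch)."""
    with urlopen(f"{BASE}/{resource}") as response:
        return response.read().decode("utf-8", errors="replace")

# Placeholder resource; substitute a real path from the browsing interface
# print(fetch("raw-data.html"))
```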

  32. SNMP Interface Statistics SNMP collection is done in a distributed way on the NMS machines. Data for the local routers and switches is captured and copied back to a central repository. The data collected is a high-resolution (10-second) capture of interface and environmental statistics. In addition, an SNMP router proxy similar to the Abilene router proxy is in the works to allow querying SNMP variables on the routers.
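
The 10-second samples are typically turned into throughput figures by differencing successive octet counters (for example the 64-bit ifHCInOctets/ifHCOutOctets counters). A small sketch of that calculation, with made-up sample values:

```python
def interface_rate_mbps(octets_t0: int, octets_t1: int,
                        interval_s: float = 10.0, counter_bits: int = 64) -> float:
    """Convert two successive interface octet-counter samples into Mbit/s,
    allowing for a single counter wrap between polls."""
    delta = octets_t1 - octets_t0
    if delta < 0:                      # counter wrapped between the two polls
        delta += 1 << counter_bits
    return delta * 8 / interval_s / 1e6

# Hypothetical 10-second samples: 125,000,000 octets moved -> 100 Mbit/s
print(f"{interface_rate_mbps(1_000_000_000, 1_125_000_000):.1f} Mbit/s")
```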

  33. Internet2 Detective The Internet2 Detective is an application that provides information on the status and capabilities of a user's current network connection. It currently shows: • Connectivity to an Internet2 backbone network • Estimate of available bandwidth • Multicast connectivity

  34. Internet2 Detective Server The Internet2 Detective uses the Observatory framework. A modified echo server and an Iperf server are used to provide the connectivity and performance information to each client. More information is available at: http://detective.internet2.edu
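
Purely for illustration, here is a bare-bones TCP echo server of the general kind a connectivity check can bounce traffic off; the real Detective server is a modified version with its own protocol, and the port below is arbitrary:

```python
import socketserver

class EchoHandler(socketserver.BaseRequestHandler):
    """Echo every received chunk back to the client so the client can
    verify reachability and estimate round-trip behaviour."""
    def handle(self):
        while True:
            data = self.request.recv(4096)
            if not data:
                break
            self.request.sendall(data)

if __name__ == "__main__":
    # Arbitrary port for the sketch; not the Detective's actual port
    with socketserver.TCPServer(("0.0.0.0", 10007), EchoHandler) as server:
        server.serve_forever()
```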

  35. Multicast Beacon • Mesh of all Abilene Router Nodes, running on the NMS2s • Modified version of the NLANR Multicast Beacon • Saves data into an RRD database • Graphs of delay, loss, and jitter statistics • Multicast group 233.1.2.3 • Located at: http://loadrunner.uits.iu.edu/~neteng/nms/beacon
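
A sketch of how delay, loss, and jitter statistics can be derived from received beacon packets; the jitter estimator below follows the common RFC 3550 smoothing and is not necessarily what the NLANR beacon uses internally:

```python
def beacon_stats(received, expected_count):
    """received: (send_ts, recv_ts) pairs, in seconds, for the packets that arrived.
    Returns (mean one-way delay in s, loss fraction, RFC 3550-style jitter in s)."""
    delays = [recv - send for send, recv in received]
    loss = 1.0 - len(received) / expected_count
    jitter = 0.0
    for prev, cur in zip(delays, delays[1:]):
        jitter += (abs(cur - prev) - jitter) / 16.0
    return sum(delays) / len(delays), loss, jitter

# Hypothetical sample: four of five beacon packets arrived
pkts = [(0.0, 0.020), (0.1, 0.121), (0.3, 0.319), (0.4, 0.422)]
delay, loss, jitter = beacon_stats(pkts, expected_count=5)
print(f"delay={delay * 1000:.1f} ms  loss={loss:.0%}  jitter={jitter * 1000:.2f} ms")
```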

  36. NTP Service The NMS machines provide public NTP service to the community. Two servers are available: • ntp-e.abilene.ucaid.edu, located in New York • ntp-w.abilene.ucaid.edu, located in Sunnyvale These servers get their time from a mesh of stratum 1 servers located on each of the NMS4 machines. The stratum 1 servers receive their time via CDMA reception. Stratum 1 service may be available for private peering.
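
One way to query such a server from a script is the third-party ntplib package; a minimal sketch (the 2003-era host names listed above may no longer resolve):

```python
import ntplib  # third-party package: pip install ntplib

def clock_offset(server: str) -> float:
    """Query an NTP server and return the estimated local clock offset in seconds."""
    client = ntplib.NTPClient()
    response = client.request(server, version=3)
    return response.offset

# Host names as listed on the slide above
for host in ("ntp-e.abilene.ucaid.edu", "ntp-w.abilene.ucaid.edu"):
    print(host, clock_offset(host))
```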

  37. Acknowledgments The applications and administration of the Abilene Observatory are the work of a large group of people: Jeff Boote, Eric Boyd, Prasad Calyam, Mark Fullmer, Chris Heermann, Russ Hobby, John Moore, Bob Riddle, Dan Pritts, Stanislav Shalunov, Richard Summerhill, Matt Zekauskas, and the entire Abilene NOC

  38. The Observatory within APAN We're very interested in the possibility of creating a similar program within the APAN membership. The PlanetLab project has already expressed a desire to place PlanetLab machines in the Asia-Pacific region. A good mesh of observatory space will allow for greater coordination with US researchers on various measurement, network research, and security-related projects. Kitatsuji-san's previous analysis of the HD testing shows how beneficial it is to have test machines at each hop in the network.

  39. Questions? Comments? Please feel free to direct any questions to: Chris Robb - chrobb@indiana.edu or Chris Small - chsmall@indiana.edu Thank you!
