
Manchester University Tiny Network Element Monitor (MUTiny NEM) A Network/Systems Management Tool


Presentation Transcript


  1. Manchester University Tiny Network Element Monitor (MUTiny NEM) • A Network/Systems Management Tool • Dave McClenaghan, Manchester Computing • George Neisser, Manchester Computing

  2. 1. Introduction. MUTiny Overview. • - Runs on commodity hardware. • - Coded in Perl, Perl/Tk. • - Easy to install, use and maintain. • - Free (unsupported) to academia. • - Subject to ongoing development.

  3. Network Management Overview. NMS Components: • - A Manager, running NM applications. • - A set of managed nodes (the managed domain). • - Defined management information (MIBs). • - A Network Management protocol (SNMP).

  4. 2. The MUTiny NM Model. • Platform: • PC (Running Linux). • Description: • A Network/Systems Management tool. • Coded entirely in Perl and Perl/Tk.

  5. MUTiny Applications. • - A Network Element Monitor/Manager. • - A Unix Systems Monitor/Manager. • - A MIB data collection and reporting tool.

  6. MUTiny NM Applications. • - Graphically display the domain topology. • - Monitor and report node status changes.

  7. MUTiny NM Applications (continued). • - Display monitored node information. • - Collect and report network statistics.

  8. MUTiny Network Management. Key areas: - Domain Management. - Event Management. - Network Statistics.

  9. MUTiny NEM front end (The GUI). • - menu bar • - host attributes • - domain status • - topology display • - session bar

  10. 2.1 Domain Management. The Managed Domain: the set of all monitored nodes. • Monitored Node: any network device (router, switch, etc.) that is regularly polled for management information.

  11. Domain Topology Display. • ICMP Status (Background): • Green: OK. • Red: No Echo. • Amber: Problem. • Clear: Unknown.

  12. Domain Topology Display. • SNMP Status (Foreground): • Black: OK. • Blue: No SNMP. • Grey: Unknown.

  13. Topology Management. • The initial domain topology display

  14. Topology Management. • Add Node Window • - No auto-discovery; nodes are added by choice.

  15. Topology Management. • - Node and ‘path’ nodes are added to the display.

  16. Topology Management. • - Path is determined by traceroute.
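
A minimal sketch of how the path nodes could be derived from traceroute output; the invocation and parsing below are illustrative assumptions, not MUTiny's actual code:

    #!/usr/bin/perl
    # Sketch: derive the intermediate 'path' nodes for a newly added node
    # by parsing numeric traceroute output (assumed invocation).
    use strict;
    use warnings;

    my $target = shift @ARGV or die "usage: $0 <hostname>\n";
    my @path;

    open my $tr, '-|', 'traceroute', '-n', $target
        or die "cannot run traceroute: $!\n";
    while (my $line = <$tr>) {
        # Each hop line begins with a hop number followed by an address.
        push @path, $1 if $line =~ /^\s*\d+\s+(\d{1,3}(?:\.\d{1,3}){3})/;
    }
    close $tr;

    print "path to $target: ", join(' -> ', @path), "\n";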

  17. Topology Management. • Change Symbol/Label Window

  18. Connectivity Status Polling. • ICMP connectivity determined by ping. • SNMP connectivity determined by sysUpTime.
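
As an illustration of the two checks, a rough Perl sketch using Net::Ping for the ICMP probe and the CPAN module Net::SNMP to fetch sysUpTime; the node name, community string and timeouts are assumptions, not MUTiny's configuration:

    # Sketch: one status poll of a node - ICMP echo plus an SNMP sysUpTime get.
    use strict;
    use warnings;
    use Net::Ping;
    use Net::SNMP;                              # CPAN module

    my $node      = 'gw-site.example.ac.uk';    # hypothetical node name
    my $community = 'public';                   # assumed community string

    # ICMP status (raw ICMP sockets normally require root).
    my $pinger = Net::Ping->new('icmp', 2);
    my $icmp   = $pinger->ping($node) ? 'ICMP:Node_OK' : 'ICMP:Node_No_Echo';
    $pinger->close;

    # SNMP status via sysUpTime (1.3.6.1.2.1.1.3.0).
    my ($session, $error) = Net::SNMP->session(
        -hostname  => $node,
        -community => $community,
        -timeout   => 2,
    );
    my $snmp = 'SNMP:No_Response';
    if ($session) {
        my $result = $session->get_request(
            -varbindlist => ['1.3.6.1.2.1.1.3.0']);
        $snmp = 'SNMP:OK' if defined $result;
        $session->close;
    }

    print "$node  $icmp  $snmp\n";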

  19. Configuring Status Poll Parameters. • Figure 2.2a Interface Polling Parameters

  20. Configuring Status Poll Parameters. • Figure 2.2b General Polling Parameters • - Fully configurable polling. • - Poll ‘Back Off’ options.
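
One way a poll 'back off' can behave, sketched under the assumption that the interval for an unresponsive node is doubled up to a ceiling and reset on recovery (the actual options are those shown in the configuration window above):

    # Sketch: exponential poll back-off for an unresponsive node (assumed behaviour).
    use strict;
    use warnings;

    my %poll    = (interval => 60, max_interval => 960);   # seconds; assumed defaults
    my $current = $poll{interval};

    sub next_interval {
        my ($responded) = @_;
        if ($responded) {
            $current = $poll{interval};                     # reset on recovery
        } else {
            $current *= 2;                                  # back off
            $current = $poll{max_interval} if $current > $poll{max_interval};
        }
        return $current;
    }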

  21. 2.2 Event Management. Event Definition. • Connectivity Events: • ICMP:Node_No_Echo • ICMP:Node_OK • ICMP:Node_Problem • ICMP:Status_Unknown • SNMP:No_Response • SNMP:OK • SNMP:Status_Unknown

  22. Network Event Logging. • Figure 2.3a Network Event Log Window

  23. Network Event Alarms. • Figure 2.3b • A Pop-Up Alarm • Pop-ups may be accompanied by an optional Audible_Alarm (bell).
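
Because the front end is Perl/Tk, a pop-up of this kind takes only a few lines; the widget options and event text below are illustrative rather than MUTiny's actual code:

    # Sketch: a Perl/Tk pop-up alarm with the optional audible bell.
    use strict;
    use warnings;
    use Tk;

    my $audible = 1;                               # the optional Audible_Alarm setting
    my $mw = MainWindow->new(-title => 'MUTiny NEM');
    $mw->withdraw;                                 # show only the dialog
    $mw->bell if $audible;
    $mw->messageBox(
        -title   => 'Network Event',
        -message => 'ICMP:Node_No_Echo  gw-site',  # hypothetical event text
        -type    => 'ok',
        -icon    => 'warning',
    );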

  24. Event/Action Management.* • Event Configuration Window

  25. Event/Action Management.* • Event/Source Configuration Window

  26. Event Time Frames. • Contact/Frame Configuration Window

  27. 2.3 Network Statistics. MIB Data Collection. • Figure 2.4 MIB Data Collection Mechanism • Data stored in: • */nemdata/mibdata/<yearmon>/<Datafile>

  28. Data Storage. • Time Object Value PI • 953036400 ifOutOctets.6 84118 60 • 953036400 ifOutOctets.7 13275 60 • 953037000 ifInOctets.8 7219 60 • 953037000 ifInOctets.5 14303 60 • 953037000 ifInOctets.6 18287 60 • Sample data stored in: • */nemdata/mibdata/200002/gw-site
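
Each record is a whitespace-separated line of timestamp, MIB object (plus instance or store-id), value and poll interval, kept in one file per node per month. A sketch of appending a sample in that layout; the base directory and helper name are invented here:

    # Sketch: append one collected sample to the monthly node data file.
    use strict;
    use warnings;
    use POSIX qw(strftime);
    use File::Path qw(make_path);

    my $datadir = '/opt/mutiny/nemdata/mibdata';     # assumed base directory

    sub store_sample {
        my ($node, $object, $value, $poll_interval) = @_;
        my $now     = time;
        my $yearmon = strftime('%Y%m', localtime $now);   # e.g. 200002
        my $dir     = "$datadir/$yearmon";
        make_path($dir) unless -d $dir;

        open my $fh, '>>', "$dir/$node" or die "cannot append $dir/$node: $!\n";
        print {$fh} "$now $object $value $poll_interval\n";   # Time Object Value PI
        close $fh;
    }

    store_sample('gw-site', 'ifOutOctets.6', 84118, 60);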

  29. Configuring MIB Data Collection. • Figure 2.5a • Collection Configuration Window

  30. Node Data Collection. • Figure 2.5b Node Collection Window

  31. Node Data Collection. The Storage Interval. • A multiple of the sampling interval, e.g. • samp-int = 60 seconds • store-int = 15 min • - This greatly reduces the amount of disk space required to store the data (by a factor of 15 in this case).
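
A sketch of how the sampling/storage split might be realised: samples arrive every samp-int seconds but only one aggregated record (here a sum, assuming per-interval deltas) is written per store-int, which gives the factor-of-15 saving in the example above:

    # Sketch: accumulate 60-second samples, flush one record per 15-minute
    # storage interval (summing deltas is an assumed aggregation rule).
    use strict;
    use warnings;

    my $samp_int  = 60;         # seconds between samples
    my $store_int = 15 * 60;    # seconds between stored records
    my ($acc, $last_store) = (0, time);

    sub on_sample {
        my ($delta) = @_;
        $acc += $delta;
        if (time - $last_store >= $store_int) {
            print time, " ifInOctets.Liv3 $acc $store_int\n";   # one stored record
            ($acc, $last_store) = (0, time);
        }
    }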

  32. Node Data Collection. The Store-Identifier. • The store-id, if set, stores the object-id as: • <mib-obj>.<store-id> • e.g. ifInOctets.Liv3 • - Useful if the instance-id is prone to change.

  33. Node Data Collection. • Test Collection Window

  34. ‘Change Control’.* • ‘Anchors’ collection to IP_addr or Phys_addr.

  35. Reporting Network Statistics. • Figure 4.11 The Reporting Mechanism. • Reports are generated from user-defined Report Parameter Files (RPFs). • # nemReport -r my.rpf
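
The RPF syntax itself is not shown in the slides; purely as an illustration, a reporting script could read simple key/value parameters like this (the keys are invented for the example, not MUTiny's real format):

    # Sketch: read a hypothetical key/value Report Parameter File.
    use strict;
    use warnings;

    my %rpf;
    open my $fh, '<', 'my.rpf' or die "cannot open my.rpf: $!\n";
    while (<$fh>) {
        next if /^\s*(#|$)/;                  # skip comments and blank lines
        my ($key, $value) = split ' ', $_, 2;
        next unless defined $value;
        chomp $value;
        $rpf{$key} = $value;
    }
    close $fh;
    # e.g. $rpf{node} = 'gw-site', $rpf{objects} = 'ifInOctets.6 ifOutOctets.6'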

  36. Reporting Network Statistics. • The Report Template.

  37. Reporting Network Statistics. • Figure 4.12b • Sample ‘Fixed Column’ Report • - Variable and graphical* formats supported.

  38. 4.4 Monitoring MUTiny. • Figure 4.14 • The Host System Attributes Area • nemNEMPoll self-checks: • - NEM processes. • - Host system metrics.
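
A simple way to check that the NEM processes are alive, assuming they can be recognised by name in the process table; the second process name is hypothetical, and the real nemNEMPoll checks are not detailed in the slides:

    # Sketch: verify that named NEM processes appear in the process table.
    use strict;
    use warnings;

    my @required = qw(nemNEMPoll nemCollect);    # nemCollect is a hypothetical name
    my $ps = `ps -eo args`;                      # full command lines, one per line

    for my $proc (@required) {
        my $running = $ps =~ /\b\Q$proc\E\b/;
        print "$proc: ", ($running ? 'OK' : 'NOT running'), "\n";
    }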

  39. MUTiny Self Monitoring. • NEM Self Monitor Configuration Window

  40. 3. WWW Cache Status Monitoring. 3.1 Caching Service Configuration. • Figure 3.1 • The Operation of the UK National JANET Caching Service

  41. 3.2 Caching Service Operation. • Need to know for each node: • - Network accessibility. • - CPU loading. • - Memory utilisation. • - Disk utilisation. • - Squid application status.

  42. Network Accessibility. • Figure 3.2 Manchester Main Window

  43. System Metric Monitoring. • Figure 3.3 Manchester Cache Systems Window

  44. System Metric Monitoring. • Display indicates: • - Whether the machine is pingable. • - Whether SNMP is operational. • - CPU loading. • - Memory utilisation. • - Disk utilisation. • - Critical process status (squid).

  45. System Metric Monitoring. Prerequisites. • The UCD-SNMP mechanism: • The monitored host must be running the UCD-SNMP agent software.
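
With the UCD-SNMP agent in place, the figures listed earlier can be fetched from the UCD (enterprise 2021) MIB. A sketch using Net::SNMP; the host name and community string are assumptions, and the OIDs are the usual UCD-SNMP ones for load average, available memory and disk usage:

    # Sketch: poll CPU load, memory and disk figures from a UCD-SNMP agent.
    use strict;
    use warnings;
    use Net::SNMP;                                       # CPAN module

    my %oid = (
        'load (1 min)'   => '1.3.6.1.4.1.2021.10.1.3.1', # laLoad.1
        'mem avail (kB)' => '1.3.6.1.4.1.2021.4.6.0',    # memAvailReal.0
        'disk used (%)'  => '1.3.6.1.4.1.2021.9.1.9.1',  # dskPercent.1
    );

    my ($session, $error) = Net::SNMP->session(
        -hostname  => 'cache1.example.ac.uk',            # hypothetical cache host
        -community => 'public',                          # assumed community string
    );
    die "SNMP session failed: $error\n" unless $session;

    my $result = $session->get_request(-varbindlist => [values %oid]);
    die "poll failed: " . $session->error . "\n" unless defined $result;

    printf "%-16s %s\n", $_, $result->{$oid{$_}} for sort keys %oid;
    $session->close;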

  46. System Metric Monitoring. • System Polling Configuration Window

  47. System Metric Monitoring. • Disk Statistics Window

  48. System Metric Monitoring. • A Pop-Up Alarm • Optional Audible_Alarm (bell).

  49. System Metric Monitoring. • Figure 5.5b • Domain Status Section • Indicates the most critical entry in each column.

  50. Real Time Metrics. • ‘top’ Metric Window
