
Nagios Is Down and Your Boss Wants to See You

Learn about Nagios high availability options, including the use of DRBD and Pacemaker. Discover how this setup can prevent downtime and maintain data consistency between nodes.


Presentation Transcript


  1. Nagios Is Down and Your Boss Wants to See You Andrew Widdersheim awiddersheim@inetu.net

  2. Nooooooooooo!!!

  3. Breaking News!

  4. Nagios High Availability Options • Merlin by op5 • Classic method described in the Nagios Core documentation • Some type of virtualized solution like VMware • or…

  5. Nagios High Availability • DRBD + Pacemaker = Win

  6. DRBD magic

  7. DRBD magic • Developed by LINBIT • Free • Runs in the kernel, either as a module or in the mainline code if the kernel is new enough • Each server gets its own independent storage • Maintains data consistency between the nodes • Resource-level fencing
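  The deck doesn't show the DRBD configuration itself, so here is a minimal sketch of what the r1 resource referenced later might look like; the device name, backing disks and crossover addresses are assumptions:

  resource r1 {
    protocol C;                  # fully synchronous; see the protocol notes on the next slide
    on node1 {
      device    /dev/drbd1;      # replicated block device presented to the OS
      disk      /dev/sdb1;       # each node's own independent backing storage
      address   10.0.0.1:7789;   # dedicated crossover link
      meta-disk internal;
    }
    on node2 {
      device    /dev/drbd1;
      disk      /dev/sdb1;
      address   10.0.0.2:7789;
      meta-disk internal;
    }
  }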

  8. DRBD considerations • DRBD is only as fast as the slowest node • Network latency matters • Replication over great distances can be done • DRBD Proxy can improve performance over great distances but costs money • Recommend using a dedicated crossover link for best performance • Protocol choices • Protocol A: write I/O is reported as completed once it has reached the local disk and the local TCP send buffer • Protocol B: write I/O is reported as completed once it has reached the local disk and the remote buffer cache • Protocol C: write I/O is reported as completed once it has reached both the local and the remote disk
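  A hedged sketch of the usual first-time bring-up for that resource (exact command forms vary a little between DRBD 8.3 and 8.4):

  # on both nodes
  drbdadm create-md r1    # write DRBD metadata on the backing disk
  drbdadm up r1           # attach the disk and connect to the peer

  # on the node that should become Primary (DRBD 8.4 syntax;
  # 8.3 uses "drbdadm -- --overwrite-data-of-peer primary r1")
  drbdadm primary --force r1

  # watch the initial sync run over the crossover link
  cat /proc/drbd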

  9. Pacemaker

  10. Pacemaker + DRBD + Nagios [Stack diagram: on each node, hardware at the bottom, then Corosync/Heartbeat messaging, the Pacemaker resource manager, DRBD (Node1 Primary, Node2 Secondary), the ext4 filesystem, the VIP 192.168.1.57, and the Nagios stack (rrdcached, NSCA, NPCD, Apache, Nagios) running on Node1]

  11. Pacemaker + DRBD + Nagios [Same stack diagram as the previous slide]

  12. Pacemaker + DRBD + Nagios [Stack diagram after failover: Node2 is now the DRBD Primary and runs the ext4 filesystem, the VIP 192.168.1.57, rrdcached, NSCA, NPCD, Apache and Nagios; Node1 is Secondary]

  13. Pacemaker and Nagios

  14. Pacemaker and Nagios
  primitive p_fs_nagios ocf:heartbeat:Filesystem \
      params device="/dev/drbd/by-res/r1" directory="/drbd/r1" fstype="ext4" options="noatime" \
      op start interval="0" timeout="60s" \
      op stop interval="0" timeout="180s" \
      op monitor interval="30s" timeout="40s"
  primitive p_nagios lsb:nagios \
      op start interval="0" timeout="180s" \
      op stop interval="0" timeout="40s" \
      op monitor interval="30s" \
      meta target-role="Started"
  group g_nagios p_fs_nagios p_nagios_ip p_nagios_bacula p_nagios_mysql \
      p_nagios_rrdcached p_nagios_npcd p_nagios_nsca p_nagios_apache \
      p_nagios_syslog-ng p_nagios \
      meta target-role="Started"
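  The slide shows only the filesystem and Nagios primitives and the group; the DRBD master/slave resource and the constraints that keep g_nagios on the DRBD Primary are not shown. A sketch of how they are commonly defined in the crm shell follows; the resource and constraint names are assumptions:

  primitive p_drbd_r1 ocf:linbit:drbd \
      params drbd_resource="r1" \
      op monitor interval="29s" role="Master" \
      op monitor interval="31s" role="Slave"
  ms ms_drbd_r1 p_drbd_r1 \
      meta master-max="1" master-node-max="1" clone-max="2" \
           clone-node-max="1" notify="true"
  # keep the Nagios group on whichever node is the DRBD Primary,
  # and only start it after the promotion has happened
  colocation c_nagios_on_drbd inf: g_nagios ms_drbd_r1:Master
  order o_drbd_before_nagios inf: ms_drbd_r1:promote g_nagios:start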

  15. Pacemaker and Nagios

  16. Pacemaker considerations • Redundant communication links are a must • Recommend using a crossover link to help accomplish this • Init scripts for Nagios must be LSB compliant… some are not
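  For the redundant-links point, a sketch of what the totem section of /etc/corosync/corosync.conf might look like with two rings, one of them on the crossover; the addresses are assumptions:

  totem {
      version: 2
      rrp_mode: passive            # redundant ring protocol: fall back to ring 1 if ring 0 fails
      interface {
          ringnumber: 0
          bindnetaddr: 10.0.0.0    # dedicated crossover network
          mcastaddr: 239.255.1.1
          mcastport: 5405
      }
      interface {
          ringnumber: 1
          bindnetaddr: 192.168.1.0 # regular LAN as the backup ring
          mcastaddr: 239.255.1.2
          mcastport: 5407
      }
  }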

  17. What to replicate? • Configuration • Host • Service • Multi check command files • Webinject command files • PNP4Nagios RRDs • Nagios log files • retention.dat • Mail queue (eh…)
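  As a concrete (purely hypothetical) picture, those items end up living somewhere under the replicated mount, roughly like this:

  /drbd/r1/nagios/etc/hosts/          # host configuration
  /drbd/r1/nagios/etc/services/       # service configuration
  /drbd/r1/nagios/etc/multichecks/    # multi check command files
  /drbd/r1/nagios/etc/webinject/      # webinject command files
  /drbd/r1/pnp4nagios/perfdata/       # PNP4Nagios RRDs
  /drbd/r1/nagios/var/archives/       # Nagios log files
  /drbd/r1/nagios/var/retention.dat   # state retention
  /drbd/r1/mqueue/                    # mail queue (eh…)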

  18. Everything else? • Binaries and main configuration files installed using packages independently on each server • Able to update one node at a time • Easy to roll back should there be an issue • Version/change management • Consistent build process • NDO and MySQL hosted on separate HA cluster

  19. RPMs • Build and maintain our own RPMs • Lets us configure everything to our liking • Lets us update at our own pace • Controlled through SVN with a post-commit hook that automatically updates our own Nagios repository with new packages/updates; then it is as simple as doing “yum update” on your servers • A lot of upfront work, but it was worth it
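  A sketch of the .repo file the “yum update” step assumes on each node; the repository name and baseurl are made up:

  # /etc/yum.repos.d/internal-nagios.repo
  [internal-nagios]
  name=Internally built Nagios packages
  baseurl=http://repo.example.com/nagios/el6/$basearch/
  enabled=1
  gpgcheck=0

  # then updating a node is just:
  yum update "nagios*"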

  20. How has this helped? • Have been able to repair, upgrade and move hardware with minimal downtime • Updated the OS and restarted servers with minimal downtime • Able to update to 3.4.1 and promptly patch an issue affecting Nagios downtimes that was not caught in QA • CGI pages of death

  21. What doesn’t this solve? Having an HA cluster is great, but there are still things that can go wrong that a cluster does not solve. Configuration issues are probably the most prevalent thing we run into that might bring down Nagios without there being a major hardware/DC issue. We make use of NagiosQL, which takes a backup when a configuration is changed. This lets us roll back unwanted changes, but it isn’t the best.
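  Because configuration issues are the main culprit listed here, a pre-flight verify before any restart is cheap insurance; the paths are assumptions:

  # verify the configuration before asking Pacemaker to restart Nagios
  /usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg \
      && crm resource restart p_nagios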

  22. Two is better than one Setting up another cluster for “development” with similar hardware and software is a great way to test things outside of production. It lets you spot potential problems before they become a problem.

  23. Monitoring your cluster • check_crm • http://exchange.nagios.org/directory/Plugins/Clustering-and-High-2DAvailability/Check-CRM/details • check_drbd • http://exchange.nagios.org/directory/Plugins/Operating-Systems/Linux/check_drbd/details • check_heartbeat_link • http://exchange.nagios.org/directory/Plugins/Operating-Systems/Linux/check_heartbeat_link/details
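  A hypothetical command/service pair for wiring one of these plugins into Nagios; the host name is made up, and each plugin’s own documentation/--help should be consulted for its actual options:

  define command {
      command_name    check_crm
      command_line    $USER1$/check_crm
  }

  define service {
      use                   generic-service
      host_name             nagios-ha-node1
      service_description   Pacemaker CRM status
      check_command         check_crm
  }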

  24. Gotchas • RPMs and symlinks in an HA solution are bad • If you symlink /usr/local/nagios/etc/ -> /drbd/r1/nagios/etc, then when the node is secondary and you update the RPM, your symlink will get blown away • Restarting services controlled by Pacemaker should be done within Pacemaker: crm resource restart p_nagios

  25. Quick Stats • Thousands of host and service checks • Average check latency ~0.300 sec • Average checks per second ~70 • Mostly active checks polling every 5 minutes • DL360 G5 • 6 x 146GB 10k SAS drives in RAID 10 • 2 x quad-core E5450 @ 3.00GHz • 8GB memory
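  The latency and checks-per-second figures above are the kind of numbers Nagios’ own nagiostats utility reports; the paths are assumptions:

  # print scheduling and latency statistics for the running instance
  /usr/local/nagios/bin/nagiostats -c /usr/local/nagios/etc/nagios.cfg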

  26. Tuning • RAM disk for check results queue, NPCD queue, objects.cache and status.dat • NDOUtils with async patch • Built in since version 1.5 • Limit what you send to NDOUtils • Bulk mode with npcdmod • rrdcached • Restarting Nagios through the external command file eventually resulted in higher latencies for some reason • Large installation tweaks • Disable environment macros • A lot of trial and error with scheduling and reaper frequencies • Small amount of check optimization • Measuring Nagios performance using PNP4Nagios is a must
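  A sketch of the nagios.cfg side of the RAM disk and large-installation items above; the tmpfs mount point and size are assumptions, and the NPCD spool directory is moved in the npcd/npcdmod configuration rather than here:

  # /etc/fstab entry for the RAM disk:
  #   tmpfs  /var/nagios/ramdisk  tmpfs  size=512m,mode=0775  0 0

  check_result_path=/var/nagios/ramdisk/checkresults
  object_cache_file=/var/nagios/ramdisk/objects.cache
  status_file=/var/nagios/ramdisk/status.dat
  use_large_installation_tweaks=1
  enable_environment_macros=0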

  27. RAM disk + ndo-async + rrdcached

  28. non-external command file restarts

  29. nsca-2.9

  30. One Year’s Progress

  31. How we run today

  32. Quick Stats Questions?
