UW SSEC Lustre Statistics How-To
- 1 Introduction
- 2 Hardware Requirements
- 3 Software Requirements
- 4 Building the Lustre Monitoring Deployment
This guide will take the user step-by-step through the Lustre Monitoring deployment that the Space Science and Engineering Center uses for monitoring all of its Lustre file systems. The author of this guide is Andrew Wagner (firstname.lastname@example.org).
Hardware Requirements
Any existing server can be used for a proof-of-concept version of this guide. The resource requirements for several thousand checks per minute are low - a small VM can easily handle the load.
Our production server comfortably handles ~150k checks per minute and, from a processing and disk I/O perspective, could handle much more. Here are the specs:
- Dell PowerEdge R515
- 2x 8-Core AMD Opteron 4386
- 300GB RAID1 15K SAS
- 200GB Enterprise SSD
- 64GB RAM
Software Requirements
- CentOS 6 x86_64
- CentOS 6 EPEL repository
- Configuration management system (Puppet, Ansible, Salt, Chef, etc.) - this makes check deployment easy
Building the Lustre Monitoring Deployment
Setting up an OMD Monitoring Server
The first thing we needed for our new monitoring deployment was a monitoring server. We were already using Check_MK with Nagios on our older monitoring server, but the Open Monitoring Distribution (OMD) ties all of the components together nicely. The distribution is available at http://omdistro.org/ and installs via RPM.
On a newly deployed CentOS 6 machine, I installed the OMD-1.20 RPM. This takes care of all of the work of installing Nagios, Check_MK, PNP4Nagios, and the rest of the stack.
After installation, I created the new OMD monitoring site:
omd create ssec
This creates a new site that runs its own stack of Apache, Nagios, Check_MK and everything else in the OMD distribution. Now we can start the site:
omd start ssec
We chose to set up LDAPS authentication against our Active Directory server. There is a good discussion of how to do this here: https://mathias-kettner.de/checkmk_multisite_ldap_integration.html
Additionally, we set up HTTPS for web access to OMD: http://lists.mathias-kettner.de/pipermail/checkmk-en/2014-May/012225.html
At this point, you can start configuring your monitoring server to monitor hosts! Check_MK has a lot of configuration options, but it's a lot better than managing Nagios configurations by hand. Fortunately, Check_MK is widely used and well documented. The Check_MK documentation root is available at http://mathias-kettner.de/checkmk.html.
Deploying Agents to Lustre Hosts
On monitored hosts, the Check_MK agent runs as an xinetd service with a config file at /etc/xinetd.d/check_mk. That file lists the IP addresses allowed to query the agent in its only_from parameter. The OMD distribution comes with Check_MK agent RPMs; I rebuilt the RPM using rpmrebuild to include the updated IP addresses of our monitoring servers.
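For reference, the agent's xinetd stanza looks roughly like the following; the only_from addresses shown are placeholders for your monitoring servers, and the exact defaults may differ between agent versions:

```
service check_mk
{
        type           = UNLISTED
        port           = 6556
        socket_type    = stream
        protocol       = tcp
        wait           = no
        user           = root
        server         = /usr/bin/check_mk_agent
        only_from      = 127.0.0.1 192.0.2.10
        disable        = no
}
```

After changing the file, reload xinetd so the new only_from list takes effect.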
After rebuilding the RPM, push out the RPM to all hosts that will be monitored. We use a custom repository and Puppet for managing our existing software, so adding the RPM to the repo and pushing out via Puppet can be done with a simple module.
After deployment, we can verify the agents work by adding them to Check_MK via the GUI or configuration file and inventorying them. This will allow us to monitor a wide array of default metrics such as CPU Load, CPU Utilization, Memory use, and many others.
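When using the configuration-file route, hosts go into etc/check_mk/main.mk inside the OMD site. main.mk is plain Python read by Check_MK; the hostnames below are placeholders for your Lustre servers:

```python
# Sketch of etc/check_mk/main.mk inside the OMD site.
# Hostnames are placeholders - replace them with your Lustre servers.
all_hosts = [
    "lustre-mds1",
    "lustre-oss1",
    "lustre-oss2",
]
```

After editing, run `cmk -I <hostname>` as the site user to inventory the new host's services and `cmk -O` to activate the changes.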
Writing Local Checks to Run via Check_MK Agent
Now that the Check_MK agents are deployed to the Lustre servers, we can add Check_MK local agent checks to measure whatever we want. The documentation for local checks is here: http://mathias-kettner.de/checkmk_localchecks.html.
Each line of a local check's output must contain a Nagios status number, a name, performance data, and descriptive check output.
Check out the examples in the Check_MK documentation for formatting of output. You can use whatever language your server supports to execute the local check. At SSEC, Scott Nolin has implemented several Perl scripts to poll Lustre statistics and output in the Check_MK format. You can read more about the checks here: http://wiki.opensfs.org/Lustre_Statistics_Guide.
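As a sketch of the format (not one of the SSEC Perl checks), here is a minimal local check in shell. The /proc path, OST name, and thresholds are assumptions to adapt to your file system:

```shell
#!/bin/bash
# Hypothetical local check: report free space on a single OST.
# The /proc path, OST name, and thresholds are assumptions - adapt them.
STATS=/proc/fs/lustre/obdfilter/lustre-OST0000/kbytesfree
if [ -r "$STATS" ]; then
    FREE=$(cat "$STATS")
else
    FREE=0   # not an OSS, or Lustre is not loaded
fi
if [ "$FREE" -gt 1048576 ]; then
    STATUS=0 TEXT=OK
elif [ "$FREE" -gt 0 ]; then
    STATUS=1 TEXT=WARNING
else
    STATUS=2 TEXT=CRITICAL
fi
# Check_MK local check format: <status> <name> <perfdata> <output text>
LINE="$STATUS Lustre_OST0000_free kbytesfree=$FREE $TEXT - $FREE kB free on OST0000"
echo "$LINE"
```

Dropping a script like this into the agent's local check directory makes it appear as a new service on the next inventory.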
Check_MK RRD Graphs
Once you start collecting this performance data, OMD automatically uses PNP4Nagios to create RRD graphs for each collected metric. Check_MK will then display these RRDs in the monitoring interface. This is useful for small-scale testing where you are only collecting a few tens of metrics. However, thorough stat collection on a large Lustre file system can yield hundreds or even thousands of individual metrics, and Check_MK and PNP4Nagios are thoroughly outclassed when asked to display that many RRD graphs.
Thus, we turn to the Graphite/Carbon metric storage system.
The Graphite/Carbon software package collects metrics and stores them in Whisper database files. Whisper files are similar to RRD files in that they have a defined size and fixed constraints on how the file manages time series data as time passes.
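The size and resolution of each Whisper file are fixed at creation time by the matching retention rule in Carbon's storage-schemas.conf. A sketch, assuming metrics are prefixed with lustre. (the pattern and retention periods are placeholders to adapt):

```
[lustre]
pattern = ^lustre\.
retentions = 60s:7d,5m:30d,1h:1y
```

This keeps 60-second samples for a week, 5-minute rollups for a month, and hourly rollups for a year. Restart carbon-cache after changing schemas; existing Whisper files keep their old retention until resized.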
The installation and basic setup of Graphite and Carbon is pretty easy. We used the version of Graphite found in EPEL.