Lustre Monitoring and Statistics Guide

Introduction

This guide is by Scott Nolin ([email protected]), of the University of Wisconsin Space Science and Engineering Center.

There are a variety of useful statistics and counters available on Lustre servers and clients. This is an attempt to detail some of these statistics and methods for collecting and working with them.

This does not include Lustre log analysis.

The presumed audience for this is system administrators attempting to better understand and monitor their Lustre file systems.

Adding to This Guide

If you have improvements, corrections, or more information to share on this topic please contribute to this page. Ideally this would become a community resource.

Lustre Versions

This information is based on working primarily with Lustre 2.4 and 2.5.

Reading /proc vs lctl

'cat /proc/fs/lustre...' vs 'lctl get_param'

With newer Lustre versions, 'lctl get_param' is the standard and recommended way to read these statistics, as it ensures portability. I will use this method in all examples; as a bonus, the syntax is often a little shorter.

Data Formats

The format of the various statistics files varies (and I'm not sure if there is any reason for this). The format names used here are entirely *my invention*; they are not a Lustre standard.

It is useful to know the various formats of these files so you can parse the data and collect for use in other tools.

Stats

What I consider "standard" stats output presents each OST or MDT as a multi-line record: a header line naming the target, followed by the data.

Example:

obdfilter.scratch-OST0001.stats=
snapshot_time             1409777887.590578 secs.usecs
read_bytes                27846475 samples [bytes] 4096 1048576 14421705314304
write_bytes               16230483 samples [bytes] 1 1048576 14761109479164
get_info                  3735777 samples [reqs]

snapshot_time = when the stats were written

For read_bytes and write_bytes:

  • First number = number of times (samples) the OST has handled a read or write.
  • Second number = the minimum read/write size
  • Third number = maximum read/write size
  • Fourth number = sum of all the read/write requests in bytes, i.e. the total quantity of data read or written (see the parsing sketch after this list).
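
As an illustration (my own sketch, not part of any Lustre tooling), the 'stats' format can be parsed along these lines in Python; the field positions follow the read_bytes/write_bytes description above:

import subprocess

def parse_stats(text):
    """Return {target: {counter: fields}} from 'lctl get_param ...stats' output."""
    results, target = {}, None
    for line in text.splitlines():
        line = line.rstrip()
        if line.endswith("="):                      # header line, e.g. obdfilter.scratch-OST0001.stats=
            target = line[:-1]
            results[target] = {}
        elif target and line:
            fields = line.split()
            name = fields[0]
            if name == "snapshot_time":
                results[target][name] = float(fields[1])
            elif len(fields) >= 7:                  # samples [unit] min max sum (read_bytes, write_bytes)
                results[target][name] = {"samples": int(fields[1]), "min": int(fields[4]),
                                         "max": int(fields[5]), "sum": int(fields[6])}
            else:                                   # samples only (e.g. get_info)
                results[target][name] = {"samples": int(fields[1])}
    return results

out = subprocess.run(["lctl", "get_param", "obdfilter.*.stats"],
                     capture_output=True, text=True, check=True).stdout
for target, counters in parse_stats(out).items():
    print(target, counters.get("write_bytes", {}))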

Jobstats

Jobstats are slightly more complex multi-line records. Each OST or MDT has an entry for each job_id (or procname_uid, depending on how jobstats is configured), followed by the data.

Example:

obdfilter.scratch-OST0000.job_stats=job_stats:
- job_id:          56744
  snapshot_time:   1409778251
  read:            { samples:       18722, unit: bytes, min:    4096, max: 1048576, sum:     17105657856 }
  write:           { samples:         478, unit: bytes, min:    1238, max: 1048576, sum:       412545938 }
  setattr:         { samples:           0, unit:  reqs }
  punch:           { samples:          95, unit:  reqs }
- job_id: . . . ETC

Notice this is very similar to 'stats' above.
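
The body of each job_stats entry is YAML, so one hedged approach (a sketch, assuming PyYAML is installed and the output parses cleanly) is to strip the 'lctl' header from each record and hand the rest to a YAML parser:

import subprocess
import yaml                                         # assumption: PyYAML is available

def parse_job_stats(text):
    """Return {target: list of per-job dicts} from 'lctl get_param obdfilter.*.job_stats'."""
    results, target, body = {}, None, []
    for line in text.splitlines():
        if ".job_stats=" in line:                   # e.g. obdfilter.scratch-OST0000.job_stats=job_stats:
            if target is not None:
                results[target] = (yaml.safe_load("\n".join(body)) or {}).get("job_stats") or []
            target, _, first = line.partition("=")
            body = [first]
        else:
            body.append(line)
    if target is not None:
        results[target] = (yaml.safe_load("\n".join(body)) or {}).get("job_stats") or []
    return results

out = subprocess.run(["lctl", "get_param", "obdfilter.*.job_stats"],
                     capture_output=True, text=True, check=True).stdout
for target, jobs in parse_job_stats(out).items():
    for job in jobs:
        print(target, job["job_id"], job.get("write", {}).get("sum", 0))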

Single

These really boil down to just a single number in a file, but if you use "lctl get_param" the output comes in a form that is easy to parse. For example:

[COMMAND LINE]# lctl get_param osd-ldiskfs.*OST*.kbytesavail


osd-ldiskfs.scratch-OST0000.kbytesavail=10563714384
osd-ldiskfs.scratch-OST0001.kbytesavail=10457322540
osd-ldiskfs.scratch-OST0002.kbytesavail=10585374532
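
Parsing these is just a matter of splitting on '='. A small sketch (the osd-ldiskfs prefix matches the example above; ZFS-backed targets use a different osd prefix):

import subprocess

out = subprocess.run(["lctl", "get_param", "osd-ldiskfs.*OST*.kbytesavail"],
                     capture_output=True, text=True, check=True).stdout
values = {}
for line in out.splitlines():
    if "=" in line:
        name, _, value = line.partition("=")
        values[name] = int(value)
print(values)   # {'osd-ldiskfs.scratch-OST0000.kbytesavail': 10563714384, ...}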

Histogram

Some stats are histograms; these types aren't covered here. They typically seem useful on their own, without further parsing:


  • brw_stats
  • extent_stats


Interesting Statistics Files

This is a collection of various stats files that I have found useful. It is *not* complete or exhaustive. For example, you will notice these are mostly server stats; there is a wealth of client stats as well that is not detailed here. Additions or corrections are welcome.

  • Host Type = MDS, OSS, client
  • Target = the parameter name to pass to "lctl get_param"
  • Format = data format discussed above

| Host Type | Target | Format | Discussion |
| --------- | ------ | ------ | ---------- |
| MDS | mdt.*MDT*.num_exports | single | Number of exports per MDT. These are clients, including other Lustre servers. |
| MDS | mdt.*.job_stats | jobstats | Metadata jobstats. Note that with Lustre DNE you may have more than one MDT, so even if you don't, it may be wise to design any tools with that assumption. |
| OSS | obdfilter.*.job_stats | jobstats | The per-OST jobstats. |
| MDS | mdt.*.md_stats | stats | Overall metadata stats per MDT. |
| MDS | mdt.*MDT*.exports.*@*.stats | stats | Per-export metadata stats. The exports subdirectory lists client connections by NID. The exports are named by interface, which can be unwieldy. See "lltop" for an example of a script that used this data well. The sum of all the export stats should provide the same data as md_stats, but it is still very convenient to have md_stats; "ltop" uses them, for example. |
| OSS | obdfilter.*.stats | stats | Operations per OST. Read and write data is particularly interesting. |
| OSS | obdfilter.*OST*.exports.*@*.stats | stats | Per-export OSS statistics. |
| MDS | osd-*.*MDT*.filesfree or filestotal | single | Available or total inodes. |
| MDS | osd-*.*MDT*.kbytesfree or kbytestotal | single | Available or total disk space. |
| OSS | obdfilter.*OST*.kbytesfree or kbytestotal, filesfree, filestotal | single | Inodes and disk space, as in the MDS version. |
| OSS | ldlm.namespaces.filter-*.pool.stats | stats | Lustre distributed lock manager (LDLM) stats. I do not fully understand all of these, and it appears that some of the same stats are also available as 'single' format files (below). My understanding of these stats comes from http://wiki.lustre.org/doxygen/HEAD/api/html/ldlm__pool_8c_source.html |
| OSS | ldlm.namespaces.filter-*.lock_count | single | Number of locks. |
| OSS | ldlm.namespaces.filter-*.pool.granted | single | LDLM granted locks. |
| OSS | ldlm.namespaces.filter-*.pool.grant_rate | single | LDLM lock grant rate, aka 'GR'. |
| OSS | ldlm.namespaces.filter-*.pool.cancel_rate | single | LDLM lock cancel rate, aka 'CR'. |
| OSS | ldlm.namespaces.filter-*.pool.grant_speed | single | LDLM lock grant speed = grant_rate - cancel_rate. You can use this to derive the cancel rate 'CR', or (I assume) just read 'CR' from the pool stats file. |
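
As an illustration of combining two of the 'single' parameters above, here is a sketch (my own, with illustrative metric handling) that reports how full each OST is from kbytesfree and kbytestotal:

import subprocess

def get_single(pattern):
    """Return {parameter name: integer value} for a 'single' format parameter."""
    out = subprocess.run(["lctl", "get_param", pattern],
                         capture_output=True, text=True, check=True).stdout
    return {name: int(value) for name, _, value in
            (line.partition("=") for line in out.splitlines() if "=" in line)}

free = get_single("obdfilter.*OST*.kbytesfree")
total = get_single("obdfilter.*OST*.kbytestotal")
for name, kb_total in total.items():
    kb_free = free.get(name.replace("kbytestotal", "kbytesfree"), 0)
    pct_used = 100.0 * (kb_total - kb_free) / kb_total if kb_total else 0.0
    print(f"{name.split('.')[1]}: {pct_used:.1f}% used")   # e.g. scratch-OST0000: 42.3% used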

Working With the Data

Packages, tools, and techniques for working with Lustre statistics.

Open Source Monitoring Packages


Build it Yourself

Here are basic steps and techniques for working with the Lustre statistics.

  1. Gather the data on the hosts you are monitoring: deal with the syntax and extract what you want.
  2. Collect the data centrally - either pull or push it to your server, or to a collection of monitoring servers.
  3. Process the data - this may be optional or minimal.
  4. Alert on the data - optional but often useful.
  5. Present the data - allow for visualization, analysis, etc.

Some recent tools for working with metrics and time series data have made some of the more difficult parts of this task relatively easy, especially graphical presentation.

Here are details of some solutions tested or in use:

Collectl and Ganglia

Collectl supports Lustre stats. Note there have recently been some changes; Lustre support in collectl is moving to plugins: http://sourceforge.net/p/collectl/mailman/message/31992463 https://github.com/pcpiela/collectl-lustre

This process is not based on the new plugin versions, but they should work similarly.

  1. collectl does the gather, writing to a text file on the host being monitored.
  2. Ganglia does the collect via gmond and the python script 'collectl.py', and presents the data via the Ganglia web pages - there is no alerting.

See https://wiki.rocksclusters.org/wiki/index.php/Roy_Dragseth#Integrating_collectl_and_ganglia


Perl and Graphite

Graphite is a very convenient tool for storing, working with, and rendering graphs of time-series data. At SSEC we did a quick prototype for collecting and sending MDS and OSS data using perl. The choice of perl is not particularly important; python or the tool of your choice is fine.

Software Used:

  • Graphite and Carbon - http://graphite.readthedocs.org/en/latest/
  • Lustrestats.pm - perl module to parse different types of lustre stats, used by lustrestats scripts
  • lustrestats scripts - these are simply run every minute via cron on the servers you monitor. For the SSEC prototype we simply sent text data via a TCP socket (a sketch of this step appears after this list). The check_mk scripts in the next section have replaced these original test scripts.
  • Grafana - http://grafana.org - this is a dashboard and graph editor for Graphite. It is not required, as Graphite can be used directly, but it is very convenient. It not only makes creating dashboards easy, it also encourages rapid interactive analysis of the data. Note that Elasticsearch can be used to store dashboards for Grafana, but it is not required.
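
For reference, here is a minimal sketch of the "send text data via a TCP socket" step, written in Python rather than the original perl. Carbon's plaintext protocol is one line per metric, "<path> <value> <timestamp>", normally on port 2003; the hostname and metric path below are placeholders:

import socket
import time

CARBON_HOST, CARBON_PORT = "graphite.example.com", 2003    # placeholder hostname

def send_metrics(metrics, timestamp=None):
    """metrics: iterable of (path, value) pairs, sent via Carbon's plaintext protocol."""
    ts = int(timestamp if timestamp is not None else time.time())
    payload = "".join(f"{path} {value} {ts}\n" for path, value in metrics)
    with socket.create_connection((CARBON_HOST, CARBON_PORT), timeout=5) as sock:
        sock.sendall(payload.encode())

# for example, a counter parsed from obdfilter.*.stats:
send_metrics([("lustre.scratch.OST0001.write_bytes.sum", 14761109479164)])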

check_mk and Graphite

Another option, instead of sending directly with perl, is to use a check_mk local agent check.

The local agent and pnp4nagios mean that a reasonable infrastructure is already in place for alerting and for collecting performance data.
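
A minimal sketch of what such a local check could look like (my own illustration; the service naming is arbitrary, and the script would live in the agent's local checks directory). A local check prints one line per service of the form "<status> <service name> <perfdata> <status text>":

#!/usr/bin/env python3
import subprocess

out = subprocess.run(["lctl", "get_param", "obdfilter.*OST*.kbytesfree"],
                     capture_output=True, text=True, check=True).stdout
for line in out.splitlines():
    if "=" not in line:
        continue
    name, _, value = line.partition("=")
    ost = name.split(".")[1]                     # e.g. scratch-OST0000
    print(f"0 lustre_{ost}_kbytesfree kbytesfree={value} OK - {value} kB free on {ost}")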

Collecting via perl allowed us to send the timestamp from the Lustre stats (when one exists) directly to Carbon, Graphite's data collection tool. When using the check_mk method this timestamp is lost, so timestamps are instead based on when the local agent check runs. This introduces some inaccuracy - a delay of up to your sample interval.

Collecting via both methods allows you to see this difference. The graph below shows all the "export" stats summed for each method, with a derivative applied to create a rate of change. "CMK" is the check_mk data and "timestamped" is from the perl script. Plotting the raw counter data of course shows very little, but with this derived data you can see the difference.

This data was sampled once per minute:

[Image: Cmk-perl.PNG - comparison of check_mk and timestamped perl data]

For our uses at SSEC, this was acceptable. Sampling much more frequently will of course make the error smaller.


Grafana Lustre Dashboard Screenshots:

[Screenshots: metadata for multiple file systems; dashboard for a Lustre file system.]

Logstash, python, and Graphite

Brock Palen discusses this method: http://www.failureasaservice.com/2014/10/lustre-stats-with-graphite-and-logstash.html

Collectd plugin and Graphite

This talk mentions a custom collectd plugin to send stats to graphite: http://www.opensfs.org/wp-content/uploads/2014/04/D3_S31_FineGrainedFileSystemMonitoringwithLustreJobstat.pdf

It is unclear whether the source for that plugin is available.

A Note about Jobstats

If you are using a Whisper or RRD-file based solution, jobstats may not be a great fit. The strength of RRD or Whisper files is that each metric has a fixed-size file; if your metrics are per-job rather than only per-export or per-server, your number of metrics grows without bound.

Solutions anyone?


References and Links