Benchmarking Working Group

[http://wiki.opensfs.org/MD_BMG MPEE_BWG]
The task of the BWG Metadata Performance Evaluation Effort (MPEE) group is to:
# Build or select tools that allow evaluation of file system metadata performance and scalability (a minimal sketch of this kind of measurement follows this list)
# Provide tools that help detect pockets of low metadata performance when users report extreme slowness of MD operations
# Support POSIX, MPI, and transactional operations (for Ceph and DAOS) in the benchmark tools
# Address the benchmarking needs of very high-end HPC as well as small and medium installations
# Make the tools applicable to Lustre as well as Ceph, GPFS, and others
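The operation classes these tools target (create, stat/lookup, unlink, and similar) can be timed even without a dedicated benchmark. The following is a minimal, single-node Python sketch of the kind of measurement the MPEE tools automate at scale; the test directory path and file count are placeholders, not part of any proposed tool.
<pre>
#!/usr/bin/env python3
"""Minimal sketch of a metadata micro-benchmark: time create,
stat, and unlink over many small files. Illustrative only."""
import os
import time

TEST_DIR = "/mnt/lustre/md-sketch"   # placeholder: a directory on the FS under test
NUM_FILES = 10000

def timed(label, func):
    start = time.time()
    func()
    elapsed = time.time() - start
    print(f"{label}: {NUM_FILES / elapsed:.0f} ops/sec ({elapsed:.2f} s)")

os.makedirs(TEST_DIR, exist_ok=True)
paths = [os.path.join(TEST_DIR, f"f.{i}") for i in range(NUM_FILES)]

# create: open+close a zero-byte file, a pure metadata operation
timed("create", lambda: [open(p, "w").close() for p in paths])

# stat: lookup plus attribute fetch on every file
timed("stat", lambda: [os.stat(p) for p in paths])

# unlink: remove every file
timed("unlink", lambda: [os.unlink(p) for p in paths])
</pre>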
'''Current MPEE proposed list of benchmarks''':
# mdtest – widely used in HPC
# fstest – used by the PVFS/OrangeFS community
# Postmark and an MPI version – old NetApp benchmark
# Netmist and an MPI version – used by SPECsfs
# Synthetic tools – used by LANL and ORNL
# MDS-Survey – Intel's metadata workload simulator
# Any known open-source metadata tools used in HPC
# Add new Lustre statistics specific to MD operations (see the sketch after this list)
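For the Lustre statistics item, one possible starting point is the per-MDT operation counters exposed through <code>lctl get_param</code> (the <code>md_stats</code> parameter on an MDS). The parameter path and output format vary across Lustre versions, so the sketch below is illustrative rather than definitive.
<pre>
#!/usr/bin/env python3
"""Hedged sketch: snapshot Lustre MDT operation counters before and
after a workload. Assumes it runs on an MDS where
`lctl get_param mdt.*.md_stats` is available; line format may vary."""
import re
import subprocess

def md_stats():
    """Return {operation: sample_count} parsed from the MDT stats."""
    out = subprocess.run(
        ["lctl", "get_param", "-n", "mdt.*.md_stats"],
        capture_output=True, text=True, check=True,
    ).stdout
    counts = {}
    for line in out.splitlines():
        # typical line: "open   12345 samples [reqs]"
        # (the snapshot_time line does not match this pattern)
        m = re.match(r"^(\w+)\s+(\d+)\s+samples", line)
        if m:
            counts[m.group(1)] = counts.get(m.group(1), 0) + int(m.group(2))
    return counts

before = md_stats()
input("Run the metadata workload now, then press Enter...")
after = md_stats()
for op in sorted(after):
    delta = after[op] - before.get(op, 0)
    if delta:
        print(f"{op:12s} {delta}")
</pre>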
'''MPEE Use Cases'''
* '''mdtest''': tests file MD operations on the MDS (open, create, lookup, readdir); used in academia and as a tool for comparing file system metadata performance. A launcher sketch follows this list.
* '''fstest''': small I/Os and small files as well as lookups, targeting both MDS and OSS operations and MD high availability across multiple MDSs.
* '''Postmark''': old NetApp benchmark – I built an MPI version; it measures MD operations along with file-size and files-per-directory scalability.
* '''Netmist''': models any workload from statistics, including all MD and file operations. Can model workload objects for I/O performance mixes and combinations of I/O and MD. Suitable for initial evaluation of storage as well as for performance troubleshooting.
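In the spirit of the "build scripts to ease use" item in the roadmap below, here is a hedged sketch of a thin mdtest launcher. It assumes mdtest and an MPI launcher are on PATH; the flags shown (-n, -i, -d, -F, -u) are common mdtest options but should be checked against the installed version.
<pre>
#!/usr/bin/env python3
"""Hedged sketch of a thin mdtest launcher; paths, process count,
and flag availability are assumptions to adapt per site."""
import subprocess

LAUNCHER = ["mpirun", "-np", "64"]   # assumed MPI launcher and job size
TARGET = "/mnt/lustre/mdtest"        # placeholder test directory

cmd = LAUNCHER + [
    "mdtest",
    "-n", "1000",   # files/directories per MPI task
    "-i", "3",      # repeat 3 iterations and report the spread
    "-d", TARGET,   # directory on the file system under test
    "-F",           # file operations only (skip directory tests)
    "-u",           # each task works in its own subdirectory
]
print("running:", " ".join(cmd))
subprocess.run(cmd, check=True)
</pre>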
'''MPEE Proposed Roadmap'''
* Collect benchmark tool candidates from OpenSFS
* Evaluate all the tools and the workloads that can be benchmarked
* Recommend a small set of MD benchmark tools that covers the majority of MD workloads
* Collect statistics from users of MD benchmarks
* Build scripts to ease use of the recommended tools
* Write documentation for troubleshooting MD performance problems using the toolset
* Create a dedicated website for the MD tools
'''MPEE Asks from OpenSFS'''
* Share any open-source synthetic benchmark code
* Share a list of MD benchmark tools currently in use, to allow selection of the most suitable and most widely used candidates
* Share the MD operations tested, to allow building Netmist workload objects
* Share the MD workloads that create pain points for the Lustre file system
* Share cases of workloads and applications with poor MD performance
Return to the [http://wiki.opensfs.org/Benchmarking_Working_Group Benchmarking_Working_Group] page.


Benchmarking Working Group Wiki Pages

Information about the BWG is found at the opensfs.org website

LUG 2013 OpenSFS BWG Update Presentation


Benchmarking Working Group Tasks



File System I/O Hero Run Best Practices

Lead: Ben Evans
Members: Mark Nelson, Ilene Carpenter, Rick Roloff, Nathan Rutman, Liam Forbes

Hero Run Best Practices

I/O Workload Characterization

Lead: Pietro Cicotti
Members: Ilene Carpenter, Rick Mohr, Mike Booth, Ben Evans

Application I/O kernel extraction

Lead: Ilene Carpenter
Members: Jeff Layton, Pietro Cicotti, Bobbie Lind

File System Monitoring

Lead: Liam Forbes
Members: Alan Wild, Andrew Uselton, Ben Evans, Cheng Shao, Jeff Garlough, Jeff Layton, Mark Nelson, Nic Henke

Metadata Performance Evaluation

Lead: Sorin Fabish
Members: Branislav Radovanovic, Rick Roloff, Cheng Shao, Wang Yibin, Keith Mannthey, Bobbie Lind, Greg Farnum

[http://wiki.opensfs.org/MD_BMG MPEE_BWG]