Benchmarking Working Group

From OpenSFS Wiki
== Benchmarking Working Group Wiki Pages ==
__NOTOC__


Information about the BWG is found at the [http://www.opensfs.org/get-involved/benchmarking-working-group opensfs.org website].
{{Warning|As of March 4, 2016, the BWG no longer meets.}}
News from the BWG:
<pre>
Greetings,
  On behalf of Devesh and myself, I would like to thank you for your
past support of the OpenSFS Benchmarking Working Group. There will be
a conference call for the BWG Friday, March 4, at 2pm Eastern Time
(Dial-in 877-507-0723, passcode: 9927187#). We anticipate that it will
be our last. Time pressures on the members of the BWG have resulted in
very low attendance at the conference calls. As an alternative forum
for discussing benchmarking issues and asking questions, may we suggest:
- The Lustre-discuss mailing list
    http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
- The Lustre IRC
    http://webchat.freenode.net/?channels=lustre
If you feel strongly that biweekly conference calls should continue,
please do speak up. If you have any questions about the BWG or about
benchmarking open source, parallel, scalable file systems in general
do not hesitate to contact us:
Andrew C Uselton <[email protected]>
Devesh Tiwari <[email protected]>
Cheers,
The BWG co-chairs
</pre>


[http://wiki.opensfs.org/images/e/ec/LUG_2013_OpenSFS_BWG_update.pdf LUG 2013 OpenSFS BWG Update Presentation]




=== Benchmarking Working Group Tasks ===
==== [[Hero Run Best Practices|File System I/O hero run best practices]] ====

'''Lead:''' Ben Evans<br />
'''Members:''' Mark Nelson, Ilene Carpenter, Rick Roloff, Nathan Rutman, Liam Forbes
<br />
 
==== [[I/O Workload Characterization]] ====

'''Lead:''' Pietro Cicotti<br />
'''Members:''' Ilene Carpenter, Rick Mohr, Mike Booth, Ben Evans
<br />

==== [[Application I/O kernel extraction]] ====

'''Lead:''' Ilene Carpenter<br />
'''Members:''' Jeff Layton, Pietro Cicotti, Bobbie Lind
<br />

==== [[BWG_File_System_Monitoring|File System Monitoring]] ====

'''Lead:''' Liam Forbes<br />
'''Members:''' Alan Wild, Andrew Uselton, Ben Evans, Cheng Shao, Jeff Garlough, Jeff Layton, Mark Nelson, Nic Henke
<br />

==== [http://wiki.opensfs.org/MD_BMG Metadata Performance Evaluation] ====
 
'''Lead:''' Sorin Fabish<br />
'''Members:''' Branislav Radovanovic, Rick Roloff, Cheng Shao, Wang Yibin, Keith Mannthey, Bobbie Lind, Greg Farnum
<br />
 
 
The task of the BWG Metadata Performance Evaluation Effort (MPEE) group is to:
# Build or select tools that allow evaluation of file system metadata performance and scalability
# Provide tools that help detect pockets of low metadata performance when users complain of extreme slowness of MD operations
# Support POSIX, MPI, and transactional operations (for Ceph and DAOS) in the benchmark tools
# Address the benchmarking needs of very high-end HPC as well as small and medium installations
# Make the tools applicable to Lustre as well as Ceph, GPFS, and other file systems
 
 
'''Current MPEE proposed list of benchmarks''':
# mdtest – widely used in HPC
# fstest – used by the PVFS/OrangeFS community
# Postmark and an MPI version – old NetApp benchmark
# Netmist and an MPI version – used by SPECsfs
# Synthetic tools – used by LANL, ORNL
# MDS-Survey – Intel's metadata workload simulator
# Any known open source metadata tools used in HPC
# Add new Lustre statistics specific to MD operations
 
'''MPEE Use Cases'''
* '''mdtest''': tests file MD operations on the MDS (open, create, lookup, readdir); used in academia and as a comparison tool for file system MD.
* '''fstest''': small I/Os and small files as well as lookups, targeting both MDS and OSS operations and MD high availability across multiple MDSes.
* '''Postmark''': old NetApp benchmark (an MPI version has also been built); used to measure MD operations and scalability in both file size and files per directory.
* '''Netmist''': models any workload from statistics, including all MD and file operations. Can model workload objects for I/O performance mixes and combinations of I/O and MD. Suitable for initial evaluation of storage as well as for performance troubleshooting.
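All of the tools above reduce to the same core measurement: timing a batch of metadata operations per phase and reporting an ops/sec rate. As a minimal single-process illustration of that measurement (this is a hypothetical sketch, not one of the MPEE tools — real benchmarks such as mdtest drive the same phases from many MPI ranks at once to stress the MDS):

```python
import os
import tempfile
import time

def time_phase(paths, op):
    """Time one metadata phase over all paths and return ops/sec."""
    start = time.perf_counter()
    for p in paths:
        op(p)
    elapsed = time.perf_counter() - start
    return len(paths) / elapsed if elapsed > 0 else float("inf")

def metadata_microbench(n_files=1000):
    """Measure create, stat, and unlink rates in a scratch directory.

    Single-process only: it illustrates what an MD benchmark reports,
    not the multi-client scaling behavior the MPEE tools target.
    """
    with tempfile.TemporaryDirectory() as d:
        paths = [os.path.join(d, f"f{i:06d}") for i in range(n_files)]
        return {
            "create": time_phase(paths, lambda p: open(p, "w").close()),
            "stat": time_phase(paths, os.stat),
            "unlink": time_phase(paths, os.unlink),
        }

if __name__ == "__main__":
    for phase, rate in metadata_microbench().items():
        print(f"{phase:>7}: {rate:12.0f} ops/sec")
```

Run on a local filesystem this mostly measures the kernel's dentry/inode caches; pointing the scratch directory at a parallel file system mount is what exposes MDS round-trip costs.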
 
'''MPEE Proposed Roadmap'''
* Collect benchmark tool candidates from OpenSFS
* Evaluate all the tools and the workloads that can be benchmarked
* Recommend a small set of MD benchmark tools that cover the majority of MD workloads
* Collect statistics from users of MD benchmarks
* Build scripts that make the recommended tools easy to use
* Write documentation for troubleshooting MD performance problems using the toolset
* Create a dedicated website for the MD tools
 
'''MPEE Asks from OpenSFS'''
* Share any open source synthetic benchmark code
* Share a list of the MD benchmark tools currently in use, so the most suitable and most widely used candidates can be selected
* Share the MD operations tested, to allow building Netmist workload objects
* Share the MD workloads that create pain points for the Lustre file system
* Share cases of workloads and applications with poor MD performance


Return to the [[Benchmarking_Working_Group]] page.
==== [[BWG_Meeting_Minutes | Benchmarking Working Group Meeting Minutes]] ====

Latest revision as of 11:40, 20 April 2016

