Hero Run Best Practices
[[Benchmarking_Working_Group|Back to the main BWG page]]
Revision as of 11:42, 6 March 2015
Lead: Ben Evans
Members: Mark Nelson, Ilene Carpenter, Rick Roloff, Nathan Rutman, Liam Forbes
What goes into a hero run? A list of terms and concepts
The Hero Run team is tasked with:
- Establishing a process to determine the peak streaming performance of a clustered filesystem (both read and write)
- Describing what the test is doing, and detailing why we chose it over other options
- Providing an optional form to detail the system that was tested (servers, targets, interconnect, clients, etc.)
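As one concrete illustration of the streaming test above, a hero run is typically driven by an IOR invocation under MPI. The task counts, sizes, and mount path below are hypothetical placeholders, not BWG-mandated values; the flags shown (`-w`/`-r` for write/read phases, `-t` transfer size, `-b` per-task block size, `-F` file-per-process, `-e` fsync at close) are standard IOR options:

```shell
# Sketch of a streaming hero-run command line -- all numbers are illustrative.
NP=64          # total MPI tasks (clients x tasks per client)
BLOCK=4g       # data written per task; sized to defeat client-side caching
XFER=1m        # bytes moved per I/O call

CMD="mpirun -np $NP ior -w -r -t $XFER -b $BLOCK -F -e -o /mnt/testfs/ior_data"
echo "$CMD"
```

The block size should be large enough (relative to client RAM) that the read phase cannot be satisfied from cache, so the measured bandwidth reflects the filesystem rather than memory.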
The Hero Run team is optionally tasked with:
- Establishing a process to determine the peak random I/O performance of a clustered filesystem (both read and write)
- Describing what the test is doing, and detailing why we chose it over other options
- Establishing an algorithm to combine streaming read+write and random read+write results into a single figure of merit
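The combining algorithm above is left open by the charter. One candidate rule (an assumption, not a BWG decision) is the geometric mean of the four measured rates, which rewards balanced performance and penalizes a system that excels at only one access pattern:

```python
import math

def combined_score(stream_read, stream_write, rand_read, rand_write):
    """Geometric mean of the four bandwidth figures (all in the same units).

    This is one possible combining rule, offered as a sketch; the BWG has
    not settled on an algorithm.
    """
    vals = [stream_read, stream_write, rand_read, rand_write]
    return math.prod(vals) ** (1 / len(vals))

# Illustrative numbers only (GB/s): strong streaming, weaker random I/O.
print(round(combined_score(10.0, 8.0, 2.0, 1.5), 2))
```

An arithmetic mean would let a single dominant number (usually streaming write) mask poor random performance, which is why a multiplicative combination is worth considering.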
The Hero Run team is not tasked with:
- Creating a Top500 list
- Determining scaling (though the tests can be used to establish that)
- Comparing vendors' offerings or maintaining a database of results
Zero to Hero
What goes into a hero run? A list of terms and concepts
Single shared file performance
Tools
For testing standard POSIX-compliant filesystems, IOR will be used along with an MPI infrastructure. IOR is freely available. Client allocation within the cluster is left to the team performing the test.
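Once IOR finishes, its summary reports the aggregate rates on lines such as "Max Write: 5123.45 MiB/sec". A minimal sketch for pulling those figures out of a captured run (the sample text is illustrative, and the exact wording is assumed to match your IOR version):

```python
import re

# Matches IOR summary lines of the form "Max Write: 5123.45 MiB/sec".
SUMMARY_RE = re.compile(r"Max (Write|Read):\s+([\d.]+)\s+MiB/sec")

def peak_bandwidth(ior_output: str) -> dict:
    """Return the peak rates keyed by operation, e.g. {'Write': 5123.45}."""
    return {op: float(val) for op, val in SUMMARY_RE.findall(ior_output)}

sample = (
    "Max Write: 5123.45 MiB/sec (5372.22 MB/sec)\n"
    "Max Read:  6001.00 MiB/sec (6292.44 MB/sec)\n"
)
print(peak_bandwidth(sample))
```

Capturing the raw IOR output alongside the extracted numbers also satisfies the reporting task above, since the summary records the transfer size, block size, and task count used for the run.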