MD BMG/LUG-abstract

From OpenSFS Wiki
Latest revision as of 09:05, 28 February 2014

For the last two years the Benchmark Working Group has been working on selecting benchmarks suitable for parallel file systems. One of the most important of these is the metadata benchmark, and a special team was formed to look at the different performance benchmarks available in open source. In this Metadata Performance Evaluation Effort we analyze performance data collected from Lustre file systems using several benchmarks, trying to understand which of them are most relevant for the metadata performance of Lustre as well as GPFS. Over the last two years we evaluated five benchmarks running on several Lustre file systems; although the measurements are relevant for metadata, it is not clear that a single benchmark can fit all needs. In this presentation we will share our test results, discuss the merits of each benchmark used, and engage the wider user community in discussing the results to understand what kind of metadata evaluation tool users want. Our charter is to build or select tools that allow evaluation of file-system metadata performance and scalability for a wider user community; we want to test our work so far and see where we need to go to make it useful.