Check out the project on GitHub.

I recently published a new project for repeatable filesystem benchmarking. The code is based on the benchmarks in the AWS Mountpoint for Amazon S3 project and the Flexible I/O Tester (fio), adapted into a more general-purpose utility for benchmarking arbitrary filesystems.

The benchmark consists of a mix of read and write workloads, each run for ten iterations. The average result of the ten iterations is reported as the final result of the benchmark.

Read workload

Read performance is measured along two dimensions: throughput and latency. For the throughput tests, fio simulates sequential and random read workloads for 30 seconds each. Latency is measured as time to first byte, using workloads that read a single byte from an existing file and time how long the operation takes to complete. Each test is defined in a separate .fio file, and the file name indicates what the test case covers.
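To make this concrete, here is a sketch of what job definitions along these lines might look like. The job names, file paths, and option values below are illustrative assumptions, not the project's actual configs:

```ini
; Hypothetical throughput job: time-based sequential reads for 30 seconds.
[seq_read_throughput]
rw=read
bs=256k                       ; assumed block size
filename=/mnt/target/bench.bin
size=100G
time_based=1
runtime=30s

; Hypothetical time-to-first-byte job: read one byte and stop.
[time_to_first_byte]
rw=read
bs=1                          ; read a single byte
size=1                        ; stop after that byte; elapsed time approximates TTFB
filename=/mnt/target/bench.bin
```

Running `fio seq_read_throughput.fio` would execute the job and report bandwidth and completion latency for it.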

Each workload is run against both a 5MB file and a 100GB file, under the following configurations:

  • four_threads: running the workload concurrently by spawning four fio threads to do the same job.
  • direct_io: bypassing the kernel page cache by opening the files with the O_DIRECT flag. This option is only available on Linux.
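Both configurations map onto standard fio job options. The sketch below shows one plausible way to express them; the job name and values are assumptions for illustration:

```ini
; Hypothetical job combining both variants: four threads with direct I/O.
[rand_read_four_threads_direct]
rw=randread
bs=256k                       ; assumed block size
filename=/mnt/target/bench.bin
size=100G
time_based=1
runtime=30s
numjobs=4                     ; four_threads: four workers doing the same job
thread=1                      ; spawn POSIX threads rather than forked processes
direct=1                      ; direct_io: open with O_DIRECT (Linux only)
```

With `numjobs=4`, fio reports per-worker results plus an aggregate, so the combined throughput of the four threads is what gets compared across runs.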

Write workload

Write throughput is measured by using fio to simulate sequential writes with both direct and buffered I/O.
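A buffered sequential write job might look like the following sketch; as above, the name and option values are illustrative assumptions (the direct-I/O variant would add `direct=1`):

```ini
; Hypothetical buffered sequential write job.
[seq_write_buffered]
rw=write
bs=256k                       ; assumed block size
filename=/mnt/target/write.bin
size=100G
end_fsync=1                   ; assumption: fsync at the end so buffered
                              ; results include the cost of flushing to disk
```

Without `end_fsync`, a buffered run can finish while dirty pages are still in the page cache, which would overstate throughput relative to the direct-I/O variant.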
