How to Use ‘fio’ to Measure the Speed of Data Reads/Writes on Storage Devices in Linux

When managing a Linux server, especially in scenarios related to databases, virtualization, or any I/O intensive tasks, understanding the performance of your storage device is crucial. One of the most reliable tools for this purpose is fio (Flexible I/O Tester). This tool is versatile and can simulate a variety of I/O workloads to test the performance of your storage devices.

In this guide, I will show how to use fio to measure the speed of data reads/writes on storage devices. Whether you’re using a dedicated server, a VPS server, or even cloud hosting, understanding your storage performance is essential.

Let’s get started.

Installing `fio`

Before we can run any tests, we need to ensure fio is installed. On Debian-based distributions such as Ubuntu, run:

sudo apt update
sudo apt install fio
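Once installed, it is worth confirming which version you have, since flags and output details can vary slightly between releases:

```shell
# Print the installed fio version (e.g. fio-3.27)
fio --version
```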

Basic `fio` Command

To run a basic test with fio, use the following command:

fio --name=test --ioengine=sync --rw=randwrite --bs=4k --numjobs=1 --size=1G --runtime=10m --time_based

Here’s a breakdown of the parameters:

  • --name=test: This names the job “test”.
  • --ioengine=sync: This specifies the I/O engine to be used; sync performs plain read/write system calls.
  • --rw=randwrite: This sets the I/O pattern to random writes.
  • --bs=4k: This sets the block size to 4KB.
  • --numjobs=1: This runs one job/thread.
  • --size=1G: This sets the total size of I/O for the job to 1GB.
  • --runtime=10m: This limits the job to run for 10 minutes.
  • --time_based: This makes the job run for the specified time, regardless of the amount of I/O; fio loops over the file until the runtime expires.
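The same parameters can also be kept in a job file instead of being passed on the command line, which is handier for repeated runs. A minimal equivalent, assuming a file named `test.fio`:

```ini
; test.fio — equivalent of the command-line example above
[test]
ioengine=sync
rw=randwrite
bs=4k
numjobs=1
size=1G
runtime=10m
time_based
```

Run it with `fio test.fio`.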

Running this command produces output similar to the following:

test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=sync, iodepth=1
fio-3.27
Starting 1 process
Jobs: 1 (f=1): [w(1)][100.0%][w=4056KiB/s][w=1014 IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=12345: Fri Oct 13 10:10:10 2023
  write: IOPS=1012, BW=4048KiB/s (4147kB/s)(600MiB/152022msec)
    clat (usec): min=2, max=1058, avg= 4.66, stdev= 2.31
     lat (usec): min=2, max=1058, avg= 4.68, stdev= 2.32
    clat percentiles (usec):
     |  1.00th=[    3],  5.00th=[    3], 10.00th=[    3], 20.00th=[    4],
     | 30.00th=[    4], 40.00th=[    4], 50.00th=[    4], 60.00th=[    4],
     | 70.00th=[    4], 80.00th=[    5], 90.00th=[    5], 95.00th=[    5],
     | 99.00th=[    6], 99.50th=[    7], 99.90th=[   11], 99.95th=[   13],
     | 99.99th=[   21]
  lat (usec)   : 4=60.01%, 10=39.96%, 20=0.02%, 50=0.01%
  cpu          : usr=1.23%, sys=4.56%, ctx=154011, majf=0, minf=8
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,153600,0,0 short=153600,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=4048KiB/s (4147kB/s), 4048KiB/s-4048KiB/s (4147kB/s-4147kB/s), io=600MiB (629MB), run=152022-152022msec

Disk stats (read/write):
  sda: ios=0/153594, merge=0/789, ticks=0/615, in_queue=615, util=0.40%

Read and Write Tests

To measure both read and write performance, you can adjust the --rw parameter:

  • For random reads: --rw=randread
  • For random writes: --rw=randwrite
  • For sequential reads: --rw=read
  • For sequential writes: --rw=write
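For instance, a sequential read test with otherwise identical parameters looks like this, and fio can also mix reads and writes in a single job via --rw=randrw (the 70/30 split below is just an illustrative choice):

```shell
# Sequential read test, same size and duration as the earlier example
fio --name=seqread --ioengine=sync --rw=read --bs=4k --numjobs=1 --size=1G --runtime=10m --time_based

# Mixed workload: 70% random reads, 30% random writes
fio --name=mixed --ioengine=sync --rw=randrw --rwmixread=70 --bs=4k --size=1G
```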

Interpreting the Results

Once the test completes, fio will provide a detailed output. The most important metrics to note are:

1. Test Configuration:

test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=sync, iodepth=1

  • rw=randwrite: The test was set to perform random writes.
  • bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B: The block size for read, write, and total operations was set to 4KB.
  • ioengine=sync: The I/O operations were performed synchronously.
  • iodepth=1: The depth of the I/O operations queue was set to 1, meaning one operation is processed at a time.

2. Performance Metrics:

write: IOPS=1012, BW=4048KiB/s (4147kB/s)(600MiB/152022msec)

  • IOPS=1012: The storage device achieved an I/O performance of 1012 Input/Output Operations Per Second.
  • BW=4048KiB/s (4147kB/s): The bandwidth or data transfer rate was approximately 4048 KiB/s (or 4147 kB/s).
  • 600MiB/152022msec: A total of 600 MiB of data was written in 152,022 milliseconds (or about 152 seconds).
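These three numbers are consistent with one another: IOPS multiplied by the block size should roughly equal the bandwidth, which is a quick sanity check you can apply to any fio result:

```shell
# 1012 IOPS x 4 KiB block size ≈ 4048 KiB/s of bandwidth
echo "$((1012 * 4)) KiB/s"
```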

3. Latency Metrics:

clat (usec): min=2, max=1058, avg= 4.66, stdev= 2.31

  • clat (usec): This refers to the completion latency, which is the time taken to complete an I/O operation.
  • min=2, max=1058: The fastest I/O operation took 2 microseconds, while the slowest took 1058 microseconds.
  • avg= 4.66: On average, I/O operations took about 4.66 microseconds.
  • stdev= 2.31: The standard deviation, which measures the variability of the latency, was 2.31 microseconds.

4. Latency Percentiles:

The percentiles provide insights into the distribution of latency values. For instance:

  • 1.00th=[3]: 1% of the I/O operations had a latency of 3 microseconds or less.
  • 99.99th=[21]: 99.99% of the I/O operations had a latency of 21 microseconds or less.
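If you want to process these metrics in a script rather than read them by eye, fio can emit its full result set as JSON instead of the human-readable report shown above:

```shell
# Machine-readable output; --output writes the report to a file
fio --name=test --ioengine=sync --rw=randwrite --bs=4k --size=1G \
    --output-format=json --output=result.json
```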

5. CPU Usage:

cpu: usr=1.23%, sys=4.56%, ctx=154011, majf=0, minf=8

  • usr=1.23%: 1.23% of the CPU was used for user-level processes during the test.
  • sys=4.56%: 4.56% of the CPU was used for system-level processes.
  • ctx=154011: The number of context switches that occurred during the test.
  • majf=0, minf=8: Major and minor page faults. Major faults occur when data has to be retrieved from disk (not ideal), while minor faults retrieve data that is elsewhere in memory.

6. Disk Stats:

sda: ios=0/153594, merge=0/789, ticks=0/615, in_queue=615, util=0.40%

  • ios=0/153594: No read operations (ios=0) and 153,594 write operations were performed on the sda disk.
  • merge=0/789: No read merges and 789 write merges occurred. Merges refer to operations combined before being processed.
  • ticks=0/615: Time spent on reads and writes. In this case, 615 milliseconds were spent on writes.
  • in_queue=615: Total time the operations were queued.
  • util=0.40%: Disk utilization during the test.

Interpretation:

The storage device being tested has a decent performance for random write operations with a block size of 4KB. The IOPS and bandwidth values are satisfactory, and the average latency is relatively low. The CPU usage is minimal, indicating that the test did not overly strain the system. The disk stats show a high number of write operations, with a minimal amount of time spent on these operations, leading to a low disk utilization.

Commands Mentioned

  • sudo apt update – Updates the package list for upgrades.
  • sudo apt install fio – Installs the `fio` tool.
  • fio --name=test … – Runs a basic `fio` test with specified parameters.

FAQ

  1. What is `fio` used for?

    `fio` stands for Flexible I/O Tester and is a tool used to measure and visualize the I/O performance of storage devices on Linux systems. It can simulate various I/O workloads to test the performance of hard drives, SSDs, and other storage devices.

  2. How do I install `fio` on other Linux distributions?

    The installation command provided is for Debian-based distributions. For other distributions like CentOS or Fedora, you can use `sudo yum install fio` or `sudo dnf install fio`. For Arch Linux, use `sudo pacman -S fio`. Always refer to the official documentation of your distribution for specific installation instructions.

  3. Can `fio` be used on non-Linux systems?

    Yes, `fio` is versatile and can be used on various operating systems, including Windows, macOS, and FreeBSD. However, the installation and usage might differ slightly based on the platform.

  4. What is the difference between random and sequential I/O patterns?

    Random I/O refers to operations where data blocks are read or written in a non-sequential manner, often scattered across the storage device. Sequential I/O, on the other hand, refers to operations where data blocks are read or written in a consecutive, ordered manner. SSDs generally perform better with random I/O compared to traditional HDDs.

  5. How can I optimize my storage device for better performance?

    Optimizing storage performance can involve several strategies, including defragmenting HDDs, ensuring firmware is updated, using appropriate file systems, and ensuring alignment of SSD partitions. Additionally, monitoring and limiting the number of I/O-intensive tasks, optimizing application and database queries, and ensuring adequate RAM to reduce swap usage can also enhance performance.


Conclusion

Understanding the performance of your storage device is crucial for optimal server operation, especially in I/O-intensive tasks. The fio tool provides a comprehensive way to measure and visualize the I/O performance of your storage devices on a Linux machine. By simulating various I/O workloads, you can gain insights into how your storage device performs under different conditions.

Whether you’re running a high-traffic website on Apache, managing a large database on Nginx, or optimizing a web application on LiteSpeed, understanding your storage’s I/O performance is invaluable. With the knowledge gained from these tests, you can make informed decisions about hardware upgrades, storage configurations, and other optimizations to ensure your server operates at its best.
