fio parameter overview
Run fio --help to see every parameter; the full documentation is on the official website. A few common parameters are described below (a combined sketch follows the list):
filename=/dev/emcpowerb The test target; either a file on a filesystem or a raw device, e.g. -filename=/dev/sda2 or -filename=/dev/sdb
direct=1 Bypass the machine's own buffer cache during the test so the results reflect the device more faithfully
rw=randread Test random-read I/O
rw=randwrite Test random-write I/O
rw=randrw Test mixed random read and write I/O
rw=read Test sequential-read I/O
rw=write Test sequential-write I/O
rw=rw Test mixed sequential read and write I/O
bs=4k Each I/O uses a 4k block
bsrange=512-2048 Same as above, but specifies a range of block sizes
size=5g The test file for this run is 5g, written with 4k I/Os
numjobs=30 Run 30 test jobs for this run
runtime=1000 Run for 1000 seconds; if omitted, fio keeps going until the whole 5g file has been written in 4k I/Os
ioengine=psync Use the psync I/O engine; to use the libaio engine, install the libaio-devel package (yum install libaio-devel)
rwmixwrite=30 In mixed read/write mode, writes make up 30%
group_reporting Affects how results are displayed: aggregate the statistics of all jobs in the group
In addition:
lockmem=1g Use only 1g of memory for the test
zero_buffers Initialize the I/O buffers with zeros
nrfiles=8 Number of files generated per job
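As a hedged example of how these parameters combine into one command, a sketch against a scratch file; /tmp/fio.testfile and all values here are assumptions for illustration only:
fio -name=mix-test -filename=/tmp/fio.testfile -direct=1 -rw=randrw -rwmixwrite=30 -bs=4k -size=1G -numjobs=4 -runtime=60 -ioengine=psync -group_reporting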
The ioengine parameter tells fio how to issue the test I/O; choose the type that matches the actual workload (see the sketch after this list). The main options are:
- libaio - Linux native asynchronous I/O; this is what we use most often here to measure disk throughput and latency
- sync - the ordinary read / write calls
- vsync - readv / writev; adjacent I/Os are merged into a single vectored call
- psync - pread / pwrite, i.e. synchronous I/O at an explicit offset (no separate seek as with plain read / write)
- pvsync / pvsync2 - preadv / pwritev, and preadv2 / pwritev2
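A minimal sketch of how the engine choice changes the command line, assuming /dev/sdc is a spare device used only for testing; the other values are illustrative, not recommendations:
# asynchronous engine, real queue depth of 32
fio -name=aio-randread -filename=/dev/sdc -direct=1 -rw=randread -bs=4k -iodepth=32 -ioengine=libaio -runtime=60 -time_based -group_reporting
# synchronous engine; the effective queue depth is capped at 1 regardless of iodepth
fio -name=psync-randread -filename=/dev/sdc -direct=1 -rw=randread -bs=4k -ioengine=psync -runtime=60 -time_based -group_reporting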
Testing in practice
The MogDB database writes to the filesystem in 8 KB units, so we measure how well a local disk, a SATA disk and an SSD handle sequential writes with 8 KB blocks.
-- Sequential write, 8K blocks, single job, 10G, using the gaussdata volume as an example
[root@test 8kseq]# more 8kseq.fio
[global]
bs=8k
ioengine=libaio
iodepth=4
size=10G
direct=1
runtime=60
directory=/gaussdata
filename=8kseq
[seq-write]
rw=write
stonewall
-- Sequential write, 8K blocks, 10 jobs, 1G written per job, using the gaussdata volume as an example
[root@test 8kseq]# more 8kseq.fio.10
[global]
bs=8k
ioengine=libaio
iodepth=4
direct=1
runtime=100
directory=/gaussdata
nrfiles=1
filesize=1G
numjobs=10
[seq-write]
rw=write
stonewall
-- Sequential write, 8K blocks, 50 jobs, 1G written per job, using the gaussdata volume as an example
[root@test 8kseq]# more 8kseq.fio.50
[global]
bs=8k
ioengine=libaio
iodepth=4
direct=1
runtime=100
directory=/gaussdata
nrfiles=1
filesize=1G
numjobs=50
[seq-write]
rw=write
stonewall
Test results
Local disk
SATA disk
SSD disk
The following output is analyzed as an example:
rand-write: (g=0): rw=write, bs=(R) 64.0KiB-64.0KiB, (W) 64.0KiB-64.0KiB, (T) 64.0KiB-64.0KiB, ioengine=sync, iodepth=4
fio-3.33
Starting 1 process
rand-write: Laying out IO file (1 file / 10240MiB)
note: both iodepth >= 1 and synchronous I/O engine are selected, queue depth will be capped at 1
Jobs: 1 (f=1): [W(1)][100.0%][w=51.9MiB/s][w=830 IOPS][eta 00m:00s]
rand-write: (groupid=0, jobs=1): err= 0: pid=58951: Wed Nov 9 15:06:02 2022
write: IOPS=885, BW=55.3MiB/s (58.0MB/s)(5534MiB/100001msec); 0 zone resets
clat (usec): min=872, max=43635, avg=1128.27, stdev=750.92
lat (usec): min=873, max=43635, avg=1128.71, stdev=750.92
clat percentiles (usec):
| 1.00th=[ 947], 5.00th=[ 988], 10.00th=[ 1012], 20.00th=[ 1037],
| 30.00th=[ 1057], 40.00th=[ 1074], 50.00th=[ 1106], 60.00th=[ 1123],
| 70.00th=[ 1139], 80.00th=[ 1172], 90.00th=[ 1221], 95.00th=[ 1287],
| 99.00th=[ 1467], 99.50th=[ 1614], 99.90th=[ 2704], 99.95th=[ 5342],
| 99.99th=[40109]
bw ( KiB/s): min=46080, max=60160, per=100.00%, avg=56741.95, stdev=2691.56, samples=199
iops : min= 720, max= 940, avg=886.59, stdev=42.06, samples=199
lat (usec) : 1000=7.86%
lat (msec) : 2=91.92%, 4=0.16%, 10=0.02%, 50=0.04%
cpu : usr=0.08%, sys=1.03%, ctx=88572, majf=0, minf=7
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=0,88547,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=4
Run status group 0 (all jobs):
WRITE: bw=55.3MiB/s (58.0MB/s), 55.3MiB/s-55.3MiB/s (58.0MB/s-58.0MB/s), io=5534MiB (5803MB), run=100001-100001msec
Disk stats (read/write):
dm-14: ios=0/89284, merge=0/0, ticks=0/98850, in_queue=98850, util=98.86%, aggrios=0/88735, aggrmerge=0/683, aggrticks=0/99534, aggrin_queue=1140, aggrutil=99.09%
dm-13: ios=0/88735, merge=0/683, ticks=0/99534, in_queue=1140, util=99.09%, aggrios=0/29578, aggrmerge=0/0, aggrticks=0/33014, aggrin_queue=380, aggrutil=34.27%
sdm: ios=0/29575, merge=0/0, ticks=0/33411, in_queue=540, util=33.42%
sdn: ios=0/29585, merge=0/0, ticks=0/33342, in_queue=540, util=34.27%
sdl: ios=0/29575, merge=0/0, ticks=0/32289, in_queue=60, util=30.98%
While running, fio shows the status of the jobs it has created. In this example:
Jobs: 1 (f=1): [W(1)][100.0%][w=51.9MiB/s][w=830 IOPS][eta 00m:00s]
One thread is currently running and performing I/O, and the number of open files (f=) is 1.
The characters in the first set of brackets show each thread's current state; here it is W, sequential write. R means sequential read, r random read, w random write, M mixed sequential read/write, and m mixed random read/write.
The second set of brackets shows the estimated completion percentage; since the command has already finished, it reads 100%. The third set shows the current read and write I/O rates as bandwidth, the fourth set shows the same values as IOPS, and finally the estimated remaining runtime of the job is displayed.
When fio finishes (or is interrupted with Ctrl-C), it prints the statistics for each thread, each group of threads, and each disk, in that order.
write: IOPS=885, BW=55.3MiB/s (58.0MB/s)(5534MiB/100001msec); 0 zone resets
IOPS is the average number of I/Os performed per second. BW is the average bandwidth rate, also called throughput; in the example, 55.3 MiB/s = 58.0 MB/s. The last two values in parentheses are the total I/O performed and the thread runtime.
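A quick arithmetic cross-check of how these numbers relate (the 64 KiB block size comes from the job line at the top of the report):
5534 MiB / 100.001 s ≈ 55.3 MiB/s (the reported BW)
55.3 MiB/s ÷ 64 KiB ≈ 885 I/Os per second (the reported IOPS)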
clat (usec): min=872, max=43635, avg=1128.27, stdev=750.92
lat (usec): min=873, max=43635, avg=1128.71, stdev=750.92
Some reports also show an slat line above these: slat is the submission latency, i.e. the time taken to submit the I/O (minimum, maximum, mean, standard deviation), in the unit shown in parentheses (usec here; fio may also report nsec or msec).
clat, named like slat, is the completion latency: the time from submission to completion of the I/O.
lat is the total latency. Named like slat and clat, it is the time from when fio created the I/O unit until the I/O operation completed, i.e. the response time.
bw ( KiB/s): min=46080, max=60160, per=100.00%, avg=56741.95, stdev=2691.56, samples=199
bw shows bandwidth statistics based on samples, including the number of samples taken (samples) and the approximate percentage of the group's total aggregate bandwidth this thread received (per).
iops : min= 720, max= 940, avg=886.59, stdev=42.06, samples=199
IOPS statistics based on samples, with the same fields as bw.
lat (usec) : 1000=7.86%
lat (msec) : 2=91.92%, 4=0.16%, 10=0.02%, 50=0.04%
The distribution of I/O completion latencies, i.e. the time from when the I/O left fio until it completed. In this example, 1000=7.86% on the usec line means that 7.86% of the I/O completed in under 1000 us, and 2=91.92% on the msec line means that 91.92% took between 1 and 2 ms.
cpu : usr=0.08%, sys=1.03%, ctx=88572, majf=0, minf=7
CPU usage: user and system time, the number of context switches this thread went through (ctx), and finally the number of major (majf) and minor (minf) page faults.
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
The distribution of I/O depths over the job lifetime. The numbers are divided into powers of 2, and each entry covers the depths from that value up to, but not including, the next entry; e.g. 1= covers depth 1 and 2= covers depths 2-3.
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
submit shows how many I/Os were submitted in a single submit call. Each entry denotes that amount and below, down to the previous entry; for example, 4=100% means that every submit call submitted between 1 and 4 I/Os.
complete uses the same buckets as submit, but counts completions instead.
issued rwts: total=0,88547,0,0 short=0,0,0,0 dropped=0,0,0,0
The number of read/write/trim requests issued, and how many of them were short or dropped.
After each client has been listed, the group statistics are printed. They look like this:
Run status group 0 (all jobs):
WRITE: bw=55.3MiB/s (58.0MB/s), 55.3MiB/s-55.3MiB/s (58.0MB/s-58.0MB/s), io=5534MiB (5803MB), run=100001-100001msec
bw is the aggregate bandwidth of the threads in this group, followed by the minimum and maximum bandwidth among all threads in the group.
io is the aggregate I/O performed by all threads in this group, in the same format as bw.
run is the shortest and longest runtime among the threads in this group.
Finally, the disk statistics are printed. These are Linux specific and look like this:
Disk stats (read/write):
dm-14: ios=0/89284, merge=0/0, ticks=0/98850, in_queue=98850, util=98.86%, aggrios=0/88735, aggrmerge=0/683, aggrticks=0/99534, aggrin_queue=1140, aggrutil=99.09%
dm-13: ios=0/88735, merge=0/683, ticks=0/99534, in_queue=1140, util=99.09%, aggrios=0/29578, aggrmerge=0/0, aggrticks=0/33014, aggrin_queue=380, aggrutil=34.27%
sdm: ios=0/29575, merge=0/0, ticks=0/33411, in_queue=540, util=33.42%
sdn: ios=0/29585, merge=0/0, ticks=0/33342, in_queue=540, util=34.27%
sdl: ios=0/29575, merge=0/0, ticks=0/32289, in_queue=60, util=30.98%
Each value is printed for reads and writes, with the read value first.
ios is the number of I/Os performed by all groups.
merge is the number of merges performed by the I/O scheduler.
ticks is the number of ticks the disk was busy.
in_queue is the total time spent in the disk queue.
util is the disk utilization. 100% means the disk was busy all the time; 50% means it was idle half of the time.
Examples
-- Random read
-- Job name randread: direct I/O against the device (bypassing the buffer cache), queue depth 64, random read, libaio engine, 4k block size, 1G of I/O per job, 1 job, 1000 s runtime, device /dev/sdc
fio -name=randread -direct=1 -iodepth=64 -rw=randread -ioengine=libaio -bs=4k -size=1G -numjobs=1 -runtime=1000 -group_reporting -filename=/dev/sdc
# Random write; /dev/sdc is used purely for testing
fio -name=randwrite -direct=1 -iodepth=64 -rw=randwrite -ioengine=libaio -bs=4k -size=1G -numjobs=1 -runtime=1000 -group_reporting -filename=/dev/sdc
# Sequential read
fio -name=read -direct=1 -iodepth=64 -rw=read -ioengine=libaio -bs=4k -size=1G -numjobs=1 -runtime=1000 -group_reporting -filename=/dev/sdc
# Sequential write; /dev/sdc is used purely for testing
fio -name=write -direct=1 -iodepth=64 -rw=write -ioengine=libaio -bs=4k -size=1G -numjobs=1 -runtime=1000 -group_reporting -filename=/dev/sdc
Parsing the test report
[root@kickstart ~]# fio -name=randread -direct=1 -iodepth=64 -rw=randread -ioengine=libaio -bs=4k -size=5G -numjobs=1 -runtime=10 -group_reporting -filename=/dev/sdb
randread: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.1
Starting 1 process
Jobs: 1 (f=1): [r(1)][100.0%][r=115MiB/s,w=0KiB/s][r=29.5k,w=0 IOPS][eta 00m:00s] <<<<
randread: (groupid=0, jobs=1): err= 0: pid=22902: Tue May 31 17:15:38 2022
read: IOPS=29.5k, BW=115MiB/s (121MB/s)(1152MiB/10001msec)
slat (usec): min=18, max=716, avg=31.02, stdev= 8.09
clat (usec): min=204, max=4219, avg=2137.73, stdev=91.47
lat (usec): min=716, max=4322, avg=2169.35, stdev=92.46
clat percentiles (usec):
| 1.00th=[ 2008], 5.00th=[ 2040], 10.00th=[ 2057], 20.00th=[ 2073],
| 30.00th=[ 2089], 40.00th=[ 2114], 50.00th=[ 2114], 60.00th=[ 2147],
| 70.00th=[ 2147], 80.00th=[ 2180], 90.00th=[ 2212], 95.00th=[ 2245],
| 99.00th=[ 2474], 99.50th=[ 2638], 99.90th=[ 3032], 99.95th=[ 3195],
| 99.99th=[ 3458]
bw ( KiB/s): min=116000, max=119080, per=100.00%, avg=117930.00, stdev=827.54, samples=20
iops : min=29000, max=29770, avg=29482.45, stdev=206.91, samples=20
lat (usec) : 250=0.01%, 750=0.01%, 1000=0.01%
lat (msec) : 2=0.67%, 4=99.32%, 10=0.01%
cpu : usr=0.47%, sys=99.52%, ctx=35, majf=0, minf=94
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued rwt: total=294801,0,0, short=0,0,0, dropped=0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
Run status group 0 (all jobs):
READ: bw=115MiB/s (121MB/s), 115MiB/s-115MiB/s (121MB/s-121MB/s), io=1152MiB (1208MB), run=10001-10001msec
Disk stats (read/write):
sdb: ios=291783/0, merge=0/0, ticks=45307/0, in_queue=45335, util=99.10%
Jobs: 1 (f=1): [r(1)][100.0%][r=115MiB/s,w=0KiB/s][r=29.5k,w=0 IOPS][eta 00m:00s]
The characters in the first set of brackets show each thread's current state. The first character corresponds to the first job defined in the job file, and so on. The possible values (in typical life-cycle order) are:
P Thread setup, but not started.
C Thread created.
I Thread initialized, waiting or generating necessary data.
p Thread running pre-reading file(s).
/ Thread is in ramp period.
R Running, doing sequential reads.
r Running, doing random reads.
W Running, doing sequential writes.
w Running, doing random writes.
M Running, doing mixed sequential reads/writes.
m Running, doing mixed random reads/writes.
D Running, doing sequential trims.
d Running, doing random trims.
F Running, currently waiting for fsync(2).
V Running, doing verification of written data.
f Thread finishing.
E Thread exited, not reaped by main thread yet.
- Thread reaped.
X Thread reaped, exited with an error.
K Thread reaped, exited due to signal.
read: IOPS=29.5k, BW=115MiB/s (121MB/s)(1152MiB/10001msec)
The string before the colon (read/write/trim) shows which I/O direction the statistics are for. IOPS is the average number of I/Os performed per second, and BW is the average bandwidth rate.
slat (usec): min=18, max=716, avg=31.02, stdev= 8.09
slat is the submission latency: the time it took to submit the I/O. This line is not shown for synchronous I/O engines, because there slat is effectively the completion latency (queuing and completion are a single operation).
clat (usec): min=204, max=4219, avg=2137.73, stdev=91.47
clat is the completion latency: the time from submission to completion of the I/O. For synchronous I/O, clat is usually equal (or very close) to 0, because the time from submit to complete is basically just CPU time (the I/O has already been done; see the slat explanation).
lat (usec): min=716, max=4322, avg=2169.35, stdev=92.46
lat is the total latency. It has the same fields as slat and clat and measures the time from when fio created the I/O unit until the I/O operation completed.
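As a rough cross-check, for each I/O the total latency is approximately slat + clat, so the averages line up as well: 31.02 us + 2137.73 us ≈ 2168.75 us, close to the reported average lat of 2169.35 us.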
bw ( KiB/s): min=116000, max=119080, per=100.00%, avg=117930.00, stdev=827.54, samples=20
bw shows bandwidth statistics based on samples. It has the same fields as the xlat statistics, plus the number of samples taken (samples) and the approximate percentage of the group's total aggregate bandwidth this thread received (per). The last value is only really meaningful when the threads in this group run against the same disk, because only then do they compete for disk access.
iops : min=29000, max=29770, avg=29482.45, stdev=206.91, samples=20
iops shows IOPS statistics based on samples.
lat (usec) : 250=0.01%, 750=0.01%, 1000=0.01%
lat (msec) : 2=0.67%, 4=99.32%, 10=0.01%
lat (nsec/usec/msec) is the distribution of I/O completion latencies: the time from when the I/O left fio until it completed. Unlike the separate read/write/trim sections above, the data here and in the remaining sections covers all I/O of the reporting group. As an example from the fio documentation, 250=0.04% means that 0.04% of the I/O completed in under 250 us, and 500=64.11% means that 64.11% took between 250 and 499 us.
cpu : usr=0.47%, sys=99.52%, ctx=35, majf=0, minf=94
CPU usage: user and system time, the number of context switches this thread went through, and finally the number of major and minor page faults. The CPU percentages are averages over the jobs in the reporting group, while the context-switch and fault counters are summed.
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
IO depths is the distribution of I/O depths over the job lifetime.
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
submit shows how many pieces of I/O were submitted in a single submit call. Each entry denotes that amount and below, down to the previous entry; for example, 16=100% means that every submit call submitted between 9 and 16 I/Os. Note that the range covered by a submit entry can differ from the range covered by the equivalent depth-distribution entry.
issued rwt: total=294801,0,0, short=0,0,0, dropped=0,0,0
The number of read/write/trim requests issued, and how many of them were short or dropped.
latency : target=0, window=0, percentile=100.00%, depth=64
These values are for the latency_target and related options. When those options are enabled, this section describes the I/O depth required to meet the specified latency target.
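A hedged sketch of how those options could be combined on the command line (the device /dev/sdc, the 2 ms target and the other values are assumptions for illustration):
fio -name=lat-probe -filename=/dev/sdc -direct=1 -rw=randread -bs=4k -ioengine=libaio -iodepth=64 -runtime=60 -time_based -latency_target=2ms -latency_window=5s -latency_percentile=99 -group_reporting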
After each client has been listed, the group statistics are printed. They look like this:
Run status group 0 (all jobs):
READ: bw=20.9MiB/s (21.9MB/s), 10.4MiB/s-10.8MiB/s (10.9MB/s-11.3MB/s), io=64.0MiB (67.1MB), run=2973-3069msec
WRITE: bw=1231KiB/s (1261kB/s), 616KiB/s-621KiB/s (630kB/s-636kB/s), io=64.0MiB (67.1MB), run=52747-53223msec
For each data direction it prints:
bw : the aggregate bandwidth of the threads in this group, followed by the minimum and maximum bandwidth among all threads in the group. The values outside the parentheses are in power-of-2 units, and the values inside are the power-of-10 equivalents.
io : the aggregate I/O performed by all threads in this group, in the same format as bw.
run : the shortest and longest runtime among the threads in this group.
Hands-on
In real workloads, I/O is normally a mix of reads and writes, so how do we simulate an application's I/O?
Record the I/O on a device with blktrace, then replay the recorded I/O with fio.
# Trace the I/O on the device; during the trace, a copy of centos7.6.iso was made and packages inside the ISO image were read
blktrace /dev/sdb
# Convert the I/O trace into a binary file
blkparse sdb -d sdb.bin
# Replay
fio --name=replay --filename=/dev/sdb --direct=1 --read_iolog=sdb.bin
replay: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1
fio-3.1
Starting 1 process
Jobs: 1 (f=1): [M(1)][100.0%][r=0KiB/s,w=0KiB/s][r=0,w=0 IOPS][eta 00m:00s]
replay: (groupid=0, jobs=1): err= 0: pid=23493: Tue May 31 21:42:45 2022
read: IOPS=870, BW=171MiB/s (180MB/s)(4825MiB/28183msec)
clat (usec): min=59, max=13441, avg=426.95, stdev=267.83
lat (usec): min=59, max=13442, avg=427.13, stdev=267.83
clat percentiles (usec):
| 1.00th=[ 161], 5.00th=[ 192], 10.00th=[ 239], 20.00th=[ 306],
| 30.00th=[ 379], 40.00th=[ 400], 50.00th=[ 416], 60.00th=[ 433],
| 70.00th=[ 457], 80.00th=[ 490], 90.00th=[ 562], 95.00th=[ 660],
| 99.00th=[ 971], 99.50th=[ 1221], 99.90th=[ 3949], 99.95th=[ 4359],
| 99.99th=[11076]
bw ( KiB/s): min=27608, max=357844, per=100.00%, avg=266981.08, stdev=63130.40, samples=37
iops : min= 136, max= 1766, avg=1325.11, stdev=316.99, samples=37
write: IOPS=573, BW=170MiB/s (178MB/s)(4779MiB/28183msec)
clat (usec): min=47, max=18035, avg=232.20, stdev=344.15
lat (usec): min=48, max=18065, avg=236.51, stdev=345.39
clat percentiles (usec):
| 1.00th=[ 57], 5.00th=[ 68], 10.00th=[ 72], 20.00th=[ 83],
| 30.00th=[ 124], 40.00th=[ 182], 50.00th=[ 262], 60.00th=[ 277],
| 70.00th=[ 293], 80.00th=[ 314], 90.00th=[ 351], 95.00th=[ 396],
| 99.00th=[ 570], 99.50th=[ 709], 99.90th=[ 1467], 99.95th=[ 8717],
| 99.99th=[16188]
bw ( KiB/s): min= 3648, max=532424, per=100.00%, avg=271786.28, stdev=93116.18, samples=36
iops : min= 201, max= 1698, avg=896.00, stdev=336.83, samples=36
lat (usec) : 50=0.01%, 100=10.01%, 250=15.25%, 500=63.58%, 750=9.10%
lat (usec) : 1000=1.47%
lat (msec) : 2=0.40%, 4=0.10%, 10=0.06%, 20=0.03%
cpu : usr=8.48%, sys=23.75%, ctx=41002, majf=0, minf=26
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwt: total=24523,16175,0, short=0,0,0, dropped=0,0,0
latency : target=0, window=0, percentile=100.00%, depth=24
Run status group 0 (all jobs):
READ: bw=171MiB/s (180MB/s), 171MiB/s-171MiB/s (180MB/s-180MB/s), io=4825MiB (5059MB), run=28183-28183msec
WRITE: bw=170MiB/s (178MB/s), 170MiB/s-170MiB/s (178MB/s-178MB/s), io=4779MiB (5011MB), run=28183-28183msec
Disk stats (read/write):
sdb: ios=24523/16134, merge=0/1, ticks=10603/3856, in_queue=14460, util=51.27%
Finally, by combining blktrace with fio, we obtained a test report that simulates the application's I/O.
Script purpose
We need to benchmark the storage disks with fio, but there are dozens of disks and editing the command for each one by hand is too tedious. The script below loops over the disks: the /dev/sdX paths can be obtained from fdisk -l, a while loop reads one disk path per iteration and runs fio against it, and all output is collected into a single temporary file, avoiding repetitive manual runs.
Script content
-- Requirement: exclude the system disks and write everything to one log file;
-- which disks are system disks has to be checked by yourself
As an example, /dev/sda and /dev/sdc are excluded:
v_disk_list=$(fdisk -l|grep /dev|grep Disk|awk -F' ' '{print $2}'|awk -F':' '{print $1}'|grep -v loop0)
v_check_time_s=10
v_fio_exec_check=$(echo "${v_disk_list}" |while read v_fio_check_job_disk;
do
echo "##################################################################################################"
echo "Next Check" "${v_fio_check_job_disk}" "disk fio 8k"
if [ "${v_fio_check_job_disk}" == '/dev/sda' -o "${v_fio_check_job_disk}" == '/dev/sdc' ]
then
echo "The Disk Skip fio test " "${v_fio_check_job_disk}"
echo "##################################################"
else
fio --name=${v_fio_check_job_disk}_check --bs=8k --ioengine=libaio --iodepth=16 --direct=1 --rw=randread --time_based --runtime=${v_check_time_s} --group_reporting --numjobs=1 --filename=${v_fio_check_job_disk}
fi
done;)
v_date=$(date "+%Y%m%d.%H%M%S")
echo "${v_fio_exec_check}">/tmp/fio_test_$v_date.txt
Script usage example
After running the script above, the collected output file looks like this:
[root@oel7n01 ~]# cat /tmp/fio_test_20220823.113657.txt
##################################################################################################
Next Check /dev/sda disk fio 8k
The Disk Skip fio test /dev/sda
##################################################
##################################################################################################
Next Check /dev/sdg disk fio 8k
/dev/sdg_check: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 8192B-8192B, (T) 8192B-8192B, ioengine=libaio, iodepth=16
fio-2.20
Starting 1 process
/dev/sdg_check: (groupid=0, jobs=1): err= 0: pid=41308: Tue Aug 23 11:36:04 2022
read: IOPS=233, BW=1869KiB/s (1914kB/s)(18.5MiB/10118msec)
slat (usec): min=17, max=529, avg=48.52, stdev=31.26
clat (usec): min=86, max=292048, avg=68371.57, stdev=78271.46
lat (usec): min=152, max=292114, avg=68420.62, stdev=78294.14
clat percentiles (usec):
| 1.00th=[ 322], 5.00th=[ 330], 10.00th=[ 334], 20.00th=[ 350],
| 30.00th=[ 366], 40.00th=[ 390], 50.00th=[ 516], 60.00th=[85504],
| 70.00th=[122368], 80.00th=[150528], 90.00th=[185344], 95.00th=[207872],
| 99.00th=[248832], 99.50th=[261120], 99.90th=[280576], 99.95th=[280576],
| 99.99th=[292864]
bw ( KiB/s): min= 848, max=19728, per=0.10%, avg=1878.40, stdev=4201.71
lat (usec) : 100=0.08%, 250=0.04%, 500=49.62%, 750=0.47%
lat (msec) : 20=0.21%, 50=2.54%, 100=10.58%, 250=35.58%, 500=0.89%
cpu : usr=0.00%, sys=1.51%, ctx=1225, majf=0, minf=42
IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=99.4%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwt: total=2364,0,0, short=0,0,0, dropped=0,0,0
latency : target=0, window=0, percentile=100.00%, depth=16
Run status group 0 (all jobs):
READ: bw=1869KiB/s (1914kB/s), 1869KiB/s-1869KiB/s (1914kB/s-1914kB/s), io=18.5MiB (19.4MB), run=10118-10118msec
Disk stats (read/write):
sdg: ios=2356/13, merge=0/0, ticks=156907/2, in_queue=156269, util=12.32%
##################################################################################################
Next Check /dev/sdh disk fio 8k
/dev/sdh_check: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 8192B-8192B, (T) 8192B-8192B, ioengine=libaio, iodepth=16
fio-2.20
Starting 1 process
/dev/sdh_check: (groupid=0, jobs=1): err= 0: pid=41334: Tue Aug 23 11:36:14 2022
read: IOPS=226, BW=1816KiB/s (1859kB/s)(17.0MiB/10142msec)
slat (usec): min=18, max=284, avg=48.60, stdev=29.41
clat (usec): min=36, max=308599, avg=70377.62, stdev=80566.35
lat (usec): min=148, max=308670, avg=70426.66, stdev=80588.25
clat percentiles (usec):
| 1.00th=[ 326], 5.00th=[ 338], 10.00th=[ 346], 20.00th=[ 362],
| 30.00th=[ 382], 40.00th=[ 410], 50.00th=[13760], 60.00th=[90624],
| 70.00th=[125440], 80.00th=[154624], 90.00th=[189440], 95.00th=[216064],
| 99.00th=[250880], 99.50th=[264192], 99.90th=[292864], 99.95th=[296960],
| 99.99th=[309248]
bw ( KiB/s): min= 800, max=19174, per=0.10%, avg=1830.65, stdev=4082.58
lat (usec) : 50=0.04%, 500=48.09%, 750=1.74%
lat (msec) : 10=0.09%, 20=0.35%, 50=2.91%, 100=9.56%, 250=36.06%
lat (msec) : 500=1.17%
cpu : usr=0.00%, sys=1.46%, ctx=1194, majf=0, minf=42
IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=99.3%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwt: total=2302,0,0, short=0,0,0, dropped=0,0,0
latency : target=0, window=0, percentile=100.00%, depth=16
Run status group 0 (all jobs):
READ: bw=1816KiB/s (1859kB/s), 1816KiB/s-1816KiB/s (1859kB/s-1859kB/s), io=17.0MiB (18.9MB), run=10142-10142msec
Disk stats (read/write):
sdh: ios=2322/13, merge=0/0, ticks=160561/2, in_queue=159912, util=12.04%
##################################################################################################
Next Check /dev/sdb disk fio 8k
/dev/sdb_check: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 8192B-8192B, (T) 8192B-8192B, ioengine=libaio, iodepth=16
fio-2.20
Starting 1 process
/dev/sdb_check: (groupid=0, jobs=1): err= 0: pid=41385: Tue Aug 23 11:36:25 2022
read: IOPS=419, BW=3354KiB/s (3435kB/s)(33.4MiB/10187msec)
slat (usec): min=17, max=880, avg=36.13, stdev=31.42
clat (usec): min=78, max=426697, avg=38085.42, stdev=71247.98
lat (usec): min=165, max=426771, avg=38121.87, stdev=71269.01
clat percentiles (usec):
| 1.00th=[ 330], 5.00th=[ 342], 10.00th=[ 346], 20.00th=[ 354],
| 30.00th=[ 362], 40.00th=[ 366], 50.00th=[ 378], 60.00th=[ 390],
| 70.00th=[ 426], 80.00th=[100864], 90.00th=[164864], 95.00th=[201728],
| 99.00th=[250880], 99.50th=[268288], 99.90th=[301056], 99.95th=[329728],
| 99.99th=[428032]
bw ( KiB/s): min= 768, max=51808, per=0.10%, avg=3404.00, stdev=11393.17
lat (usec) : 100=0.02%, 500=73.96%, 750=0.73%, 1000=0.02%
lat (msec) : 10=0.02%, 20=0.07%, 50=0.98%, 100=4.12%, 250=18.97%
lat (msec) : 500=1.10%
cpu : usr=0.00%, sys=1.86%, ctx=1114, majf=0, minf=43
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=99.6%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwt: total=4271,0,0, short=0,0,0, dropped=0,0,0
latency : target=0, window=0, percentile=100.00%, depth=16
Run status group 0 (all jobs):
READ: bw=3354KiB/s (3435kB/s), 3354KiB/s-3354KiB/s (3435kB/s-3435kB/s), io=33.4MiB (34.0MB), run=10187-10187msec
Disk stats (read/write):
sdb: ios=4266/4, merge=0/0, ticks=159624/1, in_queue=158914, util=11.53%
##################################################################################################
Next Check /dev/sdc disk fio 8k
The Disk Skip fio test /dev/sdc
##################################################
##################################################################################################
Next Check /dev/sdd disk fio 8k
/dev/sdd_check: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 8192B-8192B, (T) 8192B-8192B, ioengine=libaio, iodepth=16
fio-2.20
Starting 1 process
/dev/sdd_check: (groupid=0, jobs=1): err= 0: pid=41409: Tue Aug 23 11:36:35 2022
read: IOPS=173, BW=1390KiB/s (1424kB/s)(13.8MiB/10179msec)
slat (usec): min=18, max=797, avg=52.08, stdev=40.59
clat (usec): min=84, max=397887, avg=91901.93, stdev=105892.11
lat (usec): min=154, max=397984, avg=91954.48, stdev=105917.99
clat percentiles (usec):
| 1.00th=[ 322], 5.00th=[ 334], 10.00th=[ 342], 20.00th=[ 366],
| 30.00th=[ 382], 40.00th=[ 402], 50.00th=[ 564], 60.00th=[118272],
| 70.00th=[164864], 80.00th=[201728], 90.00th=[248832], 95.00th=[284672],
| 99.00th=[337920], 99.50th=[350208], 99.90th=[387072], 99.95th=[399360],
| 99.99th=[399360]
bw ( KiB/s): min= 640, max=14832, per=0.10%, avg=1402.25, stdev=3161.19
lat (usec) : 100=0.06%, 500=49.41%, 750=0.96%
lat (msec) : 20=0.17%, 50=1.36%, 100=5.09%, 250=33.30%, 500=9.67%
cpu : usr=0.01%, sys=1.19%, ctx=910, majf=0, minf=43
IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=99.2%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwt: total=1769,0,0, short=0,0,0, dropped=0,0,0
latency : target=0, window=0, percentile=100.00%, depth=16
Run status group 0 (all jobs):
READ: bw=1390KiB/s (1424kB/s), 1390KiB/s-1390KiB/s (1424kB/s-1424kB/s), io=13.8MiB (14.5MB), run=10179-10179msec
Disk stats (read/write):
sdd: ios=1765/3, merge=0/0, ticks=160442/0, in_queue=159958, util=9.05%
##################################################################################################
Next Check /dev/sde disk fio 8k
/dev/sde_check: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 8192B-8192B, (T) 8192B-8192B, ioengine=libaio, iodepth=16
fio-2.20
Starting 1 process
/dev/sde_check: (groupid=0, jobs=1): err= 0: pid=41428: Tue Aug 23 11:36:47 2022
read: IOPS=140, BW=1128KiB/s (1155kB/s)(12.4MiB/11286msec)
slat (usec): min=17, max=686, avg=51.92, stdev=41.53
clat (usec): min=68, max=1578.1k, avg=113313.57, stdev=180279.70
lat (usec): min=130, max=1578.2k, avg=113365.95, stdev=180296.76
clat percentiles (usec):
| 1.00th=[ 290], 5.00th=[ 306], 10.00th=[ 322], 20.00th=[ 346],
| 30.00th=[ 358], 40.00th=[ 378], 50.00th=[ 494], 60.00th=[128512],
| 70.00th=[179200], 80.00th=[224256], 90.00th=[276480], 95.00th=[321536],
| 99.00th=[1204224], 99.50th=[1499136], 99.90th=[1564672], 99.95th=[1581056],
| 99.99th=[1581056]
bw ( KiB/s): min= 496, max=13344, per=0.11%, avg=1260.00, stdev=2844.78
lat (usec) : 100=0.31%, 250=0.06%, 500=49.91%, 750=0.06%
lat (msec) : 50=1.19%, 100=4.53%, 250=29.42%, 500=13.32%, 750=0.19%
lat (msec) : 2000=1.01%
cpu : usr=0.00%, sys=0.96%, ctx=821, majf=0, minf=42
IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=99.1%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwt: total=1591,0,0, short=0,0,0, dropped=0,0,0
latency : target=0, window=0, percentile=100.00%, depth=16
Run status group 0 (all jobs):
READ: bw=1128KiB/s (1155kB/s), 1128KiB/s-1128KiB/s (1155kB/s-1155kB/s), io=12.4MiB (13.0MB), run=11286-11286msec
Disk stats (read/write):
sde: ios=1616/7, merge=0/0, ticks=161798/253, in_queue=161627, util=7.41%
##################################################################################################
Next Check /dev/sdf disk fio 8k
/dev/sdf_check: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 8192B-8192B, (T) 8192B-8192B, ioengine=libaio, iodepth=16
fio-2.20
Starting 1 process
/dev/sdf_check: (groupid=0, jobs=1): err= 0: pid=41450: Tue Aug 23 11:36:57 2022
read: IOPS=168, BW=1346KiB/s (1378kB/s)(13.4MiB/10190msec)
slat (usec): min=19, max=188940, avg=165.87, stdev=4562.51
clat (usec): min=84, max=367328, avg=94839.08, stdev=95832.14
lat (usec): min=156, max=367388, avg=95005.44, stdev=95869.11
clat percentiles (usec):
| 1.00th=[ 418], 5.00th=[ 438], 10.00th=[ 446], 20.00th=[ 466],
| 30.00th=[ 498], 40.00th=[ 524], 50.00th=[88576], 60.00th=[166912],
| 70.00th=[179200], 80.00th=[191488], 90.00th=[205824], 95.00th=[216064],
| 99.00th=[261120], 99.50th=[317440], 99.90th=[358400], 99.95th=[366592],
| 99.99th=[366592]
bw ( KiB/s): min= 576, max=14112, per=0.10%, avg=1358.50, stdev=3002.35
lat (usec) : 100=0.06%, 250=0.06%, 500=31.16%, 750=17.97%
lat (msec) : 20=0.06%, 50=0.23%, 100=0.58%, 250=48.72%, 500=1.17%
cpu : usr=0.00%, sys=3.01%, ctx=901, majf=0, minf=42
IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=99.1%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwt: total=1714,0,0, short=0,0,0, dropped=0,0,0
latency : target=0, window=0, percentile=100.00%, depth=16
Run status group 0 (all jobs):
READ: bw=1346KiB/s (1378kB/s), 1346KiB/s-1346KiB/s (1378kB/s-1378kB/s), io=13.4MiB (14.0MB), run=10190-10190msec
Disk stats (read/write):
sdf: ios=1729/13, merge=0/0, ticks=160198/1, in_queue=159705, util=9.17%