一. Installing fio
1. Download fio
fio project page: http://freshmeat.net/projects/fio/
wget http://brick.kernel.dk/snaps/fio-2.0.7.tar.gz
2. Check whether ceph-devel-compat is installed
[root@node-2 fio]# rpm -qa | grep ceph-devel-compat
ceph-devel-compat-0.94.6-1.el7.centos.x86_64
If it is not installed, install it first. The install needs a Java environment; if it fails, install the JDK first (step 3 below).
yum install ceph-devel-compat
3. Install the JDK
Download the JDK package:
wget --no-check-certificate --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u91-b14/jdk-8u91-linux-x64.rpm
Install the JDK: rpm -ivh jdk-8u91-linux-x64.rpm
Confirm that Java installed correctly: [root@node-2 fio]# java -version
java version "1.8.0_91"
Java(TM) SE Runtime Environment (build 1.8.0_91-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.91-b14, mixed mode)
Once Java is installed, go back and install ceph-devel-compat; with that in place, fio itself can be built.
4. Extract and build fio
tar -zxvf fio-2.0.7.tar.gz
cd fio-2.0.7
./configure (confirm that "Rados Block Device engine" shows yes in the configure output)
make
make install
二. Using fio
1. Create a 2 GB test image
rbd create -p rbd_pool test --size 2048
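rbd's --size argument is given in megabytes here, so 2048 yields the 2 GB image the heading asks for; a trivial check of the arithmetic:

```shell
# rbd create --size takes megabytes: 2048 MB / 1024 = 2 GB
echo $((2048 / 1024))
```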
2. Create the fio job file with vim rbdttest.fio; its contents are below (see the common parameter reference at the end for what each option does)
######################################################################
# Example test for the RBD engine
#
# Runs a 4k random write test against a RBD via librbd
#
# NOTE: Make sure you have either a RBD named 'fio_test' or change
# the rbdname parameter.
######################################################################
[global]
#logging
#write_iops_log=write_iops_log
#write_bw_log=write_bw_log
#write_lat_log=write_lat_log
numjobs=8
iodepth=32
runtime=30
group_reporting
[perftest]
ioengine=rbd
clientname=admin
pool=rbd_pool
rbdname=test
rw=randwrite
bs=4k
direct=1
size=2G
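Because the rbd ioengine talks to librbd directly, no kernel mapping of the image is needed, and a read-side counterpart of the job above only has to swap the rw line. A sketch, under the hypothetical file name rbdtest-read.fio, reusing the same pool and image:

```ini
# Hypothetical 4k random-read companion job for the same RBD image
[global]
numjobs=8
iodepth=32
runtime=30
group_reporting
[readtest]
ioengine=rbd
clientname=admin
pool=rbd_pool
rbdname=test
rw=randread
bs=4k
direct=1
size=2G
```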
3. Run the test
./fio rbdttest.fio
Sample output; an iops figure above 5000 counts as a healthy value here:
perftest: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=rbd, iodepth=32
...
fio-2.6
Starting 8 processes
rbd engine: RBD version: 0.1.9
rbd engine: RBD version: 0.1.9
rbd engine: RBD version: 0.1.9
rbd engine: RBD version: 0.1.9
rbd engine: RBD version: 0.1.9
rbd engine: RBD version: 0.1.9
rbd engine: RBD version: 0.1.9
rbd engine: RBD version: 0.1.9
Jobs: 8 (f=8): [w(8)] [100.0% done] [0KB/9614KB/0KB /s] [0/2403/0 iops] [eta 00m:00s]
perftest: (groupid=0, jobs=8): err= 0: pid=4872: Fri Jul 1 20:08:39 2016
write: io=671348KB, bw=22256KB/s, iops=5563, runt= 30165msec
slat (usec): min=0, max=67, avg= 1.33, stdev= 2.25
clat (msec): min=4, max=1074, avg=45.95, stdev=57.63
lat (msec): min=4, max=1074, avg=45.95, stdev=57.63
clat percentiles (msec):
| 1.00th=[ 11], 5.00th=[ 19], 10.00th=[ 22], 20.00th=[ 25],
| 30.00th=[ 28], 40.00th=[ 31], 50.00th=[ 34], 60.00th=[ 38],
| 70.00th=[ 43], 80.00th=[ 51], 90.00th=[ 72], 95.00th=[ 115],
| 99.00th=[ 243], 99.50th=[ 355], 99.90th=[ 906], 99.95th=[ 906],
| 99.99th=[ 930]
bw (KB /s): min= 133, max= 4470, per=12.97%, avg=2887.57, stdev=1040.03
lat (msec) : 10=0.81%, 20=6.80%, 50=71.80%, 100=14.52%, 250=5.19%
lat (msec) : 500=0.43%, 750=0.34%, 1000=0.12%, 2000=0.01%
cpu : usr=0.20%, sys=0.02%, ctx=11653, majf=0, minf=831
IO depths : 1=1.4%, 2=3.5%, 4=8.6%, 8=23.1%, 16=59.4%, 32=4.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=96.3%, 8=0.1%, 16=0.4%, 32=3.2%, 64=0.0%, >=64=0.0%
issued : total=r=0/w=167837/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=32
Run status group 0 (all jobs):
WRITE: io=671348KB, aggrb=22255KB/s, minb=22255KB/s, maxb=22255KB/s, mint=30165msec, maxt=30165msec
Disk stats (read/write):
dm-0: ios=30/2818, merge=0/0, ticks=5/1833, in_queue=1838, util=0.12%, aggrios=32/1979, aggrmerge=0/965, aggrticks=5/1799, aggrin_queue=1802, aggrutil=0.11%
sda: ios=32/1979, merge=0/965, ticks=5/1799, in_queue=1802, util=0.11%
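The number to check against the 5000-iops baseline sits on the write: line. A small sketch for pulling it out of saved output; the sample line below is copied from the run above, and in practice real fio output would be piped in instead:

```shell
# Extract the iops value from a fio result line
line='write: io=671348KB, bw=22256KB/s, iops=5563, runt= 30165msec'
iops=$(printf '%s\n' "$line" | grep -o 'iops=[0-9]*' | cut -d= -f2)
echo "$iops"   # prints 5563
# Consistency check: at 4k blocks, bandwidth/4 should sit near iops (22256/4 = 5564)
```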
Common parameter reference
bsrange=512-2048 //block size range, from 512 bytes to 2048 bytes
ioengine=libaio //select the I/O engine
userspace_reap //used with libaio; speeds up reaping of completed async I/O
rw=randrw //mixed random read/write I/O; default read/write ratio is 50:50
rwmixwrite=20 //in mixed read/write mode, writes make up 20%
time_based //keep running for the full runtime, looping over the workload if the given data size finishes early
runtime=180 //stop the test after 180 seconds
direct=1 //use non-buffered (O_DIRECT) I/O
group_reporting //with numjobs set, report per group instead of per job
randrepeat=0 //make the generated random sequence non-repeatable
norandommap //do not track which blocks have been touched, so random offsets may repeat
ramp_time=6 //warm-up time (seconds) excluded from the reported results
iodepth=16 //number of I/O units to keep in flight
iodepth_batch=8 //number of I/Os to submit in a single batch
iodepth_low=8 //refill the queue when the depth drains down to this level
iodepth_batch_complete=8 //number of completions to retrieve at a time
exitall //stop all jobs as soon as one job finishes
filename=/dev/mapper/cachedev //file or device under test
numjobs=1 //number of clones of the job (the concurrency); default is 1
size=200G //total I/O size for this job
refill_buffers //refill the I/O buffer on every submit
overwrite=1 //allow the file to be overwritten
sync=1 //use synchronous I/O
fsync=1 //sync the data after every write I/O
invalidate=1 //invalidate the buffer cache before starting I/O
directory=/your_dir //directory prefix for the filename parameter
thinktime=600 //idle 600 microseconds between I/Os
thinktime_spin=200 //of the thinktime, busy-spin the CPU for 200 microseconds and sleep the rest
thinktime_blocks=2 //number of blocks to issue before each thinktime pause
bssplit=4k/30:8k/40:16k/30 //30% of I/Os at 4k, 40% at 8k, 30% at 16k
rwmixread=70 //reads make up 70%
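Several of the parameters above combine naturally into a mixed-workload job. A sketch, under the hypothetical file name mixed.fio and targeting an ordinary file rather than RBD:

```ini
# Hypothetical mixed random read/write job built from the parameters listed above
[mixed]
ioengine=libaio
direct=1
rw=randrw
rwmixread=70
bssplit=4k/30:8k/40:16k/30
iodepth=16
time_based
ramp_time=6
runtime=180
size=1G
filename=/tmp/fio.testfile
group_reporting
```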
4. Command to generate read I/O load (note: rw=read is a sequential read; use rw=randread for random reads)
fio --name=test --rw=read --bs=1024k --runtime=1800 --ioengine=libaio --iodepth=16 --numjobs=1 --filename=/dev/sda --direct=1 --time_based=1 --group_reporting --eta-newline=1
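If the goal is genuinely random reads, as the step's title suggests, only the rw and bs arguments need to change. A sketch of the variant command; it is echoed here rather than executed, since running it reads the whole of /dev/sda directly:

```shell
# Random 4k read variant of the load command above (inspect before running for real)
cmd="fio --name=test --rw=randread --bs=4k --runtime=1800 --ioengine=libaio --iodepth=16 --numjobs=1 --filename=/dev/sda --direct=1 --time_based=1 --group_reporting --eta-newline=1"
echo "$cmd"
```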