Introduction to the FIO Testing Tool

http://blog.csdn.net/lidan3959/article/details/9945443

 

FIO is an excellent tool for measuring IOPS, used for stress testing and validating hardware. It supports 13 different I/O engines, including sync, mmap, libaio, posixaio, SG v3, splice, null, network, syslet, guasi, solarisaio, and more.

As block devices have evolved, and especially with the arrival of SSDs, device parallelism has grown ever higher. The trick to making good use of these devices is to raise the iodepth: feed the device more I/O requests at once, giving the elevator algorithm and the device itself a chance to merge requests and process them in parallel internally, improving overall efficiency.

Applications generally issue I/O in one of two ways: synchronously or asynchronously. Synchronous I/O issues one request at a time and waits for the kernel to complete it before returning, so a single thread's iodepth is never greater than 1. This can be worked around by running many threads concurrently; typically 16-32 threads are used to keep the queue full. Asynchronous I/O uses something like libaio (Linux native AIO) to submit a batch of requests at once and then wait for the batch to complete, reducing the number of round trips and improving efficiency.
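The two approaches can be sketched side by side as a fio job file. This is a minimal, hypothetical comparison — the target path `/tmp/fio.test`, sizes, and runtime are placeholders, not values from the text: one job keeps iodepth at 1 per thread but runs 16 psync threads, the other uses a single libaio job with iodepth=16.

```shell
# Sketch: two ways to keep ~16 I/Os in flight (paths/sizes are illustrative).
cat > sync-vs-async.fio <<'EOF'
[global]
filename=/tmp/fio.test
size=256m
bs=16k
rw=randread
direct=1
runtime=30
time_based

; synchronous engine: iodepth is 1 per thread, so use many threads
[sync-16threads]
ioengine=psync
numjobs=16
thread

; asynchronous engine: one job submits batches of 16 via Linux native AIO
[libaio-depth16]
stonewall
ioengine=libaio
iodepth=16
EOF
echo "job file written: sync-vs-async.fio"
```

Run it with `fio sync-vs-async.fio` and compare the aggregate IOPS of the two jobs; `stonewall` makes the second job wait until the first finishes so they do not contend.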

A reasonable queue depth tends to vary a lot from device to device, so how do we use fio to probe for a good value? Here is how fio's documentation explains the iodepth-related parameters:

iodepth=int
iodepth_batch=int
iodepth_batch_complete=int
iodepth_low=int
fsync=int
direct=bool

Under the libaio engine these parameters work as follows. fio calls io_setup with the iodepth value to prepare a context that can have up to iodepth I/Os in flight at once, and allocates an I/O request queue to hold them. While the test runs, fio generates I/O requests and pushes them onto this queue; once the number of queued I/Os reaches iodepth_batch, it calls io_submit to submit them as a batch, and then calls io_getevents to reap completed I/Os. How many are reaped each time? Since the reap timeout is set to 0, however many have completed are collected, up to iodepth_batch_complete at a time. As I/Os are reaped, the queue drains and needs refilling. When? Once the number of in-flight I/Os drops to iodepth_low, fio refills the queue, ensuring the OS always sees at least iodepth_low I/Os queued at the elevator.
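One common way to probe a reasonable depth empirically is to sweep iodepth and watch where IOPS stops scaling. The loop below is a sketch along those lines — the device path `/dev/sdb1`, block size, and runtime are illustrative assumptions — and it only prints the commands it would run, so you can review them before executing:

```shell
# Print one fio command per queue depth instead of running them
# (/dev/sdb1, bs=4k, and runtime=60 are illustrative assumptions).
for depth in 1 2 4 8 16 32 64; do
    echo fio -filename=/dev/sdb1 -direct=1 -ioengine=libaio \
        -rw=randread -bs=4k -runtime=60 -time_based \
        -iodepth="$depth" -name=depth-"$depth"
done > iodepth-sweep.sh
cat iodepth-sweep.sh
```

Pipe the generated script through `sh` on the test machine and plot IOPS against depth; the knee of the curve is a sensible iodepth for that device.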


fio project page: http://freshmeat.net/projects/fio/

1. Installing FIO
wget http://brick.kernel.dk/snaps/fio-2.0.7.tar.gz
yum install libaio-devel
tar -zxvf fio-2.0.7.tar.gz
cd fio-2.0.7
make
make install

2. FIO Usage:

Random read:
fio -filename=/dev/sdb1 -direct=1 -iodepth 1 -thread -rw=randread -ioengine=psync -bs=16k -size=200G -numjobs=10 -runtime=1000 -group_reporting -name=mytest

Parameter notes:
filename=/dev/sdb1  Test target; usually a file on the disk under test (here the raw device /dev/sdb1 is used directly).
direct=1  Bypass the OS page cache during the test, for more realistic results.
rw=randwrite  Random write I/O.
rw=randrw  Mixed random read and write I/O.
bs=16k  Block size of each I/O is 16k.
bsrange=512-2048  Like bs, but specifies a range of block sizes.
size=5g  Total I/O for this test is 5g, issued in bs-sized chunks.
numjobs=30  Run 30 concurrent jobs for this test.
runtime=1000  Run for 1000 seconds; if omitted, fio keeps going until the full 5g has been transferred in bs-sized I/Os.
ioengine=psync  Use the psync I/O engine.
rwmixwrite=30  In mixed read/write mode, writes make up 30%.
group_reporting  Controls result display: aggregates statistics across jobs.

sync=1  Use synchronous (O_SYNC) I/O for writes.
fsync=1  Issue an fsync after every I/O to flush data.

bssplit=4k/30:8k/40:16k/30  30% of I/Os use 4k blocks, 40% use 8k, and 30% use 16k.

In addition:
lockmem=1g  Use only 1g of memory for the test.
zero_buffers  Initialize I/O buffers with zeroes.
nrfiles=8  Number of files generated per job.
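The command-line flags above map one-to-one onto fio's INI job-file format, which is easier to keep under version control and reuse. Here is a sketch of the earlier random-read command rewritten as a job file; the device path is the one from the example, so adjust it for your own system before running:

```shell
# Equivalent of the random-read one-liner above, as an INI job file.
cat > randread.fio <<'EOF'
; fio -filename=/dev/sdb1 -direct=1 -iodepth 1 -thread -rw=randread
;     -ioengine=psync -bs=16k -size=200G -numjobs=10 -runtime=1000
;     -group_reporting -name=mytest
[mytest]
filename=/dev/sdb1
direct=1
iodepth=1
thread
rw=randread
ioengine=psync
bs=16k
size=200G
numjobs=10
runtime=1000
group_reporting
EOF
echo "wrote randread.fio"
```

Run it with `fio randread.fio`; the output is identical to the command-line form.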

[root@bogon fio-2.1.10]# fio --cmdhelp
description : Text job description
name : Name of this job
filename : File(s) to use for the workload
lockfile : Lock file when doing IO to it
directory : Directory to store files in
filename_format : Override default $jobname.$jobnum.$filenum naming
opendir : Recursively add files from this directory and down
rw : IO direction
bs : Block size unit
ba : IO block offset alignment
bsrange : Set block size range (in more detail than bs)
bssplit : Set a specific mix of block sizes
bs_unaligned : Don't sector align IO buffer sizes
randrepeat : Use repeatable random IO pattern
randseed : Set the random generator seed value
use_os_rand : Set to use OS random generator
norandommap : Accept potential duplicate random blocks
ignore_error : Set a specific list of errors to ignore
rw_sequencer : IO offset generator modifier
ioengine : IO engine to use
iodepth : Number of IO buffers to keep in flight
iodepth_batch : Number of IO buffers to submit in one go
iodepth_batch_complete: Number of IO buffers to retrieve in one go
iodepth_low : Low water mark for queuing depth
size : Total size of device or files
io_limit : (null)
fill_device : Write until an ENOSPC error occurs
filesize : Size of individual files
file_append : IO will start at the end of the file(s)
offset : Start IO from this offset
offset_increment : What is the increment from one offset to the next
number_ios : Force job completion of this number of IOs
random_generator : Type of random number generator to use
random_distribution : Random offset distribution generator
percentage_random : Percentage of seq/random mix that should be random
allrandrepeat : Use repeatable random numbers for everything
nrfiles : Split job workload between this number of files
file_service_type : How to select which file to service next
openfiles : Number of files to keep open at the same time
fallocate : Whether pre-allocation is performed when laying out files
fadvise_hint : Use fadvise() to advise the kernel on IO pattern
fsync : Issue fsync for writes every given number of blocks
fdatasync : Issue fdatasync for writes every given number of blocks
write_barrier : Make every Nth write a barrier write
sync_file_range : Use sync_file_range()
direct : Use O_DIRECT IO (negates buffered)
atomic : Use Atomic IO with O_DIRECT (implies O_DIRECT)
buffered : Use buffered IO (negates direct)
sync : Use O_SYNC for buffered writes
overwrite : When writing, set whether to overwrite current data
loops : Number of times to run the job
numjobs : Duplicate this job this many times
startdelay : Only start job when this period has passed
runtime : Stop workload when this amount of time has passed
time_based : Keep running until runtime/timeout is met
verify_only : Verifies previously written data is still valid
ramp_time : Ramp up time before measuring performance
clocksource : What type of timing source to use
mem : Backing type for IO buffers
verify : Verify data written
do_verify : Run verification stage after write
verifysort : Sort written verify blocks for read back
verifysort_nr : Pre-load and sort verify blocks for a read workload
verify_interval : Store verify buffer header every N bytes
verify_offset : Offset verify header location by N bytes
verify_pattern : Fill pattern for IO buffers
verify_fatal : Exit on a single verify failure, don't continue
verify_dump : Dump contents of good and bad blocks on failure
verify_async : Number of async verifier threads to use
verify_backlog : Verify after this number of blocks are written
verify_backlog_batch : Verify this number of IO blocks
trim_percentage : Number of verify blocks to discard/trim
trim_verify_zero : Verify that trim/discarded blocks are returned as zeroes
trim_backlog : Trim after this number of blocks are written
trim_backlog_batch : Trim this number of IO blocks
experimental_verify : Enable experimental verification
write_iolog : Store IO pattern to file
read_iolog : Playback IO pattern from file
replay_no_stall : Playback IO pattern file as fast as possible without stalls
replay_redirect : Replay all I/O onto this device, regardless of trace device
exec_prerun : Execute this file prior to running job
exec_postrun : Execute this file after running job
ioscheduler : Use this IO scheduler on the backing device
zonesize : Amount of data to read per zone
zonerange : Give size of an IO zone
zoneskip : Space between IO zones
lockmem : Lock down this amount of memory (per worker)
rwmixread : Percentage of mixed workload that is reads
rwmixwrite : Percentage of mixed workload that is writes
nice : Set job CPU nice value
prio : Set job IO priority value
prioclass : Set job IO priority class
thinktime : Idle time between IO buffers (usec)
thinktime_spin : Start think time by spinning this amount (usec)
thinktime_blocks : IO buffer period between 'thinktime'
rate : Set bandwidth rate
ratemin : Job must meet this rate or it will be shutdown
ratecycle : Window average for rate limits (msec)
rate_iops : Limit IO used to this number of IO operations/sec
rate_iops_min : Job must meet this rate or it will be shut down
max_latency : Maximum tolerated IO latency (usec)
latency_target : Ramp to max queue depth supporting this latency
latency_window : Time to sustain latency_target
latency_percentile : Percentile of IOs must be below latency_target
invalidate : Invalidate buffer/page cache prior to running job
create_serialize : Serialize creating of job files
create_fsync : fsync file after creation
create_on_open : Create files when they are opened for IO
create_only : Only perform file creation phase
pre_read : Pre-read files before starting official testing
cpumask : CPU affinity mask
cpus_allowed : Set CPUs allowed
cpus_allowed_policy : Distribution policy for cpus_allowed
end_fsync : Include fsync at the end of job
fsync_on_close : fsync files on close
unlink : Unlink created files after job has completed
exitall : Terminate all jobs when one exits
stonewall : Insert a hard barrier between this job and previous
new_group : Mark the start of a new group (for reporting)
thread : Use threads instead of processes
write_bw_log : Write log of bandwidth during run
bwavgtime : Time window over which to calculate bandwidth (msec)
write_lat_log : Write log of latency during run
write_iops_log : Write log of IOPS during run
iopsavgtime : Time window over which to calculate IOPS (msec)
log_avg_msec : Average bw/iops/lat logs over this period of time
group_reporting : Do reporting on a per-group basis
zero_buffers : Init IO buffers to all zeroes
refill_buffers : Refill IO buffers on every IO submit
scramble_buffers : Slightly scramble buffers on every IO submit
buffer_pattern : Fill pattern for IO buffers
buffer_compress_percentage: How compressible the buffer is (approximately)
buffer_compress_chunk : Size of compressible region in buffer
clat_percentiles : Enable the reporting of completion latency percentiles
percentile_list : Specify a custom list of percentiles to report
disk_util : Log disk utilization statistics
gtod_reduce : Greatly reduce number of gettimeofday() calls
disable_lat : Disable latency numbers
disable_clat : Disable completion latency numbers
disable_slat : Disable submission latency numbers
disable_bw_measurement: Disable bandwidth logging
gtod_cpu : Set up dedicated gettimeofday() thread on this CPU
unified_rw_reporting : Unify reporting across data direction
continue_on_error : Continue on non-fatal errors during IO
error_dump : Dump info on each error
profile : Select a specific builtin performance test
cgroup : Add job to cgroup of this name
cgroup_nodelete : Do not delete cgroups after job completion
cgroup_weight : Use given weight for cgroup
uid : Run job with this user ID
gid : Run job with this group ID
kb_base : How many bytes per KB for reporting (1000 or 1024)
unit_base : Bit multiple of result summary data (8 for byte, 1 for bit)
hugepage-size : When using hugepages, specify size of each page
flow_id : The flow index ID to use
flow : Weight for flow control of this job
flow_watermark : High watermark for flow control. This option should be set to the same value for all threads with non-zero flow.
flow_sleep : How many microseconds to sleep after being held back by the flow control mechanism

Sequential read:
fio -filename=/dev/sdb1 -direct=1 -iodepth 1 -thread -rw=read -ioengine=psync -bs=16k -size=200G -numjobs=30 -runtime=1000 -group_reporting -name=mytest

Random write:
fio -filename=/dev/sdb1 -direct=1 -iodepth 1 -thread -rw=randwrite -ioengine=psync -bs=16k -size=200G -numjobs=30 -runtime=1000 -group_reporting -name=mytest

Sequential write:
fio -filename=/dev/sdb1 -direct=1 -iodepth 1 -thread -rw=write -ioengine=psync -bs=16k -size=200G -numjobs=30 -runtime=1000 -group_reporting -name=mytest

Mixed random read/write:
fio -filename=/dev/sdb1 -direct=1 -iodepth 1 -thread -rw=randrw -rwmixread=70 -ioengine=psync -bs=16k -size=200G -numjobs=30 -runtime=100 -group_reporting -name=mytest -ioscheduler=noop
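The four basic workloads above differ only in the rw= value, so a full pass over a device can be scripted. This sketch prints each command rather than executing it (the device, block size, and runtime follow the examples above; review before running, since writes are destructive to /dev/sdb1):

```shell
# Generate the four basic fio workloads as a reviewable script.
# WARNING: the write patterns destroy data on the target device.
for pattern in read write randread randwrite; do
    echo fio -filename=/dev/sdb1 -direct=1 -iodepth 1 -thread \
        -rw="$pattern" -ioengine=psync -bs=16k -size=200G \
        -numjobs=30 -runtime=1000 -group_reporting -name=mytest-"$pattern"
done > run-all.sh
cat run-all.sh
```

Running `sh run-all.sh` then executes the four tests back to back, each reported under its own job name.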

3. A Real Test Example:

[root@localhost ~]# fio -filename=/dev/sdb1 -direct=1 -iodepth 1 -thread -rw=randrw -rwmixread=70 -ioengine=psync -bs=16k -size=200G -numjobs=30 -runtime=100 -group_reporting -name=mytest1
mytest1: (g=0): rw=randrw, bs=16K-16K/16K-16K, ioengine=psync, iodepth=1

mytest1: (g=0): rw=randrw, bs=16K-16K/16K-16K, ioengine=psync, iodepth=1
fio 2.0.7
Starting 30 threads
Jobs: 1 (f=1): [________________m_____________] [3.5% done] [6935K/3116K /s] [423 /190 iops] [eta 48m:20s]
mytest1: (groupid=0, jobs=30): err= 0: pid=23802
read : io=1853.4MB, bw=18967KB/s, iops=1185 , runt=100058msec
clat (usec): min=60 , max=871116 , avg=25227.91, stdev=31653.46
lat (usec): min=60 , max=871117 , avg=25228.08, stdev=31653.46
clat percentiles (msec):
| 1.00th=[ 3], 5.00th=[ 5], 10.00th=[ 6], 20.00th=[ 8],
| 30.00th=[ 10], 40.00th=[ 12], 50.00th=[ 15], 60.00th=[ 19],
| 70.00th=[ 26], 80.00th=[ 37], 90.00th=[ 57], 95.00th=[ 79],
| 99.00th=[ 151], 99.50th=[ 202], 99.90th=[ 338], 99.95th=[ 383],
| 99.99th=[ 523]
bw (KB/s) : min= 26, max= 1944, per=3.36%, avg=636.84, stdev=189.15
write: io=803600KB, bw=8031.4KB/s, iops=501 , runt=100058msec
clat (usec): min=52 , max=9302 , avg=146.25, stdev=299.17
lat (usec): min=52 , max=9303 , avg=147.19, stdev=299.17
clat percentiles (usec):
| 1.00th=[ 62], 5.00th=[ 65], 10.00th=[ 68], 20.00th=[ 74],
| 30.00th=[ 84], 40.00th=[ 87], 50.00th=[ 89], 60.00th=[ 90],
| 70.00th=[ 92], 80.00th=[ 97], 90.00th=[ 120], 95.00th=[ 370],
| 99.00th=[ 1688], 99.50th=[ 2128], 99.90th=[ 3088], 99.95th=[ 3696],
| 99.99th=[ 5216]
bw (KB/s) : min= 20, max= 1117, per=3.37%, avg=270.27, stdev=133.27
lat (usec) : 100=24.32%, 250=3.83%, 500=0.33%, 750=0.28%, 1000=0.27%
lat (msec) : 2=0.64%, 4=3.08%, 10=20.67%, 20=19.90%, 50=17.91%
lat (msec) : 100=6.87%, 250=1.70%, 500=0.19%, 750=0.01%, 1000=0.01%
cpu : usr=1.70%, sys=2.41%, ctx=5237835, majf=0, minf=6344162
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=118612/w=50225/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
READ: io=1853.4MB, aggrb=18966KB/s, minb=18966KB/s, maxb=18966KB/s, mint=100058msec, maxt=100058msec
WRITE: io=803600KB, aggrb=8031KB/s, minb=8031KB/s, maxb=8031KB/s, mint=100058msec, maxt=100058msec

Disk stats (read/write):
sdb: ios=118610/50224, merge=0/0, ticks=2991317/6860, in_queue=2998169, util=99.77%

Focus mainly on the iops figures in the read and write summary lines of the output above.
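Those iops figures can be pulled out of fio's output with a little sed; as a sanity check, bandwidth should roughly equal iops times block size (1185 x 16 KB = 18960 KB/s, matching the reported bw=18967KB/s). A sketch using the read summary line from this run:

```shell
# Extract iops from a fio summary line and sanity-check bw ~= iops * bs.
line='read : io=1853.4MB, bw=18967KB/s, iops=1185 , runt=100058msec'
iops=$(echo "$line" | sed -n 's/.*iops=\([0-9]*\).*/\1/p')
echo "iops=$iops"                      # prints iops=1185
echo "approx bw: $((iops * 16)) KB/s"  # bs=16k in this test
```

In a real script the line would come from `fio ... | grep iops=`; the hard-coded sample here just makes the parsing self-contained.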

Source: http://ns.35.com/?p=227

See also: http://blog.csdn.net/wyzxg/article/details/7454072

