[SysBench] Going Deeper on File I/O Optimization

The previous post ran some tests and tuning with sysbench fileio, but the results did not match expectations. This post tries to track down the problem and push the optimization further.


1. Tuning the sysbench parameters

In the prepare phase, change the file-creation parameter file-extra-flags to other values, then adjust the remaining parameters and run the tests.
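
If you are not sure which of these options your sysbench build supports, the test itself can list them; the comments below only paraphrase the listing, and the exact wording will differ between versions:

$ sysbench fileio help
# prints the fileio-specific options, including:
#   --file-extra-flags    extra flags passed to open(2): sync, dsync, direct
#   --file-io-mode        I/O mode: sync, async, mmap
#   --file-async-backlog  number of asynchronous operations to queue per thread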

1.1 dsync

1.1.1 Prepare the files

$ sysbench --threads=2 fileio --file-total-size=10G --file-num=2 --file-block-size=16K prepare --file-extra-flags=dsync
sysbench 1.0.20 (using bundled LuaJIT 2.1.0-beta2)

2 files, 5242880Kb each, 10240Mb total
Creating files for the test...
Extra file open flags: dsync 
Creating file test_file.0
Creating file test_file.1
10737418240 bytes written in 327.77 seconds (31.24 MiB/sec).

1.1.2 Run the test

$ sysbench --time=300 --threads=2 fileio --file-total-size=10G --file-num=2 --file-block-size=16K --file-test-mode=rndrw run --file-fsync-freq=1 --file-fsync-mode=fdatasync --file-extra-flags=dsync
sysbench 1.0.20 (using bundled LuaJIT 2.1.0-beta2)

Running the test with following options:
Number of threads: 2
Initializing random number generator from current time


Extra file open flags: dsync 
2 files, 5GiB each
10GiB total file size
Block size 16KiB
Number of IO requests: 0
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 1 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random r/w test
Initializing worker threads...

Threads started!


File operations:
    reads/s:                      74.82
    writes/s:                     49.87
    fsyncs/s:                     99.76

Throughput:
    read, MiB/s:                  1.17
    written, MiB/s:               0.78

General statistics:
    total time:                          300.1178s
    total number of events:              67358

Latency (ms):
         min:                                    0.00
         avg:                                    8.91
         max:                                  955.28
         95th percentile:                       33.12
         sum:                               600004.59

Threads fairness:
    events (avg/stddev):           33679.0000/263.00
    execution time (avg/stddev):   300.0023/0.06
  • total number of events: 67358
  • avg: 8.91
  • 95th percentile: 33.12

Compared with test 3T2 from the previous post, the total number of requests increases significantly, the average response time drops significantly, and the 95th-percentile response time drops slightly. A clear performance improvement.

1.1.3 Clean up the files

$ sysbench fileio cleanup
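
If the prepare step used non-default file parameters, it does no harm to repeat them during cleanup so that the file count and names line up; a sketch matching the options used above:

$ sysbench --file-num=2 --file-total-size=10G fileio cleanup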

1.2 direct

Test with Direct I/O, which corresponds to MySQL's O_DIRECT.
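
For reference, on the MySQL side this roughly corresponds to the following InnoDB setting; a minimal sketch (the config path is only an example, adjust to your installation):

# /etc/my.cnf (excerpt; path is an example)
[mysqld]
innodb_flush_method = O_DIRECT

# verify the running value
$ mysql -e "SHOW VARIABLES LIKE 'innodb_flush_method'"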

1.2.1 Prepare the files

$ sysbench --threads=2 fileio --file-total-size=10G --file-num=2 --file-block-size=16K prepare --file-extra-flags=direct

1.2.2 Run the test

$ sysbench --time=300 --threads=2 fileio --file-total-size=10G --file-num=2 --file-block-size=16K --file-test-mode=rndrw run --file-fsync-freq=1 --file-fsync-mode=fdatasync --file-extra-flags=direct
sysbench 1.0.20 (using bundled LuaJIT 2.1.0-beta2)

Running the test with following options:
Number of threads: 2
Initializing random number generator from current time


Extra file open flags: directio
2 files, 5GiB each
10GiB total file size
Block size 16KiB
Number of IO requests: 0
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 1 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random r/w test
Initializing worker threads...

Threads started!


File operations:
    reads/s:                      82.40
    writes/s:                     54.93
    fsyncs/s:                     109.88

Throughput:
    read, MiB/s:                  1.29
    written, MiB/s:               0.86

General statistics:
    total time:                          300.0174s
    total number of events:              74166

Latency (ms):
         min:                                    0.00
         avg:                                    8.09
         max:                                  882.69
         95th percentile:                       29.19
         sum:                               599909.27

Threads fairness:
    events (avg/stddev):           37083.0000/215.00
    execution time (avg/stddev):   299.9546/0.00
  • total number of events: 74166
  • avg: 8.09
  • 95th percentile: 29.19

Compared with 1.1, the total number of requests increases significantly, the average response time drops slightly, and the 95th-percentile response time drops significantly. Another clear performance improvement.

1.3 direct + async

Keep using the test files created in 1.2.

$ sysbench --time=300 --threads=2 fileio --file-total-size=10G --file-num=2 --file-block-size=16K --file-test-mode=rndrw run --file-fsync-freq=1 --file-fsync-mode=fdatasync --file-extra-flags=direct --file-io-mode=async
sysbench 1.0.20 (using bundled LuaJIT 2.1.0-beta2)

Running the test with following options:
Number of threads: 2
Initializing random number generator from current time


Extra file open flags: directio
2 files, 5GiB each
10GiB total file size
Block size 16KiB
Number of IO requests: 0
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 1 requests.
Calling fsync() at the end of test, Enabled.
Using asynchronous I/O mode
Doing random r/w test
Initializing worker threads...

Threads started!


File operations:
    reads/s:                      259.88
    writes/s:                     172.96
    fsyncs/s:                     345.93

Throughput:
    read, MiB/s:                  4.06
    written, MiB/s:               2.70

General statistics:
    total time:                          300.5423s
    total number of events:              234052

Latency (ms):
         min:                                    0.00
         avg:                                    2.56
         max:                                  910.07
         95th percentile:                        0.04
         sum:                               599958.35

Threads fairness:
    events (avg/stddev):           117026.0000/940.00
    execution time (avg/stddev):   299.9792/0.00
  • total number of events: 234052
  • avg: 2.56
  • 95th percentile: 0.04

Compared with 1.2, the total number of requests roughly triples, the average response time drops by about two thirds, and the 95th-percentile response time falls to nearly 1/730 of the previous value. A qualitative leap in performance.
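
Note that sysbench's async mode goes through the Linux native AIO interface, so before pushing much deeper queues it may be worth glancing at the system-wide AIO limits; a minimal sketch (the value in the last line is only an example):

$ cat /proc/sys/fs/aio-nr        # AIO requests currently allocated
$ cat /proc/sys/fs/aio-max-nr    # system-wide cap
$ sysctl fs.aio-max-nr=1048576   # raise the cap if a workload ever hits it (example value)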

1.4 direct + async + async-backlog

1.4.1 Increase async-backlog

$ sysbench --time=300 --threads=2 fileio --file-total-size=10G --file-num=2 --file-block-size=16K --file-test-mode=rndrw run --file-fsync-freq=1 --file-fsync-mode=fdatasync --file-extra-flags=direct --file-io-mode=async --file-async-backlog=256
sysbench 1.0.20 (using bundled LuaJIT 2.1.0-beta2)

Running the test with following options:
Number of threads: 2
Initializing random number generator from current time


Extra file open flags: directio
2 files, 5GiB each
10GiB total file size
Block size 16KiB
Number of IO requests: 0
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 1 requests.
Calling fsync() at the end of test, Enabled.
Using asynchronous I/O mode
Doing random r/w test
Initializing worker threads...

Threads started!


File operations:
    reads/s:                      280.63
    writes/s:                     186.99
    fsyncs/s:                     373.99

Throughput:
    read, MiB/s:                  4.38
    written, MiB/s:               2.92

General statistics:
    total time:                          300.4804s
    total number of events:              252881

Latency (ms):
         min:                                    0.00
         avg:                                    2.37
         max:                                 1090.04
         95th percentile:                        0.04
         sum:                               599765.77

Threads fairness:
    events (avg/stddev):           126440.5000/2636.50
    execution time (avg/stddev):   299.8829/0.00
  • total number of events: 252881
  • avg: 2.37
  • max: 1090.04
  • 95th percentile: 0.04

Compared with 1.3, the total number of requests increases significantly, the average response time drops slightly, and the 95th-percentile response time is unchanged. A clear performance improvement, but note that the maximum response time grows as the queue gets longer.

1.4.2 Decrease async-backlog

$ sysbench --time=300 --threads=2 fileio --file-total-size=10G --file-num=2 --file-block-size=16K --file-test-mode=rndrw run --file-fsync-freq=1 --file-fsync-mode=fdatasync --file-extra-flags=direct --file-io-mode=async --file-async-backlog=2
sysbench 1.0.20 (using bundled LuaJIT 2.1.0-beta2)

Running the test with following options:
Number of threads: 2
Initializing random number generator from current time


Extra file open flags: directio
2 files, 5GiB each
10GiB total file size
Block size 16KiB
Number of IO requests: 0
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 1 requests.
Calling fsync() at the end of test, Enabled.
Using asynchronous I/O mode
Doing random r/w test
Initializing worker threads...

Threads started!


File operations:
    reads/s:                      81.39
    writes/s:                     54.25
    fsyncs/s:                     108.51

Throughput:
    read, MiB/s:                  1.27
    written, MiB/s:               0.85

General statistics:
    total time:                          300.0773s
    total number of events:              73259

Latency (ms):
         min:                                    0.00
         avg:                                    8.19
         max:                                 1257.79
         95th percentile:                       26.20
         sum:                               599988.95

Threads fairness:
    events (avg/stddev):           36629.5000/177.50
    execution time (avg/stddev):   299.9945/0.01

Compared with 1.3, the total number of requests drops significantly while the average and 95th-percentile response times increase significantly. Performance degrades markedly, back to roughly the level of 1.2.
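
The backlog clearly matters and the sweet spot depends on the hardware, so it is worth sweeping a few values instead of guessing. A minimal sketch that reuses the 1.4.1 command with a shorter --time and extracts the three key metrics (adjust the value list as needed):

$ for backlog in 2 32 128 256 1024; do
    echo "== file-async-backlog=$backlog =="
    sysbench --time=60 --threads=2 fileio --file-total-size=10G --file-num=2 --file-block-size=16K --file-test-mode=rndrw run --file-fsync-freq=1 --file-fsync-mode=fdatasync --file-extra-flags=direct --file-io-mode=async --file-async-backlog=$backlog | grep -E "total number of events|avg:|95th percentile"
  done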

1.5 Increase the number of threads

Building on 1.4.1, increase the number of threads:

$ sysbench --time=300 --threads=16 fileio --file-total-size=10G --file-num=2 --file-block-size=16K --file-test-mode=rndrw run --file-fsync-freq=1 --file-fsync-mode=fdatasync --file-extra-flags=direct --file-io-mode=async --file-async-backlog=256
sysbench 1.0.20 (using bundled LuaJIT 2.1.0-beta2)

Running the test with following options:
Number of threads: 16
Initializing random number generator from current time


Extra file open flags: directio
2 files, 5GiB each
10GiB total file size
Block size 16KiB
Number of IO requests: 0
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 1 requests.
Calling fsync() at the end of test, Enabled.
Using asynchronous I/O mode
Doing random r/w test
Initializing worker threads...

Threads started!


File operations:
    reads/s:                      310.62
    writes/s:                     210.30
    fsyncs/s:                     420.70

Throughput:
    read, MiB/s:                  4.85
    written, MiB/s:               3.29

General statistics:
    total time:                          300.4850s
    total number of events:              282909

Latency (ms):
         min:                                    0.00
         avg:                                   16.97
         max:                                 3926.18
         95th percentile:                        0.06
         sum:                              4800671.84

Threads fairness:
    events (avg/stddev):           17681.8125/231.15
    execution time (avg/stddev):   300.0420/0.01
  • total number of events: 282909
  • avg: 16.97
  • max: 3926.18
  • 95th percentile: 0.06
  • events (avg/stddev): 17681.8125/231.15

Compared with 1.4.1, the total number of requests increases significantly, the average and maximum response times increase significantly, the 95th-percentile response time increases slightly (0.04 ms to 0.06 ms), and the average number of requests per thread falls sharply. Whether this counts as an improvement depends on how you weigh total throughput against response time.
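
One way to decide when extra threads stop paying off is to watch the device while the test runs; a minimal sketch using iostat from the sysstat package (column names such as %util and aqu-sz/avgqu-sz vary slightly between versions):

# in a second terminal while sysbench is running;
# %util near 100 and a growing queue size suggest the device is already saturated
$ iostat -x 1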

2. Dropping the filesystem cache and turning off swappiness

First check the memory status, paying attention to buff/cache.

$ free -m
              total        used        free      shared  buff/cache   available
Mem:           3770         176        3331          11         262        3370
Swap:          2047           0        2047

Drop the filesystem cache (the default value of vm.drop_caches is 0, meaning nothing is freed). To be safe, run sync first:

$ sync
$ sysctl vm.drop_caches=1
# or
$ echo 1 > /proc/sys/vm/drop_caches
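
For reference, vm.drop_caches accepts other values as well: 1 frees the page cache only, 2 frees reclaimable slab objects such as dentries and inodes, and 3 frees both:

$ sync
$ sysctl vm.drop_caches=3    # page cache plus dentries and inodes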

Check the memory status again; note that buff/cache has shrunk.

$ free -m
              total        used        free      shared  buff/cache   available
Mem:           3770         164        3549          11          56        3454
Swap:          2047           0        2047

Turn off swappiness (the default is 30 on CentOS and 60 on some other Linux distributions):

$ sysctl vm.swappiness=0
# or
$ echo 0 > /proc/sys/vm/swappiness
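
Both sysctl changes above are lost on reboot. If they turn out to be worth keeping, they can be persisted through a sysctl configuration file; a minimal sketch, assuming a distribution that reads /etc/sysctl.d (the file name is arbitrary):

$ echo "vm.swappiness = 0" > /etc/sysctl.d/99-benchmark.conf
$ sysctl --system    # reload all sysctl configuration files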

Then rerun the 1.4.1 test:

$ sysbench --time=300 --threads=2 fileio --file-total-size=10G --file-num=2 --file-block-size=16K --file-test-mode=rndrw run --file-fsync-freq=1 --file-fsync-mode=fdatasync --file-extra-flags=direct --file-io-mode=async --file-async-backlog=256
sysbench 1.0.20 (using bundled LuaJIT 2.1.0-beta2)

Running the test with following options:
Number of threads: 2
Initializing random number generator from current time


Extra file open flags: directio
2 files, 5GiB each
10GiB total file size
Block size 16KiB
Number of IO requests: 0
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 1 requests.
Calling fsync() at the end of test, Enabled.
Using asynchronous I/O mode
Doing random r/w test
Initializing worker threads...

Threads started!


File operations:
    reads/s:                      290.28
    writes/s:                     193.24
    fsyncs/s:                     386.49

Throughput:
    read, MiB/s:                  4.54
    written, MiB/s:               3.02

General statistics:
    total time:                          300.2884s
    total number of events:              261252

Latency (ms):
         min:                                    0.00
         avg:                                    2.30
         max:                                 1132.16
         95th percentile:                        0.05
         sum:                               599736.20

Threads fairness:
    events (avg/stddev):           130626.0000/1558.00
    execution time (avg/stddev):   299.8681/0.00
  • total number of events: 261252
  • avg: 2.30
  • max: 1132.16
  • 95th percentile: 0.05

Compared with 1.4.1, the total number of requests increases somewhat (about 3%), the average response time drops slightly, and the 95th-percentile and maximum response times increase slightly.

3. Optimizing the disk

"Disk" here does not just mean a traditional spinning hard drive; it refers to any physical storage, including single drives, RAID arrays, SSDs, and so on.
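
Before tuning anything, it helps to confirm what kind of device actually sits under the test directory; a minimal sketch (device names are examples, adjust to your system):

$ lsblk -d -o NAME,ROTA,SIZE,MODEL    # ROTA=1 means a rotational disk, 0 usually means an SSD
$ cat /sys/block/sda/queue/rotational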

3.1 Turning off the write-back cache with hdparm

Check the status of the disk's write-back cache:

$ hdparm -W /dev/sda

/dev/sda:
SG_IO: bad/missing sense data, sb[]:  70 00 05 00 00 00 00 0a 00 00 00 00 20 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
 write-caching = not supported

Write caching turns out not to be supported on this disk, so there is nothing to turn off.
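
On a disk that does support it, the cache could be toggled as below; a minimal sketch (note that turning the write cache off usually costs write performance, and the setting may not persist across power cycles on every drive):

$ hdparm -W0 /dev/sda    # turn the drive's write-back cache off
$ hdparm -W1 /dev/sda    # turn it back on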

3.2 Using a filesystem and physical disk with a different block size

This is only offered as an idea: use an SSD with a 16K block size (ideally one with a PCIe 4.0 or 5.0 interface) so that it lines up with the file (or filesystem) block size. I do not have a suitable test environment at the moment; readers who do can try it themselves.
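
To see how the current stack lines up, the device sector sizes and the filesystem block size can be checked as below; a minimal sketch assuming an ext4 filesystem on /dev/sda1 (use xfs_info instead for XFS):

$ lsblk -o NAME,LOG-SEC,PHY-SEC /dev/sda      # logical / physical sector size of the device
$ tune2fs -l /dev/sda1 | grep "Block size"    # filesystem block size (ext4)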


4. Summary

This post is only meant as a starting point. The tentative conclusion is that using direct + async (with a suitable async-backlog), increasing the thread count, dropping the filesystem cache, turning off swappiness, and tuning the disk are all effective ways to improve I/O performance. What the parameters should actually be set to is something readers need to determine with their own benchmarks, weighing which metrics matter most to them.
