ZFS, XFS, and EXT4 filesystems compared

This is my attempt to cut through the hype and uncertainty to find a storage subsystem that works. I compared XFS and EXT4 under Linux with ZFS under OpenSolaris. The machine is a 2200MHz Athlon 64 with 1GB of memory and eight 400GB disks attached to an Areca ARC-1220 SATA controller. Linux is 2.6.22.5 and OpenSolaris is 5.11 snv_70 "Nevada".

Aside from the different kernels and filesystems, I tested internal and external journal devices, and software and hardware RAID. On Linux, the software RAID is a 6-disk md "raid10" with the near-2 layout; on Solaris, the zpool is built from three mirrors of two disks each. Hardware RAID uses the Areca's RAID-10 for both Linux and Solaris. Drive caches are disabled throughout, but the controller's battery-backed cache is enabled when using hardware RAID. When an external journal is used, it sits on a single dedicated disk with write caching disabled. The operating system is installed on the remaining disk, which is not involved in the testing.
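
For concreteness, the arrays would have been built with commands along these lines (a sketch, not a transcript: device names are illustrative, and treating the ZFS "external journal" as a separate intent-log device is my reading of the setup):

    # Linux: 6-disk md raid10 with the near-2 layout
    mdadm --create /dev/md0 --level=10 --layout=n2 --raid-devices=6 /dev/sd[b-g]

    # Solaris: three 2-disk mirrors; "log" adds a separate ZFS intent log
    zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0 \
          mirror c1t4d0 c1t5d0 log c1t6d0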

XFS and ZFS filesystems were created with default parameters. EXT4 filesystems were created with -b 4096 to force a 4096-byte block size, which must match the block size of the external journal device. All filesystems were mounted with atime disabled, and EXT4 was mounted with extents enabled.
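
The mkfs and mount invocations would look roughly like this (reconstructed from the description above, not the actual commands; in 2.6.22 ext4 was still the ext4dev development code):

    mkfs.xfs -f /dev/md0                           # XFS, default parameters
    mke2fs -O journal_dev -b 4096 /dev/sdh1        # dedicated journal device
    mke2fs -b 4096 -J device=/dev/sdh1 /dev/md0    # EXT4; block sizes must match,
                                                   # hence the forced -b 4096
    mount -t xfs -o noatime /dev/md0 /mnt/test
    mount -t ext4dev -o noatime,extents /dev/md0 /mnt/test
    zfs set atime=off tank                         # ZFS equivalent of noatime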

The benchmark uses Bonnie++ 1.03a, randomio 1.3, FFSB 5.2.1, postmark 1.51, dd, tar, and cpio with a tarball of Linux 2.6.22.5. FFSB and randomio were patched to build on Solaris and to use the Solaris equivalent of Linux's O_DIRECT. The test proceeded as follows:

  1. bonnie++
  2. copy linux kernel tarball to filesystem under test
  3. untar it
  4. dd if=/dev/zero of=bigfile bs=1048576 count=4096
  5. randomio bigfile 10 .25 .01 2048 60 1
  6. find linux-2.6.22.5 | cpio -pdm linux
  7. postmark 1, 10, and 100
  8. ffsb read, write, mixed, and huge
  9. tar -cf linux.tar linux
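
Step 4 writes a 4GiB file (4096 blocks of 1MiB), which step 5 then exercises with random I/O; as I read the randomio arguments, that is 10 concurrent threads, 25% writes, 1% of writes fsynced, 2048-byte requests, over a 60-second run. The kernel-tree steps would have been timed with something like the following (the timing wrapper and sync points are my assumption, not shown in the original):

    time sh -c 'tar xf linux-2.6.22.5.tar && sync'              # untar
    time sh -c 'find linux-2.6.22.5 | cpio -pdm linux && sync'  # copy
    time sh -c 'tar -cf linux.tar linux && sync'                # tar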

The results are below. EXT4 is fast for metadata operations, tar, untar, cpio, and postmark. EXT4 is much faster than the others under FFSB. EXT4 with hardware RAID and an external journal device is ludicrously fast. EXT4 seems to interact badly with software RAID, probably because mkfs fails to query the md RAID layout when setting the filesystem's stripe parameters.
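
If that hypothesis is right, the workaround would be to hand the layout to mke2fs explicitly. For a 6-disk raid10 near-2 array with the default 64KiB chunk there are 3 data disks per stripe, so stride = 64KiB / 4KiB = 16 blocks and stripe width = 16 x 3 = 48 blocks. A sketch (option names as in recent e2fsprogs; untested here):

    mke2fs -b 4096 -E stride=16,stripe-width=48 -J device=/dev/sdh1 /dev/md0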

ZFS has excellent performance on the metadata tests. Its sequential transfer is very bad with hardware RAID and appalling with software RAID. ZFS can copy the Linux kernel source tree in only 3 seconds! Under mixed loads, ZFS shows equal latency for read and write requests, which is good.

XFS has good sequential transfer under Bonnie++. Oddly, XFS has better sequential reads when using an external journal, which makes little sense; is noatime broken on XFS? XFS is very slow on all the metadata tests. XFS takes the RAID layout into consideration, and it performs well on randomio with either hardware or software RAID.
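
mkfs.xfs reads the stripe geometry from md automatically; on the hardware RAID, where no layout is visible to the OS, the same hint could be supplied by hand, for example for a 64KiB stripe unit across 3 data disks (illustrative values, not what the Areca actually reports):

    mkfs.xfs -f -d su=64k,sw=3 /dev/sda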

The Areca controller seems to have some problem with large requests. The "huge" FFSB test does reads and writes in blocks of 1MiB, and there the hardware RAID is only about half as fast as the software RAID. To rule out something weird in the Areca driver for Solaris, I spot-checked sequential transfer using dd, and rates were in excess of 150MB/s.
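
The spot-check amounts to streaming a large file through dd and watching the rate, along these lines (the read-back step is my assumption):

    dd if=/dev/zero of=bigfile bs=1048576 count=4096    # sequential write
    dd if=bigfile of=/dev/null bs=1048576               # sequential read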

Columns are journal placement (in = internal, ex = external) and RAID type (sw = software, hw = hardware) for each filesystem. Entries of >99999 are metadata rates too fast for bonnie++ to measure accurately.

                                    xfs                          ext4                          zfs
                          in/sw  ex/sw  in/hw  ex/hw  in/sw  ex/sw  in/hw  ex/hw  in/sw  ex/sw  in/hw  ex/hw

bonnie++
read MB/s                    129    141    202    205    121    120    193    195     53     55    168    166
write MB/s                   156    155    187    185    162    176    160    156     95     77     80     80
mixed MB/s                    45     46     70     69     53     54     73     71     37     36     54     50
sequential create /s        3874   1887  17307   5238 >99999 >99999  15240 >99999  22975 >99999 >99999 >99999
sequential delete /s        1958   2025   9946   2466 >99999 >99999 >99999 >99999 >99999 >99999 >99999 >99999
random create /s            3909   2208  12599   2754 >99999 >99999  15058 >99999 >99999  22269 >99999  22067
random delete /s            3366   1993   6963   1836 >99999 >99999 >99999 >99999 >99999 >99999  15542 >99999

randomio
create 4GB file MB/s         125    126    110    102    144    143    123    141     76     75     76     65
random io/s                  408    381    550    496    192    190    605    609    443    451    304    318
read latency ms               11     10     22     24     49     49     20     20     23     22     33     31
write latency ms               6      6      7      5     77     61     62     66     23     24     32     32
read latency std dev ms       11      8     50     52     38     38     54     47     32     27     63     38
write latency std dev ms      54     60     53     53     40     40     56     46     43     61     48     43

ffsb (multithreaded)
read io/s                    534    542    488    488   1079   1058    549   1018    496    511    300    319
write io/s                   709    811    816    866     10     81    731    755     23     31     45     34
mixed io/s                   241    282    294    408    316    301    447    601    290    300    282    244
mixed huge io/s             4375   4389   3563   3480   7077   7928   3862   4529   6323   5727   2766   3119

kernel tree
untar kernel s                69     42     11     10     45      1      1      4     15     18     21     18
copy kernel s                194    197    213    212    273    370     26      4      3      3      8      4
tar kernel s                 161    162    213    215     81     12      7      7     30     26     31     32

postmark 1
read KB/s                    421    536   1670    781   5340   6680   8900   8900  26710  13360  13360   8900
write KB/s                  1310   1660   5300   2420  16970  21210  28280  28280  84830  42410  42410  28280
postmark 10
read KB/s                    515    817   2260   1180   3880   2710   5430   9050   9050  13570   9050  13570
write KB/s                  1580   2500   7090   3700  12150   8510  17020  28360  28360  42540  28360  42540
postmark 100
read KB/s                    591   1010   3020   1940   1130    868  13570   9050   6780   9050   9050   9050
write KB/s                  1810   3150   9450   6080   3550   2660  42540  28360  21270  28360  28360  28360
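
Postmark is driven by a short command script. Assuming the 1, 10, and 100 in the labels are the subdirectory counts, each run would look something like this (the file and transaction counts are placeholders, not the values used here):

    set location /mnt/test
    set subdirectories 10
    set number 20000
    set transactions 50000
    run
    quit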
 