Recently, a business migration at my company called for some simple testing of a standalone MongoDB instance. After hundreds of millions of records had been written, database performance did take a noticeable hit, so this post is a brief record of the process.
Since this is a non-critical workload, with roughly 700 write requests and about 10 read requests per second according to our statistics, and since writes have no strict real-time requirement, the test does not chase peak performance.
01
The test machine's basic specs are listed below: 2 cores and 8 GB of RAM. For storage, the physical host has a SAS mechanical hard drive; sequential throughput measured with dd is roughly 160 MB/s.
----------------------------------------------------------------------
CPU Model : Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS)
CPU Cores : 2
CPU Frequency : 2799.988 MHz
CPU Cache : 16384 KB
Total Disk : 485.1 GB (1.6 GB Used)
Total Mem : 7962 MB (146 MB Used)
Total Swap : 0 MB (0 MB Used)
System uptime : 0 days, 16 hour 2 min
Load average : 0.00, 0.00, 0.00
OS : Ubuntu 20.04.2 LTS
Arch : x86_64 (64 Bit)
Kernel : 5.4.0-72-generic
TCP CC : cubic
Virtualization : KVM
----------------------------------------------------------------------
I/O Speed(1st run) : 122 MB/s
I/O Speed(2nd run) : 174 MB/s
I/O Speed(3rd run) : 182 MB/s
Average I/O speed : 159.3 MB/s
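The exact dd invocation is not shown in the post; a typical sequential-write throughput test looks something like the sketch below (the filename ddtest and the 64 MB size are illustrative; a larger count gives a more stable figure on real hardware).

```shell
# Sketch of a dd sequential-write throughput test; not the verbatim command
# from the post. conv=fdatasync flushes data to disk before dd reports,
# so the MB/s figure reflects the drive rather than the page cache.
dd if=/dev/zero of=ddtest bs=1M count=64 conv=fdatasync
```

Remember to delete the ddtest file afterwards.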
The disk's random-read IOPS was then measured with fio:
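The exact fio command line is not shown either; judging from the output header below (randread, 4 KiB blocks, libaio, iodepth 8, a roughly 30-second run over about 10 GB of data), it was presumably along these lines, with test as the job name and fiotest as a placeholder data file:

```shell
# Reconstructed from the fio output header; not the verbatim command.
# "fiotest" is a placeholder data file name.
fio --name=test --filename=fiotest \
    --rw=randread --bs=4k --ioengine=libaio --iodepth=8 \
    --direct=1 --size=10G --runtime=30 --time_based
```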
test: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=8
fio-3.16
Starting 1 process
test: (groupid=0, jobs=1): err= 0: pid=3987: Fri Apr 23 03:52:49 2021
read: IOPS=80.1k, BW=313MiB/s (328MB/s)(9385MiB/30001msec)
slat (usec): min=3, max=8935, avg= 9.72, stdev= 9.28
clat (usec): min=23, max=52148, avg=88.84, stdev=225.19
lat (usec): min=28, max=52152, avg=98.79, stdev=225.39
clat percentiles (usec):
| 1.00th=[ 57], 5.00th=[ 71], 10.00th=[ 77], 20.00th=[ 84],
| 30.00th=[ 85], 40.00th=[ 86], 50.00th=[ 86], 60.00th=[ 87],
| 70.00th=[ 88], 80.00th=[ 91], 90.00th=[ 100], 95.00th=[ 106],
| 99.00th=[ 133], 99.50th=[ 147], 99.90th=[ 223], 99.95th=[ 239],
| 99.99th=[ 4686]
bw ( KiB/s): min=287584, max=334432, per=99.82%, avg=319763.93, stdev=12882.10, samples=59
iops : min=71896, max=83608, avg=79941.02, stdev=3220.44, samples=59
lat (usec) : 50=0.42%, 100=90.11%, 250=9.43%, 500=0.02%, 750=0.01%
lat (usec) : 1000=0.01%
lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01%
lat (msec) : 100=0.01%
cpu : usr=18.94%, sys=78.77%, ctx=10511, majf=0, minf=21
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=2402656,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=8
Run status group 0 (all jobs):
READ: bw=313MiB/s (328MB/s), 313MiB/s-313MiB/s (328MB/s-328MB/s), io=9385MiB (9841MB), run=30001-30001msec
Disk stats (read/write):
vda: ios=2391051/13, merge=0/14, ticks=75960/8, in_queue=3252, util=99.78%
Given that this workload sees about 700 write requests per second in production, and the average document is about 2 KB, writes amount to roughly 1.4 MB/s, so disk I/O should not be a problem on any reasonably healthy drive.
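As a quick sanity check on that figure:

```shell
# 700 inserts/s at ~2 KB per document
echo "$((700 * 2)) KB/s"   # 1400 KB/s, i.e. about 1.4 MB/s
```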
MongoDB 4.4.5 was installed, with every parameter left at its default.
db.version()
4.4.5
db.serverStatus().storageEngine.name
wiredTiger
By default, on a machine with 8 GB of RAM, MongoDB's WiredTiger cache uses (8 - 1) / 2 = 3.5 GB.
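That figure comes from WiredTiger's documented default of 50% of (RAM minus 1 GB), with a 256 MB floor. Plugging in this box's reported 7962 MB:

```shell
# Default WiredTiger cache = 0.5 * (RAM - 1 GB), but never below 256 MB.
total_mb=7962                          # "Total Mem" reported above
cache_mb=$(( (total_mb - 1024) / 2 ))
echo "${cache_mb} MB"                  # about 3.4 GB, close to the 3.5 GB estimate
```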
Official documentation:
https://docs.mongodb.com/manual/reference/configuration-options/#storage.wiredTiger.engineConfig.cacheSizeGB
storage.wiredTiger.engineConfig.cacheSizeGB
Type: float
Defines the maxim