828 Huawei Cloud Essay Contest | Redis Performance Benchmarking on a Huawei Cloud Flexus X Cloud Server Instance

Introduction to the Flexus X Cloud Server Instance

Huawei Cloud Flexus is a cloud service line designed for small and medium-sized businesses and developers, offering an out-of-the-box experience and notable performance gains. Among its offerings, the Flexus X cloud server instance is the standout.

The Flexus X cloud server instance is a next-generation flexible compute server aimed at small and medium-sized businesses and developers. It adapts intelligently to varying business loads and is particularly suited to low- and medium-load scenarios such as e-commerce live streaming, corporate websites, development and test environments, game servers, and audio/video services. Compared with the Flexus L application server instance, the X instance offers a richer set of public images, supports flexible vCPU and memory configurations, and can adjust resources intelligently to meet higher load requirements.

The Huawei Cloud Flexus X instance delivers strong performance. It embeds intelligent application-tuning algorithms and multiple layers of low-level tuning and acceleration, distilled from years of experience of Huawei's engineers. In basic mode, a Flexus X instance can reach 1.6x the GeekBench single-core and multi-core scores of comparable dedicated instances in the industry; in performance mode, under the same performance and reliability SLA, it can exceed the performance of the industry's C-series, G-series, R-series, and flagship S-series cloud servers.

What This Review Focuses On

Huawei Cloud Flexus provides an out-of-the-box experience, with exclusive technologies such as X-Turbo acceleration and large-model-driven low-level intelligent scheduling, and the Flexus X instance offers exclusive acceleration for business-critical applications. MySQL, Redis, and Nginx deployed on a Flexus X instance can reach up to 6x the performance of comparable dedicated instances in the industry (for MySQL), and up to 2x under sustained long-running workloads.

This review puts the Redis deployment on a Flexus X instance to the test to see how it performs in benchmarks.

About the Redis Benchmark Tool

Redis Benchmark (redis-benchmark) is the official tool provided by the Redis project for testing Redis performance. It helps developers and operations engineers evaluate the response speed and throughput of a Redis server.

Main features:

  1. Performance testing: simulates various Redis operations (such as GET, SET, and LPUSH) to evaluate how Redis performs under different workloads.
  2. Concurrency testing: supports a configurable number of concurrent connections, simulating many clients operating on Redis at the same time to test behavior under high concurrency.
  3. Result analysis: reports statistics such as response times and requests per second, helping users identify Redis performance bottlenecks.

Since redis-benchmark is the official benchmarking tool from the Redis project, there is little reason to question the fairness of the benchmark. A few typical invocations are shown below for reference.
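
These are generic illustrations using standard redis-benchmark options (-h/-p for host and port, -c for client count, -n for total requests, -t to select tests, -d for payload size, -P for pipelining, -q for a summary-only report); they are not the exact commands used later in this article, so adjust the host and workload to your own setup.

# Run only GET/SET with 50 clients and 100,000 total requests against a local Redis
redis-benchmark -h 127.0.0.1 -p 6379 -c 50 -n 100000 -t get,set

# Use a 1 KB payload and pipeline 16 commands per connection to stress throughput
redis-benchmark -c 50 -n 100000 -d 1024 -P 16 -t set

# Quiet mode: print only the requests-per-second summary for each test
redis-benchmark -q -n 100000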

Test Environment

Operating system: Huawei Cloud EulerOS 2.0 Standard Edition, 64-bit

CPU: 4 cores

Memory: 12 GB

Disk: 100 GB

Redis version: v6.2.7

Note: the Redis acceleration provided by the Flexus X instance was enabled for these tests.
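
For completeness, here is a minimal sketch of standard Linux commands that can be used to verify the environment above on the instance (output will vary by image):

# OS release and kernel version
cat /etc/os-release
uname -r

# CPU core count and memory
nproc
free -h

# Disk size
lsblk

# Redis server version
redis-server --version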

Running the Tests

GET/SET Benchmark

Let's go straight to 300,000 total requests with 1,000 concurrent clients and see how it performs:

[root@flexusx-45d1 ~]# redis-benchmark  -c 1000 -n 300000 -t get,set

====== SET ======                                                     

  300000 requests completed in 1.74 seconds

  1000 parallel clients

  3 bytes payload

  keep alive: 1

  host configuration "save": 3600 1 300 100 60 10000

  host configuration "appendonly": no

  multi-thread: no



Latency by percentile distribution:

0.000% <= 1.031 milliseconds (cumulative count 2)

50.000% <= 2.911 milliseconds (cumulative count 157216)

75.000% <= 2.975 milliseconds (cumulative count 229832)

87.500% <= 3.031 milliseconds (cumulative count 264004)

93.750% <= 3.103 milliseconds (cumulative count 281690)

96.875% <= 3.191 milliseconds (cumulative count 290866)

98.438% <= 3.271 milliseconds (cumulative count 295405)

99.219% <= 3.447 milliseconds (cumulative count 297675)

99.609% <= 4.239 milliseconds (cumulative count 298833)

99.805% <= 4.871 milliseconds (cumulative count 299420)

99.902% <= 5.215 milliseconds (cumulative count 299712)

99.951% <= 5.511 milliseconds (cumulative count 299857)

99.976% <= 5.647 milliseconds (cumulative count 299929)

99.988% <= 5.743 milliseconds (cumulative count 299964)

99.994% <= 5.855 milliseconds (cumulative count 299982)

99.997% <= 6.655 milliseconds (cumulative count 299991)

99.998% <= 6.679 milliseconds (cumulative count 299996)

99.999% <= 6.695 milliseconds (cumulative count 299998)

100.000% <= 6.783 milliseconds (cumulative count 299999)

100.000% <= 6.791 milliseconds (cumulative count 300000)

100.000% <= 6.791 milliseconds (cumulative count 300000)



Cumulative distribution of latencies:

0.000% <= 0.103 milliseconds (cumulative count 0)

0.010% <= 1.103 milliseconds (cumulative count 31)

0.023% <= 1.207 milliseconds (cumulative count 68)

0.032% <= 1.303 milliseconds (cumulative count 96)

0.043% <= 1.407 milliseconds (cumulative count 128)

0.054% <= 1.503 milliseconds (cumulative count 162)

0.066% <= 1.607 milliseconds (cumulative count 199)

0.077% <= 1.703 milliseconds (cumulative count 232)

0.083% <= 1.807 milliseconds (cumulative count 248)

0.096% <= 1.903 milliseconds (cumulative count 289)

0.119% <= 2.007 milliseconds (cumulative count 357)

0.140% <= 2.103 milliseconds (cumulative count 420)

93.897% <= 3.103 milliseconds (cumulative count 281690)

99.555% <= 4.103 milliseconds (cumulative count 298664)

99.878% <= 5.103 milliseconds (cumulative count 299633)

99.997% <= 6.103 milliseconds (cumulative count 299990)

100.000% <= 7.103 milliseconds (cumulative count 300000)



Summary:

  throughput summary: 172413.80 requests per second

  latency summary (msec):

          avg       min       p50       p95       p99       max

        2.921     1.024     2.911     3.135     3.367     6.791

====== GET ======                                                     

  300000 requests completed in 1.75 seconds

  1000 parallel clients

  3 bytes payload

  keep alive: 1

  host configuration "save": 3600 1 300 100 60 10000

  host configuration "appendonly": no

  multi-thread: no



Latency by percentile distribution:

0.000% <= 1.007 milliseconds (cumulative count 1)

50.000% <= 2.935 milliseconds (cumulative count 156417)

75.000% <= 2.999 milliseconds (cumulative count 232359)

87.500% <= 3.039 milliseconds (cumulative count 265842)

93.750% <= 3.079 milliseconds (cumulative count 283317)

96.875% <= 3.127 milliseconds (cumulative count 290731)

98.438% <= 3.215 milliseconds (cumulative count 295570)

99.219% <= 3.431 milliseconds (cumulative count 297679)

99.609% <= 4.639 milliseconds (cumulative count 298830)

99.805% <= 7.271 milliseconds (cumulative count 299416)

99.902% <= 9.535 milliseconds (cumulative count 299708)

99.951% <= 10.831 milliseconds (cumulative count 299854)

99.976% <= 11.431 milliseconds (cumulative count 299927)

99.988% <= 11.799 milliseconds (cumulative count 299964)

99.994% <= 11.991 milliseconds (cumulative count 299982)

99.997% <= 12.087 milliseconds (cumulative count 299991)

99.998% <= 12.151 milliseconds (cumulative count 299996)

99.999% <= 12.183 milliseconds (cumulative count 299998)

100.000% <= 12.191 milliseconds (cumulative count 299999)

100.000% <= 12.207 milliseconds (cumulative count 300000)

100.000% <= 12.207 milliseconds (cumulative count 300000)



Cumulative distribution of latencies:

0.000% <= 0.103 milliseconds (cumulative count 0)

0.000% <= 1.007 milliseconds (cumulative count 1)

0.013% <= 1.103 milliseconds (cumulative count 39)

0.024% <= 1.207 milliseconds (cumulative count 73)

0.038% <= 1.303 milliseconds (cumulative count 114)

0.058% <= 1.407 milliseconds (cumulative count 175)

0.081% <= 1.503 milliseconds (cumulative count 242)

0.089% <= 1.607 milliseconds (cumulative count 266)

0.095% <= 1.703 milliseconds (cumulative count 285)

0.098% <= 1.807 milliseconds (cumulative count 294)

0.102% <= 1.903 milliseconds (cumulative count 305)

0.106% <= 2.007 milliseconds (cumulative count 317)

0.113% <= 2.103 milliseconds (cumulative count 338)

95.835% <= 3.103 milliseconds (cumulative count 287505)

99.459% <= 4.103 milliseconds (cumulative count 298376)

99.695% <= 5.103 milliseconds (cumulative count 299085)

99.753% <= 6.103 milliseconds (cumulative count 299258)

99.798% <= 7.103 milliseconds (cumulative count 299393)

99.843% <= 8.103 milliseconds (cumulative count 299528)

99.882% <= 9.103 milliseconds (cumulative count 299646)

99.923% <= 10.103 milliseconds (cumulative count 299769)

99.962% <= 11.103 milliseconds (cumulative count 299885)

99.997% <= 12.103 milliseconds (cumulative count 299992)

100.000% <= 13.103 milliseconds (cumulative count 300000)



Summary:

  throughput summary: 171135.19 requests per second

  latency summary (msec):

          avg       min       p50       p95       p99       max

        2.942     1.000     2.935     3.095     3.287    12.207

GET/SET Benchmark with Multi-threading Enabled

The benchmark above did not use the tool's multi-threading capability. Next, let's enable multi-threading for the GET/SET tests and see what QPS we get.
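
The --threads option enables multi-threaded mode in the benchmark client itself; it is available in the redis-benchmark shipped with recent Redis releases, including the Redis 6.2.7 used here. As a quick sanity check before relying on it:

# Confirm that the installed redis-benchmark lists the --threads option in its usage text
redis-benchmark --help | grep -- "--threads"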

[root@flexusx-45d1 ~]# redis-benchmark --threads 4 -c 1000 -n 300000 -t get,set

====== SET ======                                                     

  300000 requests completed in 1.26 seconds

  1000 parallel clients

  3 bytes payload

  keep alive: 1

  host configuration "save": 3600 1 300 100 60 10000

  host configuration "appendonly": no

  multi-thread: yes

  threads: 4



Latency by percentile distribution:

0.000% <= 0.847 milliseconds (cumulative count 1)

50.000% <= 3.407 milliseconds (cumulative count 155439)

75.000% <= 3.511 milliseconds (cumulative count 228149)

87.500% <= 3.671 milliseconds (cumulative count 262828)

93.750% <= 4.791 milliseconds (cumulative count 281268)

96.875% <= 6.783 milliseconds (cumulative count 290681)

98.438% <= 6.927 milliseconds (cumulative count 295410)

99.219% <= 7.063 milliseconds (cumulative count 297758)

99.609% <= 7.175 milliseconds (cumulative count 298873)

99.805% <= 7.319 milliseconds (cumulative count 299420)

99.902% <= 7.615 milliseconds (cumulative count 299711)

99.951% <= 8.127 milliseconds (cumulative count 299857)

99.976% <= 9.335 milliseconds (cumulative count 299927)

99.988% <= 9.447 milliseconds (cumulative count 299966)

99.994% <= 9.535 milliseconds (cumulative count 299983)

99.997% <= 10.663 milliseconds (cumulative count 299991)

99.998% <= 10.703 milliseconds (cumulative count 299997)

99.999% <= 10.711 milliseconds (cumulative count 299998)

100.000% <= 10.719 milliseconds (cumulative count 299999)

100.000% <= 10.727 milliseconds (cumulative count 300000)

100.000% <= 10.727 milliseconds (cumulative count 300000)



Cumulative distribution of latencies:

0.000% <= 0.103 milliseconds (cumulative count 0)

0.003% <= 0.903 milliseconds (cumulative count 9)

0.008% <= 1.007 milliseconds (cumulative count 23)

0.012% <= 1.103 milliseconds (cumulative count 36)

0.020% <= 1.207 milliseconds (cumulative count 61)

0.029% <= 1.303 milliseconds (cumulative count 87)

0.041% <= 1.407 milliseconds (cumulative count 124)

0.054% <= 1.503 milliseconds (cumulative count 161)

0.065% <= 1.607 milliseconds (cumulative count 196)

0.082% <= 1.703 milliseconds (cumulative count 245)

0.107% <= 1.807 milliseconds (cumulative count 321)

0.135% <= 1.903 milliseconds (cumulative count 405)

0.150% <= 2.007 milliseconds (cumulative count 451)

0.164% <= 2.103 milliseconds (cumulative count 491)

4.934% <= 3.103 milliseconds (cumulative count 14802)

90.753% <= 4.103 milliseconds (cumulative count 272258)

94.011% <= 5.103 milliseconds (cumulative count 282033)

94.614% <= 6.103 milliseconds (cumulative count 283841)

99.410% <= 7.103 milliseconds (cumulative count 298231)

99.948% <= 8.103 milliseconds (cumulative count 299845)

99.968% <= 9.103 milliseconds (cumulative count 299903)

99.994% <= 10.103 milliseconds (cumulative count 299983)

100.000% <= 11.103 milliseconds (cumulative count 300000)



Summary:

  throughput summary: 238473.77 requests per second

  latency summary (msec):

          avg       min       p50       p95       p99       max

        3.608     0.840     3.407     6.375     7.015    10.727

====== GET ======                                                     

  300000 requests completed in 1.26 seconds

  1000 parallel clients

  3 bytes payload

  keep alive: 1

  host configuration "save": 3600 1 300 100 60 10000

  host configuration "appendonly": no

  multi-thread: yes

  threads: 4



Latency by percentile distribution:

0.000% <= 1.295 milliseconds (cumulative count 1)

50.000% <= 3.335 milliseconds (cumulative count 151509)

75.000% <= 3.431 milliseconds (cumulative count 226229)

87.500% <= 3.607 milliseconds (cumulative count 262983)

93.750% <= 4.327 milliseconds (cumulative count 281314)

96.875% <= 6.559 milliseconds (cumulative count 290762)

98.438% <= 6.807 milliseconds (cumulative count 295466)

99.219% <= 6.903 milliseconds (cumulative count 297683)

99.609% <= 7.015 milliseconds (cumulative count 298851)

99.805% <= 7.207 milliseconds (cumulative count 299415)

99.902% <= 7.495 milliseconds (cumulative count 299709)

99.951% <= 7.975 milliseconds (cumulative count 299855)

99.976% <= 8.255 milliseconds (cumulative count 299928)

99.988% <= 8.479 milliseconds (cumulative count 299964)

99.994% <= 8.591 milliseconds (cumulative count 299982)

99.997% <= 10.959 milliseconds (cumulative count 299991)

99.998% <= 10.983 milliseconds (cumulative count 299996)

99.999% <= 11.015 milliseconds (cumulative count 299998)

100.000% <= 11.023 milliseconds (cumulative count 299999)

100.000% <= 11.031 milliseconds (cumulative count 300000)

100.000% <= 11.031 milliseconds (cumulative count 300000)



Cumulative distribution of latencies:

0.000% <= 0.103 milliseconds (cumulative count 0)

0.001% <= 1.303 milliseconds (cumulative count 2)

0.010% <= 1.407 milliseconds (cumulative count 31)

0.022% <= 1.503 milliseconds (cumulative count 66)

0.035% <= 1.607 milliseconds (cumulative count 105)

0.056% <= 1.703 milliseconds (cumulative count 168)

0.077% <= 1.807 milliseconds (cumulative count 230)

0.083% <= 1.903 milliseconds (cumulative count 250)

0.085% <= 2.007 milliseconds (cumulative count 256)

0.129% <= 2.103 milliseconds (cumulative count 388)

9.830% <= 3.103 milliseconds (cumulative count 29489)

92.365% <= 4.103 milliseconds (cumulative count 277095)

94.554% <= 5.103 milliseconds (cumulative count 283662)

95.195% <= 6.103 milliseconds (cumulative count 285584)

99.710% <= 7.103 milliseconds (cumulative count 299130)

99.961% <= 8.103 milliseconds (cumulative count 299883)

99.995% <= 9.103 milliseconds (cumulative count 299986)

100.000% <= 11.103 milliseconds (cumulative count 300000)



Summary:

  throughput summary: 238853.50 requests per second

  latency summary (msec):

          avg       min       p50       p95       p99       max

        3.513     1.288     3.335     6.015     6.871    11.031

Test Results:

Redis benchmark results on the Huawei Cloud Flexus X instance (requests per second):

| Multi-threading | GET       | SET       |
| --------------- | --------- | --------- |
| Disabled        | 171135.19 | 172413.80 |
| Enabled         | 238853.50 | 238473.77 |

This test covered only GET/SET performance rather than every data type, but that does not change the overall picture. The results were surprisingly strong: without multi-threading, QPS already exceeded 170,000 per second, and with multi-threading enabled it reached roughly 240,000 per second. That is more than double the roughly 100,000 reads/writes per second commonly cited as a baseline for Redis. The Redis acceleration on the Huawei Cloud Flexus X instance clearly delivers.
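
If only the headline numbers are needed rather than the full latency distributions, the same two runs can be repeated in quiet mode, which prints just the requests-per-second summary for each command (a convenience sketch using the same parameters as above):

# Single-threaded benchmark client, summary only
redis-benchmark -c 1000 -n 300000 -t get,set -q

# Four benchmark client threads, summary only
redis-benchmark --threads 4 -c 1000 -n 300000 -t get,set -q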

Summary:

For most small and medium-sized businesses, tuning middleware such as Redis, MySQL, or Nginx is a difficult undertaking. Huawei Cloud's out-of-the-box Flexus X instance acceleration lowers the technical bar while improving application performance, genuinely reducing costs and boosting efficiency, quality, and speed.
