1. Overview

| | Requests per second |
| --- | --- |
| Nginx server performance | 50,000 – 60,000 |
| After optimization | 100,000 |

2. Optimization
2.1 CPU configuration
6-core configuration:
worker_processes 6;
worker_cpu_affinity 000001 000010 000100 001000 010000 100000;
8-core configuration:
worker_processes 8;
worker_cpu_affinity 00000001 00000010 00000100 00001000 00010000 00100000 01000000 10000000;
Edit the configuration file:
vi /appdata/nginx/conf/nginx.conf
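To confirm that the worker count matches the hardware and that each worker really ended up bound to its own core, something like the following can be run on the server (a minimal sketch, assuming bash and util-linux's taskset):

```bash
# Rough check that worker_processes / worker_cpu_affinity took effect.
nproc                                  # core count; should match worker_processes
for pid in $(pgrep -f 'nginx: worker'); do
    taskset -pc "$pid"                 # prints the CPU list each worker is bound to
done
```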
2.2 Configuring the http module
sendfile on;
tcp_nopush on;
client_max_body_size 1024m;
client_body_buffer_size 10m;
client_header_buffer_size 10m;
proxy_buffers 4 128k;
proxy_busy_buffers_size 128k;
open_file_cache max=102400 inactive=20s;
#Enables the open-file cache (off by default). max sets how many entries are cached and is best kept in line with the open-file limit; inactive is how long a file may go unrequested before its cache entry is dropped.
open_file_cache_valid 30s;
keepalive_timeout 60;
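After editing the http block, the effective configuration can be dumped to confirm the directives were actually picked up (a minimal sketch, assuming the config path above and an nginx binary on the PATH):

```bash
# Validate the edited config, then check that the tuned directives appear in what Nginx loads.
nginx -t -c /appdata/nginx/conf/nginx.conf
nginx -T -c /appdata/nginx/conf/nginx.conf | grep -E 'sendfile|tcp_nopush|open_file_cache|keepalive_timeout'
nginx -s reload          # apply the changes without dropping existing connections
```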
2.3 Linux system tuning
vi /etc/sysctl.conf
Contents of sysctl.conf:
vm.swappiness=0
# Increase the size of the TCP SYN backlog queue
net.ipv4.tcp_max_syn_backlog = 262144
# Cap the number of TIME-WAIT sockets so resources are reclaimed sooner after disconnects
net.ipv4.tcp_max_tw_buckets = 8000
# Enable reuse of TIME-WAIT sockets.
# Allows TIME-WAIT sockets to be reused for new TCP connections; the default is 0 (disabled).
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
# How long a socket stays in FIN-WAIT-2 when the close was initiated by this end.
net.ipv4.tcp_fin_timeout = 10
# Widen the local (ephemeral) port range
net.ipv4.ip_local_port_range = 1024 65535
# For a database server accessed only locally (note: this overrides the tcp_fin_timeout value set above)
net.ipv4.tcp_fin_timeout = 1
# Length of the listen queue per port
net.core.somaxconn=65535
# Maximum number of packets queued on the receive side when the interface receives them faster than the kernel can process
net.core.netdev_max_backlog=65535
net.core.wmem_default=87380
net.core.wmem_max=16777216
net.core.rmem_default=87380
net.core.rmem_max=16777216
# Enable SYN cookies.
# When the SYN backlog overflows, fall back to cookies; this mitigates small-scale SYN flood attacks.
# The default is 0 (disabled).
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_orphans = 262144
vi /etc/security/limits.conf
* soft nofile 65535
* hard nofile 65535
* soft nproc 65535
* hard nproc 65535
ulimit -n 65535
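A sketch of how these changes are typically applied and spot-checked; note that limits.conf only takes effect for new login sessions, so log in again (or restart Nginx from a fresh shell) before relying on the raised file limit:

```bash
# Load the new kernel parameters and spot-check a few of them.
sysctl -p                                       # re-read /etc/sysctl.conf
sysctl net.ipv4.tcp_tw_reuse net.core.somaxconn net.ipv4.tcp_max_syn_backlog
ulimit -n                                       # open-file limit of the current session
```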
3. Load Testing

Tool installation guide: https://blog.csdn.net/qq_32415063/article/details/105896406
3.1 Load test: 200,000 requests
ab -n 200000 -c 5000 http://192.168.23.129:80/index.html
Report:
```report
Server Software: nginx/1.18.0
Server Hostname: 192.168.23.129
Server Port: 80
Document Path: /index.html
Document Length: 612 bytes
Concurrency Level: 5000
Time taken for tests: 82.459 seconds
Complete requests: 200000
Failed requests: 0
Write errors: 0
Total transferred: 168999155 bytes
HTML transferred: 122399388 bytes
Requests per second: 2425.44 [#/sec] (mean)
Time per request: 2061.482 [ms] (mean)
Time per request: 0.412 [ms] (mean, across all concurrent requests)
Transfer rate: 2001.45 [Kbytes/sec] received
Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0  875 2739.5     78   31732
Processing:     3  209  722.7     77   61598
Waiting:        0  206  709.2     76   36979
Total:          3 1084 2849.8    180   68689

Percentage of the requests served within a certain time (ms)
  50%    180
  66%    458
  75%   1130
  80%   1169
  90%   3102
  95%   3552
  98%   7512
  99%  15209
 100%  68689 (longest request)
```
3.2 Load test: 1,000,000 requests
ab -n 1000000 -c 100 http://172.16.1.140/tmp/1k.txt
3.2.1 Test environment summary
Server platform: Dell R720
CPU: Intel Xeon E5-2609 v2 @ 2.50GHz × 2 (8 cores total)
Memory: 128 GB
Disk: 3 TB RAID 5
OS: CentOS 7.2 x64
Client machine:
CPU: Intel(R) Core(TM) i5-4460 @ 3.20GHz
Memory: 16 GB; OS: Fedora 22 x64
Benchmark tool: Apache Bench (ab)
Tool installation: sudo dnf install httpd-tools -y
The client and server are connected through a single gigabit switch.
Server tuning applied:
# /etc/sysctl.conf
net.ipv4.tcp_syncookies=1
net.ipv4.tcp_tw_reuse=1
net.ipv4.tcp_tw_recycle=1
net.ipv4.tcp_fin_timeout=30
net.core.netdev_max_backlog=20000
net.core.somaxconn=20000
net.ipv4.tcp_max_orphans=20000
net.ipv4.tcp_max_syn_backlog=20000
net.ipv4.tcp_timestamps=0
net.ipv4.tcp_synack_retries=1
net.ipv4.tcp_syn_retries=1
# /etc/security/limits.conf
* soft nofile 10000
* hard nofile 10000
Static files are served from /tmp (a memory-backed filesystem); turning off the access log made no measurable difference to performance.
The server has 4 network ports. Since the CPU still has plenty of headroom, using all 4 ports at once (with public DNS load balancing) should push the 10 KB test case close to 3.6 Gbit/s of throughput with the CPU near saturation.
Because the test files live in /tmp, disk I/O has no significant effect on this test.
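The per-size runs below can be reproduced with a small loop; this is a sketch under the assumptions already stated (files served out of /tmp on 172.16.1.140, one million requests at 100 concurrent clients for each size):

```bash
# Generate the fixed-size test files and run the same ab workload against each one.
for size in 1 2 3 4 10; do
    dd if=/dev/zero of=/tmp/${size}k.txt bs=1024 count=${size} 2>/dev/null
    ab -n 1000000 -c 100 "http://172.16.1.140/tmp/${size}k.txt"
done
```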
3.2.2 Test results
1k.txt
Document Path: /tmp/1k.txt
Document Length: 1024 bytes
Concurrency Level: 100
Time taken for tests: 25.521 seconds
Complete requests: 1000000
Failed requests: 0
Total transferred: 1274000000 bytes
HTML transferred: 1024000000 bytes
Requests per second: 39183.77 [#/sec] (mean)
Time per request: 2.552 [ms] (mean)
Time per request: 0.026 [ms] (mean, across all concurrent requests)
Transfer rate: 48750.12 [Kbytes/sec] received
Observed average server NIC throughput: ~230 Mbit/s
2k.txt
Document Path: /tmp/2k.txt
Document Length: 2048 bytes
Concurrency Level: 100
Time taken for tests: 25.245 seconds
Complete requests: 1000000
Failed requests: 0
Total transferred: 2298000000 bytes
HTML transferred: 2048000000 bytes
Requests per second: 39611.50 [#/sec] (mean)
Time per request: 2.525 [ms] (mean)
Time per request: 0.025 [ms] (mean, across all concurrent requests)
Transfer rate: 88893.78 [Kbytes/sec] received
Observed average server NIC throughput: ~330 Mbit/s
3k.txt
Server Software: nginx/1.11.6
Server Hostname: 172.16.1.140
Server Port: 80
Document Path: /tmp/3k.txt
Document Length: 3072 bytes
Concurrency Level: 100
Time taken for tests: 31.855 seconds
Complete requests: 1000000
Failed requests: 0
Total transferred: 3322000000 bytes
HTML transferred: 3072000000 bytes
Requests per second: 31392.34 [#/sec] (mean)
Time per request: 3.185 [ms] (mean)
Time per request: 0.032 [ms] (mean, across all concurrent requests)
Transfer rate: 101841.16 [Kbytes/sec] received
Observed average server NIC throughput: ~491 Mbit/s
4k.txt
Document Path: /tmp/4k.txt
Document Length: 4096 bytes
Concurrency Level: 100
Time taken for tests: 39.808 seconds
Complete requests: 1000000
Failed requests: 0
Total transferred: 4347000000 bytes
HTML transferred: 4096000000 bytes
Requests per second: 25120.48 [#/sec] (mean)
Time per request: 3.981 [ms] (mean)
Time per request: 0.040 [ms] (mean, across all concurrent requests)
Transfer rate: 106639.38 [Kbytes/sec] received
Observed average server NIC throughput: ~920 Mbit/s
10k.txt
Document Path: /tmp/10k.txt
Document Length: 10240 bytes
Concurrency Level: 100
Time taken for tests: 92.317 seconds
Complete requests: 1000000
Failed requests: 0
Total transferred: 10492000000 bytes
HTML transferred: 10240000000 bytes
Requests per second: 10832.30 [#/sec] (mean)
Time per request: 9.232 [ms] (mean)
Time per request: 0.092 [ms] (mean, across all concurrent requests)
Transfer rate: 110988.73 [Kbytes/sec] received
Observed average server NIC throughput: ~955 Mbit/s
3.3 Load test: 1.4 million QPS
worker_processes 32;
worker_cpu_affinity auto;
worker_connections 102400;
With 16 cores, RPS (Receive Packet Steering, paired with RFS for NIC multi-queue handling) does not need to be enabled to saturate every CPU and drive the network to its limit; when testing with 32 cores, however, RPS must be enabled.
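RPS and RFS are configured per receive queue through sysfs; a minimal sketch, assuming a single-queue interface named eth0 and 32 cores (CPU mask ffffffff):

```bash
# Spread softirq packet processing across all 32 cores (RPS) and enable flow steering (RFS).
echo ffffffff > /sys/class/net/eth0/queues/rx-0/rps_cpus       # bitmask covering cores 0-31
echo 32768    > /proc/sys/net/core/rps_sock_flow_entries       # global RFS flow table size
echo 32768    > /sys/class/net/eth0/queues/rx-0/rps_flow_cnt   # per-queue RFS flow entries
```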
Nginx performance test conclusions
There are three ways to optimize Nginx: tune the Linux kernel parameters to make the kernel stronger, tune the Nginx configuration file to make Nginx itself stronger, and scale up the server's CPU and memory to make the hardware stronger.
Single-machine results:
A single 8-core machine averages around 30,000 QPS; 10,000 concurrent connections consume roughly 40% of the CPU on average.
Nginx throughput scales with the number of CPU cores; at around 88 cores it can reach one million QPS.
At 8,000 to 10,000 concurrent connections a small number of errors start to appear; at 20,000 concurrent connections the error count begins to climb.
Reference memory requirements:
At the operating-system level each TCP connection consumes roughly 3-10 KB of memory, so 200,000 connections need about 2 GB. The Nginx process itself also consumes memory, especially when reverse-proxying a large volume of POST requests; for 200,000 connections a 16 GB memory configuration is recommended.
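A quick back-of-the-envelope check of that estimate:

```bash
# 200,000 connections at roughly 3-10 KB of kernel memory each.
echo "$((200000 * 3  / 1024)) MB"    # ≈ 585 MB  lower bound
echo "$((200000 * 10 / 1024)) MB"    # ≈ 1953 MB upper bound, roughly the 2 GB cited above
```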
References:
- https://blog.csdn.net/qq_32415063/article/details/105888045
- https://zhuanlan.zhihu.com/p/35449671
- https://blog.csdn.net/qq_27384769/article/details/107856114