Conclusion: Go HTTP standalone > Nginx proxy to Go HTTP > Nginx fastcgi to Go TCP FastCGI
Chinese translation: http://www.oschina.net/translate/benchmarking-nginx-with-go?from=20131222
English original: https://gist.github.com/hgfischer/7965620
There are many ways to serve a Go HTTP application, and the best choice depends on the circumstances of each application. Nowadays, Nginx looks like the standard web server for every new project, even though there are many other good web servers. However, how much overhead does serving a Go application behind Nginx add? Do we need Nginx features such as vhosts, load balancing, caching, and so on, or is it better to serve directly from Go? And if you do need Nginx, what is the fastest connection mechanism? These are the questions I try to answer here. The purpose of this benchmark is not to verify whether Go is faster or slower than Nginx; that would be silly.
Here are the different setups we will compare:
- Go HTTP standalone (as the control group)
- Nginx proxy to Go HTTP
- Nginx fastcgi to Go TCP FastCGI
- Nginx fastcgi to Go Unix Socket FastCGI
Hardware
Since all setups will be compared on the same hardware, a cheap machine was chosen. This should not be a big deal.
- Samsung laptop NP550P5C-AD1BR
- Intel Core i7 3630QM @2.4GHz (quad core, 8 threads)
- CPU caches: (L1: 256KiB, L2: 1MiB, L3: 6MiB)
- RAM 8GiB DDR3 1600MHz
Software
- Ubuntu 13.10 amd64 Saucy Salamander (updated)
- Nginx 1.4.4 (1.4.4-1~saucy0 amd64)
- Go 1.2 (linux/amd64)
- wrk 3.0.4
Setup
Kernel
Just a little tuning to raise the kernel limits. If you have a better idea about any of these variables, please leave a comment below:
net.core.netdev_max_backlog = 4096
net.core.rmem_max = 16777216
net.core.somaxconn = 65535
net.core.wmem_max = 16777216
net.ipv4.ip_local_port_range = 1025 65535
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_keepalive_time = 30
net.ipv4.tcp_max_syn_backlog = 20480
net.ipv4.tcp_max_tw_buckets = 400000
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.tcp_syn_retries = 2
net.ipv4.tcp_synack_retries = 2
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
vm.min_free_kbytes = 65536
vm.overcommit_memory = 1
Limits
The maximum open files limit for root and www-data was configured to 200000.
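On Ubuntu this is usually done in /etc/security/limits.conf; the exact file the author edited is not shown, so this is a sketch using the value from the text:

```
# /etc/security/limits.conf (assumed location; 200000 is from the text)
root     soft nofile 200000
root     hard nofile 200000
www-data soft nofile 200000
www-data hard nofile 200000
```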
Nginx
A few Nginx tweaks were needed. As I had been told, I disabled gzip to keep the comparison fair. Below is its configuration file, /etc/nginx/nginx.conf:
worker_processes auto;
worker_rlimit_nofile 200000;
pid /var/run/nginx.pid;

events {
    worker_connections 10000;
}

http {
    keepalive_timeout 300;
    keepalive_requests 10000;
    types_hash_max_size 2048;

    open_file_cache max=200000 inactive=300s;
    open_file_cache_valid 300s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    access_log /var/log/nginx/access.log combined;
    error_log /var/log/nginx/error.log warn;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*.conf;
}
Nginx vhosts
Only fragments of the vhost file survive in this excerpt; the surrounding server and location blocks below are restored as the obvious structure (the HTTP vhost's server_name, go.http, is an assumption following the naming of the other two):

upstream go_http { server 127.0.0.1:8080; }

server {
    listen 80;
    server_name go.http;
    error_log /dev/null crit;
    location / {
        proxy_pass http://go_http;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}

upstream go_fcgi_tcp { server 127.0.0.1:9001; }

server {
    listen 80;
    server_name go.fcgi.tcp;
    error_log /dev/null crit;
    location / {
        include fastcgi_params;
        fastcgi_pass go_fcgi_tcp;
    }
}

upstream go_fcgi_unix { server unix:/tmp/go.sock; }

server {
    listen 80;
    server_name go.fcgi.unix;
    error_log /dev/null crit;
    location / {
        include fastcgi_params;
        fastcgi_pass go_fcgi_unix;
    }
}
Go source code
Only fragments of the program survive in this excerpt; they are cleaned up below, with the elided error handling marked:

// The shared handler: every transport sends exactly the same response.
func (s Server) ServeHTTP(w http.ResponseWriter, r *http.Request) {
    body := "Hello World\n"
    w.Header().Set("Server", "gophr")
    w.Header().Set("Connection", "keep-alive")
    w.Header().Set("Content-Type", "text/plain")
    w.Header().Set("Content-Length", fmt.Sprint(len(body)))
    fmt.Fprint(w, body)
}

// Shut down cleanly on SIGINT/SIGTERM.
sigchan := make(chan os.Signal, 1)
signal.Notify(sigchan, os.Interrupt)
signal.Notify(sigchan, syscall.SIGTERM)

// Plain HTTP.
http.Handle("/", server)
if err := http.ListenAndServe(":8080", nil); err != nil {
    // error handling elided in the excerpt
}

// FastCGI over TCP.
tcp, err := net.Listen("tcp", ":9001")
fcgi.Serve(tcp, server)

// FastCGI over a Unix socket.
unix, err := net.Listen("unix", SOCK)
fcgi.Serve(unix, server)

// Remove the socket file on exit.
if err := os.Remove(SOCK); err != nil {
    // error handling elided in the excerpt
}
Checking the HTTP headers
For fairness, all requests needed to have the same size.
Go standalone:
    Connection: keep-alive
    Content-Type: text/plain
    Date: Sun, 15 Dec 2013 14:59:14 GMT

Nginx proxy to Go HTTP:
    Date: Sun, 15 Dec 2013 14:59:31 GMT
    Content-Type: text/plain
    Connection: keep-alive

Nginx FastCGI to Go TCP:
    Content-Type: text/plain
    Connection: keep-alive
    Date: Sun, 15 Dec 2013 14:59:40 GMT

Nginx FastCGI to Go Unix socket:
    Content-Type: text/plain
    Connection: keep-alive
    Date: Sun, 15 Dec 2013 15:00:15 GMT
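One way to capture headers like these is curl with -D (dump headers); the exact commands the author used are not shown, but with the vhost names configured above they would look like this:

```shell
curl -sD - -o /dev/null http://127.0.0.1:8080/                      # Go standalone
curl -sD - -o /dev/null -H 'Host: go.http'      http://127.0.0.1/   # Nginx proxy
curl -sD - -o /dev/null -H 'Host: go.fcgi.tcp'  http://127.0.0.1/   # FastCGI TCP
curl -sD - -o /dev/null -H 'Host: go.fcgi.unix' http://127.0.0.1/   # FastCGI Unix socket
```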
Starting the engines
- Configure the kernel with sysctl
- Configure Nginx
- Configure the Nginx vhosts
- Start the services as www-data
- Run the benchmarks
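The wrk invocation itself did not survive in this excerpt; judging from the "100 threads and 5000 connections" lines in the results, it presumably looked like this (the Host header selects the Nginx vhost):

```shell
wrk -t 100 -c 5000 -d 30s http://127.0.0.1:8080/                      # Go standalone
wrk -t 100 -c 5000 -d 30s -H 'Host: go.http'      http://127.0.0.1/   # Nginx proxy
wrk -t 100 -c 5000 -d 30s -H 'Host: go.fcgi.tcp'  http://127.0.0.1/   # FastCGI TCP
wrk -t 100 -c 5000 -d 30s -H 'Host: go.fcgi.unix' http://127.0.0.1/   # FastCGI Unix socket
```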
Benchmarks
GOMAXPROCS = 1
Go standalone
  100 threads and 5000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   116.96ms   17.76ms 173.96ms   85.31%
    Req/Sec     429.16     49.20   589.00    69.44%
  1281567 requests in 29.98s, 215.11MB read
Nginx + Go through HTTP
  100 threads and 5000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   124.57ms   18.26ms 209.70ms   80.17%
    Req/Sec     406.29     56.94     0.87k   89.41%
  1198450 requests in 29.97s, 201.16MB read
Nginx + Go through FastCGI TCP
  100 threads and 5000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   514.57ms  119.80ms    1.21s   71.85%
    Req/Sec      97.18     22.56   263.00    79.59%
  287416 requests in 30.00s, 48.24MB read
  Socket errors: connect 0, read 0, write 0, timeout 661
Nginx + Go through FastCGI Unix Socket
  100 threads and 5000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   425.64ms   80.53ms 925.03ms   76.88%
    Req/Sec     117.03     22.13   255.00    81.30%
  350162 requests in 30.00s, 58.77MB read
  Socket errors: connect 0, read 0, write 0, timeout 210
  Requests/sec: 11670.72
GOMAXPROCS = 8
Go standalone
  100 threads and 5000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    39.25ms    8.49ms  86.45ms   81.39%
    Req/Sec      1.29k    129.27    1.79k    69.23%
  3837995 requests in 29.89s, 644.19MB read
  Requests/sec: 128402.88
Nginx + Go through HTTP
  100 threads and 5000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   336.77ms  297.88ms 632.52ms   60.16%
    Req/Sec      2.36k     2.99k   19.11k    84.83%
  2232068 requests in 29.98s, 374.64MB read
Nginx + Go through FastCGI TCP
  100 threads and 5000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   217.69ms  121.22ms    1.80s   75.14%
    Req/Sec     263.09    102.78   629.00    62.54%
  721027 requests in 30.01s, 121.02MB read
  Socket errors: connect 0, read 0, write 176, timeout 1343
  Requests/sec: 24026.50
Nginx + Go through FastCGI Unix Socket
  100 threads and 5000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   694.32ms  332.27ms    1.79s   62.13%
    Req/Sec     646.86    669.65    6.11k    87.80%
  909836 requests in 30.00s, 152.71MB read
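Several of the wrk excerpts above lost their Requests/sec line. It can be recovered from the "N requests in Ts" summary by simple division; wrk divides by the exact elapsed time, so the last digits differ slightly (e.g. 350162/30.00 gives about 11672 against wrk's reported 11670.72):

```go
package main

import "fmt"

// recoverRPS rebuilds wrk's Requests/sec figure from the
// "<N> requests in <T>s" summary line.
func recoverRPS(requests, seconds float64) float64 {
	return requests / seconds
}

func main() {
	// Totals from the GOMAXPROCS=1 runs above.
	fmt.Printf("standalone: %.0f req/s\n", recoverRPS(1281567, 29.98))
	fmt.Printf("proxy:      %.0f req/s\n", recoverRPS(1198450, 29.97))
	fmt.Printf("fcgi/tcp:   %.0f req/s\n", recoverRPS(287416, 30.00))
	fmt.Printf("fcgi/unix:  %.0f req/s\n", recoverRPS(350162, 30.00))
}
```

This yields roughly 42747, 39988, 9581, and 11672 requests per second for the four GOMAXPROCS=1 runs.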
Conclusion
In the first round of benchmarks some of the Nginx settings were not well tuned (gzip was enabled, and the Go backend was not using keep-alive connections). After switching to wrk and tuning Nginx as suggested, the results were quite different.
With GOMAXPROCS=1 the Nginx overhead is not that large, but with GOMAXPROCS=8 the difference is big. I may try other settings later. If you need Nginx features such as virtual hosts, load balancing, caching, and so on, use the HTTP proxy and avoid FastCGI. Some people say Go's FastCGI implementation is not well optimized, and that may be the reason for the huge differences seen in these results.
Source: http://blog.csdn.net/typ2004/article/details/39482245