Tuning NGINX for Performance

from: http://nginx.com/blog/tuning-nginx/

NGINX is well known as a high-performance load balancer, cache, and web server, powering over 40% of the busiest websites in the world.  The default NGINX and Linux settings work well for most use cases, but some tuning can be necessary to achieve optimal performance.  This blog post discusses some of the NGINX and Linux settings to consider when tuning a system.  There are many settings available, but for this post we will cover the few that most users should consider adjusting.  The settings not covered here should only be changed by those with a deep understanding of NGINX and Linux, or on the recommendation of the NGINX support or professional services teams.  NGINX professional services has worked with some of the world’s busiest websites to tune NGINX for maximum performance and is available to work with any customer who needs to get the most out of their system.

Introduction

A basic understanding of the NGINX architecture and configuration concepts is assumed.  This post does not attempt to duplicate the NGINX documentation, but provides an overview of the various options with links to the relevant documentation.

A good rule to follow when tuning is to change one setting at a time, and set it back to the default value if it does not result in a positive change in performance.

We will start with a discussion of Linux tuning since some of these values can impact some of the values you will use for your NGINX configuration.

Linux Configuration

Modern Linux kernels (2.6+) do a good job of sizing the various settings, but there are some that you may want to change.  If the operating system settings are too low, error messages in the kernel log help indicate that you need to adjust them.  There are many possible Linux settings, but we will cover those most likely to need tuning for normal workloads.  Please refer to the Linux documentation for details on adjusting these settings.

The Backlog Queue

The following settings relate directly to connections and how they are queued.  If you have a high rate of incoming connections and are seeing uneven levels of performance (for example, some connections appear to stall), changing these settings can help; a sample sysctl configuration appears after the two descriptions below.

net.core.somaxconn: The size of the queue for connections waiting to be accepted by NGINX. NGINX accepts connections very quickly, so this value generally does not need to be very large and the default can be quite low, but increasing it can be a good idea if your website experiences heavy traffic. Error messages in the kernel log indicate that the value is too small; increase it until the errors stop.  Note: if you set this to a value greater than 512, also change the backlog parameter of the listen directive in the NGINX configuration to match.

net.core.netdev_max_backlog: The number of packets that can be buffered by the network card before being handed off to the CPU.  For machines with a large amount of bandwidth, this value may need to be increased.  Check the kernel log for errors related to this setting, and consult the network card documentation for advice on changing it.
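Both values can be set at runtime with the sysctl utility or made persistent in /etc/sysctl.conf. As a minimal sketch, the numbers below are illustrative assumptions, not recommendations; size them based on the kernel log messages described above:

# /etc/sysctl.conf (example values)
net.core.somaxconn = 1024
net.core.netdev_max_backlog = 2000

# Apply the file without rebooting:
#   sysctl -p

If you raise net.core.somaxconn above 512, remember to set a matching backlog parameter on the listen directive, for example listen 80 backlog=1024;.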

File Descriptors

File descriptors are operating system resources used to handle things such as connections and open files.  NGINX can use up to two file descriptors per connection. For example, if it is proxying, there is generally one file descriptor for the client connection and another for the connection to the proxied server, though this ratio is much lower if HTTP keepalives are used. For a system serving a large number of connections, these settings may need to be adjusted (a sketch follows the two entries below):

fs.file-max: The system-wide limit for file descriptors.

nofile: The user file descriptor limit, set in the /etc/security/limits.conf file.
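A hedged sketch of both settings; the nginx user name and the numbers are assumptions to adapt to your expected connection count:

# /etc/sysctl.conf (system-wide limit; example value)
fs.file-max = 200000

# /etc/security/limits.conf (per-user limit for the user NGINX runs as)
nginx  soft  nofile  100000
nginx  hard  nofile  100000

NGINX also provides the worker_rlimit_nofile directive, which sets the file descriptor limit for worker processes directly from the NGINX configuration.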

Ephemeral Ports

When NGINX is acting as a proxy, each connection to an upstream server uses a temporary, or ephemeral, port. The following settings control these ports; an example follows them.

net.ipv4.ip_local_port_range: The start and end of the range of port values.  If you see that you are running out of ports, you can increase this range. A common setting is ports 1024 to 65000.

net.ipv4.tcp_fin_timeout: The time a port must be inactive before it can be reused for another connection. The default is often 60 seconds, which can usually be safely reduced to 30 or even 15 seconds.
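For example, again in /etc/sysctl.conf (illustrative values matching the text above):

net.ipv4.ip_local_port_range = 1024 65000
net.ipv4.tcp_fin_timeout = 30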


NGINX Configuration

The following NGINX directives can impact performance.  As stated above, we discuss only those directives that we recommend most users consider adjusting.  We recommend not changing any directive not mentioned here without direction from the NGINX team.

Worker Processes

NGINX can run multiple worker processes, each capable of processing a large number of connections. You can control how many worker processes are run and how connections are handled with the following directives:

worker_processes: The number of NGINX worker processes. In most cases, running one worker process per CPU core works well, which can be achieved by setting this directive to auto. There are times when you may want to increase this number, such as when the worker processes have to do a lot of disk I/O.  The default is 1.

worker_connections: The maximum number of connections that can be processed at one time by each worker process. The default is 512, but most systems can handle a larger number.   The appropriate setting depends on the size of the server and the nature of the traffic, and can be discovered through testing.
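Together these two directives cap the total number of simultaneous connections a server can handle (worker_processes × worker_connections). A minimal sketch with an example value:

worker_processes auto;

events {
    worker_connections 1024;
}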

Keepalives

Keepalive connections can have a major impact on performance by reducing the CPU and network overhead needed for opening and closing connections.  NGINX terminates all client connections and has separate and independent connections to the upstream servers. NGINX supports keepalives for the client and upstream servers. The following directives deal with client keepalives:

keepalive_requests: The number of requests a client can make over a single keepalive connection. The default is 100, but a much higher value can be especially useful for testing when the load generating tool is sending many requests from a single client.

keepalive_timeout: How long an idle keepalive connection remains open.
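A sketch of the client-side directives in the http context (the values are examples, not recommendations):

http {
    keepalive_requests 1000;
    keepalive_timeout  65s;
}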

The following directives deal with upstream keepalives:

keepalive: The number of idle keepalive connections to an upstream server that remain open for each worker process.  There is no default value.

To enable keepalive connections to upstream servers, you must also include the following directives in the configuration:

proxy_http_version 1.1;
proxy_set_header Connection "";
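Putting the pieces together, a hedged sketch in which the upstream group name backend and the server addresses are placeholders:

upstream backend {
    server 10.0.0.1;
    server 10.0.0.2;
    keepalive 32;    # idle keepalive connections kept open per worker process
}

server {
    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}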

Access Logging

Logging every request consumes both CPU and I/O cycles, and one way to reduce the impact is to enable access log buffering. This causes NGINX to buffer a series of log entries and write them to the file together instead of performing a separate write operation for each. Access log buffering is enabled by setting the buffer size with the buffer=size parameter of the access_log directive. You can tell NGINX to write the entries in the buffer after a specified amount of time with the flush=time parameter. With these two parameters included, NGINX writes entries to the log file when the next log entry does not fit into the buffer or the entries in the buffer are older than the specified time, respectively. Log entries are also written when a worker process is reopening log files or shutting down. It is also possible to disable access logging completely.
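For example, with an illustrative path, buffer size, and flush interval:

access_log /var/log/nginx/access.log combined buffer=32k flush=1m;

# To disable access logging completely:
# access_log off;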

Sendfile

Sendfile is an operating system feature that can be enabled in NGINX. It provides faster TCP data transfers by copying data in the kernel from one file descriptor to another, often achieving zero-copy. NGINX can use it to write cached or on-disk content to a socket without any context switching to user space, making it extremely fast and reducing CPU overhead. Because the data never touches user space, it is not possible to insert filters that need to access the data into the processing chain, so you cannot use any of the NGINX filters that change the content, for example the gzip filter.  It is disabled by default.
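Enabling it is a one-line change in the http, server, or location context:

sendfile on;    # off by default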

Limits

NGINX and NGINX Plus allow you to set various limits that help prevent clients from consuming too many resources, which can adversely affect the performance of your system as well as user experience and security.  The following are some of these directives; a combined configuration sketch appears after the list.

limit_conn/limit_conn_zone:  Limit the number of connections NGINX allows, for example from a single client IP address. Setting them can help prevent individual clients from opening too many connections and consuming too many resources.

limit_rate: Limit the amount of bandwidth allowed for a client on a single connection. Setting it can prevent the system from being overloaded by certain clients and can help to ensure that all clients receive good quality of service.

limit_req/limit_req_zone: Limit the rate of requests being processed by NGINX. As with limit_rate, setting them can help prevent the system from being overloaded by certain clients and can help to ensure that all clients receive good quality of service. They can also be used to improve security, especially for login pages, by limiting the request rate so that it is adequate for a human user but too slow for programs trying to access your application.

max_conns: For a server in an upstream group, set the maximum number of simultaneous connections it accepts. This can help prevent the upstream servers from being overloaded. The default is zero, meaning that there is no limit.

queue: If max_conns is set for any upstream servers, governs what happens when a request cannot be processed because there are no available servers in the upstream group and some of those servers have reached the max_conns limit. This directive can be set to the number of requests to queue and for how long.  If this directive is not set, no queueing occurs.
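A combined sketch of these directives; the zone names, sizes, rates, and addresses are illustrative assumptions (the queue directive, available in NGINX Plus, is omitted here):

http {
    # Shared memory zones keyed by client IP address
    limit_conn_zone $binary_remote_addr zone=perip_conn:10m;
    limit_req_zone  $binary_remote_addr zone=perip_req:10m rate=10r/s;

    upstream backend {
        server 10.0.0.1 max_conns=100;
    }

    server {
        location / {
            limit_conn perip_conn 10;            # max 10 connections per client IP
            limit_req  zone=perip_req burst=20;  # 10 r/s per client IP, bursts of 20
            limit_rate 500k;                     # per-connection bandwidth cap
            proxy_pass http://backend;
        }
    }
}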


Additional Considerations

Some additional features of NGINX that can be used to increase the performance of a web application don’t really fall under the heading of tuning, but are worth mentioning because their impact can be considerable.  We will discuss two of these features.

Caching

By enabling caching on an NGINX instance that is load balancing a set of web or application servers, you can dramatically improve the response time to clients while at the same time dramatically reducing the load on the backend servers. Caching is a subject of its own and will not be covered here. For information, see NGINX Content Caching in the NGINX Admin Guide.

Compression

Compressing responses to clients can greatly reduce their size, requiring less bandwidth. Because compressing data consumes CPU resources, it is most useful when there is value to reducing bandwidth usage. It is important to note that you should not enable compression for objects that are already compressed, such as JPEG files. For more information, see Compression and Decompression in the NGINX Admin Guide.
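As a minimal sketch (the MIME type list is an example; text/html is always compressed when gzip is on):

gzip on;
gzip_types text/plain text/css application/json application/javascript;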

