Linux Tuning


Table of Contents

     General Approach
     TCP tuning
     UDP Tuning
     NIC Tuning
     Virtual Machine Tuning

This page contains a quick reference guide for Linux 2.6+ tuning for Data
Transfer hosts connected at speeds of 1Gbps or higher. Note that most of the
tuning settings described here will actually decrease performance of hosts
connected at rates of OC3 (155 Mbps) or less, such as home users on Cable/DSL
connections.

For a detailed explanation of some of the advice on this page, see the Linux
Tuning Expert page. Note that the settings on this page are not attempting to
achieve full 10G with a single flow. These settings assume you are using
tools that support parallel streams, or have multiple data transfers
occurring in parallel, and want to have fair sharing between the flows.

If you are trying to optimize for a single flow, see the tuning advice for
test/measurement hosts page.
 General Approach

To check what value a setting currently has, use 'sysctl <name>' (e.g.:
'sysctl net.ipv4.tcp_rmem'). To change a setting, use 'sysctl -w'. To make the
change permanent, add the setting to the file '/etc/sysctl.conf'.
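
For example (the setting and value shown here are illustrations only;
substitute whichever parameter you are tuning):

# read the current value
sysctl net.core.rmem_max
# change it for the running kernel only
sysctl -w net.core.rmem_max=16777216
# make the change permanent, then reload sysctl.conf
echo 'net.core.rmem_max = 16777216' >> /etc/sysctl.conf
sysctl -p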
 TCP tuning

Like most modern OSes, Linux now does a good job of auto-tuning the TCP
buffers, but the default maximum Linux TCP buffer sizes are still too small.
Here are some example sysctl.conf commands for different types of hosts.

For a host with a 10G NIC, optimized for network paths up to 100ms RTT, add
this to /etc/sysctl.conf

# increase TCP max buffer size settable using setsockopt()
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
# increase Linux autotuning TCP buffer limit
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
# increase the length of the processor input queue
net.core.netdev_max_backlog = 30000
# recommended default congestion control is htcp
net.ipv4.tcp_congestion_control=htcp
# recommended for hosts with jumbo frames enabled
net.ipv4.tcp_mtu_probing=1

Also add this to /etc/rc.local (where N is the number for your 10G NIC):

    /sbin/ifconfig ethN txqueuelen 10000
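
New settings in /etc/sysctl.conf normally take effect at the next boot; to
apply them to the running kernel immediately, run:

sysctl -p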

For a host with a 10G NIC optimized for network paths up to 200ms RTT, or a
40G NIC on paths up to 50ms RTT:

# increase TCP max buffer size settable using setsockopt()
net.core.rmem_max = 67108864
net.core.wmem_max = 67108864
# increase Linux autotuning TCP buffer limit
net.ipv4.tcp_rmem = 4096 87380 67108864
net.ipv4.tcp_wmem = 4096 65536 67108864
# increase the length of the processor input queue
net.core.netdev_max_backlog = 250000
# recommended default congestion control is htcp
net.ipv4.tcp_congestion_control=htcp
# recommended for hosts with jumbo frames enabled
net.ipv4.tcp_mtu_probing=1

Also add this to /etc/rc.local (where N is the number for your 10G NIC):

    /sbin/ifconfig ethN txqueuelen 10000

Notes: you should leave net.ipv4.tcp_mem alone, as the defaults are fine. A
number of performance experts say to also increase net.core.optmem_max to
match net.core.rmem_max and net.core.wmem_max, but we have not found that this
makes any difference. Some experts also say to set net.ipv4.tcp_timestamps and
net.ipv4.tcp_sack to 0, as doing so reduces CPU load. We disagree with that
recommendation, as we have observed that the default value of 1 helps in more
cases than it hurts. But if you are extremely CPU bound you might want to
experiment with turning those off.
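
If you do decide to experiment with disabling them on a CPU-bound host, both
settings can be toggled at run time:

sysctl -w net.ipv4.tcp_timestamps=0
sysctl -w net.ipv4.tcp_sack=0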

Linux supports pluggable congestion control algorithms. To get a list of
congestion control algorithms that are available in your kernel (kernel
2.6.20+), run:

sysctl net.ipv4.tcp_available_congestion_control

If cubic and/or htcp are not listed, try the following, as most distributions
include them as loadable kernel modules:

/sbin/modprobe tcp_htcp
/sbin/modprobe tcp_cubic
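
Modules loaded this way are not persistent across reboots. The mechanism for
loading them at boot varies by distribution; on systemd-based systems, one
approach (the file name below is arbitrary) is:

echo tcp_htcp > /etc/modules-load.d/tcp_htcp.conf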

NOTE: There seem to be bugs in both bic and cubic for a number of versions of
the Linux kernel up to version 2.6.33. We recommend using htcp with older
kernels to be safe. To set the congestion control do:

sysctl -w net.ipv4.tcp_congestion_control=htcp

If you are using Jumbo Frames, we recommend setting tcp_mtu_probing = 1 to
help avoid the problem of MTU black holes. Setting it to 2 sometimes causes
performance problems.
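
The interface MTU itself is set separately. For example, a 9000 byte MTU can
be enabled with the command below (ethN is a placeholder, and the switches and
far-end host on the path must also support jumbo frames):

    /sbin/ifconfig ethN mtu 9000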
 UDP Tuning

The TCP buffer size settings actually affect UDP as well on Linux. Be sure to
use the setsockopt() call to increase SO_SNDBUF/SO_RCVBUF to around 4MB if you
want to do UDP streams that are faster than 3-4 Gbps. The optimal data payload
size is the MTU size minus the IP and UDP header size: 1472 for a 1500 byte
MTU and 8972 for a 9000 byte MTU.
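
For example, when testing with a tool such as iperf3, both the socket buffer
size and the UDP payload size can be set explicitly (the host name and target
bitrate below are placeholders):

iperf3 -c receiver.example.org -u -b 5G -w 4M -l 8972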
 NIC Tuning

This can be added to /etc/rc.local to be run at boot time:

    # increase txqueuelen for 10G NICS
    /sbin/ifconfig ethN txqueuelen 10000

Note that this might have adverse effects when a 10G host is sending to a
host connected at 1G or slower.
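
On systems where ifconfig is no longer available, the iproute2 equivalent
should work as well:

    /sbin/ip link set dev ethN txqueuelen 10000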

 Virtual Machine Tuning

 

Last edited: 2012-11-08 16:40:39




http://fasterdata.es.net/host-tuning/linux/
