Linux TCP Tuning (TCP Optimization)

Reposted 2007-09-20 02:29:00
Original post: http://blog.chinajavaworld.com/entry.jspa?id=1182

There are a lot of differences between Linux versions 2.4 and 2.6, so first we'll cover the tuning issues that are the same in both. To change TCP settings, add the entries below to the file /etc/sysctl.conf and then run "sysctl -p".
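For instance, after editing the file you can load the new values without a reboot (assuming a standard sysctl installation):

# Apply every setting listed in /etc/sysctl.conf:
sysctl -p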

Like all operating systems, Linux ships with default maximum TCP buffer sizes that are far too small for high-bandwidth, high-latency paths. I suggest changing them to the following settings:

# increase TCP max buffer size
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
# increase Linux autotuning TCP buffer limits
# min, default, and max number of bytes to use
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216

Note: you should leave tcp_mem alone. The defaults are fine.
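To see what the kernel is currently using before and after the change, you can query the same variables (a quick check, assuming these sysctls exist on your kernel):

# Print the current buffer limits:
sysctl net.core.rmem_max net.core.wmem_max
sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem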

Another change that may help increase TCP throughput is to increase the size of the interface transmit queue. To do this, run the following:
ifconfig eth0 txqueuelen 1000

I've seen bandwidth increases of up to 8x from doing this on some long, fast paths. It is only a good idea for hosts connected via Gigabit Ethernet, and it may have side effects such as uneven sharing between multiple streams.
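To confirm the change took effect and make it survive a reboot, something like the following should work (a sketch; the startup script path /etc/rc.d/rc.local varies by distribution):

# Verify the new queue length (look for "txqueuelen" in the output):
ifconfig eth0 | grep -i txqueuelen
# Re-apply it at boot time:
echo "ifconfig eth0 txqueuelen 1000" >> /etc/rc.d/rc.local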


--------------------------------------------------------------------------------
Linux 2.4
Starting with Linux 2.4, Linux has implemented a sender-side autotuning mechanism, so setting the optimal buffer size on the sender is not needed. This assumes you have set large buffers on the receive side, as the sending buffer will not grow beyond the size of the receive buffer.

However, Linux 2.4 has some other strange behavior that one needs to be aware of. For example, the value of ssthresh for a given path is cached in the routing table. This means that if a connection has a retransmission and reduces its window, then all connections to that host for the next 10 minutes will use a reduced window size and will not even try to increase the window. The only way to disable this behavior is to run the following before each new connection (you must be root):

sysctl -w net.ipv4.route.flush=1
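One way to apply this in practice is a small wrapper that flushes the cached route metrics just before starting a bulk transfer (a hypothetical helper script, not part of any standard tool; must be run as root):

#!/bin/sh
# flush-then-run.sh: clear cached route metrics (including ssthresh),
# then exec the given command, e.g.: ./flush-then-run.sh iperf -c remotehost
sysctl -w net.ipv4.route.flush=1
exec "$@"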
More information on various tuning parameters for Linux 2.4 is available in the Ipsysctl tutorial.


--------------------------------------------------------------------------------
Linux 2.6
Starting in Linux 2.6.7 (and back-ported to 2.4.27), BIC TCP is part of the kernel, and enabled by default. BIC TCP helps recover quickly from packet loss on high-speed WANs, and appears to work quite well. A BIC implementation bug was discovered, but this was fixed in Linux 2.6.11, so you should upgrade to this version or higher.

Linux 2.6 also includes both sender-side and receiver-side automatic buffer tuning (up to the maximum sizes specified above). There is also a setting to fix the ssthresh caching weirdness described above.

There are a couple of additional sysctl settings for 2.6:
# don't cache ssthresh from previous connection
net.ipv4.tcp_no_metrics_save = 1
# recommended to increase this for 1000 BT or higher
net.core.netdev_max_backlog = 2500
# for 10 GigE, use this
# net.core.netdev_max_backlog = 30000
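To judge whether netdev_max_backlog is actually too small, you can watch for drops in the backlog queue (a rough check; on 2.6 kernels the second hex column of /proc/net/softnet_stat counts packets dropped because the queue was full, one row per CPU):

# Look for non-zero values in the second column:
cat /proc/net/softnet_stat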

Starting with version 2.6.13, Linux supports pluggable congestion control algorithms. The algorithm in use is set via the sysctl variable net.ipv4.tcp_congestion_control, which defaults to reno. (Apparently they decided that BIC was not quite ready for prime time.) The current set of congestion control options is:

reno: Traditional TCP used by almost all other OSes. (default)
bic: BIC-TCP
highspeed: HighSpeed TCP: Sally Floyd's suggested algorithm
htcp: Hamilton TCP
hybla: For satellite links
scalable: Scalable TCP
vegas: TCP Vegas
westwood: optimized for lossy networks
For very long, fast paths, I suggest trying HTCP or BIC-TCP if Reno is not performing as desired. To set this, do the following:


sysctl -w net.ipv4.tcp_congestion_control=htcp
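Before switching, it is worth checking which algorithms your kernel actually offers; on recent 2.6 kernels the list is exported via a sysctl, and algorithms built as modules may need to be loaded first (a sketch, assuming htcp was built as the tcp_htcp module):

# List the algorithms available to the running kernel:
sysctl net.ipv4.tcp_available_congestion_control
# If htcp is missing, try loading its module:
modprobe tcp_htcp
# Make the choice permanent by adding it to /etc/sysctl.conf:
echo "net.ipv4.tcp_congestion_control = htcp" >> /etc/sysctl.conf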
More information on each of these algorithms and some results can be found here.

Note: Linux 2.6.11 and earlier have a serious problem with certain Gigabit and 10-Gigabit Ethernet drivers and NICs that support "TCP segmentation offload" (TSO), such as the Intel e1000 and ixgb drivers, the Broadcom tg3, and the s2io 10 GigE drivers. This problem was fixed in version 2.6.12. A workaround is to use ethtool to disable segmentation offload:

ethtool -K eth0 tso off
This will reduce your overall performance, but it will make TCP over LFNs (long fat networks) far more stable.
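You can check whether TSO is currently enabled before and after the change, and re-enable it once you are on a fixed kernel (note the lowercase -k, which queries rather than sets):

# Show the current offload settings for eth0:
ethtool -k eth0
# After upgrading to 2.6.12 or later, turn TSO back on:
ethtool -K eth0 tso on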
More information on tuning parameters and defaults for Linux 2.6 is available in the file ip-sysctl.txt, which is part of the 2.6 kernel source distribution.

And finally, a warning for both 2.4 and 2.6: for very large BDP paths where the TCP window is > 20 MB, you are likely to hit the Linux SACK implementation problem. If Linux has too many packets in flight when it gets a SACK event, it takes too long to locate the SACKed packet, and you get a TCP timeout and CWND goes back to 1 packet. Restricting the TCP buffer size to about 12 MB seems to avoid this problem, but it clearly limits your total throughput. Another solution is to disable SACK, as shown below.
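If you choose either workaround, both are one-line sysctls (a sketch; disabling SACK affects all connections, so weigh the trade-off carefully):

# Disable SACK entirely:
sysctl -w net.ipv4.tcp_sack=0
# Or instead, cap the maximum buffer sizes at about 12 MB:
sysctl -w net.core.rmem_max=12582912
sysctl -w net.core.wmem_max=12582912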


--------------------------------------------------------------------------------
Linux 2.2
If you are still running Linux 2.2, upgrade! If this is not possible, add the following to /etc/rc.d/rc.local:

echo 8388608 > /proc/sys/net/core/wmem_max
echo 8388608 > /proc/sys/net/core/rmem_max
echo 65536 > /proc/sys/net/core/rmem_default
echo 65536 > /proc/sys/net/core/wmem_default
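Since rc.local only runs at boot, you may want to apply and verify the values by hand the first time (assuming the standard /proc layout):

# Check that the new limits took effect:
cat /proc/sys/net/core/wmem_max /proc/sys/net/core/rmem_max
cat /proc/sys/net/core/wmem_default /proc/sys/net/core/rmem_default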

