Linux TCP/IP performance tuning

On-site problem:
--
I have written a simple program that recursively traverses a file system (NFS/CIFS or local) to collect the metadata of every file and directory. When I launched two instances of the program to scan two separate NFS exports served by different NAS filers, I saw that one scanner runs noticeably faster than the other (measured in files scanned per minute). The test runs on a box with 32 CPUs and 32 GB of memory, running RHEL 5 x64.
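
The scanner program itself is not shown in the post. As a rough stand-in, the same kind of recursive metadata walk can be approximated from the shell with GNU find; the mount points /mnt/nas1 and /mnt/nas2 below are placeholders for the two NFS exports, not the actual paths:

$ find /mnt/nas1 -printf '%p|%s|%u|%g|%T@|%y\n' > /tmp/nas1.meta &
$ find /mnt/nas2 -printf '%p|%s|%u|%g|%T@|%y\n' > /tmp/nas2.meta &
$ wc -l /tmp/nas1.meta /tmp/nas2.meta

Each find prints the path, size, owner, group, mtime, and type of every entry (one attribute lookup per file, like the scanner), and comparing the line counts shows how far each scan has progressed.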

PS: 
1) The NFS exports themselves are not under any unusual load; sometimes the scan of one export is faster than the other, and sometimes the situation reverses.


2) From the "top" output I can see that one instance consumes clearly more CPU time (the TIME+ column) than the other, even though the two instances started at almost the same time, and both stay below 10% CPU load. I am really confused about why this happens: if the scheduler is fair, shouldn't both instances consume roughly equal CPU time? (A quick way to compare the two processes is shown after this list.)

3) If I run the programs one after the other, both run almost equally fast.

4) I doubt it is related to network saturation: monitoring the bandwidth with IPTraf shows that the network is far from saturated.

5) I could not reproduce this issue on another Linux box.
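
A quick way to put the two instances side by side (the PIDs 1234 and 5678 are placeholders for the two scanner processes):

$ ps -o pid,etime,time,%cpu,comm -p 1234,5678
$ top -b -n 1 -p 1234 -p 5678

A process that is blocked waiting for NFS replies accumulates no CPU time, so a large gap in TIME+ between two otherwise identical scanners usually means one of them spends more time waiting on I/O, not that the scheduler is treating them unfairly.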


Triage process
1) This is definitely not a CPU-bound issue.
2) It is almost certainly a network (I/O) issue.
3) Use iptraf to monitor the packets and TCP flows (a few related commands are listed below).
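
Commands along these lines help narrow it down (eth0 is a placeholder for the actual interface; nfsstat and ss ship with RHEL 5's base packages, while iptraf may need to be installed separately):

# iptraf -i eth0
$ ss -ti
$ nfsstat -c
$ netstat -s | grep -i retrans

iptraf -i opens the interactive per-connection traffic monitor; ss -ti prints per-socket TCP details such as cwnd and rtt; nfsstat -c shows the client-side RPC call and retransmission counters; netstat -s exposes the system-wide TCP retransmission totals.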


Tune TCP/IP parameters:
# echo 'net.core.wmem_max = 12582912' >> /etc/sysctl.conf
# echo 'net.core.rmem_max = 12582912' >> /etc/sysctl.conf
# echo 'net.ipv4.tcp_rmem = 10240 87380 12582912' >> /etc/sysctl.conf
# echo 'net.ipv4.tcp_wmem = 10240 87380 12582912' >> /etc/sysctl.conf
# echo 'net.ipv4.tcp_window_scaling = 1' >> /etc/sysctl.conf
# echo 'net.ipv4.tcp_no_metrics_save = 1' >> /etc/sysctl.conf
# sysctl -p
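
For reference: tcp_rmem and tcp_wmem each take three values, the minimum, default, and maximum buffer size in bytes; rmem_max and wmem_max cap what an application may request via setsockopt(); window scaling has to stay enabled for windows larger than 64 KB; and tcp_no_metrics_save stops the kernel from reusing cached per-route metrics from earlier connections. A single setting can also be tried at runtime before persisting it in /etc/sysctl.conf, for example:

# sysctl -w net.ipv4.tcp_rmem='10240 87380 12582912'
# sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem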

View the changes:

Watch the negotiated window sizes (and the wscale option on new connections):
# tcpdump -ni eth0

The global TCP memory limits (in pages, not bytes):
$ cat /proc/sys/net/ipv4/tcp_mem

The default and maximum size of the receive socket buffer:
$ cat /proc/sys/net/core/rmem_default
$ cat /proc/sys/net/core/rmem_max

The default and maximum size of the send socket buffer:
$ cat /proc/sys/net/core/wmem_default
$ cat /proc/sys/net/core/wmem_max

After tuning, the scan throughput roughly doubled.
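
That figure was measured with the original scanner; a comparable before/after check can be made with the find-based stand-in from above (the mount point is again a placeholder):

$ time find /mnt/nas1 -printf '%p\n' | wc -l

Running this once before and once after sysctl -p gives the wall-clock time and the number of entries scanned, which is enough to estimate files per minute; bear in mind that client-side attribute caching can skew a repeated run over the same tree.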

References:
http://www.cyberciti.biz/files/linux-kernel/Documentation/networking/ip-sysctl.txt
http://www.ibm.com/developerworks/linux/library/l-hisock/index.html
