TCP/IP Problem Collector

Connect: Cannot assign requested address

Cause: the client machine has run out of free local ports. Either the client has opened so many connections that every ephemeral port is in use, or the client actively closed connections and the ports are stuck in TIME_WAIT, not yet reclaimed. (A TCP/IP port is a 16-bit number, so the range is 0~65535.)

Check and raise the file-descriptor limit:

# ulimit -n
1024            // at most 1024 open file descriptors
# ulimit -n 1024000     // raise the limit on open file descriptors

Check and change the usable port range:

# sysctl -a | grep port_range
net.ipv4.ip_local_port_range = 32768 60999 // usable ports are 32768~60999

# vi /etc/sysctl.conf
net.ipv4.ip_local_port_range = 10000  65000 // makes ports 10000~65000 usable

# sysctl -p  // apply the change without a reboot

Shorten the TIME_WAIT waiting period:

1. Lower the wait after a connection is closed; the default is 60 s, 15~30 s is a common setting (strictly, tcp_fin_timeout bounds the FIN-WAIT-2 state rather than TIME_WAIT itself):
sysctl -w net.ipv4.tcp_fin_timeout=30
2. Allow TIME_WAIT ports to be reused for new outgoing connections via /proc/sys/net/ipv4/tcp_tw_reuse (default 0, set to 1); this requires TCP timestamps (net.ipv4.tcp_timestamps=1, the default):
sysctl -w net.ipv4.tcp_tw_reuse=1
3. Fast recycling of TIME_WAIT sockets via tcp_tw_recycle (default 0, set to 1); note this option breaks clients behind NAT and was removed in Linux 4.12:
sysctl -w net.ipv4.tcp_tw_recycle=1

TIME_WAIT

http://blog.csdn.net/hguisu/article/details/10241519#t2

tcp_mem

http://www.aikaiyuan.com/10872.html

$ sysctl -a | grep 'tcp.*mem'

tcp_mem (three INTEGER values, in pages; 1 page = 4096 bytes): low, pressure, high

  • low: while TCP is using fewer pages than this, it does not consider freeing memory.
  • pressure: once TCP uses more pages than this, it enters "pressure" mode and tries to stabilize its memory use; it leaves pressure mode when usage drops back below low.
  • high: the maximum number of pages all TCP sockets may use to queue buffered data; above this, the kernel refuses to allocate new sockets and logs "TCP: too many of orphaned sockets".

recv buffer size

https://stackoverflow.com/questions/2862071/how-large-should-my-recv-buffer-be-when-calling-recv-in-the-socket-library

The answers to these questions vary depending on whether you are using a stream socket (SOCK_STREAM) or a datagram socket (SOCK_DGRAM) - within TCP/IP, the former corresponds to TCP and the latter to UDP.

How do you know how big to make the buffer passed to recv()?

SOCK_STREAM: It doesn’t really matter too much. If your protocol is a transactional / interactive one just pick a size that can hold the largest individual message / command you would reasonably expect (3000 is likely fine). If your protocol is transferring bulk data, then larger buffers can be more efficient - a good rule of thumb is around the same as the kernel receive buffer size of the socket (often something around 256kB).

SOCK_DGRAM: Use a buffer large enough to hold the biggest packet that your application-level protocol ever sends. With UDP, your application-level protocol generally shouldn’t send packets larger than about 1400 bytes, because anything bigger will almost certainly be fragmented and reassembled at the IP layer.

What happens if recv gets a packet larger than the buffer?

SOCK_STREAM: The question doesn’t really make sense as put, because stream sockets don’t have a concept of packets - they’re just a continuous stream of bytes. If there’s more bytes available to read than your buffer has room for, then they’ll be queued by the OS and available for your next call to recv.

SOCK_DGRAM: The excess bytes are discarded.

How can I know if I have received the entire message?

SOCK_STREAM: You need to build some way of determining the end-of-message into your application-level protocol. Commonly this is either a length prefix (starting each message with the length of the message) or an end-of-message delimiter (which might just be a newline in a text-based protocol, for example). A third, lesser-used, option is to mandate a fixed size for each message. Combinations of these options are also possible - for example, a fixed-size header that includes a length value.

SOCK_DGRAM: A single recv call always returns a single datagram.

Is there a way I can make a buffer not have a fixed amount of space, so that I can keep adding to it without fear of running out of space?

No. However, you can try to resize the buffer using realloc() (if it was originally allocated with malloc() or calloc(), that is).
