How do you set the maximum TCP segment size on Linux?

In Linux, how do you set the maximum segment size that is allowed on a TCP connection? I need to set this for an application I did not write (so I cannot use setsockopt to do it). I need to set this ABOVE the MTU in the network stack.


I have two streams sharing the same network connection. One sends small packets periodically, which need absolute minimum latency. The other sends tons of data--I am using SCP to simulate that link.

我有兩個流共享相同的網絡連接。一個周期性地發送小數據包,這需要絕對最小延遲。另一個發送大量數據 - 我正在使用SCP來模擬該鏈接。

I have setup traffic control (tc) to give the minimum latency traffic high priority. The problem I am running into, though, is that the TCP packets that are coming down from SCP end up with sizes up to 64K bytes. Yes, these are broken into smaller packets based on mtu, but this unfortunately occurs AFTER tc prioritizes the packets. Thus, my low latency packet gets stuck behind up to 64K bytes of SCP traffic.


This article indicates that on Windows you can set this value.


Is there something on Linux I can set? I've tried ip route and iptables, but these are applied too low in the network stack. I need to limit the TCP packet size before tc, so it can prioritize the high priority packets appropriately.

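For reference, the kind of tc setup described above might look like the following sketch. The interface name (eth0) and the port numbers used to classify the two streams are assumptions for illustration, not taken from the question:

```shell
# Three-band priority scheduler on the outgoing interface
tc qdisc add dev eth0 root handle 1: prio bands 3

# Send the latency-sensitive stream (assumed here: UDP dport 5000) to the top band
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
    match ip dport 5000 0xffff flowid 1:1

# Send bulk SCP traffic (TCP dport 22) to the lowest-priority band
tc filter add dev eth0 parent 1: protocol ip prio 2 u32 \
    match ip dport 22 0xffff flowid 1:3
```

Even with this in place, the problem described remains: if a 64K super-packet has already been dequeued, the small packet waits behind it.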

5 Answers

#1 (6 votes)

Are you using TCP segmentation offload to the NIC? (You can use "ethtool -k $your_network_device" to see the offload settings.) As far as I know, that is the only way you would see 64K TCP packets with a device MTU of 1500. Not that this answers the question, but it might help avoid misdiagnosis.

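A minimal sketch of that check, and of disabling offload so segmentation happens before the packets reach the qdisc layer; eth0 is an example interface name:

```shell
# Inspect the segmentation offload settings for the interface
ethtool -k eth0 | grep -i segmentation

# With TSO/GSO enabled, tc sees up-to-64K super-packets; turning them
# off makes the kernel segment to MSS size before queueing
ethtool -K eth0 tso off gso off
```

Disabling offload costs some CPU on bulk transfers, which is the usual trade-off for finer-grained queueing.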

#2 (2 votes)

The upper bound of the advertised TCP MSS is the MTU of the first hop route. If you're seeing 64k segments, that tends to indicate that the first hop route MTU is excessively large - are you using loopback or something for testing?

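One way to check which first-hop route (and hence which MTU) the kernel would use; the destination address is an example:

```shell
# Show the route the kernel selects for this destination, including
# any per-route mtu/advmss attributes that apply
ip route get 192.168.1.10
```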

#3 (1 vote)

The ip route command's advmss option sets the advertised MSS value.


ip route add 192.168.1.0/24 dev eth0 advmss 1460

(For a 1500-byte MTU, 1460 is the value that leaves room for the 40 bytes of TCP/IP headers; advmss 1500 would advertise an MSS larger than fits in the MTU.)

#4 (0 votes)

You are definitely misdiagnosing the problem; as someone else pointed out, tc doesn't see TCP packets, it sees IP packets, and they'd already be in chunks at that point.


You are probably just experiencing bufferbloat: you're overloading the outbound queue in a totally separate device (probably a DSL modem or cable modem). The only fix is to tell tc to limit your outbound bandwidth to less than the modem's bandwidth, e.g. using TBF.

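A minimal TBF sketch along those lines; the interface name and rate are assumptions, and the rate should sit just below the modem's measured uplink so the queue builds on the Linux box (where tc can manage it) rather than in the modem:

```shell
# Token bucket filter capping egress just below the uplink rate
# (eth0 and 900kbit are examples; measure your real uplink first)
tc qdisc add dev eth0 root tbf rate 900kbit burst 32k latency 50ms
```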

#5 (0 votes)

MSS = MTU - 40 bytes (the standard TCP/IP header overhead of 40 bytes: 20 for the IP header + 20 for the TCP header)


If the MTU is 1500 bytes, then the MSS will be 1460 bytes.

