Implementing DiffServ traffic control with tc

 

Linux Qdisc performance

Submitted by jlynch
on July 14, 2005 - 12:53pm

I'm using a Linux machine with standard PC hardware and 3 separate PCI network interfaces to operate as a DiffServ core router using Linux traffic control. The machine is a P4 2.8 GHz with 512 MB RAM running Fedora Core 3 with the 2.6.10 kernel. All links and network interfaces are full-duplex Fast Ethernet. IP forwarding is enabled in the kernel. All hosts on the network have their time synchronised using a stratum 1 server on the same VLAN. Below is an ASCII diagram of the network.

(network A) edge router ------>core router---->edge router (network C)
                                    ^
                                    |
                                    |
                               edge router
                               (network B) 

Core Router Configuration:
---------------------------
The core router implements the Expedited Forwarding (EF) PHB. I have tried two different configurations.
1. An HTB qdisc with two HTB classes. One services VoIP traffic (marked with the EF codepoint); this VoIP traffic is guaranteed to be serviced at a minimum rate of 1500 kbit. The class is serviced by a FIFO queue with a limit of 5 packets. The second HTB class guarantees all other traffic a minimum rate of 5 Mbit and is serviced by a RED qdisc.

2. A PRIO qdisc with a token bucket filter that services VoIP traffic (marked with the EF codepoint) at a guaranteed minimum rate of 1500 kbit, and a RED qdisc that services all other traffic.

Test 1.
---------------------------
VoIP traffic originates from network A and is destined for network C. The throughput of the VoIP traffic is 350 kbit, and no other traffic passes through the core router during this time. These VoIP packets are marked with the EF codepoint. Using either of the above configurations for the core router, the delay of the VoIP traffic in travelling from network A to network C through the core router is 0.25 milliseconds.

Test 2.
---------------------------
Again VoIP traffic originates from network A and is destined for network C with a throughput of 350 kbit. TCP traffic also originates from another host in network A and is destined for another host in network C. More TCP traffic originates from network B and is destined for network C. This TCP traffic comes from transferring large files over HTTP. As a result, a bottleneck is created at the outgoing interface of the core router to network C. The combined TCP traffic from these sources is nearly 100 Mbit. Using either of the above configurations for the core router, the delay of the VoIP traffic in travelling from network A to network C through the core router is 30 milliseconds with 0% loss. A considerable number of TCP packets are dropped.
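For anyone who wants to reproduce this, the queueing can be watched directly from tc while the test runs. A rough sketch (eth2 is only a placeholder for the core router's outgoing interface towards network C):

# Per-qdisc statistics on the bottleneck interface; the backlog and dropped
# counters show where packets queue up and where they are discarded
tc -s qdisc show dev eth2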

Could anyone tell me why the delay is so high (30 ms) for VoIP packets which are treated with the EF PHB when the outgoing interface of the core router to network C is saturated?

Is it due to operating system factors?
Has anyone else had similar experiences?

Also, I would appreciate it if anyone could give me performance metrics for approximately how many packets per second a router running Linux on standard PC hardware can forward, or even mention any factors that would affect this performance. I assume the system interrupt frequency HZ will affect performance in some way.
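A rough sketch of the two things I look at for this (nothing authoritative, just standard /proc entries; the interface names and the config file path are assumptions that depend on the distribution):

# Interrupt load per NIC: sample the counters twice, 10 seconds apart, and diff them
grep -E 'eth0|eth1|eth2' /proc/interrupts; sleep 10; grep -E 'eth0|eth1|eth2' /proc/interrupts

# HZ is compiled into the kernel; newer 2.6 kernels expose it as CONFIG_HZ in the build config
grep CONFIG_HZ /boot/config-$(uname -r) 2>/dev/null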

Jonathan Lynch

-----------------------------------------------------------------------------------------------
In case anyone suggests I should post to the LARTC mailing list, I already have, over a week ago, and haven't got a response. The config I used for each setup is included below. These are slight modifications of the examples supplied with the iproute2 source code.

Config 1 using htb
-------------------
tc qdisc add dev $1 handle 1:0 root dsmark indices 64 set_tc_index
tc filter add dev $1 parent 1:0 protocol ip prio 1 tcindex mask 0xfc shift 2

# Main htb qdisc & class
tc qdisc add dev $1 parent 1:0 handle 2:0 htb
tc class add dev $1 parent 2:0 classid 2:1 htb rate 100Mbit ceil 100Mbit

# EF class (2:10)
tc class add dev $1 parent 2:1 classid 2:10 htb rate 1500Kbit ceil 100Mbit
tc qdisc add dev $1 parent 2:10 pfifo limit 5
tc filter add dev $1 parent 2:0 protocol ip prio 1 handle 0x2e tcindex classid 2:10 pass_on

# BE class (2:20)
tc class add dev $1 parent 2:1 classid 2:20 htb rate 5Mbit ceil 100Mbit
tc qdisc add dev $1 parent 2:20 red limit 60KB min 15KB max 45KB burst 20 avpkt 1000 bandwidth 100Mbit probability 0.4
tc filter add dev $1 parent 2:0 protocol ip prio 2 handle 0 tcindex mask 0 classid 2:20 pass_on
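After loading the script I sanity-check that EF packets really end up in class 2:10 and everything else in 2:20. A quick sketch ($1 is the same interface argument as in the script; the tcpdump expression matches DSCP 46, i.e. the EF codepoint):

# Per-class byte/packet counters: 2:10 should grow with VoIP, 2:20 with everything else
tc -s class show dev $1
# Confirm the EF codepoint is actually set on packets leaving this interface
tcpdump -ni $1 'ip[1] & 0xfc == 0xb8'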

Config 2 using PRIO
-------------------
# Main dsmark & classifier
tc qdisc add dev $1 handle 1:0 root dsmark indices 64 set_tc_index
tc filter add dev $1 parent 1:0 protocol ip prio 1 tcindex mask 0xfc shift 2

# Main prio queue
tc qdisc add dev $1 parent 1:0 handle 2:0 prio
tc qdisc add dev $1 parent 2:1 tbf rate 1.5Mbit burst 1.5kB limit 1.6kB
tc filter add dev $1 parent 2:0 protocol ip prio 1 handle 0x2e tcindex classid 2:1 pass_on

# BE class (2:2)
tc qdisc add dev $1 parent 2:2 red limit 60KB min 15KB max 45KB burst 20 avpkt 1000 bandwidth 100Mbit probability 0.4
tc filter add dev $1 parent 2:0 protocol ip prio 2 handle 0 tcindex mask 0 classid 2:2 pass_on
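One variant I have tried, purely to make the per-band statistics easier to tell apart in the tc -s qdisc output (behaviour should be identical): give the two child qdiscs explicit handles, i.e. replace the tbf and red lines above with

tc qdisc add dev $1 parent 2:1 handle 10: tbf rate 1.5Mbit burst 1.5kB limit 1.6kB
tc qdisc add dev $1 parent 2:2 handle 20: red limit 60KB min 15KB max 45KB burst 20 avpkt 1000 bandwidth 100Mbit probability 0.4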

 
Note:
Here, by marking the ToS field, flows can be classified into EF, AF, and BE. By steering the different flows into different queues and using a different qdisc type for each queue, the DiffServ behaviour is achieved.
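A minimal sketch of how the marking can be done on the edge routers with iptables (the UDP port range 10000:20000 is only an assumption for illustration; match whatever the VoIP application actually uses):

# Set the EF codepoint (DSCP 46) on outgoing VoIP packets in the mangle table
iptables -t mangle -A POSTROUTING -p udp --dport 10000:20000 -j DSCP --set-dscp-class EF
# Everything else keeps the default codepoint (0, best effort / BE)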