Copyright notice: this post may be freely reposted; when reposting, please indicate the original source and author with a hyperlink, together with this copyright notice (http://blog.csdn.net/quqi99)
Problem
I ran into a problem: after setting outbound in libvirt (which, from the VM's point of view, is its ingress traffic), the corresponding tc ingress rules were nowhere to be seen.
Linux TC
Linux TC implements QoS control. TC consists of three parts: queueing disciplines (qdisc), classes, and filters, chained as filter -> class -> queue.
A qdisc's minor number is always 0 (e.g. 1: or 10:), while a class's minor number must not be 0 (e.g. 1:1). For example:
tc qdisc del dev eth0 root
# There are two classful qdiscs, CBQ and HTB. CBQ is complicated; HTB is the improved version and is generally preferred. 'default 2' means traffic matching none of the configured filters falls into class 1:2 by default
# HTB guarantees each class its configured bandwidth, but also lets a class exceed its own limit and borrow spare bandwidth from other classes.
tc qdisc add dev eth0 root handle 1: htb default 2
# Set the total upload bandwidth; further classes for different traffic types can be added under it.
# Also, large queues reduce packet loss and improve throughput, which is why ISPs tend to use them, but large queues ruin interactive traffic. The queue in the fiber modem cannot be changed, so move the queueing into this Linux router instead.
# rate is the bandwidth a class is guaranteed, prio is its priority when borrowing bandwidth, and ceil is the maximum bandwidth the class may use
tc class add dev eth0 parent 1: classid 1:1 htb rate 220kbit burst 6k
# Under an HTB class it is common to attach an SFQ (stochastic fairness queueing) qdisc, so that a single large flow in the queue cannot starve the others.
tc qdisc add dev eth0 parent 1:1 handle 10: sfq perturb 10
# tc has two main filter types, u32 and fw. fw relies on iptables marking packets, sparing u32 from parsing complex packet structures.
tc filter add dev eth0 parent 1: protocol ip prio 1 handle 1 fw classid 1:1
tc -s class show dev eth0
# mark the packets with iptables
iptables -t mangle -A PREROUTING -p icmp -j MARK --set-mark 0x1
iptables -t mangle -A PREROUTING -p icmp -j RETURN
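Putting the pieces together: the fwmark set by iptables selects the class via the fw filter, and anything unmarked falls into the HTB default class. A tiny illustrative sketch of that classification step (the dict/function are made up for illustration, not real tc code):

```python
# fw filter: fwmark -> classid, mirroring `handle 1 fw classid 1:1`
FW_FILTER = {1: "1:1"}
# `htb default 2` on the root qdisc: unmatched traffic goes to 1:2
DEFAULT_CLASS = "1:2"

def classify(fwmark):
    """Return the HTB classid a packet with this fwmark lands in."""
    return FW_FILTER.get(fwmark, DEFAULT_CLASS)
```

So ICMP packets (marked 0x1 above) land in class 1:1 with its 220kbit rate, and everything else goes to the default class 1:2.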
Limiting the download rate (i.e. ingress traffic)
tc qdisc del dev $DEV ingress
tc qdisc add dev $DEV handle ffff: ingress
# filter everything to it, drop everything that's coming in too fast
tc filter add dev vnet1 parent ffff: protocol all u32 match u32 0 0 police rate 122kbit burst 10k mtu 64kb drop flowid :1
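The `police rate ... burst ...` action above is essentially a token bucket. A minimal Python sketch of the idea (simplified; real tc policing counts in bytes per timer tick and also honours the mtu cut-off):

```python
# Token-bucket model of `tc ... police rate R burst B ... drop`:
# tokens (bytes) refill at the policing rate up to the burst size;
# a packet is forwarded only if enough tokens remain, else it is dropped.
class Policer:
    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate = rate_bytes_per_s
        self.burst = burst_bytes
        self.tokens = burst_bytes   # bucket starts full
        self.last = 0.0

    def offer(self, now, pkt_len):
        # refill in proportion to elapsed time, capped at the burst size
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if pkt_len <= self.tokens:
            self.tokens -= pkt_len
            return True    # conforms: forwarded
        return False       # out of profile: dropped

# e.g. ~122kbit/s with a 10KB burst, as in the filter above:
policer = Policer(rate_bytes_per_s=122 * 1000 // 8, burst_bytes=10 * 1024)
```

With this model an initial burst of up to 10KB passes immediately; after that, sustained input above roughly 15KB/s gets dropped, which is how policing throttles a TCP download.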
OpenStack QoS based on InstanceResourceQuota
You can run the command "nova flavor-key m1.small set quota:vif_outbound_average=20" directly in OpenStack (libvirt's outbound is, from the VM's point of view, inbound download traffic). Because OpenStack creates a Linux port, libvirt will generate the following configuration:
<interface type='bridge'>
<mac address='fa:16:3e:db:83:fe'/>
<source bridge='qbr3869c552-8b'/>
<bandwidth>
<outbound average='20'/>
</bandwidth>
<target dev='tap3869c552-8b'/>
<model type='virtio'/>
<alias name='net0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</interface>
This produces the following tc rules:
qdisc pfifo_fast 0: dev tap3869c552-8b root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc ingress ffff: dev tap3869c552-8b parent ffff:fff1 ----------------
For the VM's upload traffic (outbound traffic leaving the VM), set libvirt's inbound instead (nova flavor-key m1.small set quota:vif_inbound_average=20), which produces the following tc rules:
qdisc htb 1: dev tapb1afdddf-ba root refcnt 2 r2q 10 default 1 direct_packets_stat 0 direct_qlen 500
qdisc sfq 2: dev tapb1afdddf-ba parent 1:1 limit 127p quantum 1514b depth 127 divisor 1024 perturb 10sec
Libvirt’s OVS QoS
1, Make qemu support OVS ports
sudo apt-get install qemu-system qemu-kvm virtinst libvirt-bin openvswitch-datapath-source openvswitch-controller openvswitch-switch virt-top virt-manager python-libvirt
sudo ovs-vsctl add-br br-phy
sudo virsh net-destroy default
sudo virsh net-edit default
<network>
<name>br-phy</name>
<forward mode='bridge'/>
<bridge name='br-phy'/>
<virtualport type='openvswitch'/>
</network>
sudo virsh net-undefine default
sudo virsh net-autostart br-phy
2, Use "virsh edit" to add the following configuration under the <interface> element, then restart the VM with "virsh destroy" and "virsh start" for it to take effect
<source network='br-phy'/>
<virtualport type='openvswitch'>
The final configuration looks like this:
<interface type='network'>
<mac address='52:54:00:ae:17:05'/>
<source network='br-phy'/>
<virtualport type='openvswitch'>
<parameters interfaceid='86814ca6-615b-41cd-8d85-4873638d1b66'/>
</virtualport>
<bandwidth>
<outbound average='2048'/>
</bandwidth>
<model type='rtl8139'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
3, Inspect with the "tc qdisc show" command:
qdisc pfifo_fast 0: dev vnet1 root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
Why is there no 'qdisc ingress ffff:' entry to be found? The rest of this article answers that question.
OVS QoS
In OVS, the VM's inbound (download) traffic is configured with the following commands:
root@node1:~# tc qdisc list |grep vnet1
root@node1:~# ovs-vsctl set interface vnet1 ingress_policing_rate=8 ingress_policing_burst=2
root@node1:~# tc qdisc show |grep vnet1
qdisc pfifo_fast 0: dev vnet1 root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc ingress ffff: dev vnet1 parent ffff:fff1 ----------------
root@node1:~# sudo ovs-vsctl set interface vnet1 ingress_policing_rate=0 ingress_policing_burst=0
root@node1:~# sudo ovs-vsctl list interface vnet1
The OVS code path for configuring QoS is:
ovs-vswitchd.c
main-->bridge_run-->bridge_reconfigure-->iface_configure_qos
tc_add_policer at line L4707 of netdev-linux.c configures the policing rate, roughly equivalent to the command:
tc filter add dev <devname> parent ffff: protocol all prio 49 basic police rate <kbits_rate>kbit burst <kbits_burst>k mtu 65535 drop
Libvirt QoS
http://libvirt.org/formatnetwork.html#elementQoS
- In libvirt, virNetDevBandwidthSet() sets ingress rules directly with tc (https://github.com/libvirt/libvirt/blob/v1.2.2-maint/src/util/virnetdevbandwidth.c#L224)
- In OVS, netdev_set_policing() sets ingress rules in OVS's own way (ovs-vsctl set interface vnet1 ingress_policing_rate=8 ingress_policing_burst=2) (https://github.com/openvswitch/ovs/blob/v2.6.1/vswitchd/bridge.c#L4512), but it first deletes the QoS settings libvirt created directly with tc (the tc_add_del_ingress_qdisc function in https://github.com/openvswitch/ovs/blob/master/lib/netdev-linux.c#L2132) and then recreates QoS the OVS way. However, virnetdevopenvswitch.c in libvirt never invokes the OVS QoS commands to create QoS, and that is where this problem comes from.
- Neutron QoSaaS can call the OVS QoS commands to set ingress limits on an OVS port ($neutron/agent/common/ovs_lib.py#_set_egress_bw_limit_for_port()); this can serve as a workaround for libvirt's missing OVS-port QoS support.
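The conflict described in these bullets can be condensed into a toy model. The function names follow virnetdevbandwidth.c and netdev-linux.c, but the bodies are simplified sketches of the control flow, not the real implementations:

```python
# Toy model: libvirt installs an ingress qdisc directly with tc, then OVS's
# netdev_set_policing() unconditionally deletes any existing ingress qdisc
# and only re-adds one when its own kbits_rate is non-zero.

def libvirt_set_bandwidth(dev_qdiscs):
    # virNetDevBandwidthSet(): `tc qdisc add dev $DEV handle ffff: ingress`
    # plus a police filter
    dev_qdiscs.add("ingress ffff:")

def ovs_netdev_set_policing(dev_qdiscs, kbits_rate, kbits_burst):
    # tc_add_del_ingress_qdisc(ifindex, false): always delete first,
    # wiping whatever libvirt configured
    dev_qdiscs.discard("ingress ffff:")
    if kbits_rate:
        # tc_add_del_ingress_qdisc(ifindex, true) + tc_add_policer(...)
        dev_qdiscs.add("ingress ffff:")
```

Running `libvirt_set_bandwidth` and then `ovs_netdev_set_policing` with rate 0 (no OVS-side policing configured) leaves the device with no ingress qdisc at all, which is exactly the symptom from the beginning of this article.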
Finding the cause by debugging libvirtd and ovs-vswitchd together with gdb
Note: for gdb debugging, prefer the dbg packages over building the package from source. If you must build from source, avoid changing the default install prefix with options like '--prefix=/usr';
otherwise, when you later switch back to the debian packages, you will hit all sorts of strange module-dependency problems caused by the mixed-up package paths. Cleaning those up requires not only removing the packages but also running these two commands:
sudo rm -rf /usr/local/lib/libvirt* && sudo rm -rf /usr/local/lib/systemd/system/libvirt*
1, First, use 'sudo virsh start xenial' to trigger the debugging process.
hua@node1:~$ sudo virsh start xenial
2, Then libvirtd creates the ingress tc rules at #L375 and #L385 of virnetdevbandwidth.c (code: https://github.com/libvirt/libvirt/blob/v1.2.2-maint/src/util/virnetdevbandwidth.c#L224)
hua@node1:~$ sudo gdb -p `pidof libvirtd`
.
.
.
(gdb) c
Continuing.
[Switching to Thread 0x7f96d2c9d700 (LWP 7196)]
Thread 4 "libvirtd" hit Breakpoint 1, virNetDevBandwidthSet (ifname=0x7f96b8006b30 "vnet0", bandwidth=bandwidth@entry=0x7f96b8003f20,
hierarchical_class=hierarchical_class@entry=false, swapped=true) at util/virnetdevbandwidth.c:200
200 {
(gdb) info b
Num Type Disp Enb Address What
1 breakpoint keep y 0x00007f96d936ac60 in virNetDevBandwidthSet at util/virnetdevbandwidth.c:200
breakpoint already hit 4 times
(gdb) c
Continuing.
After #L375 and #L385 have run, we can see that the ingress tc rules have been created, using the command 'tc qdisc show |grep vnet0':
hua@node1:~$ tc qdisc show |grep vnet0
qdisc pfifo_fast 0: dev vnet0 root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc ingress ffff: dev vnet0 parent ffff:fff1 ----------------
3, Then ovs-vswitchd stops at the breakpoint we set at lib/netdev-linux.c:2132.
hua@node1:~$ sudo gdb -p `pidof ovs-vswitchd`
.
.
.
(gdb) c
Continuing.
[Thread 0x7fbf6ca34940 (LWP 27610) exited]
Thread 1 "ovs-vswitchd" hit Breakpoint 4, netdev_linux_set_policing (netdev_=0x1517930, kbits_rate=0, kbits_burst=0)
at lib/netdev-linux.c:2132
2132 error = tc_add_del_ingress_qdisc(ifindex, false);
(gdb) info b
Num Type Disp Enb Address What
4 breakpoint keep y 0x00000000005aced2 in netdev_linux_set_policing at lib/netdev-linux.c:2132
breakpoint already hit 5 times
After lib/netdev-linux.c:2132 has run (https://github.com/openvswitch/ovs/blob/master/lib/netdev-linux.c#L2132), we can see the ingress tc rules have been deleted.
(gdb) p kbits_rate
$5 = 0
(gdb) n
2133 if (error) {
(gdb) n
[New Thread 0x7fbf6ca34940 (LWP 27739)]
2139 if (kbits_rate) {
(gdb) n
[Thread 0x7fbf6ca34940 (LWP 27739) exited]
2155 netdev->kbits_rate = kbits_rate;
hua@node1:~$ tc qdisc show |grep vnet0
qdisc pfifo_fast 0: dev vnet0 root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
4, Because we are not using OVS's own way to configure the ingress setting, tc_add_policer(netdev_, kbits_rate, kbits_burst) at #L2147 is never run (https://github.com/openvswitch/ovs/blob/master/lib/netdev-linux.c#L2147), and that is why the problem occurs. This is an OVS limitation.
Appendix - demo of OVS QoS commands
root@node1:~# ovs-appctl -t ovs-vswitchd qos/show-types vnet0
QoS type: linux-fq_codel
QoS type: linux-codel
QoS type: linux-hfsc
QoS type: linux-noop
QoS type: linux-sfq
QoS type: linux-htb
root@node1:~# ovs-vsctl set interface vnet0 ingress_policing_rate=8 ingress_policing_burst=2
root@node1:~# ovs-vsctl list interface vnet0 |grep ingress
ingress_policing_burst: 2
ingress_policing_rate: 8
root@node1:~# ovs-vsctl set port vnet0 qos=@newqos -- --id=@newqos create qos type=linux-noop
2a2ed08d-7b3e-4b03-bb7e-a297b5bdf2a7
root@node1:~# ovs-vsctl list qos
_uuid : 2a2ed08d-7b3e-4b03-bb7e-a297b5bdf2a7
external_ids : {}
other_config : {}
queues : {}
type : linux-noop
root@node1:~# ovs-vsctl list port vnet0 |grep qos
qos : 2a2ed08d-7b3e-4b03-bb7e-a297b5bdf2a7
root@node1:~# ovs-vsctl --all destroy qos
root@node1:~# ovs-vsctl destroy qos 5adaccad-fbe8-49e3-908a-742cab85ce95
root@node1:~# ovs-vsctl list queue
Testing Neutron QoSaaS
Now that we know Neutron QoSaaS implements an ingress driver for OVS, let's test it:
1, Install QoSaaS. Running "juju config neutron-api enable-qos=True" generates the following configuration automatically:
ubuntu@zhhuabj-bastion:~$ juju ssh neutron-api/0 -- sudo grep -r 'qos' /etc/neutron/neutron.conf
service_plugins = router,firewall,vpnaas,metering,neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPluginv2,qos
ubuntu@zhhuabj-bastion:~$ juju ssh neutron-api/0 -- sudo grep -r 'qos' /etc/neutron/plugins/ml2/ml2_conf.ini -B 1
[ml2]
extension_drivers=qos
ubuntu@zhhuabj-bastion:~$ juju ssh neutron-gateway/0 -- sudo grep -r 'qos' /etc/neutron/plugins/ml2/openvswitch_agent.ini -B 1
[agent]
extensions = qos
ubuntu@zhhuabj-bastion:~$ juju ssh nova-compute/0 -- sudo grep -r 'qos' /etc/neutron/plugins/ml2/openvswitch_agent.ini -B 1
[agent]
extensions = qos
2, Neutron has shipped this driver since Liberty. The output below shows two supported rule types: bandwidth_limit, which rate-limits traffic, and dscp_marking, which marks IP traffic with a DSCP value.
ubuntu@zhhuabj-bastion:~$ neutron qos-available-rule-types
+-----------------+
| type |
+-----------------+
| dscp_marking |
| bandwidth_limit |
+-----------------+
dscp_marking is used like this:
neutron qos-policy-create dscp-marking
neutron qos-dscp-marking-rule-create dscp-marking --dscp-mark 26
neutron port-update 750901a3-70b3-4907-a52a-0025fac9d6c1 --qos-policy dscp-marking
3, This article is mainly about testing OVS ingress QoS, so configure the following:
neutron qos-policy-create egress-qos-policy
neutron qos-bandwidth-limit-rule-create egress-qos-policy --max-kbps 300 --max-burst-kbps 30 --egress
neutron qos-policy-list
neutron port-update --qos-policy egress-qos-policy 8c8b9944-c9e1-4343-89e5-03f77c2e058d
#neutron port-update --no-qos-policy 8c8b9944-c9e1-4343-89e5-03f77c2e058d
#neutron port-create <port-name> --qos-policy-id egress-qos-policy
Note: the policy can also be applied at the network level, but the tc rules are still set on the qvo interface:
neutron net-update <net-id> --qos-policy egress-qos-policy
4, Verify: the Neutron QoSaaS service generates the following QoS settings from the configuration above:
root@juju-7e3a3f-xenial-mitaka-qos-8:~# sudo ovs-vsctl list interface qvo239cf73e-7e |grep ingress
ingress_policing_burst: 30
ingress_policing_rate: 300
root@juju-7e3a3f-xenial-mitaka-qos-8:~# tc qdisc show |grep qvo239cf73e-7e
qdisc noqueue 0: dev qvo239cf73e-7e root refcnt 2
qdisc ingress ffff: dev qvo239cf73e-7e parent ffff:fff1 ----------------
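Note how the directions flip: the Neutron policy is egress (traffic leaving the VM), yet OVS applies ingress policing on the qvo port, because traffic leaving the VM enters that port. A sketch of the mapping, assuming the one-to-one rate/burst translation visible in the ovs-vsctl output above (the helper name is made up):

```python
def neutron_egress_limit_to_ovs(max_kbps, max_burst_kbps):
    # "egress" for the VM = traffic entering the qvo/tap port,
    # hence *ingress* policing on the OVS interface
    return {
        "ingress_policing_rate": max_kbps,
        "ingress_policing_burst": max_burst_kbps,
    }
```

For the policy above (max-kbps 300, max-burst-kbps 30) this reproduces the ingress_policing_rate: 300 / ingress_policing_burst: 30 seen on qvo239cf73e-7e.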
5, Miscellaneous: ingress QoS (ingress, i.e. download, from the VM's perspective; egress from the switch's perspective)
openstack network qos policy create ingress-qos-policy
openstack network qos rule create --type bandwidth-limit --max-kbps 300 --max-burst-kbits 30 --ingress ingress-qos-policy
neutron port-update --qos-policy ingress-qos-policy 19d440ef-5f27-4c26-8dc6-c994d9394ea8
root@juju-23f84c-queens-dvr-8:~# tc qdisc show |grep qvo
qdisc htb 1: dev qvo19d440ef-5f root refcnt 2 r2q 10 default 1 direct_packets_stat 0 direct_qlen 1000
6, Miscellaneous: minimum-bandwidth
# minimum-bandwidth is for egress qos, but it's just supported by sriovnicswitch driver, NOT supported by openvswitch driver
openstack network qos policy create bandwidth-control
openstack network qos rule create --type minimum-bandwidth --min-kbps 512 --egress bandwidth-control
#openstack port set --qos-policy bandwidth-control 19d440ef-5f27-4c26-8dc6-c994d9394ea8
neutron port-update --qos-policy bandwidth-control 19d440ef-5f27-4c26-8dc6-c994d9394ea8
7, Miscellaneous: FIP QoS
Since Queens, QoS also supports egress and ingress limits on FIPs. The tc rules can be set on:
- qg device in qr ns for legacy and HA routers
- rfp device in qr ns for DVR local routers
- qg device in snat ns for DVR edge routers
openstack floating ip create --qos-policy ingress-qos-policy ext_net
#neutron port-update --qos-policy egress-qos-policy <FIP-port-id>
neutron floatingip-associate $(neutron floatingip-list |grep 10.5.150.12 |awk '{print $2}') $(neutron port-list |grep '192.168.21.6' |awk '{print $2}')
# No output was actually seen below; needs further investigation when time permits
sudo ip netns exec qrouter-xxx tc qdisc show dev qg-xxx
sudo ip netns exec qrouter-xxx tc -p -s -d filter show dev qg-xxx
ip netns exec qrouter-909c6b55-9bc6-476f-9d28-c32d031c41d7 tc qdisc show dev rfp-909c6b55-9
ip netns exec snat-909c6b55-9bc6-476f-9d28-c32d031c41d7 tc qdisc show dev qg-d5ed764e-a6
20240418 + QoS with OVN + DVR
A customer running a focal yoga OVN DVR environment reported that QoS on the gateway port does not take effect, while QoS on the FIP port does.
In my experiments, QoS seemed to take effect on neither the gateway port nor the FIP port; only the fixed-IP QoS worked:
./tools/float_all.sh
ssh -i ~/testkey.priv ubuntu@10.5.150.84
openstack network qos policy create bw-limiter
openstack network qos rule create --type bandwidth-limit --max-kbps 800 --max-burst-kbits 800 --ingress bw-limiter
#<identify the GW port from the output below>
$ openstack port list --router provider-router |grep ACTIVE
| bc3bbaa7-8a23-4762-a8ce-c149fa5d8c52 | | fa:16:3e:85:f0:d9 | ip_address='10.5.152.109', subnet_id='311da9d4-742f-4659-9acb-54547ae77ba0' | ACTIVE |
| f570a892-adb8-4a23-804c-b92fdf9c71ff | | fa:16:3e:09:44:56 | ip_address='192.168.21.1', subnet_id='81ab4f05-3d47-4a10-b35f-8930789c2792' | ACTIVE |
# set qos on gw port, No bandwidth throttling
openstack port set --qos-policy $(openstack network qos policy show bw-limiter -fvalue -cid) $(openstack port list |grep '10.5.152.109' |awk '{print $2}')
ubuntu@jammy-041217:~$ wget https://cloud-images.ubuntu.com/releases/22.04/release/ubuntu-22.04-server-cloudimg-amd64.img
ubuntu-22.04-server-cloudimg-amd64.img 9%[========> ] 57.97M 21.4MB/s
# unset qos on gw port, and set qos on fip port, No bandwidth throttling
openstack port unset --qos-policy $(openstack port list |grep '10.5.152.109' |awk '{print $2}')
openstack port set --qos-policy $(openstack network qos policy show bw-limiter -fvalue -cid) $(openstack port list |grep '10.5.150.84' |awk '{print $2}')
ubuntu@jammy-041217:~$ wget https://cloud-images.ubuntu.com/releases/22.04/release/ubuntu-22.04-server-cloudimg-amd64.img
ubuntu-22.04-server-cloudimg-amd64.img.1 22%[=============> ] 140.34M 10.9MB/s eta 47s
# unset qos on fip port, and set qos on fixed-ip port, Bandwidth is throttled
openstack port unset --qos-policy $(openstack port list |grep '10.5.150.84' |awk '{print $2}')
openstack port set --qos-policy $(openstack network qos policy show bw-limiter -fvalue -cid) $(openstack port list |grep '192.168.21.103' |awk '{print $2}')
ubuntu@jammy-041217:~$ wget https://cloud-images.ubuntu.com/releases/22.04/release/ubuntu-22.04-server-cloudimg-amd64.img
ubuntu-22.04-server-cloudimg-amd64.img.2 0%[ ] 584.00K 89.5KB/s eta 81m 36s
My test environment was simply a focal-yoga-ovn deployment with 'juju config neutron-api enable-qos=true' applied:
juju config neutron-api enable-qos=true
$ juju ssh neutron-api/0 -- sudo grep -r qos /etc/neutron/
/etc/neutron/plugins/ml2/ml2_conf.ini:extension_drivers=port_security,dns_domain_ports,qos
/etc/neutron/neutron.conf:service_plugins = metering,segments,qos,ovn-router
$ juju ssh nova-compute/0 -- sudo grep -r qos /etc/neutron/ |grep -v '#'
<empty>
The official docs (https://docs.openstack.org/neutron/latest/admin/config-qos.html) mention:
- For QoS on a FIP port to take effect, fip_qos must be enabled on the l3-agent (with DVR, it should be set on all l3-agent and nova-compute nodes), but this applies to OVS; what about OVN?
- For QoS on a gateway port to take effect, gateway_ip_qos must be enabled on the l3-agent (with DVR, also on all nodes running dvr_snat). Again this applies to OVS; what about OVN? (Note: as it turns out below, with OVN neither fip_qos nor gateway_ip_qos matters.)
[agent]
extensions = fip_qos, gateway_ip_qos
The same page also mentions that the OVS implementation requires ovs_use_veth=true on the l3-agent:
As rate limit doesn’t work on Open vSwitch’s internal ports, optionally, as a workaround, to make QoS bandwidth limit work on router’s gateway ports, set ovs_use_veth to True in DEFAULT section in /etc/neutron/l3_agent.ini
[DEFAULT]
ovs_use_veth = True
So the key question is how OVN implements QoS. L3 services that provide QoS extensions:
- L3 router: implements the rate limit using Linux TC.
- OVN L3: implements the rate limit using the OVN QoS metering rules - https://man7.org/linux/man-pages/man8/ovn-nbctl.8.html#LOGICAL_SWITCH_QOS_RULE_COMMANDS
One workaround is to set the QoS policy on the network instead of the port:
# unset qos on fip port, and set qos on network instead of port, Bandwidth is throttled
openstack port unset --qos-policy $(openstack port list |grep '192.168.21.103' |awk '{print $2}')
openstack network set --qos-policy $(openstack network qos policy show bw-limiter -fvalue -cid) ext_net
#openstack network set --no-qos-policy ext_net
ubuntu@jammy-041217:~$ wget https://cloud-images.ubuntu.com/releases/22.04/release/ubuntu-22.04-server-cloudimg-amd64.img
ubuntu-22.04-server-cloudimg-amd64.img 0%[ ] 760.00K 90.4KB/s eta 87m 56s
root@juju-d90d38-yoga-9:/home/ubuntu# ovn-nbctl list qos
_uuid : ec92f7fd-6b38-4d92-a4b5-eef023f643ec
action : {}
bandwidth : {burst=800, rate=800}
direction : to-lport
external_ids : {"neutron:fip_id"="3bf513b4-805d-4bf9-96f8-1173bb4f95c0"}
match : "outport == \"bc3bbaa7-8a23-4762-a8ce-c149fa5d8c52\" && ip4.dst == 10.5.150.84 && is_chassis_resident(\"cr-lrp-bc3bbaa7-8a23-4762-a8ce-c149fa5d8c52\")"
priority : 2002
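For reference, the match string in the FIP QoS record above can be reconstructed programmatically. The helper below is hypothetical; its output format simply mirrors the `ovn-nbctl list qos` record shown above:

```python
def fip_qos_match(gw_port_id, fip):
    """Build the OVN QoS match for a FIP: traffic to the FIP, applied only
    on the chassis where the cr-lrp (gateway) port is resident."""
    return (f'outport == "{gw_port_id}" && ip4.dst == {fip}'
            f' && is_chassis_resident("cr-lrp-{gw_port_id}")')
```

The is_chassis_resident() clause explains why the rule only takes effect on the gateway chassis hosting the cr-lrp port.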
Also, when using OVS instead of OVN, setting QoS on the gateway port works fine, so this really does look like an OVN problem:
#./generate-bundle.sh -s focal -r yoga --name yoga --num-compute 1 --run
./generate-bundle.sh -s focal -r yoga --name yogaovs --num-compute 1 --ml2-ovs --run
$ nova list |grep jammy
| 576f4db9-188e-403d-996e-105bb6d7387e | jammy-100651 | ACTIVE | - | Running | private=192.168.21.9, 10.5.153.215 |
$ openstack port list --router provider-router |grep ACTIVE
| 10f77b18-e49d-4e90-84a7-65b61294b427 | | fa:16:3e:12:48:56 | ip_address='10.5.152.193', subnet_id='5fde4c86-9a48-41a4-9eaa-10e924a6e9c1' | ACTIVE |
| 563887f5-ac7c-485b-a3e1-3dc413213653 | | fa:16:3e:c8:bf:60 | ip_address='192.168.21.1', subnet_id='f8274dda-34e8-4278-87e7-5bc88733fe9a' | ACTIVE |
openstack network qos policy create bw-limiter
openstack network qos rule create --type bandwidth-limit --max-kbps 800 --max-burst-kbits 800 --ingress bw-limiter
#set qos on gw port
openstack port set --qos-policy $(openstack network qos policy show bw-limiter -fvalue -cid) $(openstack port list |grep '10.5.152.193' |awk '{print $2}')
ubuntu@jammy-100651:~$ wget https://cloud-images.ubuntu.com/releases/22.04/release/ubuntu-22.04-server-cloudimg-amd64.img
ubuntu-22.04-server-cloudimg-amd64.img.1 0%[ ] 584.00K 109KB/s eta 94m 44s
Another experiment: set QoS on the gw port using OVN commands directly (the gw port is not a logical_switch_port but a logical_router_port, so the port id must be prefixed with lrp-; also, FIPs show up under 'ovn-nbctl find NAT type=dnat_and_snat', and rate-limiting a FIP is handled not by OVN itself but by neutron networking-ovn, see https://mail.openvswitch.org/pipermail/ovs-discuss/2020-March/049863.html ; the patch adding FIP QoS support to networking-ovn is https://opendev.org/openstack/neutron/commit/e7e71b2ca67169e6de4cdad71f2c82059132325d). No rate limiting was observed this way either.
openstack network set --no-qos-policy ext_net
#<identify the GW port from the output below>
$ openstack port list --router provider-router |grep ACTIVE
| bc3bbaa7-8a23-4762-a8ce-c149fa5d8c52 | | fa:16:3e:85:f0:d9 | ip_address='10.5.152.109', subnet_id='311da9d4-742f-4659-9acb-54547ae77ba0' | ACTIVE |
| f570a892-adb8-4a23-804c-b92fdf9c71ff | | fa:16:3e:09:44:56 | ip_address='192.168.21.1', subnet_id='81ab4f05-3d47-4a10-b35f-8930789c2792' | ACTIVE |
#openstack network qos rule create --type bandwidth-limit --max-kbps 800 --max-burst-kbits 800 --ingress bw-limiter
ovn-nbctl set logical_switch_port bc3bbaa7-8a23-4762-a8ce-c149fa5d8c52 options:qos_max_kbps=800
ovn-nbctl set logical_switch_port bc3bbaa7-8a23-4762-a8ce-c149fa5d8c52 options:qos_max_burst_kbits=800
root@juju-d90d38-yoga-9:/home/ubuntu# ovn-nbctl list logical_switch_port bc3bbaa7-8a23-4762-a8ce-c149fa5d8c52 |grep -E qos
options : {exclude-lb-vips-from-garp="true", mcast_flood_reports="true", nat-addresses=router, qos_max_burst_kbits="800", qos_max_kbps="800", requested-chassis=juju-d90d38-yoga-8.cloud.sts, router-port=lrp-bc3bbaa7-8a23-4762-a8ce-c149fa5d8c52}
ubuntu@jammy-041217:~$ wget https://cloud-images.ubuntu.com/releases/22.04/release/ubuntu-22.04-server-cloudimg-amd64.img
ubuntu-22.04-server-cloudimg-amd64.img 9%[========> ] 60.34M 24.2MB/s
ovn-nbctl set logical_router_port lrp-bc3bbaa7-8a23-4762-a8ce-c149fa5d8c52 options:qos_max_kbps=800
ovn-nbctl set logical_router_port lrp-bc3bbaa7-8a23-4762-a8ce-c149fa5d8c52 options:qos_max_burst_kbits=800
root@juju-d90d38-yoga-9:/home/ubuntu# ovn-nbctl list logical_router_port lrp-bc3bbaa7-8a23-4762-a8ce-c149fa5d8c52 |grep options
options : {qos_max_burst_kbits="800", qos_max_kbps="800"}
ubuntu@jammy-041217:~$ wget https://cloud-images.ubuntu.com/releases/22.04/release/ubuntu-22.04-server-cloudimg-amd64.img
ubuntu-22.04-server-cloudimg-amd64.img 4%[===>
Looking at the code, the cause should be that patch 2d1b4fd80f is not in yoga (20.5.0):
$ git log 20.5.0..master --oneline --no-merges neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/extensions/qos.py
e5d4499672 [ovn] Drop use of LR OVN_GW_NETWORK_EXT_ID_KEY
b1714a2b9d Fix some pylint indentation warnings
846737dac4 [OVN][QoS] Add minimum bandwidth rule support to ML2/OVN
15b826a05f [OVN] Implement GW IP network QoS inheritance
7c2420e3af Add "qos_policy_id" field to "Router" OVO
2d1b4fd80f [OVN] Implement router gateway IP QoS
The obvious suspect is 2d1b4fd80f above, but I ran the following test on focal-zed (2:21.2.0-0ubuntu1~cloud0) as well and it still did not work.
$ openstack port list --router provider-router |grep ACTIVE
| 88cc2be9-79b3-4c45-ab12-bff561541786 | | fa:16:3e:be:f8:ba | ip_address='10.5.150.56', subnet_id='8ef6a506-3ec3-404c-9403-890e4eeb844c' | ACTIVE |
| c46a76fb-f5c2-4239-8523-ab40c35e8954 | | fa:16:3e:6f:f3:ca | ip_address='192.168.21.1', subnet_id='ab935385-d93e-4d8a-a3f2-dbca1af43049' | ACTIVE |
$ openstack port set --qos-policy $(openstack network qos policy show bw-limiter -fvalue -cid) $(openstack port list |grep '10.5.150.56' |awk '{print $2}')
ubuntu@jammy-075939:~$ wget https://cloud-images.ubuntu.com/releases/22.04/release/ubuntu-22.04-server-cloudimg-amd64.img
ubuntu-22.04-server-cloudimg-amd64.img.2 21%[====================> ] 135.64M 29.8MB/s eta 20s
root@juju-bfba10-zed-10:/home/ubuntu# ovn-nbctl list qos
<empty>
But after reading patch 2d1b4fd80f carefully, switching to the following command makes it work on zed:
openstack router set --external-gateway ext_net --qos-policy $(openstack network qos policy show bw-limiter -fvalue -cid) provider-router
$ openstack port show 88cc2be9-79b3-4c45-ab12-bff561541786 |grep qos
| qos_network_policy_id | None |
| qos_policy_id | None |
root@juju-bfba10-zed-10:/home/ubuntu# ovn-nbctl list qos
_uuid : 7dd2f740-940f-49c9-9359-a36b6552231d
action : {}
bandwidth : {burst=800, rate=800}
direction : to-lport
external_ids : {"neutron:router_id"="1158be72-7d2a-453b-8cc4-aad48384d0f1"}
match : "outport == \"88cc2be9-79b3-4c45-ab12-bff561541786\""
priority : 2002
root@juju-bfba10-zed-10:/home/ubuntu# ovn-nbctl list logical_router_port lrp-88cc2be9-79b3-4c45-ab12-bff561541786
_uuid : a83aebce-93a1-49b1-9159-ce4e665c3120
enabled : []
external_ids : {"neutron:network_name"=neutron-af6665eb-7386-45ce-b3ca-455441f4a38d, "neutron:revision_number"="6", "neutron:router_name"="1158be72-7d2a-453b-8cc4-aad48384d0f1", "neutron:subnet_ids"="8ef6a506-3ec3-404c-9403-890e4eeb844c"}
gateway_chassis : [aaf54827-2fe6-4af9-bcca-706ff368df51]
ha_chassis_group : []
ipv6_prefix : []
ipv6_ra_configs : {}
mac : "fa:16:3e:be:f8:ba"
name : lrp-88cc2be9-79b3-4c45-ab12-bff561541786
networks : ["10.5.150.56/16"]
options : {}
peer : []
In yoga, of course, the same command fails because patch 2d1b4fd80f is missing:
$ openstack router set --external-gateway ext_net --qos-policy $(openstack network qos policy show bw-limiter -fvalue -cid) provider-router
The option [tenant_id] has been deprecated. Please avoid using it.
The option [tenant_id] has been deprecated. Please avoid using it.
BadRequestException: 400: Client Error for url: https://10.5.2.200:9696/v2.0/routers/014c33dd-03ac-4ad1-9998-665eb4e1d1ee, Invalid input for external_gateway_info. Reason: Unexpected keys supplied: qos_policy_id.
To summarize:
- FIP QoS works directly with neutron networking-ovn; it is unrelated to fip_qos and gateway_ip_qos.
- Gateway port QoS requires 2d1b4fd80f (so at least zed), and the correct command is: openstack router set --external-gateway ext_net --qos-policy $(openstack network qos policy show bw-limiter -fvalue -cid) provider-router . It is reflected only in 'ovn-nbctl list qos', not in the options of 'ovn-nbctl list logical_router_port lrp-88cc2be9-79b3-4c45-ab12-bff561541786'.
- The OVN QoS on a logical_switch_port mentioned here should relate to options, but that is presumably QoS for fixed IPs (I did not test this; just a guess) - https://www.cnblogs.com/gaozhengwei/p/7100051.html
References
1, http://zhaozhanxu.com/2017/02/06/SDN/OVS/2017-02-06-qos/
2, https://github.com/openvswitch/ovs/blob/master/lib/netdev-linux.c#L4707
3, http://docs.openvswitch.org/en/latest/faq/qos/
4, http://www.cnblogs.com/popsuper1982/p/3803807.html
5, https://mail.openvswitch.org/pipermail/ovs-discuss/2016-October/042681.html
6, http://lib.csdn.net/article/computernetworks/31736
7, http://dannykim.me/danny/openflow/57771?ckattempt=2
8, https://www.openstack.org/assets/presentation-media/What-is-new-in-Neutron-QoS.pdf