Copyright notice: This article may be freely reposted; when reposting, please include a hyperlink to the original source, the author information, and this copyright notice (Author: Zhang Hua, published 2019-04-24)
Problem
Can OpenStack support multiple public IPs on a single instance?
The OpenStack CLI does not support assigning multiple FIPs from the same subnet to a single fixed_ip port. The simplest workaround is to allocate the FIP and then configure it manually inside the qrouter-xxx namespace, adding the DNAT/SNAT rules by hand. Is there a better way?
sudo ip netns exec qrouter-9843a7c4-f1e7-4ea0-b031-c1a0428795be ip addr add 10.5.150.15 dev qg-60d62ebb-47
sudo ip netns exec qrouter-9843a7c4-f1e7-4ea0-b031-c1a0428795be iptables -t nat -A neutron-l3-agent-OUTPUT -d 10.5.150.15/32 -j DNAT --to-destination 192.168.22.53
sudo ip netns exec qrouter-9843a7c4-f1e7-4ea0-b031-c1a0428795be iptables -t nat -A neutron-l3-agent-PREROUTING -d 10.5.150.15/32 -j DNAT --to-destination 192.168.22.53
sudo ip netns exec qrouter-9843a7c4-f1e7-4ea0-b031-c1a0428795be iptables -t nat -A neutron-l3-agent-float-snat -s 192.168.22.53/32 -j SNAT --to-source 10.5.150.15
ubuntu@juju-54f223-pike-5:~$ ping 10.5.150.15
PING 10.5.150.15 (10.5.150.15) 56(84) bytes of data.
64 bytes from 10.5.150.15: icmp_seq=1 ttl=63 time=3.25 ms
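The manual steps above can be collected into a small helper script. This is only a sketch: the router namespace, qg device, FIP, and fixed IP below are copied from this environment and must be replaced with your own values.

```shell
#!/bin/bash
# Sketch: attach an extra FIP to an existing fixed IP by hand.
# All four values are environment-specific placeholders.
ROUTER_NS="qrouter-9843a7c4-f1e7-4ea0-b031-c1a0428795be"
QG_DEV="qg-60d62ebb-47"
FIP="10.5.150.15"
FIXED_IP="192.168.22.53"

NS_EXEC="sudo ip netns exec $ROUTER_NS"

# 1. Put the FIP on the router's external (qg) device.
$NS_EXEC ip addr add "$FIP/32" dev "$QG_DEV"

# 2. DNAT inbound traffic for the FIP to the fixed IP.
$NS_EXEC iptables -t nat -A neutron-l3-agent-OUTPUT     -d "$FIP/32" -j DNAT --to-destination "$FIXED_IP"
$NS_EXEC iptables -t nat -A neutron-l3-agent-PREROUTING -d "$FIP/32" -j DNAT --to-destination "$FIXED_IP"

# 3. SNAT outbound traffic from the fixed IP to the FIP.
$NS_EXEC iptables -t nat -A neutron-l3-agent-float-snat -s "$FIXED_IP/32" -j SNAT --to-source "$FIP"
```

Note that these rules live only in the namespace's runtime state; a restart of the L3 agent will rebuild the chains and the manual entries are lost.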
Creating multiple tenant subnets
Create multiple tenant subnets (fixed IPs), attach them to the VM, then associate a separate FIP with each tenant port. The result: the multiple NICs inside the VM cause an asymmetric-routing problem. Introducing policy routes fixes the outbound direction, but the inbound direction still fails.
step 1, created a test VM on the network private (192.168.21.0/24), with fixed IP 192.168.21.5 and FIP 10.5.150.2
openstack server create --wait --image bionic --flavor m1.small --key-name mykey --nic net-id=e4074ac4-3c48-41f2-81e2-5a798468bf88 --min 1 --max 1 test
fix_ip="192.168.21.5"
public_network=$(openstack network show ext_net -f value -c id)
fip=$(openstack floating ip create $public_network -f value -c floating_ip_address)
openstack floating ip set $fip --fixed-ip-address $fix_ip --port $(openstack port list --fixed-ip ip-address=$fix_ip -c id -f value)
step 2, created another network private2 (192.168.22.0/24), attached a second interface to the test VM, then allocated a second FIP (10.5.150.8) for its fixed IP 192.168.22.53
openstack network create private2
openstack subnet create --subnet-range 192.168.22.0/24 --network private2 --allocation-pool start=192.168.22.50,end=192.168.22.100 --gateway 192.168.22.1 private2-subnet
openstack router add subnet provider-router private2-subnet
nova interface-attach $(openstack server list -f value |awk '/test/ {print $1}') --net-id=$(openstack network list -f value |awk '/private2/ {print $1}')
ssh ubuntu@10.5.150.2 -- sudo ifconfig ens5 up
# the SSH connection then drops, because the default route moves from ens2 to ens5
ssh ubuntu@10.5.150.2 -- sudo dhclient ens5
Finally it looks like - https://paste.ubuntu.com/p/yMJpJXYqDY/
step 3, the default gw is now on ens5, so pinging an external host from inside the test VM works via ens5 but gets no reply via ens2
root@test:~# ping -I ens2 10.230.65.38
PING 10.230.65.38 (10.230.65.38) from 192.168.21.5 ens2: 56(84) bytes of data.
root@test:~# ping -I ens5 10.230.65.38
PING 10.230.65.38 (10.230.65.38) from 192.168.22.53 ens5: 56(84) bytes of data.
64 bytes from 10.230.65.38: icmp_seq=1 ttl=62 time=3.34 ms
step 4, outbound traffic works on both interfaces after adding policy routing rules.
echo "1 t21" >> /etc/iproute2/rt_tables    # register routing table t21
ip route add 192.168.21.0/24 dev ens2 src 192.168.21.5 table t21
ip route add default via 192.168.21.1 dev ens2 table t21    # 192.168.21.1 is the router's interface on this subnet
ip rule add from 192.168.21.0/24 table t21
root@test:~# ping -I ens2 10.230.65.38
PING 10.230.65.38 (10.230.65.38) from 192.168.21.5 ens2: 56(84) bytes of data.
64 bytes from 10.230.65.38: icmp_seq=1 ttl=62 time=6.28 ms
root@test:~# ping -I ens5 10.230.65.38
PING 10.230.65.38 (10.230.65.38) from 192.168.22.53 ens5: 56(84) bytes of data.
64 bytes from 10.230.65.38: icmp_seq=1 ttl=62 time=4.90 ms
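The effect of these rules can be modeled in a few lines of Python (an illustrative model of source-based rule lookup, not the kernel implementation; the `egress_device` helper is hypothetical): a packet sourced from 192.168.21.5 matches the `from` rule and is routed by table t21 out ens2, while everything else falls through to the main table and leaves via ens5.

```python
from ipaddress import ip_address, ip_network

# Illustrative model: policy rules map a source prefix to a routing table,
# and each table's default route names the egress device.
rules = [
    # (source prefix, table) -- mirrors `ip rule add from 192.168.21.0/24 table t21`
    (ip_network("192.168.21.0/24"), "t21"),
]
tables = {
    "t21":  "ens2",  # `ip route add default ... dev ens2 table t21`
    "main": "ens5",  # the VM's default route lives on ens5
}

def egress_device(src: str) -> str:
    """Return the device a packet sourced from `src` would leave through."""
    src_ip = ip_address(src)
    for prefix, table in rules:      # rules are evaluated in priority order
        if src_ip in prefix:
            return tables[table]
    return tables["main"]            # no rule matched: fall back to main

print(egress_device("192.168.21.5"))   # ens2 -- pinned by the policy rule
print(egress_device("192.168.22.53"))  # ens5 -- follows the default route
```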
step 5, but inbound traffic for the two FIPs cannot work at the same time: the FIP whose fixed IP sits on the non-default interface (10.5.150.2 -> ens2) gets no reply
ubuntu@zhhuabj-bastion:~$ ping -c 1 10.5.150.2
PING 10.5.150.2 (10.5.150.2) 56(84) bytes of data.
ubuntu@zhhuabj-bastion:~$ ping -c 1 10.5.150.8
PING 10.5.150.8 (10.5.150.8) 56(84) bytes of data.
64 bytes from 10.5.150.8: icmp_seq=1 ttl=63 time=3.41 ms
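One direction worth investigating for the inbound failure is Linux reverse-path filtering: in strict mode (value 1) the kernel drops packets arriving on an interface that is not the one it would use to reply to the source. The commands below are a diagnostic sketch only; it is an assumption, not verified here, that rp_filter is what drops these packets.

```shell
# Inside the test VM: check reverse-path filtering (0 = off, 1 = strict, 2 = loose).
sysctl net.ipv4.conf.all.rp_filter
sysctl net.ipv4.conf.ens2.rp_filter

# Loosen it on the second interface as an experiment (assumption: rp_filter is
# dropping the inbound packets for the non-default interface).
sudo sysctl -w net.ipv4.conf.all.rp_filter=2
sudo sysctl -w net.ipv4.conf.ens2.rp_filter=2
```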
Creating multiple FIP subnets
Creating multiple external networks does not work either: multiple external networks imply multiple routers, and a tenant subnet cannot be added to more than one router:
ubuntu@zhhuabj-bastion:~$ neutron router-interface-add provider-router2 private_subnet
IP address 192.168.21.1 already allocated in subnet ada0b5b0-6780-4131-b07b-cee7069eab8d
Neutron server returns request_ids: ['req-8110ac5