Author: Zhang Hua  Published: 2015-12-31
Copyright notice: This article may be freely reproduced, provided the original source, the author information and this copyright notice are clearly indicated with a hyperlink.
( http://blog.csdn.net/quqi99 )
Update 2019-04-30 - This can now be solved with "juju config neutron-openvswitch enable-local-dhcp-and-metadata=True".
Using an external physical router means neutron-l3-agent is not used, so the network must be created with --router:external=True:
neutron net-create phy_net -- --router:external=True --provider:network_type flat --provider:physical_network physnet1
The example above uses a flat network, so bridge_mappings = physnet1:br-phy must be configured (bridge_mappings only applies to flat and vlan networks). Without bridge_mappings, booting a VM fails with a bind_failed error. Another possible cause of bind_failed is agent_down_time in neutron.conf being set too low, so the heartbeat check declares the agent dead and port binding cannot find a live agent.
/etc/neutron/plugins/ml2/ml2_conf.ini:
[ml2]
tenant_network_types = flat,vlan,gre,vxlan
type_drivers = local,flat,vlan,gre,vxlan
mechanism_drivers = openvswitch
[ovs]
bridge_mappings = physnet1:br-phy
When installing with devstack, the corresponding configuration parameters are:
Q_ML2_TENANT_NETWORK_TYPE=flat,vlan,gre,vxlan
OVS_BRIDGE_MAPPINGS=physnet1:br-phy
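If bind_failed persists, the agent_down_time cause mentioned above can be checked roughly like this (a sketch; the values are only examples, and agent_down_time should stay comfortably larger than the agents' report_interval):
demo@openstack:~$ neutron agent-list    # the 'alive' column of the compute node's Open vSwitch agent should show :-)
/etc/neutron/neutron.conf:
[DEFAULT]
agent_down_time = 75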
Currently only l3-agent and dhcp-agent provide the metadata service. The metadata namespace proxy distinguishes metadata traffic coming from different tenants and forwards it over a unix socket to the metadata agent; the metadata agent then adds the HTTP headers that nova-metadata-api needs and proxies the request to it. If we do not use the metadata service provided by l3-agent or dhcp-agent, a program of our own would have to do both jobs itself: tell the namespaces apart and pass the HTTP headers. So if we want the metadata service, we should stick with the one provided by l3-agent or dhcp-agent.
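For reference, the request that finally reaches nova-metadata-api looks roughly like this (an illustrative sketch; the placeholder values depend on the deployment). The namespace proxy adds X-Forwarded-For and X-Neutron-Router-ID (or X-Neutron-Network-ID in the dhcp-agent case); the metadata agent then adds X-Instance-ID, X-Tenant-ID and X-Instance-ID-Signature (an HMAC computed with metadata_proxy_shared_secret):
GET /latest/meta-data/instance-id HTTP/1.1
Host: 169.254.169.254
X-Forwarded-For: 172.16.1.100
X-Neutron-Router-ID: 05591292-1191-4f50-9503-215b6962aaec
X-Instance-ID: <nova instance uuid>
X-Tenant-ID: <project uuid>
X-Instance-ID-Signature: <hmac signature>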
The configuration for using the metadata service provided by dhcp-agent is as follows:
demo@openstack:~$ grep -r '^enable_' /etc/neutron/dhcp_agent.ini
enable_isolated_metadata = True
enable_metadata_network = True
demo@openstack:~$ grep -r '^enable_' /etc/neutron/l3_agent.ini
enable_metadata_proxy = False
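With these settings each qdhcp namespace runs its own neutron-ns-metadata-proxy listening on port 80, which can be confirmed with something like (the network uuid is a placeholder):
sudo ip netns exec qdhcp-<network-uuid> netstat -lnpt | grep ':80'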
Note that enable_isolated_metadata = True only takes effect on a truly isolated network; a non-isolated network is one where a port on the subnet owns the subnet's gateway IP (i.e. a router is attached). So there are three ways to handle this:
- Create a truly isolated network with --no-gateway: subnet-create net1 172.17.17.0/24 --no-gateway --name=sub1
- Or do not create a neutron router and use an external router via --router:external=True: neutron net-create phy_net -- --router:external=True --provider:network_type flat --provider:physical_network physnet1
- Or force the metadata service on with force_metadata=True, as shown below.
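The third option corresponds to the following dhcp_agent.ini setting (a newer option; it makes dhcp-agent spawn the metadata proxy even for non-isolated subnets):
/etc/neutron/dhcp_agent.ini:
[DEFAULT]
force_metadata = True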
The configuration for using the metadata service provided by l3-agent is as follows. If dhcp-agent is not used at all, the dhcp-agent process can be stopped and the dhcp_agent.ini settings below skipped, but dhcp_agent_notification=False should then be set to avoid the dependency:
demo@openstack:~$ grep -r '^enable_' /etc/neutron/dhcp_agent.ini
enable_isolated_metadata = True
enable_metadata_network = False
demo@openstack:~$ grep -r '^enable_' /etc/neutron/l3_agent.ini
enable_metadata_proxy = True
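The dhcp_agent_notification flag mentioned above is a server-side option in neutron.conf, e.g.:
/etc/neutron/neutron.conf:
[DEFAULT]
dhcp_agent_notification = False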
Now an external DHCP server is used, so we can use l3-agent purely for the metadata service, while normal L3 traffic still goes through the external router.
neutron subnet-create --allocation-pool start=172.16.1.102,end=172.16.1.126 --gateway 172.16.1.2 phy_net 172.16.1.101/24 --enable_dhcp=False --name=phy_subnet_without_dhcp
The VM then needs static routes: metadata traffic goes through neutron-l3-agent (e.g. the gateway provided by neutron: 172.16.1.2), while normal L3 traffic goes through the external router (e.g. the external gateway IP: 172.16.1.1). There are two ways to achieve this:
1. Bake the static routes into the image when building it (see the sketch after the dnsmasq opts file below).
2. Use the static-route feature of the external DHCP server. With dnsmasq, for example, the configuration looks like this:
# Not sure why qbr59bbcb56-86 has to be used here for the OpenStack VM; using br-phy did not work, but qbr59bbcb56-86 did.
sudo ifconfig qbr59bbcb56-86 172.16.1.99/24
sudo dnsmasq --strict-order --bind-interfaces -i qbr59bbcb56-86 --dhcp-range=set:tag0,172.16.1.100,172.16.1.109,2h --dhcp-optsfile=/home/demo/opts -d
demo@openstack:~/devstack$ route -n |grep qbr
172.16.1.0 0.0.0.0 255.255.255.0 U 0 0 0 qbr59bbcb56-86
demo@openstack:~$ cat /home/demo/opts
tag:tag0,option:classless-static-route,169.254.169.254/32,172.16.1.2,0.0.0.0/0,172.16.1.1
tag:tag0,249,169.254.169.254/32,172.16.1.2,0.0.0.0/0,172.16.1.1
tag:tag0,option:router,172.16.1.1
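For comparison, option 1 above (baking the routes into the image) boils down to running something like this at boot, e.g. from rc.local inside the image (a sketch using the gateways from this example):
ip route add 169.254.169.254/32 via 172.16.1.2 dev eth0    # metadata -> neutron l3-agent
ip route replace default via 172.16.1.1 dev eth0           # everything else -> external router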
Make sure the VM allows DHCP responses coming from the 172.16.1.0/24 network:
-A neutron-openvswi-i59bbcb56-8 -s 172.16.1.0/24 -p udp -m udp --sport 67 --dport 68 -j RETURN
-A neutron-openvswi-o59bbcb56-8 -p udp -m udp --sport 68 --dport 67 -m comment --comment "Allow DHCP client traffic." -j RETURN
$ sudo udhcpc eth0
udhcpc (v1.20.1) started
WARN: '/usr/share/udhcpc/default.script' should not be used in cirros. Replaced by cirros-dhcpc.
Sending discover...
Sending select for 172.16.1.100...
Lease of 172.16.1.100 obtained, lease time 7200
WARN: '/usr/share/udhcpc/default.script' should not be used in cirros. Replaced by cirros-dhcpc.
$
demo@openstack:~$ sudo dnsmasq --strict-order --bind-interfaces -i qbr59bbcb56-86 --dhcp-range=set:tag0,172.16.1.100,172.16.1.109,2h --dhcp-optsfile=/home/demo/opts -d
dnsmasq: started, version 2.68 cachesize 150
dnsmasq: compile time options: IPv6 GNU-getopt DBus i18n IDN DHCP DHCPv6 no-Lua TFTP conntrack ipset auth
dnsmasq-dhcp: DHCP, IP range 172.16.1.100 -- 172.16.1.109, lease time 2h
dnsmasq-dhcp: DHCP, sockets bound exclusively to interface qbr59bbcb56-86
dnsmasq: reading /etc/resolv.conf
dnsmasq: using nameserver 192.168.100.1#53
dnsmasq: read /etc/hosts - 5 addresses
dnsmasq-dhcp: read /home/demo/opts
dnsmasq-dhcp: DHCPDISCOVER(qbr59bbcb56-86) fa:16:3e:79:1e:2c
dnsmasq-dhcp: DHCPOFFER(qbr59bbcb56-86) 172.16.1.100 fa:16:3e:79:1e:2c
dnsmasq-dhcp: DHCPREQUEST(qbr59bbcb56-86) 172.16.1.100 fa:16:3e:79:1e:2c
dnsmasq-dhcp: DHCPACK(qbr59bbcb56-86) 172.16.1.100 fa:16:3e:79:1e:2c
Switching to sudo cirros-dhcpc up eth0 succeeded:
$ sudo cirros-dhcpc up eth0
udhcpc (v1.20.1) started
demo@openstack:~/devstack$ ping -c 1 172.16.1.100
PING 172.16.1.100 (172.16.1.100) 56(84) bytes of data.
64 bytes from 172.16.1.100: icmp_seq=1 ttl=64 time=0.400 ms
--- 172.16.1.100 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.400/0.400/0.400/0.000 ms
demo@openstack:~$ ssh cirros@172.16.1.100
The authenticity of host '172.16.1.100 (172.16.1.100)' can't be established.
RSA key fingerprint is fe:f2:85:fd:81:96:3c:94:78:a4:be:b0:41:59:ca:37.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '172.16.1.100' (RSA) to the list of known hosts.
cirros@172.16.1.100's password:
$ route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 172.16.1.1 0.0.0.0 UG 0 0 0 eth0
169.254.169.254 172.16.1.2 255.255.255.255 UGH 0 0 0 eth0
172.16.1.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
$ ip addr show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether fa:16:3e:79:1e:2c brd ff:ff:ff:ff:ff:ff
inet 172.16.1.100/24 brd 172.16.1.255 scope global eth0
inet6 fe80::f816:3eff:fe79:1e2c/64 scope link
valid_lft forever preferred_lft forever
Once the VM's metadata traffic reaches the l3-agent node via the static route above, the iptables rule below redirects it to the neutron-ns-metadata-proxy process listening on port 9697. (If the metadata service were provided by dhcp-agent instead, the proxy would listen on port 80 and this rule would not be needed.) At this point the whole path works.
demo@openstack:~$ sudo ip netns exec qrouter-05591292-1191-4f50-9503-215b6962aaec iptables-save |grep 9697
-A neutron-vpn-agen-PREROUTING -d 169.254.169.254/32 -i qr-+ -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9697
-A neutron-vpn-agen-INPUT -p tcp -m tcp --dport 9697 -j DROP
demo@openstack:~$ ps -ef|grep metadata
demo 9648 9615 1 09:54 pts/15 00:01:28 python /usr/local/bin/neutron-metadata-agent --config-file /etc/neutron/neutron.conf --config-file=/etc/neutron/metadata_agent.ini
demo 10057 1 0 09:54 ? 00:00:00 /usr/bin/python /usr/local/bin/neutron-ns-metadata-proxy --pid_file=/opt/stack/data/neutron/external/pids/05591292-1191-4f50-9503-215b6962aaec.pid --metadata_proxy_socket=/opt/stack/data/neutron/metadata_proxy --router_id=05591292-1191-4f50-9503-215b6962aaec --state_path=/opt/stack/data/neutron --metadata_port=9697 --metadata_proxy_user=1000 --metadata_proxy_group=1000 --verbose
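As a final check from inside the VM (the returned value will differ per deployment), the nova instance id should come back if the whole chain of static route -> iptables REDIRECT -> neutron-ns-metadata-proxy -> neutron-metadata-agent -> nova-metadata-api is working:
$ wget -qO- http://169.254.169.254/latest/meta-data/instance-id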