Building a Highly Available OpenStack (Queens) Cluster, Part 6: Deploying the Neutron Controller/Network Node Cluster

I. Deploying the Neutron Controller/Network Node Cluster

  I. Introduction to OpenStack Neutron

  1. Overview

  OpenStack Networking (neutron) allows you to create networks and attach interface devices, managed by other OpenStack services, to those networks.

  OpenStack Networking interacts mainly with OpenStack Compute (nova) to provide network connectivity for its instances.

  2. Neutron components

    (1) neutron-server
        Accepts API requests and routes them to the appropriate OpenStack Networking plug-in for action.
    (2) OpenStack Networking plug-ins and agents
        Plug and unplug ports, create networks and subnets, and provide IP addressing. The plug-ins and agents differ by vendor and technology, e.g. Linux Bridge and Open vSwitch.
    (3) Messaging queue
        Used by most OpenStack Networking installations to route information between neutron-server and the various agents; it also acts as a database to store networking state for particular plug-ins.

  3. Network modes and concepts (virtualized networking)

        [ KVM ] Four basic network models
        [ KVM network virtualization ] Open vSwitch

  II. Deploying the Neutron Controller/Network Node Cluster

  Adjust the NIC information below to match your own environment.

  1. Create the neutron database

  Create the database on any controller node; the backend database cluster replicates the data automatically.

  mysql -u root -p

CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '123456';
flush privileges;
exit;
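
  As a quick optional check (not part of the original walkthrough), confirm the grants took effect by logging in as the new account from a controller node; 123456 is the example password used throughout this series:

# should list the neutron database if the grants are in place
mysql -h controller01 -u neutron -p123456 -e "SHOW DATABASES;"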
  2. Create the neutron API

  Perform the following on any controller node.

  Authentication is required to call the neutron service; simply source the admin credentials script:

 . admin-openrc
    (1) Create the neutron user

  The service project was already created in the glance chapter; the neutron user lives in the "default" domain.

[root@controller01 ~]# openstack user create --domain default --password=neutron_pass neutron
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 5191f36f92854185a18538ccbcac39ed |
| name                | neutron                          |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
    (2) Grant the neutron user the admin role

  Grant the admin role to the neutron user (this command produces no output):

openstack role add --project service --user neutron admin
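
  Since the command is silent, the assignment can be verified explicitly if desired (a standard openstackclient query):

openstack role assignment list --user neutron --project service --names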
    (3) Create the neutron service entity

  The neutron service entity is of type "network":

[root@controller01 ~]# openstack service create --name neutron --description "OpenStack Networking" network
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Networking             |
| enabled     | True                             |
| id          | db519ab1d6654bf8af0cccabddf5a0cc |
| name        | neutron                          |
| type        | network                          |
+-------------+----------------------------------+
    (4) Create the neutron API endpoints

  Notes:

  1. The region must match the region generated when the admin user was initialized;
  2. The API addresses all use the VIP; if public/internal/admin use different VIPs, distinguish them accordingly;
  3. The neutron API service type is "network".
# public api
[root@controller01 ~]# openstack endpoint create --region RegionTest network public http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 87bb1951240b4cce8b56406642a0d169 |
| interface    | public                           |
| region       | RegionTest                       |
| region_id    | RegionTest                       |
| service_id   | db519ab1d6654bf8af0cccabddf5a0cc |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+

# internal api
[root@controller01 ~]# openstack endpoint create --region RegionTest network internal http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | ab8bfd0e17b945e7bd54c44514965d9f |
| interface    | internal                         |
| region       | RegionTest                       |
| region_id    | RegionTest                       |
| service_id   | db519ab1d6654bf8af0cccabddf5a0cc |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+

# admin api
[root@controller01 ~]# openstack endpoint create --region RegionTest network admin http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | adbcdd77fd9347ed95023a93b62edcff |
| interface    | admin                            |
| region       | RegionTest                       |
| region_id    | RegionTest                       |
| service_id   | db519ab1d6654bf8af0cccabddf5a0cc |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+
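
  Optionally, confirm that all three endpoints were registered (a standard openstackclient query):

openstack endpoint list --service network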
  3. Install neutron

  Install the neutron-related packages on all controller nodes:

yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge python-neutronclient ebtables ipset -y
  4. Configure neutron.conf

  Perform the following on all controller nodes.

  Notes:

  1. Adjust the "bind_host" parameter to match each node;
  2. neutron.conf ownership should be root:neutron
cp -rp /etc/neutron/neutron.conf{,.bak}
egrep -v "^$|^#" /etc/neutron/neutron.conf
[DEFAULT]
bind_host = 10.20.9.189
auth_strategy = keystone
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
# L3 HA can run in either VRRP mode or DVR mode;
# in VRRP mode, the network nodes (here co-located with the controller nodes) run active/standby virtual routers via VRRP; when the master fails, the virtual router itself does not migrate; instead, the VIP the router serves floats over to a standby router;
# in DVR mode, L3 forwarding and NAT are distributed to the compute nodes, which thereby take on network-node functionality; DVR still cannot eliminate the centralized virtual router, however: to conserve public IPv4 addresses, SNAT remains on the network nodes;
# VRRP mode and DVR mode cannot be used at the same time
# Neutron L3 Agent HA with VRRP: http://www.cnblogs.com/sammyliu/p/4692081.html
# Neutron Distributed Virtual Routing: http://www.cnblogs.com/sammyliu/p/4713562.html
# "l3_ha = true" enables the L3 HA feature
l3_ha = true
# maximum number of L3 agents on which one HA router is scheduled
max_l3_agents_per_router = 3
# minimum number of healthy L3 agents required to create an HA router
min_l3_agents_per_router = 2
# VRRP heartbeat network
l3_ha_net_cidr = 169.254.192.0/18
# "router_distributed" controls whether routers created by ordinary users are DVR by default; it defaults to "false"; since VRRP mode is used here, it can stay commented out
# although from Mitaka onward this parameter can be enabled together with l3_ha, DVR mode additionally requires changes to l3_agent.ini and ml2_conf.ini on the network and compute nodes
# router_distributed = true
# DHCP HA: spawn one DHCP server on each of the 3 network nodes
dhcp_agents_per_network = 3
# with haproxy in front, services may hit connection timeouts and reconnects against rabbitmq; check the individual service logs and the rabbitmq logs;
# transport_url = rabbit://openstack:openstack@controller:5673
# rabbitmq has its own clustering; the official docs recommend connecting to the rabbitmq cluster directly, although services occasionally fail to start that way for unknown reasons; if you do not hit that problem, connecting directly to the rabbitmq cluster rather than through the front-end haproxy is strongly recommended
transport_url=rabbit://openstack:openstack@controller01:5672,openstack:openstack@controller02:5672,openstack:openstack@controller03:5672
[agent]
[cors]
[database]
connection = mysql+pymysql://neutron:123456@controller/neutron
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller01:11211,controller02:11211,controller03:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron_pass
[matchmaker_redis]
[nova]
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionTest
project_name = service
username = nova
password = nova_pass
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[quotas]
[ssl]
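
  Since bind_host is the only per-node value in this file, a convenient alternative to hand-editing on each node is openstack-config (provided by the openstack-utils package; this is a suggestion, not part of the original walkthrough):

# on controller01; substitute each node's own management address
openstack-config --set /etc/neutron/neutron.conf DEFAULT bind_host 10.20.9.189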
  5. Configure ml2_conf.ini

  Perform the following on all controller nodes.

  Note: ml2_conf.ini ownership should be root:neutron

    For a single-NIC setup, use: flat_networks = provider

cp -rp /etc/neutron/plugins/ml2/ml2_conf.ini{,.bak}
cat> /etc/neutron/plugins/ml2/ml2_conf.ini<<EOF
[DEFAULT]
[l2pop]
[ml2]
type_drivers = flat,vlan,vxlan
# ml2 mechanism_driver list; l2population takes effect for gre/vxlan tenant networks
mechanism_drivers = linuxbridge,l2population
# multiple tenant network types can be set; the first value is the default when a regular tenant creates a network, and is also the default network type carrying the master router's heartbeat
tenant_network_types = vlan,vxlan,flat
extension_drivers = port_security
[ml2_type_flat]
# names the flat network type "external"; "*" means any network name, an empty value disables flat networks
flat_networks = external
[ml2_type_geneve]
[ml2_type_gre]
[ml2_type_vlan]
# names the vlan network type "vlan"; if no vlan id range is given, the range is unrestricted
network_vlan_ranges = vlan:3001:3500
[ml2_type_vxlan]
vni_ranges = 10001:20000
[securitygroup]
enable_ipset = true
EOF

  Service initialization reads the ML2 configuration through /etc/neutron/plugin.ini, so link it to ml2_conf.ini:

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
  6. Configure linuxbridge_agent.ini

  Perform the following on all controller nodes.

    (1) Configure linuxbridge_agent.ini

  Note: linuxbridge_agent.ini ownership should be root:neutron

    For a single-NIC setup, use: physical_interface_mappings = provider:ens192

cp -rp /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.bak}
cat>/etc/neutron/plugins/ml2/linuxbridge_agent.ini<<EOF
[DEFAULT]
[agent]
[linux_bridge]
# network-type names map to physical NICs: here the flat external network maps to the planned eth1, and the vlan tenant network to the planned eth3; networks are created against the network name, not the NIC name;
# note that NIC names are host-local; use whatever NIC names the host actually has;
# a separate "bridge_mappings" parameter maps to bridges instead
physical_interface_mappings = external:eth1,vlan:eth3
[network_log]
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
enable_security_group = true
[vxlan]
enable_vxlan = true
# VTEP endpoint for the tunnel (vxlan) tenant network, corresponding to (the address of) the planned eth2; adjust per node
local_ip = 10.0.0.31
l2_population = true
EOF
sed -i 's/10.0.0.31/10.20.9.189/g' /etc/neutron/plugins/ml2/linuxbridge_agent.ini
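
  Rather than hard-coding each node's address in the sed command, the tunnel address can be derived from the tunnel NIC itself; a minimal sketch, assuming eth2 carries the VXLAN network on every node as planned:

# read this node's IPv4 address from eth2 and patch local_ip in place
LOCAL_IP=$(ip -4 -o addr show dev eth2 | awk '{print $4}' | cut -d/ -f1)
sed -i "s#^local_ip = .*#local_ip = ${LOCAL_IP}#" /etc/neutron/plugins/ml2/linuxbridge_agent.ini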
    (2) Configure kernel parameters
  • the net.bridge parameters below control whether traffic crossing a bridge is passed through iptables/ip6tables;
  • if "sysctl -p" fails with a "No such file or directory" error, the kernel module "br_netfilter" needs to be loaded first;
  • "modinfo br_netfilter" shows the module information;
  • "modprobe br_netfilter" loads the module
echo "# bridge" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf
sysctl -p

  If the module is missing, the error looks like this:

# sysctl -p
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory

  Fix:

[root@controller01 ml2]#  modprobe br_netfilter
[root@controller01 ml2]#  ls /proc/sys/net/bridge
bridge-nf-call-arptables  bridge-nf-call-iptables        bridge-nf-filter-vlan-tagged
bridge-nf-call-ip6tables  bridge-nf-filter-pppoe-tagged  bridge-nf-pass-vlan-input-dev
[root@controller01 ml2]#  sysctl -p
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
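
  Note that modprobe only loads the module until the next reboot; to have br_netfilter load automatically at boot (an optional extra step, using systemd's standard modules-load.d mechanism):

echo br_netfilter > /etc/modules-load.d/br_netfilter.conf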
  7. Configure l3_agent.ini (self-service networking)

  Perform the following on all controller nodes.

  Note: l3_agent.ini ownership should be root:neutron

cp -rp /etc/neutron/l3_agent.ini{,.bak}

# egrep -v "^$|^#"  /etc/neutron/l3_agent.ini

cat>/etc/neutron/l3_agent.ini<<EOF
[DEFAULT]
interface_driver = linuxbridge
[agent]
[ovs]
EOF
  8. Configure dhcp_agent.ini

  Perform the following on all controller nodes.

  dnsmasq provides the DHCP service;

  dhcp_agent.ini ownership should be root:neutron

cp -rp /etc/neutron/dhcp_agent.ini{,.bak}

# egrep -v "^$|^#" /etc/neutron/dhcp_agent.ini

cat>/etc/neutron/dhcp_agent.ini<<EOF
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
[agent]
[ovs]
EOF
  9. Configure metadata_agent.ini

  Perform the following on all controller nodes.

  Notes:

  1. metadata_proxy_shared_secret: must match the corresponding parameter in /etc/nova/nova.conf;
  2. metadata_agent.ini ownership should be root:neutron
cp -rp /etc/neutron/metadata_agent.ini{,.bak}

# egrep -v "^$|^#"  /etc/neutron/metadata_agent.ini
cat>/etc/neutron/metadata_agent.ini<<EOF
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = neutron_metadata_secret
[agent]
[cache]
EOF
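
  The root:neutron ownership called out in the notes above can be restored in one pass, since cat> recreates each file owned by root:root; a minimal sketch covering the files rewritten in this chapter:

chown root:neutron /etc/neutron/neutron.conf \
 /etc/neutron/plugins/ml2/ml2_conf.ini \
 /etc/neutron/plugins/ml2/linuxbridge_agent.ini \
 /etc/neutron/l3_agent.ini \
 /etc/neutron/dhcp_agent.ini \
 /etc/neutron/metadata_agent.ini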
  10. Configure nova.conf

  Perform the following on all controller nodes.

  Notes:

  1. Only the "[neutron]" section of nova.conf is affected;
  2. metadata_proxy_shared_secret: must match the parameter in /etc/neutron/metadata_agent.ini;
  3. Add the following to /etc/nova/nova.conf:
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionTest
project_name = service
username = neutron
password = neutron_pass
service_metadata_proxy = true
metadata_proxy_shared_secret = neutron_metadata_secret
  11. Populate the neutron database

  Perform on any controller node.

  Populate the database (this takes a while; "OK" at the end indicates success):

su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

  Verify:

mysql -h controller01 -u neutron -p123456 -e "use neutron;show tables;"
  12. Start the services

  Perform on all controller nodes.

    (1) Since the nova configuration changed, the nova service must be restarted first
systemctl restart openstack-nova-api.service
systemctl status openstack-nova-api.service
    (2) Start the neutron services and enable them at boot
systemctl enable neutron-server.service \
 neutron-linuxbridge-agent.service \
 neutron-l3-agent.service \
 neutron-dhcp-agent.service \
 neutron-metadata-agent.service

systemctl restart neutron-server.service
systemctl restart neutron-linuxbridge-agent.service
systemctl restart neutron-l3-agent.service
systemctl restart neutron-dhcp-agent.service
systemctl restart neutron-metadata-agent.service
    (3) Check the service status
systemctl  status neutron-server.service \
 neutron-linuxbridge-agent.service \
 neutron-l3-agent.service \
 neutron-dhcp-agent.service \
 neutron-metadata-agent.service
  13. Verify
. admin-openrc 

  List the loaded network extensions (the output is long, so it is omitted here):

openstack extension list --network
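
  If the full table is unwieldy, the output can be trimmed to a couple of columns (column selection with -c is a standard openstackclient feature):

openstack extension list --network -c Name -c Alias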

  Check the agent services:

[root@controller01 neutron]# openstack network agent list

+--------------------------------------+--------------------+--------------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host         | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+--------------+-------------------+-------+-------+---------------------------+
| 1f010128-4955-4859-8cc5-7c065fdaa810 | Metadata agent     | controller02 | None              | :-)   | UP    | neutron-metadata-agent    |
| 1f513f25-3679-43b5-8211-0935829f1022 | Linux bridge agent | controller01 | None              | :-)   | UP    | neutron-linuxbridge-agent |
| 543aea85-d06f-448a-85fa-ff704dcc164b | Linux bridge agent | controller03 | None              | :-)   | UP    | neutron-linuxbridge-agent |
| 62152d6e-d159-4960-b79d-de5b78e395b7 | DHCP agent         | controller01 | nova              | :-)   | UP    | neutron-dhcp-agent        |
| 7950dcad-c195-4980-9c90-038be596a88c | L3 agent           | controller02 | nova              | :-)   | UP    | neutron-l3-agent          |
| 9eb8f181-c422-4e48-9ee7-fa21df2abb9b | Linux bridge agent | controller02 | None              | :-)   | UP    | neutron-linuxbridge-agent |
| a4305c36-0f77-441b-ab8e-ab3e21ea4ffb | DHCP agent         | controller03 | nova              | :-)   | UP    | neutron-dhcp-agent        |
| b60b303c-b3f8-4b6d-9f9b-efe019b7d2b5 | Metadata agent     | controller03 | None              | :-)   | UP    | neutron-metadata-agent    |
| e2748453-bde1-493a-85b9-e5aec12c87f5 | L3 agent           | controller03 | nova              | :-)   | UP    | neutron-l3-agent          |
| ee3a0871-5451-4373-83ce-aefaefdbe6d3 | DHCP agent         | controller02 | nova              | :-)   | UP    | neutron-dhcp-agent        |
| f4c26d8d-7e94-4d0b-a85b-373d55ca3402 | L3 agent           | controller01 | nova              | :-)   | UP    | neutron-l3-agent          |
| fd448e65-cb6f-43c3-a98a-adba06f73176 | Metadata agent     | controller01 | None              | :-)   | UP    | neutron-metadata-agent    |
+--------------------------------------+--------------------+--------------+-------------------+-------+-------+---------------------------+

  14. Configure pcs resources
    (1) Add the neutron-server, neutron-linuxbridge-agent, neutron-l3-agent, neutron-dhcp-agent and neutron-metadata-agent resources
pcs resource create neutron-server systemd:neutron-server --clone interleave=true
pcs resource create neutron-linuxbridge-agent systemd:neutron-linuxbridge-agent --clone interleave=true
pcs resource create neutron-l3-agent systemd:neutron-l3-agent --clone interleave=true
pcs resource create neutron-dhcp-agent systemd:neutron-dhcp-agent --clone interleave=true
pcs resource create neutron-metadata-agent systemd:neutron-metadata-agent --clone interleave=true
    (2) Check the pcs resources
[root@controller01 neutron]# pcs resource
 vip    (ocf::heartbeat:IPaddr2):    Started controller01
 Clone Set: lb-haproxy-clone [lb-haproxy]
     Started: [ controller01 ]
     Stopped: [ controller02 controller03 ]
 Clone Set: openstack-keystone-clone [openstack-keystone]
     Started: [ controller01 controller02 controller03 ]
 Clone Set: openstack-glance-api-clone [openstack-glance-api]
     Started: [ controller01 controller02 controller03 ]
 Clone Set: openstack-glance-registry-clone [openstack-glance-registry]
     Started: [ controller01 controller02 controller03 ]
 Clone Set: openstack-nova-api-clone [openstack-nova-api]
     Started: [ controller01 controller02 controller03 ]
 Clone Set: openstack-nova-consoleauth-clone [openstack-nova-consoleauth]
     Started: [ controller01 controller02 controller03 ]
 Clone Set: openstack-nova-scheduler-clone [openstack-nova-scheduler]
     Started: [ controller01 controller02 controller03 ]
 Clone Set: openstack-nova-conductor-clone [openstack-nova-conductor]
     Started: [ controller01 controller02 controller03 ]
 Clone Set: openstack-nova-novncproxy-clone [openstack-nova-novncproxy]
     Started: [ controller01 controller02 controller03 ]
 Clone Set: neutron-server-clone [neutron-server]
     Started: [ controller01 controller02 controller03 ]
 Clone Set: neutron-linuxbridge-agent-clone [neutron-linuxbridge-agent]
     Started: [ controller01 controller02 controller03 ]
 Clone Set: neutron-l3-agent-clone [neutron-l3-agent]
     Started: [ controller01 controller02 controller03 ]
 Clone Set: neutron-dhcp-agent-clone [neutron-dhcp-agent]
     Started: [ controller01 controller02 controller03 ]
 Clone Set: neutron-metadata-agent-clone [neutron-metadata-agent]
     Started: [ controller01 controller02 controller03 ]

II. Clearing OpenStack Networks and Routers

  Reference: the companion article on clearing OpenStack networks and routers.
