OpenStack Queens Manual Deployment

Operating system: CentOS-7-x86_64-Minimal-1908.iso

| Node | CPU | Memory | Disk |
| --- | --- | --- | --- |
| controller | 2×2 | 4 GB | 40 GB |
| compute01 | 2×2 | 4 GB | 40 GB |

Management network (management/API)
Carries system-management traffic: internal communication between the service components on each node and access to the database service. Every node attaches to the management network.
It also serves as the API network: OpenStack components expose their API services to users over it.

Tunnel network (tunnel/self-service)
The tenant network, providing tenant virtual networks (VXLAN/GRE) over point-to-point tunnels.
Corresponds to Networking Option 2: Self-service networks in the OpenStack installation guide.

External network (external/provider)
Used for installing OpenStack packages, external connectivity, and floating IPs.
Corresponds to Networking Option 1: Provider networks in the OpenStack installation guide.

Note: in VMware Workstation, the virtualization engine must have the **Virtualize Intel VT-x/EPT or AMD-V/RVI** option enabled.

| Node | NIC | NIC mode | Virtual switch | Network type | IP address | Gateway |
| --- | --- | --- | --- | --- | --- | --- |
| controller | ens33 | Host-only | vmnet1 | management | 192.168.90.70 | |
| controller | ens34 | Host-only | vmnet2 | tunnel | 192.168.91.70 | |
| controller | ens35 | NAT | vmnet8 | external | 192.168.186.70 | 192.168.186.2 |
| compute01 | ens33 | Host-only | vmnet1 | management | 192.168.90.71 | |
| compute01 | ens34 | Host-only | vmnet2 | tunnel | 192.168.91.71 | |
| compute01 | ens35 | NAT | vmnet8 | external | 192.168.186.71 | 192.168.186.2 |

Environment

Network

The following steps are performed on the controller node.

Set the hostname
hostnamectl set-hostname controller

Edit /etc/sysconfig/network-scripts/ifcfg-ens33

ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.90.70
PREFIX=24

Edit /etc/sysconfig/network-scripts/ifcfg-ens34

ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.91.70
PREFIX=24

Edit /etc/sysconfig/network-scripts/ifcfg-ens35

ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.186.70
NETMASK=255.255.255.0
DNS1=114.114.114.114
DNS2=8.8.8.8

The following steps are performed on the compute node.

Set the hostname
hostnamectl set-hostname compute01

Edit /etc/sysconfig/network-scripts/ifcfg-ens33

ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.90.71
PREFIX=24

Edit /etc/sysconfig/network-scripts/ifcfg-ens34

ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.91.71
PREFIX=24

Edit /etc/sysconfig/network-scripts/ifcfg-ens35

ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.186.71
NETMASK=255.255.255.0
DNS1=114.114.114.114
DNS2=8.8.8.8

The following steps are performed on all nodes.

Disable the firewall
systemctl disable firewalld && systemctl stop firewalld

Disable SELinux
Edit /etc/selinux/config
Change enforcing to disabled
# sed -i 's/enforcing/disabled/g' /etc/selinux/config
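The sed one-liner above can be exercised on a throwaway copy before touching the real config; this sketch applies the same substitution to a temporary file and reads the result back (the sample file contents are illustrative).

```shell
# Make a sample file resembling /etc/selinux/config and apply the same edit
tmpfile=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$tmpfile"
sed -i 's/enforcing/disabled/g' "$tmpfile"
result=$(grep '^SELINUX=' "$tmpfile")
echo "$result"   # SELINUX=disabled
rm -f "$tmpfile"
```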

Map node names
Edit /etc/hosts

# controller
192.168.90.70 controller
# compute01
192.168.90.71 compute01
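As a sanity check on the name-to-address mapping above, an awk one-liner can look a host up in an /etc/hosts-style file. The sketch below runs against a temporary sample so it is safe anywhere; on a real node, point it at /etc/hosts or just use `getent hosts compute01`.

```shell
# Sample file in /etc/hosts format; the real file is /etc/hosts
hosts_sample=$(mktemp)
cat > "$hosts_sample" <<'EOF'
192.168.90.70 controller
192.168.90.71 compute01
EOF
# Print the IP mapped to compute01 (second column match, first column output)
ip=$(awk '$2=="compute01"{print $1}' "$hosts_sample")
echo "$ip"   # 192.168.90.71
rm -f "$hosts_sample"
```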

Restart networking
service network restart

Verification

The following steps are performed on the controller node.
Ping between controller and compute01

ping compute01

NTP time synchronization

The following steps are performed on the controller node.

yum install chrony -y

Edit /etc/chrony.conf

# add; replace NTP_SERVER with a reachable NTP server
server NTP_SERVER iburst
allow 192.168.90.0/24  # let hosts on the management network sync from this node

The following steps are performed on the compute node.

yum install chrony -y

Edit /etc/chrony.conf

# delete all other "server ... iburst" lines
# add
server controller iburst

Verification

The following steps are performed on all nodes.

chronyc sources
The row whose MS column contains ^* is the server the NTP service is currently synchronized to.
timedatectl
Check that the current time is correct; "NTP synchronized: yes" means synchronization succeeded.
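The ^* check can be scripted. This sketch greps a captured `chronyc sources` line for the synced marker; the sample line is illustrative, and on a live node you would pipe `chronyc sources` straight into the same grep.

```shell
# A line like this appears in `chronyc sources` once a source is selected (illustrative sample)
sample='^* controller          3   6   377    20    +15us[  +20us] +/-   25ms'
# ^* at the start of a row means "current best source, synchronized"
if printf '%s\n' "$sample" | grep -q '^\^\*'; then
  sync_state=synced
else
  sync_state=not-synced
fi
echo "$sync_state"   # synced
```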

Base packages

The following steps are performed on all nodes.

yum install centos-release-openstack-queens -y

yum upgrade

yum install python-openstackclient -y

yum install openstack-selinux -y

MariaDB database

The following steps are performed on the controller node.

yum install mariadb mariadb-server python2-PyMySQL -y

Create and edit /etc/my.cnf.d/openstack.cnf

[mysqld]
bind-address = 192.168.90.70  # let other nodes reach the controller's database over the management network
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8

Enable the mariadb service at boot and start it
systemctl enable mariadb.service && systemctl start mariadb.service

Secure the database installation; account: root, password: 123456
# mysql_secure_installation
# Disallow root login remotely? [Y/n] → answer n; answer Y to everything else
echo -e "\nY\n123456\n123456\nY\nn\nY\nY\n" | mysql_secure_installation

RabbitMQ message queue

The following steps are performed on the controller node.

yum install rabbitmq-server -y

Enable the rabbitmq service at boot and start it
systemctl enable rabbitmq-server.service && systemctl start rabbitmq-server.service

Add a user; username: openstack, password: 123456
rabbitmqctl add_user openstack 123456

Grant the openstack user configure, write, and read permissions
rabbitmqctl set_permissions openstack ".*" ".*" ".*"

memcached cache

The following steps are performed on the controller node.

yum install memcached python-memcached -y

Edit /etc/sysconfig/memcached

OPTIONS="-l 192.168.90.70,::1,controller"  # let other nodes reach the controller's cache over the management network

Enable the memcached service at boot and start it
systemctl enable memcached.service && systemctl start memcached.service

etcd distributed key-value store

The following steps are performed on the controller node.

yum install etcd -y

Edit /etc/etcd/etcd.conf

[Member]
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.90.70:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.90.70:2379"
ETCD_NAME="controller"
[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.90.70:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.90.70:2379"
ETCD_INITIAL_CLUSTER="controller=http://192.168.90.70:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER_STATE="new"

Enable the etcd service at boot and start it
systemctl enable etcd && systemctl start etcd

keystone

The identity service manages authentication, authorization, and the service catalog, and can integrate with external user-management systems (LDAP).

Each service can have one or more endpoints.
Endpoints come in three types: admin, public, and internal.

server: a central server providing authentication and authorization via a RESTful API
drivers: drivers that access identity data in a backing store (SQL/LDAP)
modules: middleware modules that intercept user requests and pass them to the server for authorization

The following steps are performed on the controller node.

mysql -uroot -p
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '123456';

yum install openstack-keystone -y

Edit /etc/keystone/keystone.conf

[database]
connection = mysql+pymysql://keystone:123456@controller/keystone
[token]
provider = fernet
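Config edits throughout this guide all touch INI-style files like keystone.conf. As a sketch (a hypothetical helper, not an OpenStack tool), a small awk function can read a value back for verification; the openstack-utils package's openstack-config command offers the same thing if installed. Shown here against a throwaway fragment mirroring the settings above.

```shell
# get_ini FILE SECTION KEY — print the value of KEY under [SECTION]
# Assumes simple "key = value" lines, as in the configs in this guide.
get_ini() {
  awk -F' *= *' -v sec="[$2]" -v key="$3" '
    $0 == sec          { insec = 1; next }  # entered the target section
    /^\[/              { insec = 0 }        # any other section header ends it
    insec && $1 == key { print $2; exit }
  ' "$1"
}

# Demo against a temporary fragment mirroring the [token] block above
conf=$(mktemp)
printf '[database]\nconnection = mysql+pymysql://keystone:123456@controller/keystone\n[token]\nprovider = fernet\n' > "$conf"
provider_val=$(get_ini "$conf" token provider)
echo "$provider_val"   # fernet
rm -f "$conf"
```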

Populate the keystone database
Python's ORM initializes the database to create the table structure.
su -s /bin/sh -c "keystone-manage db_sync" keystone

Initialize the Fernet key repositories

# set up the Fernet token keys, owned by the keystone user and group
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
# set up the credential keys, owned by the keystone user and group
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

Bootstrap the identity service with the default domain, the admin user, and password 123456

keystone-manage bootstrap --bootstrap-password 123456 --bootstrap-admin-url http://controller:35357/v3/ --bootstrap-internal-url http://controller:5000/v3/ --bootstrap-public-url http://controller:5000/v3/ --bootstrap-region-id RegionOne

httpd

yum install httpd mod_wsgi -y

Edit /etc/httpd/conf/httpd.conf
ServerName controller

Create a file link
This is a symbolic link, so it consumes no extra disk space.
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

Enable the httpd service at boot and start it
systemctl enable httpd.service && systemctl start httpd.service

Domains, projects, users, and roles

The following steps are performed on the controller node.

Log in as the admin account
These credentials match the password set with keystone-manage bootstrap.

export OS_USERNAME=admin
export OS_PASSWORD=123456
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3

Create the example domain (used for demonstration)
openstack domain create --description "Domain" example

Create the service project, which contains a unique user for each service
openstack project create --domain default --description "Service Project" service

Create the demo project (used for demonstration)
openstack project create --domain default --description "Demo Project" demo

Create the demo user (a regular user)
openstack user create --domain default --password-prompt demo

Create the user role
openstack role create user

Add the user role to the demo project and demo user
openstack role add --project demo --user demo user

Verification

Unset the temporary environment variables
unset OS_AUTH_URL OS_PASSWORD
Request a token as each user to confirm authentication works
# admin user
openstack --os-auth-url http://controller:35357/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name admin --os-username admin token issue
# regular (demo) user
openstack --os-auth-url http://controller:5000/v3 \
--os-project-domain-name Default --os-user-domain-name Default \
--os-project-name demo --os-username demo token issue

openrc client environment scripts

OpenStack supports simple client environment scripts, known as openrc files.
These scripts typically contain all the client options and can be extended with custom ones.

Here the openrc scripts are stored in the root directory.

The following steps are performed on the controller node.

Create and edit admin-openrc

export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=123456
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

Create and edit demo-openrc

export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=123456
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
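openrc files work purely through shell `export` and the source (`.`) builtin; nothing OpenStack-specific happens until a client reads the variables. This sketch writes a stripped-down file to a temporary path and sources it to show the mechanism (the path and reduced variable set are illustrative).

```shell
# Write a minimal openrc-style file and source it, as ". demo-openrc" does
rc=$(mktemp)
cat > "$rc" <<'EOF'
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_AUTH_URL=http://controller:5000/v3
EOF
. "$rc"                               # variables now live in the current shell
echo "$OS_PROJECT_NAME/$OS_USERNAME"  # demo/demo
rm -f "$rc"
```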

Verification

. admin-openrc
openstack token issue
. demo-openrc
openstack token issue

glance

Lets users discover, register, and retrieve virtual machine images.

glance-api: RESTful API that accepts disk- and server-image API requests
glance-registry: stores, processes, and retrieves image metadata
database: database storing image metadata (MySQL/SQLite)
storage repository for image files: supports various repository types, including ordinary filesystems, Object Storage, RADOS block devices, VMware datastores, and HTTP
metadata definition service: a common API for vendors, admins, services, and users to define keys, descriptions, constraints, etc. for different resource types (images, artifacts, volumes, ...)

The following steps are performed on the controller node.

mysql -uroot -p
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%'  IDENTIFIED BY '123456';

. admin-openrc

Create the glance user
openstack user create --domain default --password-prompt glance

Add the admin role to the glance user and the service project
openstack role add --project service --user glance admin

Create the image service
openstack service create --name glance --description "OpenStack Image" image

Create the Image service API endpoints

openstack endpoint create --region RegionOne  image public http://controller:9292
openstack endpoint create --region RegionOne  image internal http://controller:9292
openstack endpoint create --region RegionOne  image admin http://controller:9292
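The three endpoint-create calls differ only in the interface name, so they can be generated in a loop. This dry-run sketch prints the commands rather than executing them; on a node with admin credentials loaded, remove the echo to run them for real.

```shell
# Dry run: print the three endpoint-create commands, one per interface type
cmds=$(for iface in public internal admin; do
  echo "openstack endpoint create --region RegionOne image $iface http://controller:9292"
done)
printf '%s\n' "$cmds"
```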

yum install openstack-glance -y

Edit /etc/glance/glance-api.conf

[database]
connection = mysql+pymysql://glance:123456@controller/glance
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = 123456
[paste_deploy]
flavor = keystone
[glance_store]
stores = file,http  # enabled store backends
default_store = file  # default store backend
filesystem_store_datadir = /var/lib/glance/images/  # directory where image files are stored

Edit /etc/glance/glance-registry.conf

[database]
connection = mysql+pymysql://glance:123456@controller/glance
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = 123456
[paste_deploy]
flavor = keystone

Populate the database
su -s /bin/sh -c "glance-manage db_sync" glance

systemctl enable openstack-glance-api.service openstack-glance-registry.service && systemctl start openstack-glance-api.service openstack-glance-registry.service

Verification
. admin-openrc

Download the CirrOS image
wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img

Upload the image to the Image service using the QCOW2 disk format, bare container format, and public visibility so that all projects can access it
openstack image create "cirros" --file cirros-0.3.5-x86_64-disk.img --disk-format qcow2 --container-format bare --public

Verify
openstack image list

nova

The Compute service hosts and manages cloud computing systems. It authenticates through keystone, consumes images from the glance image service, and interacts with the dashboard.

nova-api: accepts and responds to end-user Compute API calls
nova-api-metadata: accepts instance metadata requests
nova-compute: accepts actions from the queue and executes them
nova-placement-api: tracks resource inventory and usage
nova-scheduler: schedules instances onto compute nodes
nova-conductor: mediates interactions between nova-compute and the database
nova-consoleauth: authorizes tokens for console users
nova-novncproxy: proxy for VNC connections to running instances, supporting the browser-based noVNC client
nova-spicehtml5proxy: proxy for SPICE connections to running instances, supporting the browser-based HTML5 client
nova-xvpvncproxy: proxy for VNC connections to running instances, supporting the OpenStack Java client
queue: AMQP message queue (RabbitMQ)
database: stores build-time and runtime state for projects, networks, instance types, instances, etc.

The following steps are performed on the controller node.

mysql -uroot -p
CREATE DATABASE nova_api;
CREATE DATABASE nova;
CREATE DATABASE nova_cell0;
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY '123456';
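The six GRANT statements follow one pattern over three databases and two hosts, so they can be generated in a nested loop. This sketch prints them so the pattern is explicit; the output could be piped into `mysql -uroot -p` to execute it.

```shell
# Dry run: generate the six nova GRANT statements (3 databases × 2 hosts)
grants=$(for db in nova_api nova nova_cell0; do
  for host in localhost '%'; do
    echo "GRANT ALL PRIVILEGES ON ${db}.* TO 'nova'@'${host}' IDENTIFIED BY '123456';"
  done
done)
printf '%s\n' "$grants"
```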

. admin-openrc

Create the nova user
openstack user create --domain default --password-prompt nova

Add the admin role to the nova user in the service project
openstack role add --project service --user nova admin

Create the compute service
openstack service create --name nova --description "OpenStack Compute" compute

Create the compute endpoints

openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1

placement
placement mainly tracks resource inventory and usage; as of the Stein release it is a standalone component.

Create the placement user
openstack user create --domain default --password-prompt placement

Add the admin role to the placement user in the service project
openstack role add --project service --user placement admin

Create the Placement service
openstack service create --name placement --description "Placement API" placement

Create the Placement endpoints

openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778

yum install openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-placement-api -y

Edit /etc/nova/nova.conf

[DEFAULT]
enabled_apis = osapi_compute,metadata  # enable the compute and metadata APIs
transport_url = rabbit://openstack:123456@controller
my_ip = 192.168.90.70
use_neutron = True  # enable networking support
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api_database]
connection = mysql+pymysql://nova:123456@controller/nova_api
[database]
connection = mysql+pymysql://nova:123456@controller/nova
[api]
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = 123456
[vnc]  # enable VNC; the proxy uses the controller node's management-network address
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
[glance]  # location of the glance service
api_servers = http://controller:9292
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = 123456

Due to a packaging bug, access to the placement API must be enabled manually.
Edit /etc/httpd/conf.d/00-nova-placement-api.conf

# add after the ErrorLog directive
<Directory /usr/bin>
   <IfVersion >= 2.4>
      Require all granted
   </IfVersion>
   <IfVersion < 2.4>
      Order allow,deny
      Allow from all
   </IfVersion>
</Directory>

systemctl restart httpd

Populate the nova-api database
su -s /bin/sh -c "nova-manage api_db sync" nova

Register the cell0 database
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova

Create the cell1 cell
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova

Populate the nova database
su -s /bin/sh -c "nova-manage db sync" nova

systemctl enable openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service  openstack-nova-conductor.service openstack-nova-novncproxy.service && systemctl start openstack-nova-api.service  openstack-nova-consoleauth.service openstack-nova-scheduler.service  openstack-nova-conductor.service openstack-nova-novncproxy.service

Verification
nova-manage cell_v2 list_cells

The following steps are performed on the compute node.

yum install openstack-nova-compute -y

Edit /etc/nova/nova.conf

[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:123456@controller
my_ip = 192.168.90.71
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api]
auth_strategy = keystone
[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = 123456
[vnc]
enabled = True
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html
[glance]
api_servers = http://controller:9292
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = 123456

systemctl enable libvirtd.service openstack-nova-compute.service && systemctl start libvirtd.service openstack-nova-compute.service

If the compute node does not support hardware acceleration (egrep -c '(vmx|svm)' /proc/cpuinfo returns 0), edit /etc/nova/nova.conf and set, in the [libvirt] section:
virt_type = qemu
then restart the openstack-nova-compute service.

The following steps are performed on the controller node.

openstack compute service list --service nova-compute

Discover compute nodes manually
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

Edit /etc/nova/nova.conf
Set the interval for automatic compute-node discovery

[scheduler]
discover_hosts_in_cells_interval = 300

Verification
. admin-openrc

openstack compute service list

openstack catalog list

openstack image list

nova-status upgrade check

neutron

The following steps are performed on the controller node.

mysql -uroot -p
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '123456';

. admin-openrc

openstack user create --domain default --password-prompt neutron

Add the admin role to the neutron user
openstack role add --project service --user neutron admin

Create the neutron service
openstack service create --name neutron --description "OpenStack Networking" network

openstack endpoint create --region RegionOne  network public http://controller:9696
openstack endpoint create --region RegionOne  network internal http://controller:9696
openstack endpoint create --region RegionOne  network admin http://controller:9696

yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables -y

Edit /etc/neutron/neutron.conf

[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:123456@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[database]
connection = mysql+pymysql://neutron:123456@controller/neutron
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123456
[nova]
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = 123456
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

Configure the Modular Layer 2 (ML2) plug-in
Edit /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[ml2_type_vlan]
network_vlan_ranges = 1:1000
[ml2_type_vxlan]
vni_ranges = 1:1000
[securitygroup]
enable_ipset = true

Configure the Linux bridge agent
Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]
physical_interface_mappings = provider:ens35  # the external-network interface
[vxlan]
enable_vxlan = true
local_ip = 192.168.91.70
l2_population = true
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Edit /usr/lib/sysctl.d/00-system.conf
Enable bridge netfilter support in the kernel (the br_netfilter module must be loaded)

net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1

sysctl -p /usr/lib/sysctl.d/00-system.conf

Edit /etc/neutron/l3_agent.ini

[DEFAULT]
interface_driver = linuxbridge

Configure the DHCP agent
Edit /etc/neutron/dhcp_agent.ini

[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true

Configure the metadata agent
Edit /etc/neutron/metadata_agent.ini

[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = 123456

Configure the Compute service to use the Networking service
Edit /etc/nova/nova.conf

[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 123456
service_metadata_proxy = true
metadata_proxy_shared_secret = 123456

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

Populate the database
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

systemctl restart openstack-nova-api.service

systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service && systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service

systemctl enable neutron-l3-agent.service && systemctl start neutron-l3-agent.service

The following steps are performed on the compute node.

yum install openstack-neutron-linuxbridge ebtables ipset
Edit /etc/neutron/neutron.conf

In the [database] section, comment out any connection options, because compute nodes do not access the database directly.

[DEFAULT]
auth_strategy = keystone
transport_url = rabbit://openstack:123456@controller
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123456
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]
physical_interface_mappings = provider:ens35
[vxlan]
enable_vxlan = true
local_ip = 192.168.91.71
l2_population = true
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Edit /etc/nova/nova.conf

[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 123456

Edit /usr/lib/sysctl.d/00-system.conf

net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1

sysctl -p /usr/lib/sysctl.d/00-system.conf

systemctl restart openstack-nova-compute.service
systemctl enable neutron-linuxbridge-agent.service && systemctl start neutron-linuxbridge-agent.service

horizon

The following steps are performed on the controller node.

yum install openstack-dashboard -y

Edit /etc/openstack-dashboard/local_settings

# accept all hosts
ALLOWED_HOSTS = ["*"]
# set API versions
OPENSTACK_API_VERSIONS = {
#    "data-processing": 1.1,
    "identity": 3,
    "image": 2,
    "volume": 2,
    "compute": 2,
}
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'Default'
SESSION_ENGINE = 'django.contrib.sessions.backends.file'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    },
}
OPENSTACK_HOST = "controller"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
OPENSTACK_NEUTRON_NETWORK = {
    ...
    'enable_router': False,
    'enable_quotas': False,
    'enable_ipv6': False,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,
    'enable_fip_topology_check': False,
    ...
}

Edit /etc/httpd/conf.d/openstack-dashboard.conf
Add WSGIApplicationGroup %{GLOBAL}

systemctl restart httpd.service memcached.service

Log in

http://192.168.90.70/dashboard
Domain: default, user: admin, password: 123456 (administrator)
Domain: default, user: demo, password: 123456 (regular user)
