Deploying OpenStack Train on CentOS 7

Prerequisites

OS and Versions

Mapping between CentOS releases and OpenStack releases; see the Aliyun mirror site:

CentOS 7
  - https://mirrors.aliyun.com/centos/7/cloud/x86_64/
  - Supported up to the Train release
  
CentOS 8
  - https://mirrors.aliyun.com/centos/8/cloud/x86_64/
  - Supported releases: Train through Victoria

System Environment Requirements

1. Disable the firewalld service
2. Disable the NetworkManager service
3. Disable SELinux
4. Make hostnames mutually resolvable via the hosts file
5. Use an NTP server
6. Remove any line beginning with search from /etc/resolv.conf
7. Minimum hardware for CentOS 7:
   - Controller Node:
     - Memory: 8 GB or more
     - Disk: 40 GB or more
   - Compute Node:
     - Memory: 4 GB or more
     - Disk: 40 GB x 2 (each compute node also acts as a storage node, using a second disk for storage)

Components Required for a Minimal Install

Source: https://docs.openstack.org/install-guide/openstack-services.html

At a minimum, you need to install the following services, in this order: Keystone (Identity), Glance (Image), Placement, Nova (Compute), and Neutron (Networking); Horizon (Dashboard) and Cinder (Block Storage) can be added after the minimal services.

This example architecture differs from a minimal production architecture as follows:

  • Networking agents reside on the controller node instead of one or more dedicated network nodes.
  • Overlay (tunnel) traffic for self-service networks traverses the management network instead of a dedicated network.

Deployment Plan

Hostname     IP Address      Components
controller   172.16.20.80    Keystone, Glance, Placement, Nova, Neutron, Horizon, Cinder
compute1     172.16.20.81    Nova, Neutron, Cinder
compute2     172.16.20.82    Nova, Neutron, Cinder

Configuring the System Environment

  • All nodes

Reference: https://docs.openstack.org/install-guide/environment.html

Configure yum Repositories

# Switch to the Aliyun base repository
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo

# Install the official OpenStack repository
yum install centos-release-openstack-train -y

yum clean all
yum makecache

Install Common Tools

yum install wget rsync vim net-tools chrony -y
yum install python-openstackclient openstack-selinux -y

Disable the Firewall

OpenStack takes over the system's firewall configuration itself, so the firewalld service must be disabled:

systemctl stop firewalld
systemctl disable firewalld

Disable NetworkManager

OpenStack networking is managed by the Neutron component, which is incompatible with the host's built-in NetworkManager, so stop and disable it:

systemctl stop NetworkManager
systemctl disable NetworkManager

Disable SELinux

# Disable temporarily
setenforce 0

# Disable permanently
vim /etc/selinux/config
# change:
SELINUX=disabled
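
# Equivalently, the permanent change can be made non-interactively (a sed one-liner sketch):
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config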

Set Hostnames

# Set according to the deployment plan; controller node example:
hostnamectl set-hostname controller

Configure hosts

vim /etc/hosts
Add:
172.16.20.80	controller
172.16.20.81	compute1
172.16.20.82	compute2

# Sync the hosts file to the other nodes
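# For example, with rsync (a sketch assuming root SSH access to the compute nodes):
for node in compute1 compute2; do rsync -av /etc/hosts ${node}:/etc/hosts; done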

Configure the NTP Service

The controller node runs the chronyd server; the compute1 and compute2 nodes act as clients

  • controller node

    cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
    vim /etc/chrony.conf
    # Configure as follows
    server ntp1.aliyun.com iburst
    
    allow 172.16.20.0/24
    local stratum 10
    
    systemctl restart chronyd 
    
  • compute nodes (compute1, compute2)

    cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
    vim /etc/chrony.conf
    # Configure as follows
    server controller
    
    systemctl restart chronyd
    
    # Verify NTP synchronization:
    chronyc sources -v
    

Deploying OpenStack Base Services

Reference: https://docs.openstack.org/install-guide/environment.html

Install the Official OpenStack Repository

  • All nodes

Reference: https://docs.openstack.org/install-guide/environment-packages.html

Install the MySQL Database

  • controller node
yum install mariadb mariadb-server python2-PyMySQL -y

Modify my.cnf

vim /etc/my.cnf.d/mariadb-server.cnf
Modify as follows:
[mysqld]
bind-address = 172.16.20.80

default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8

Start MySQL

systemctl enable mariadb.service
systemctl start mariadb.service

Set the MySQL Account and Password

mysql_secure_installation

# Press Enter at the current-password prompt, then set a new password
Account: root
Password: mysql!#Aa123456

# Verify
mysql -uroot -p'mysql!#Aa123456' -e 'show databases;'

Install the RabbitMQ Message Queue

  • controller node

Install the queue service

yum install rabbitmq-server -y

Start the queue service

systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service

Create a queue user

rabbitmqctl add_user openstack Aa123456

Grant permissions to the openstack user

rabbitmqctl set_permissions openstack ".*" ".*" ".*"

# List users
rabbitmqctl list_users 

Enable the RabbitMQ management UI

rabbitmq-plugins enable rabbitmq_management

# List plugins
rabbitmq-plugins list

# Access URL
http://172.16.20.80:15672
Note:
default administrator account / password: guest / guest
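
The built-in guest account can only log in from localhost; to use the web UI remotely, one option (a sketch using standard rabbitmqctl commands) is to give the openstack user created above the administrator tag and log in with it:

rabbitmqctl set_user_tags openstack administrator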

Install the Memcached Cache

  • controller node
yum install memcached python-memcached -y

Modify the memcached configuration

vim /etc/sysconfig/memcached 
OPTIONS="-l 127.0.0.1,::1,controller"

Start the cache service

systemctl enable memcached.service
systemctl start memcached.service
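
A quick check that memcached is answering (a sketch assuming nc from the nmap-ncat package is installed):

echo stats | nc -w 1 controller 11211 | head -5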

Install the etcd Service

  • Single-node mode, on the controller node
yum install etcd -y

Configure etcd

cat > /etc/etcd/etcd.conf << EOF
#[Member]
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://172.16.20.80:2380"
ETCD_LISTEN_CLIENT_URLS="http://172.16.20.80:2379"
ETCD_NAME="controller"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://172.16.20.80:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://172.16.20.80:2379"
ETCD_INITIAL_CLUSTER="controller=http://172.16.20.80:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

Start the etcd service

systemctl enable etcd
systemctl start etcd
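
A quick health check (a sketch; the etcdctl shipped with CentOS 7 defaults to the v2 API, and the v3 form is shown as well):

etcdctl --endpoints http://172.16.20.80:2379 cluster-health
ETCDCTL_API=3 etcdctl --endpoints http://172.16.20.80:2379 endpoint health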

Deploying the OpenStack Platform

Keystone

  • controller node

Reference: https://docs.openstack.org/keystone/train/install/keystone-install-rdo.html

MySQL database configuration

mysql -u root -p'mysql!#Aa123456'

CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'MysqlKeyst0ne';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'MysqlKeyst0ne';

Install the Keystone Service

yum install openstack-keystone httpd mod_wsgi -y

Modify the Keystone configuration file

cat > /etc/keystone/keystone.conf << 'EOF'
[database]
connection = mysql+pymysql://keystone:MysqlKeyst0ne@controller/keystone

[token]
provider = fernet
EOF

Populate the Keystone database

su -s /bin/sh -c "keystone-manage db_sync" keystone

# Verify
mysql -ukeystone -p'MysqlKeyst0ne'
use keystone;
show tables;

Initialize the Fernet key repositories

keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

Run keystone-manage bootstrap to register the Keystone API

keystone-manage bootstrap --bootstrap-password Keyst0nePwd \
  --bootstrap-admin-url http://controller:5000/v3/ \
  --bootstrap-internal-url http://controller:5000/v3/ \
  --bootstrap-public-url http://controller:5000/v3/ \
  --bootstrap-region-id RegionOne
  
# Account: admin
# Password: Keyst0nePwd

Start Services

Configure the Apache service

vim /etc/httpd/conf/httpd.conf
Modify as follows:
ServerName controller

ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

Start the httpd service

systemctl enable httpd.service
systemctl start httpd.service

Create the admin environment script

cat > admin-openrc << 'EOF'
# Configure the shell prompt
export PS1="(Keystone-admin) [\u@\h \W]\$ "

# Keystone environment variables
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=Keyst0nePwd
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF

These values match the defaults created by keystone-manage bootstrap; replace the password (ADMIN_PASS in the official docs) with the one passed to --bootstrap-password, Keyst0nePwd here.

Load the variables

source admin-openrc
or
. admin-openrc

Create Projects and Services

Create the service project, using the default domain

openstack project create --domain default --description "Service Project" service

Workflow for creating a custom project

Create a new domain

openstack domain create --description "An Example Domain" mydomain

Create a demo project named myproject (in the default domain)

openstack project create --domain default --description "Demo Project" myproject

Create a user in the mydomain domain

openstack user create --domain mydomain --password-prompt myuser

Create the myrole role

openstack role create myrole

Bind project, user, and role: add the myrole role to the myproject project for the myuser user

openstack role add --project myproject --user myuser myrole

Client environment variable script

cat > myproject-openrc << 'EOF'
# Configure the shell prompt
export PS1="(Keystone-myproject) [\u@\h \W]\$ "

export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=mydomain
export OS_PROJECT_NAME=myproject
export OS_USERNAME=myuser
export OS_PASSWORD=DEMO_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF

# Load the environment variables
. myproject-openrc

# DEMO_PASS is the password set when creating myuser with openstack user create

Common Keystone Commands

# Change a user's password
openstack user set --password newpassword user

Deployment Verification

  • Common listing commands
openstack project list
openstack service list
openstack user list
  • Request an authentication token as the admin user
openstack --os-auth-url http://controller:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name admin --os-username admin token issue
  • Request an authentication token as the myuser user
openstack --os-auth-url http://controller:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name mydomain \
  --os-project-name myproject --os-username myuser token issue

Glance

  • controller node

Reference: https://docs.openstack.org/glance/train/install/install-rdo.html

MySQL database configuration

mysql -u root -p'mysql!#Aa123456'

CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'MysqlG1ance';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'MysqlG1ance';

Register Glance with Keystone

Load the admin environment variables

. admin-openrc

Create the glance user

openstack user create --domain default --password-prompt glance

# Password: G1ancePwd

Grant the glance user the admin role in the service project

openstack role add --project service --user glance admin

Create the glance service

openstack service create --name glance --description "OpenStack Image" image

Create the image service API endpoints

openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292

Verify

openstack service list
openstack user list
openstack endpoint list

Install the Glance Service

Install

yum install openstack-glance -y

Configure

cat > /etc/glance/glance-api.conf << 'EOF'
[database]
connection = mysql+pymysql://glance:MysqlG1ance@controller/glance

[keystone_authtoken]
www_authenticate_uri  = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = G1ancePwd

[paste_deploy]
flavor = keystone

# Local image storage settings
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
EOF

Populate the Glance database

su -s /bin/sh -c "glance-manage db_sync" glance

Start Services

Start the Glance service

systemctl enable openstack-glance-api.service
systemctl start openstack-glance-api.service

Deployment Verification

. admin-openrc

wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img

glance image-create --name "cirros" \
  --file cirros-0.4.0-x86_64-disk.img \
  --disk-format qcow2 --container-format bare \
  --visibility=public
  
glance image-list

Placement

Role: the resource-tracking part of the compute service, split out from Nova

  • controller node

Reference: https://docs.openstack.org/placement/train/install/install-rdo.html

Placement collects the available resources of each node and writes the per-node resource inventory to MySQL; the nova-scheduler service queries Placement when placing instances. Because all of its state lives in MySQL, Placement is a stateless application and can be deployed on multiple nodes.

MySQL database configuration

mysql -u root -p'mysql!#Aa123456'

CREATE DATABASE placement;
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY 'MysqlP1acement';
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'MysqlP1acement';

Register Placement with Keystone

Load the admin environment variables

. admin-openrc

Create the placement user

openstack user create --domain default --password-prompt placement

Password: P1acementPwd

Grant the placement user the admin role in the service project

openstack role add --project service --user placement admin

Create the Placement API service

openstack service create --name placement --description "Placement API" placement

Create the Placement API endpoints

openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778

Verify

openstack service list
openstack user list
openstack endpoint list

Install the Placement Service

Install

yum install openstack-placement-api -y

Configure

cat > /etc/placement/placement.conf << 'EOF'
[placement_database]
connection = mysql+pymysql://placement:MysqlP1acement@controller/placement

[api]
auth_strategy = keystone

[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = P1acementPwd
EOF
  1. Replace PLACEMENT_PASS (P1acementPwd here) with the password chosen for the placement user in the Identity service.
  2. username, password, project_domain_name, and user_domain_name must match the placement user's configuration in Keystone.

Populate the Placement database

su -s /bin/sh -c "placement-manage db sync" placement

Start Services

Restart httpd to load the Placement service

systemctl restart httpd

Deployment Verification

. admin-openrc

placement-status upgrade check
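
As an extra sanity check, the Placement API root can be queried directly (assuming curl is installed); it should return a JSON document listing the available API versions:

curl http://controller:8778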

Nova

Controller Node

  • controller node

Reference: https://docs.openstack.org/nova/train/install/controller-install-rdo.html

MySQL database configuration
mysql -u root -p'mysql!#Aa123456'

# Create the Nova databases
CREATE DATABASE nova_api;
CREATE DATABASE nova;
CREATE DATABASE nova_cell0;

# Create the database user and grant privileges
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'MysqlN0vaPwd';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'MysqlN0vaPwd';

GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'MysqlN0vaPwd';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'MysqlN0vaPwd';

GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'MysqlN0vaPwd';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'MysqlN0vaPwd';
Register Nova with Keystone

Load the Keystone environment

. admin-openrc

Create the nova user

openstack user create --domain default --password-prompt nova

# Password: N0vaPwd

Grant the nova user the admin role in the service project

openstack role add --project service --user nova admin

Create the nova service

openstack service create --name nova --description "OpenStack Compute" compute

Create the nova API endpoints

openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
Install the Nova Service

Install

yum install openstack-nova-api openstack-nova-conductor openstack-nova-novncproxy openstack-nova-scheduler -y

Configure

cat > /etc/nova/nova.conf << 'EOF'
[DEFAULT]
my_ip = 172.16.20.80
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:Aa123456@controller:5672/

[api_database]
connection = mysql+pymysql://nova:MysqlN0vaPwd@controller/nova_api

[database]
connection = mysql+pymysql://nova:MysqlN0vaPwd@controller/nova

[api]
auth_strategy = keystone

[keystone_authtoken]
# Keystone settings
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = N0vaPwd

[vnc]
enabled = true
# VNC settings
server_listen = $my_ip
server_proxyclient_address = $my_ip

[glance]
# Glance integration
api_servers = http://controller:9292

[placement]
# Placement integration
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = P1acementPwd

[oslo_concurrency]
# Local lock path
lock_path = /var/lib/nova/tmp
EOF
Populate the Nova databases
# Load the environment variables
. admin-openrc

# nova-api
su -s /bin/sh -c "nova-manage api_db sync" nova

# Register the cell0 database
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova

# Create the cell1 cell (warnings can be ignored)
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova

# nova
su -s /bin/sh -c "nova-manage db sync" nova

# Verify
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
Start Services
systemctl enable \
    openstack-nova-api.service \
    openstack-nova-scheduler.service \
    openstack-nova-conductor.service \
    openstack-nova-novncproxy.service
systemctl start \
    openstack-nova-api.service \
    openstack-nova-scheduler.service \
    openstack-nova-conductor.service \
    openstack-nova-novncproxy.service
Deployment Verification
openstack project list
openstack service list
openstack user list

Compute Nodes

Install the Nova Service
  • compute nodes

Reference: https://docs.openstack.org/nova/train/install/compute-install-rdo.html

Install

yum install openstack-nova-compute -y

Configure

cat > /etc/nova/nova.conf << 'EOF'
[DEFAULT]
# Enable only the compute and metadata APIs
enabled_apis = osapi_compute,metadata
# Message queue settings
transport_url = rabbit://openstack:Aa123456@controller
# Management IP of this compute node (use 172.16.20.82 on compute2)
my_ip = 172.16.20.81

[api]
auth_strategy = keystone

[keystone_authtoken]
# Keystone settings
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = N0vaPwd

[vnc]
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://172.16.20.80:6080/vnc_auto.html


[glance]
# Glance settings
api_servers = http://controller:9292


[oslo_concurrency]
# Local lock path
lock_path = /var/lib/nova/tmp


[placement]
# Placement settings
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = P1acementPwd

[libvirt]
# Use qemu if the node lacks hardware virtualization support (check with: egrep -c '(vmx|svm)' /proc/cpuinfo; use kvm if the result is nonzero)
virt_type = qemu
EOF

Start Services
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl restart libvirtd.service openstack-nova-compute.service

If the log reports a failed resource provider request during startup, similar to the following:

ERROR nova.compute.manager nova.exception.ResourceProviderRetrievalFailed: Failed to get resource provider with UUID 7f58ca50-e98e-41b1-aa8b-9d227ff75aa2

Fix:

On the controller node, configure httpd
vim /etc/httpd/conf.d/00-placement-api.conf
Under the existing placement-api alias block:
Alias /placement-api /usr/bin/placement-api
<Location /placement-api>
SetHandler wsgi-script
Options +ExecCGI
WSGIProcessGroup placement-api
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
</Location>
add:
<Directory /usr/bin>
<IfVersion >= 2.4>
   Require all granted
</IfVersion>
<IfVersion < 2.4>
   Order allow,deny
   Allow from all
</IfVersion>
</Directory>

systemctl restart httpd

Add Compute Nodes to the Cell Database

  • controller node
# Load the environment variables
. admin-openrc

# List nova-compute services
openstack compute service list --service nova-compute

# Discover compute nodes
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

Newly registered compute nodes must be discovered this way before they come online. Alternatively, configure periodic automatic discovery on the controller node:

vim /etc/nova/nova.conf
[scheduler]
discover_hosts_in_cells_interval = 300
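
After adding this option, restart the scheduler so the new interval takes effect (the periodic discovery task runs in nova-scheduler):

systemctl restart openstack-nova-scheduler.service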

Deployment Verification

  • controller node
. admin-openrc

openstack compute service list

openstack catalog list

openstack image list

nova-status upgrade check

Neutron

Reference: https://docs.openstack.org/neutron/train/install/controller-install-rdo.html

Controller Node

MySQL database configuration
mysql -u root -p'mysql!#Aa123456'

CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'MysqlNeuto0n';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'MysqlNeuto0n';
Register Neutron with Keystone

Load the admin variables

. admin-openrc

Create the neutron user

openstack user create --domain default --password-prompt neutron

# Password: Neutr0nPwd

Grant the neutron user the admin role in the service project

openstack role add --project service --user neutron admin

Create the neutron service

openstack service create --name neutron --description "OpenStack Networking" network

Here, network is the service type

Create the API endpoints for the neutron service

openstack endpoint create --region RegionOne network public http://controller:9696
openstack endpoint create --region RegionOne network internal http://controller:9696
openstack endpoint create --region RegionOne network admin http://controller:9696
Install the Neutron Service

https://docs.openstack.org/neutron/train/install/controller-install-option2-rdo.html

  • controller node

Install

yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables -y

Configure neutron

cat > /etc/neutron/neutron.conf << 'EOF'
[database]
connection = mysql+pymysql://neutron:MysqlNeuto0n@controller/neutron

[DEFAULT]
# Enable the ML2 plug-in, the router service, and overlapping IP addresses
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true

# Message queue settings
transport_url = rabbit://openstack:Aa123456@controller

# Keystone settings
auth_strategy = keystone

# Notify Compute of network topology and port changes
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = Neutr0nPwd


[nova]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = N0vaPwd


[oslo_concurrency]
# Local lock path
lock_path = /var/lib/neutron/tmp
EOF

Configure the ML2 plug-in

cat > /etc/neutron/plugins/ml2/ml2_conf.ini << 'EOF'
[ml2]
# Enable flat, VLAN, and VXLAN type drivers
type_drivers = flat,vlan,vxlan

# Enable VXLAN self-service networks
tenant_network_types = vxlan

# Enable the Linux bridge and L2 population mechanism drivers
mechanism_drivers = linuxbridge,l2population

# Enable the port security extension driver
extension_drivers = port_security

[ml2_type_flat]
# Map the flat provider network
flat_networks = provider

[ml2_type_vxlan]
# VXLAN network identifier range for self-service networks
vni_ranges = 1:1000

[securitygroup]
# Enable ipset for more efficient security group rules
enable_ipset = true
EOF
Warning

After you configure the ML2 plug-in, removing values in the type_drivers option can lead to database inconsistency.
Configure the Bridge Agents
Linux bridge agent
cat > /etc/neutron/plugins/ml2/linuxbridge_agent.ini << 'EOF'
[linux_bridge]
physical_interface_mappings = provider:ens33

[vxlan]
enable_vxlan = true
local_ip = 172.16.20.80
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
EOF
  1. Replace PROVIDER_INTERFACE_NAME (ens33 here) with the name of the underlying provider physical network interface.

  2. Replace OVERLAY_INTERFACE_IP_ADDRESS with the IP address of the underlying physical network interface that handles overlay networks. The example architecture uses the management interface to tunnel traffic to the other nodes, so OVERLAY_INTERFACE_IP_ADDRESS is the controller node's management IP address (172.16.20.80 here).

Configure kernel parameters on the controller node

vim /etc/sysctl.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1

# Load the module permanently
cat > /etc/modules-load.d/neutron-bridge.conf <<EOF 
br_netfilter
EOF
## Enable at boot
systemctl restart systemd-modules-load
systemctl enable systemd-modules-load


sysctl -p
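
Confirm that the module is loaded and the parameters took effect:

lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables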
layer-3 agent
cat > /etc/neutron/l3_agent.ini << 'EOF'
[DEFAULT]
interface_driver = linuxbridge
EOF
DHCP agent
cat > /etc/neutron/dhcp_agent.ini << 'EOF'
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
EOF
Configure the metadata agent
  • controller node
cat > /etc/neutron/metadata_agent.ini << 'EOF'

[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = MetaAa123456Pwd
EOF

Configure the Nova service

vim /etc/nova/nova.conf
Add the following:
[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = Neutr0nPwd
service_metadata_proxy = true
metadata_proxy_shared_secret = MetaAa123456Pwd

Replace METADATA_SECRET (MetaAa123456Pwd here) with the secret you chose for the metadata proxy.

Populate the Neutron database
# The network service initialization scripts expect /etc/neutron/plugin.ini to be a symlink to the ML2 plug-in configuration file
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
Start Services
systemctl restart openstack-nova-api.service

systemctl enable neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service
systemctl start neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service
  
systemctl enable neutron-l3-agent.service
systemctl start neutron-l3-agent.service

Compute Nodes

Reference: https://docs.openstack.org/neutron/train/install/compute-install-rdo.html

Install the Neutron Service

Install

yum install openstack-neutron-linuxbridge ebtables ipset -y

Configure

cat > /etc/neutron/neutron.conf << 'EOF'
[DEFAULT]
transport_url = rabbit://openstack:Aa123456@controller
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = Neutr0nPwd

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
EOF

Configure the Linux bridge agent

cat > /etc/neutron/plugins/ml2/linuxbridge_agent.ini << 'EOF'
[linux_bridge]
physical_interface_mappings = provider:ens33

[vxlan]
enable_vxlan = true
# Management IP of this compute node (use 172.16.20.82 on compute2)
local_ip = 172.16.20.81
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
EOF

Configure kernel parameters

vim /etc/sysctl.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1

# Load the module permanently
cat > /etc/modules-load.d/neutron-bridge.conf <<EOF 
br_netfilter
EOF
## Enable at boot
systemctl restart systemd-modules-load
systemctl enable systemd-modules-load

sysctl -p

# The br_netfilter kernel module must be loaded; the permanent loading method is the same as on the controller node

Configure Neutron networking on the Nova compute nodes

vim /etc/nova/nova.conf
Add the following:
[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = Neutr0nPwd
Start Services

Restart Nova

systemctl restart openstack-nova-compute.service

Start the bridge agent

systemctl enable neutron-linuxbridge-agent.service
systemctl start neutron-linuxbridge-agent.service
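
From the controller node, verify that all agents have registered; expect the metadata, DHCP, and L3 agents on the controller plus one Linux bridge agent per node:

. admin-openrc
openstack network agent list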

Cinder

Controller Node

Reference: https://docs.openstack.org/cinder/train/install/cinder-controller-install-rdo.html

MySQL database configuration
mysql -u root -p'mysql!#Aa123456'

CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'MysqlC1nder';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'MysqlC1nder';
Register Cinder with Keystone

Load the environment variables

. admin-openrc

Create the cinder user

openstack user create --domain default --password-prompt cinder

# Password: C1nderPwd

Grant the cinder user the admin role in the service project

openstack role add --project service --user cinder admin

Create the cinderv2 and cinderv3 services

openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3

Create the API endpoints for the cinder service

openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
Install the Cinder Service

Install

yum install openstack-cinder -y

Configure

cat > /etc/cinder/cinder.conf << 'EOF'
[database]
connection = mysql+pymysql://cinder:MysqlC1nder@controller/cinder

[DEFAULT]
my_ip = 172.16.20.80
transport_url = rabbit://openstack:Aa123456@controller
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = C1nderPwd

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
EOF
Populate the Cinder database
su -s /bin/sh -c "cinder-manage db sync" cinder
Start Services

Configure Nova

vim /etc/nova/nova.conf
Add the following:
[cinder]
os_region_name = RegionOne

Restart the Nova service

systemctl restart openstack-nova-api.service

Start the Cinder services

systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service

Storage Nodes

Reference: https://docs.openstack.org/cinder/train/install/cinder-storage-install-rdo.html

  • The compute nodes are used as storage nodes here
Install the LVM tools
yum install lvm2 device-mapper-persistent-data -y

systemctl enable lvm2-lvmetad.service
systemctl start lvm2-lvmetad.service
Create the LVM volume group
  • A second disk has been attached here as drive sdb
pvcreate /dev/sdb
vgcreate cinder-volumes /dev/sdb

vim /etc/lvm/lvm.conf 
Configure as follows:
devices {
        filter = [ "a/sdb/", "r/.*/"]
        ...
}

If there are additional disks such as sdc and sdd, list them comma-separated, keeping r/.*/ last, e.g. filter = [ "a/sdb/", "a/sdc/", "r/.*/" ]
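
Verify the physical volume and the volume group:

pvs
vgs cinder-volumes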

Install Cinder
yum install openstack-cinder targetcli python-keystone -y
Configure Cinder
cat > /etc/cinder/cinder.conf << 'EOF'
[database]
connection = mysql+pymysql://cinder:MysqlC1nder@controller/cinder

[DEFAULT]
# Management IP of this storage node (use 172.16.20.82 on compute2)
my_ip = 172.16.20.81
transport_url = rabbit://openstack:Aa123456@controller
auth_strategy = keystone
glance_api_servers = http://controller:9292
enabled_backends = lvm

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = C1nderPwd

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
target_protocol = iscsi
target_helper = lioadm

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
EOF

The volume_group parameter must match the VG name created with LVM. If there are multiple LVM volume groups:

1. enabled_backends = lvm1,lvm2
2. Define one backend section per volume group:
[lvm1]
...
[lvm2]
...
Start Cinder
systemctl enable openstack-cinder-volume.service target.service
systemctl start openstack-cinder-volume.service target.service
Deployment Verification
. admin-openrc
openstack volume service list

Backup Node

Install Cinder
yum install openstack-cinder -y
Configure Cinder (add the following to /etc/cinder/cinder.conf)
[DEFAULT]
backup_driver = cinder.backup.drivers.swift.SwiftBackupDriver
backup_swift_url = SWIFT_URL
SWIFT_URL is the URL of the Object Storage service; it can be looked up on the controller node with:
openstack catalog show object-store
Start the service
systemctl enable openstack-cinder-backup.service
systemctl start openstack-cinder-backup.service

Horizon

  • controller node

Reference: https://docs.openstack.org/horizon/train/install/install-rdo.html

Install Horizon

yum install openstack-dashboard -y

Configure Horizon

vim /etc/openstack-dashboard/local_settings

OPENSTACK_HOST = "controller"

ALLOWED_HOSTS = ['*']

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {
    'default': {
         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
         'LOCATION': 'controller:11211',
    }
}

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}

OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "admin"

OPENSTACK_NEUTRON_NETWORK = {
    'enable_auto_allocated_network': False,
    'enable_distributed_router': False,
    'enable_fip_topology_check': True,
    'enable_ha_router': False,
    'enable_ipv6': True,
    # TODO(amotoki): Drop OPENSTACK_NEUTRON_NETWORK completely from here.
    # enable_quotas has the different default value here.
    'enable_quotas': True,
    'enable_rbac_policy': True,
    'enable_router': True,

    'default_dns_nameservers': [],
    'supported_provider_types': ['*'],
    'segmentation_id_range': {},
    'extra_provider_types': {},
    'supported_vnic_types': ['*'],
    'physical_networks': [],

}


TIME_ZONE = "Asia/Shanghai"

Work around a known bug by adding WSGIApplicationGroup %{GLOBAL} to the httpd configuration

vim /etc/httpd/conf.d/openstack-dashboard.conf

Add:
WSGIApplicationGroup %{GLOBAL}

Restart services

systemctl restart httpd.service memcached.service

Verify Horizon

http://controller/dashboard

Troubleshooting

  1. The default httpd dashboard configuration file cannot serve the Horizon home page; regenerate it
# Create a symlink to the dashboard settings directory
ln -s /etc/openstack-dashboard /usr/share/openstack-dashboard/openstack_dashboard/conf

# Regenerate the Apache configuration file
cd /usr/share/openstack-dashboard
python manage.py make_web_conf --apache > /etc/httpd/conf.d/openstack-dashboard.conf

systemctl restart httpd.service memcached.service
  2. Fix being unable to open the Identity panel after logging in to Horizon
vim /etc/openstack-dashboard/local_settings
Add at the bottom:
WEBROOT = '/dashboard/'

vim /etc/httpd/conf.d/openstack-dashboard.conf
Modify as follows:
WSGIScriptAlias /dashboard /usr/share/openstack-dashboard/openstack_dashboard/wsgi.py
Alias /dashboard/static /usr/share/openstack-dashboard/static

systemctl restart httpd.service memcached.service

Common Operations

Cloud Image Downloads

CentOS

https://cloud.centos.org/centos/

Ubuntu

http://cloud-images.ubuntu.com/releases/

Setting an Instance Root Password

Procedure

Launch Instance --> Configuration --> add the following script (Aa123456 is the default password) and check the Configuration Drive box
#!/bin/bash
echo Aa123456 | passwd --stdin root
sed -i 's#^PasswordAuthentication.*$#PasswordAuthentication\ yes#g' /etc/ssh/sshd_config
systemctl restart sshd

Network Configuration Workflow

Example:

  • Test environment: the management subnet temporarily stands in for the public (WAN) network
Network topology:
WAN network          -- 172.16.20.0/24
Tenant LAN network   -- 10.10.0.0/16

Configuration workflow:
As the admin administrator
- Create the WAN network
  Log in to OpenStack --> Admin --> Networks --> Create Network:
  Name: your choice, e.g. wan
  Project: the project to attach, service here
  Provider Network Type: flat
  Physical Network: provider (matches flat_networks configured in Neutron)
  Remaining checkboxes: Enable Admin State, Shared, External Network

- Configure tenant WAN IPs (floating IPs)
  Log in to OpenStack --> Project --> Network --> in the wan network's Actions, choose Create Subnet:
  Subnet Name: your choice, e.g. wan_subnet
  Network Address: 172.16.20.0/24
  IP Version: IPv4
  Gateway: 172.16.20.1

  Next, Subnet Details
  Enable DHCP: check
  Allocation Pools: 172.16.20.150,172.16.20.157			// simulated public IPs
  DNS: your choice
  Create

As a tenant administrator
- Create the tenant network
  - Create the LAN network
    Log in to OpenStack --> Project --> Network --> Create Network:
    Name: your choice, e.g. project_lan
    Checkboxes: Enable Admin State, Create Subnet

    Next, Create Subnet
    Name: your choice, e.g. project_lan_subnet
    Network Address: 10.10.0.0/16
    IP Version: IPv4
    Gateway IP: 10.10.0.1

    Next, DHCP settings
    Enable DHCP: check
    Allocation Pools: 10.10.0.2,10.10.0.254
    DNS Name Servers: your choice
    Create

  - Create the tenant LAN router
    Log in to OpenStack --> Project --> Network --> Network Topology -->
    - Create Router
      Router Name: project_route
      Enable Admin State: check
      External Network: select the WAN network defined earlier
      Create Router
    - Configure the router
      Hover over the newly created router:
      choose Add Interface
      Subnet: select the custom subnet
      IP Address: enter the gateway address of the custom subnet
      Submit

# At this point, the simplest internal/external network setup is complete
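
The same topology can also be built from the CLI (a sketch mirroring the Horizon steps above; the DNS server 114.114.114.114 is an arbitrary placeholder):

. admin-openrc
# WAN (provider) network and subnet
openstack network create --share --external \
  --provider-physical-network provider --provider-network-type flat wan
openstack subnet create --network wan --subnet-range 172.16.20.0/24 \
  --gateway 172.16.20.1 --dns-nameserver 114.114.114.114 \
  --allocation-pool start=172.16.20.150,end=172.16.20.157 wan_subnet

# Tenant LAN network, subnet, and router
openstack network create project_lan
openstack subnet create --network project_lan --subnet-range 10.10.0.0/16 \
  --gateway 10.10.0.1 --dns-nameserver 114.114.114.114 project_lan_subnet
openstack router create project_route
openstack router set project_route --external-gateway wan
openstack router add subnet project_route project_lan_subnet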

Instance Live Migration

  • NFS is used as shared storage in this example; a production environment can use distributed storage instead

Prerequisites

1. Make sure the nova user has the same uid and gid on all nodes
- Check:
  id nova
- Modify:
  usermod -u 162 nova
  groupmod -g 162 nova
Configure the NFS shared directory
  • controller node
yum install nfs-utils -y
mkdir -pv /var/lib/nova/instances
chmod o+x /var/lib/nova/instances

vim /etc/exports
# Configure the NFS export
/var/lib/nova/instances *(rw,sync,fsid=0,no_root_squash)

systemctl restart nfs-server.service
  • compute nodes
yum install nfs-utils -y
# Mount the NFS share
vim /etc/fstab
# Add the following entry
controller:/ /var/lib/nova/instances nfs4 defaults 0 0

# Validate the fstab entries and mount everything
mount -a -v
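
# Verify the mount took effect
df -h /var/lib/nova/instances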
Configure live migration on the compute nodes
  • compute nodes
vim /etc/nova/nova.conf
# Add the following settings
[DEFAULT]
instances_path=/var/lib/nova/instances

[vnc]
server_listen=0.0.0.0

vim /etc/libvirt/libvirtd.conf
listen_tls = 0
listen_tcp = 1
tcp_port = "16509"
listen_addr = "0.0.0.0"
auth_tcp = "none"

vim /etc/sysconfig/libvirtd
LIBVIRTD_ARGS="--listen"

# Restart the libvirt and nova services
systemctl restart libvirtd.service openstack-nova-compute.service
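
Verify that libvirtd is listening on TCP and is reachable between compute nodes (a sketch; run from compute2 against compute1, for example):

ss -tnlp | grep 16509
virsh -c qemu+tcp://compute1/system list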
Verify live migration

Create a new instance, log in as the admin administrator, go to Admin --> Instances --> select the newly created instance, and perform a live migration to verify.
