OpenStack Train Distributed Deployment

OpenStack Multi-Node Deployment
I. Resource Configuration

| OS | Hostname | Specs | Disk | IP | Count |
|------|------|----|----|----|----|
| CentOS7.3 | controller | 2C4G | 50G | 10.1.1.101 | 1 |
| CentOS7.3 | compute01 | 2C4G | 50G | 10.1.1.102 | 1 |
| CentOS7.3 | block01 | 2C4G | 100G | 10.1.1.103 | 1 |

II. Prerequisites
2.1 Base Environment
Check whether the CPU supports virtualization

For Intel CPUs

cat /proc/cpuinfo | grep vmx

For AMD CPUs

cat /proc/cpuinfo | grep svm
Disable NetworkManager
systemctl stop NetworkManager
systemctl disable NetworkManager
Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
Disable the kernel security mechanism (SELinux)
sed -i "s/^SELINUX=.*/SELINUX=disabled/g" /etc/selinux/config
Set the hostname

Hostnames are up to you; here the three nodes follow the resource table above (also run hostnamectl set-hostname compute01 and hostnamectl set-hostname block01 on the other two nodes).

hostnamectl set-hostname controller
Install the time synchronization service

Install chrony

yum -y install chrony

Start the chronyd service

systemctl start chronyd

Enable it at boot

systemctl enable chronyd

Verify time synchronization

chronyc sources -v
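
By default each node syncs from the public pool. Optionally, compute01 and block01 can sync from the controller instead, so the whole cluster shares one time source; a minimal sketch, assuming the stock /etc/chrony.conf layout:

# On controller: allow clients from the deployment subnet
echo "allow 10.1.1.0/24" >> /etc/chrony.conf
systemctl restart chronyd

# On compute01 and block01: replace the default pool servers with the controller
sed -i 's/^server /#server /' /etc/chrony.conf
echo "server controller iburst" >> /etc/chrony.conf
systemctl restart chronyd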
Configure hosts
cat >> /etc/hosts << EOF
10.1.1.101 controller
10.1.1.102 compute01
10.1.1.103 block01
EOF
Reboot the servers
reboot
III. Installing OpenStack
3.1 Install the OpenStack Repository
On the controller and compute01 nodes, install the OpenStack Train repository, then the OpenStack client, openstack-selinux, and openstack-utils packages
yum -y install centos-release-openstack-train
yum -y install python-openstackclient
yum -y install openstack-selinux
yum -y install openstack-utils
3.2 Install the Database on the controller Node
Run the following command to install MariaDB
yum -y install mariadb mariadb-server python2-PyMySQL
Add a MySQL drop-in configuration file
cat > /etc/my.cnf.d/openstack.cnf << EOF
[mysqld]
# Bind to the controller node's IP
bind-address = 10.1.1.101
# Default storage engine
default-storage-engine = innodb
# One tablespace file per table
innodb_file_per_table = on
# Maximum number of connections
max_connections = 4096
# Default character set
collation-server = utf8_general_ci
character-set-server = utf8
EOF
Start the service and enable it at boot
systemctl start mariadb
systemctl enable mariadb
Run the MariaDB secure installation script
mysql_secure_installation

Expected output:

[root@controller ~]# mysql_secure_installation

NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
SERVERS IN PRODUCTION USE! PLEASE READ EACH STEP CAREFULLY!

In order to log into MariaDB to secure it, we'll need the current
password for the root user. If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.

Enter current password for root (enter for none): // press Enter
OK, successfully used password, moving on...

Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.

Set root password? [Y/n] Y
New password: // set the password to 123456
Re-enter new password: // repeat 123456
Password updated successfully!
Reloading privilege tables...
 ... Success!

By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them. This is intended only for testing, and to make the installation
go a bit smoother. You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n] Y
 ... Success!

Normally, root should only be allowed to connect from 'localhost'. This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n] n
 ... skipping.

By default, MariaDB comes with a database named 'test' that anyone can
access. This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n] Y
 - Dropping test database...
   ... Success!
 - Removing privileges on test database...
   ... Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n] Y
 ... Success!

Cleaning up...

All done! If you've completed all of the above steps, your MariaDB
installation should now be secure.

Thanks for using MariaDB!
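
A quick sanity check before moving on, assuming the root password 123456 set above:

# Confirm MariaDB accepts the new password and the drop-in config took effect
mysql -uroot -p123456 -e "show variables like 'max_connections';"
# Confirm it listens on the controller IP
ss -nlt | grep 3306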
3.3 Install RabbitMQ on the controller Node
Run the following command to install RabbitMQ
yum -y install rabbitmq-server
Start the service and enable it at boot
systemctl start rabbitmq-server
systemctl enable rabbitmq-server
Create the message queue user openstack

The password is set to RABBIT_PASS

rabbitmqctl add_user openstack RABBIT_PASS
Grant the openstack user permissions
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
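
The following read-only commands can confirm the user and its permissions were applied:

rabbitmqctl list_users
rabbitmqctl list_permissions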
3.4 Install Memcached on the controller Node
Run the following command to install Memcached
yum -y install memcached python-memcached
Modify the configuration file

Change OPTIONS="-l 127.0.0.1,::1" in /etc/sysconfig/memcached to OPTIONS="-l 127.0.0.1,::1,controller"

sed -i 's/OPTIONS="-l 127.0.0.1,::1"/OPTIONS="-l 127.0.0.1,::1,controller"/g' /etc/sysconfig/memcached
Start the service and enable it at boot
systemctl start memcached
systemctl enable memcached
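
A quick check that memcached now listens on the controller address as well as localhost:

netstat -nlpt | grep 11211
# Optionally query the stats interface (assuming nc from nmap-ncat is installed)
echo stats | nc -w 1 controller 11211 | head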
3.5 Install Etcd on the controller Node
Run the following command to install Etcd
yum -y install etcd
Modify the configuration file
mv /etc/etcd/etcd.conf /etc/etcd/etcd.conf_bak

cat > /etc/etcd/etcd.conf << EOF
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://10.1.1.101:2380"
ETCD_LISTEN_CLIENT_URLS="http://10.1.1.101:2379"
ETCD_NAME="controller"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.1.1.101:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://10.1.1.101:2379"
ETCD_INITIAL_CLUSTER="controller=http://10.1.1.101:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
Start the service and enable it at boot
systemctl start etcd
systemctl enable etcd
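
A minimal health check, assuming etcdctl with the v3 API:

ETCDCTL_API=3 etcdctl --endpoints=http://10.1.1.101:2379 endpoint health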
3.6 Install Keystone on the controller Node
Create the database and database user in MariaDB
mysql -uroot -p123456 -e "create database keystone;"
mysql -uroot -p123456 -e "grant all privileges on keystone.* to 'keystone'@'localhost' identified by 'KEYSTONEDB_PASS';"
mysql -uroot -p123456 -e "grant all privileges on keystone.* to 'keystone'@'%' identified by 'KEYSTONEDB_PASS';"
Install and configure Keystone
yum -y install openstack-keystone httpd mod_wsgi

cp -a /etc/keystone/keystone.conf{,.bak}
grep -Ev "^$|#" /etc/keystone/keystone.conf.bak > /etc/keystone/keystone.conf

The config file is modified with openstack-config --set, which has the same effect as editing it with vim

openstack-config --set /etc/keystone/keystone.conf database connection mysql+pymysql://keystone:KEYSTONEDB_PASS@controller/keystone
openstack-config --set /etc/keystone/keystone.conf token provider fernet
Initialize the Identity service database
su -s /bin/sh -c "keystone-manage db_sync" keystone
Initialize the Fernet keys

Fernet keys provide the secure token format for the API. The following commands initialize them

keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
Bootstrap the Identity service
keystone-manage bootstrap --bootstrap-password ADMIN_PASS --bootstrap-admin-url http://controller:5000/v3/ --bootstrap-internal-url http://controller:5000/v3/ --bootstrap-public-url http://controller:5000/v3/ --bootstrap-region-id RegionOne
Configure the HTTP service
echo "ServerName controller" >> /etc/httpd/conf/httpd.conf
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
systemctl enable httpd.service
systemctl start httpd.service

Configure the admin account environment variables
cat >> ~/.bashrc << EOF
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
EOF

source ~/.bashrc
Create the OpenStack domain, project, user, and role

Create the service project

openstack project create --domain default --description "Service Project" service

Create the user role

openstack role create user

openstack role list
Verify the Identity service
openstack token issue
3.7 Deploy Glance
Materials: centos.qcow2
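
If centos.qcow2 is not at hand, any qcow2 cloud image can stand in for the verification steps below; for example, the small upstream CirrOS test image (assuming the controller can reach the internet):

wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img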

Create the glance database, user, and tables

mysql -uroot -p123456 -e "create database glance;"
mysql -uroot -p123456 -e "grant all privileges on glance.* to 'glance'@'localhost' identified by 'GLANCE_DBPASS';"
mysql -uroot -p123456 -e "grant all privileges on glance.* to 'glance'@'%' identified by 'GLANCE_DBPASS';"
Create the Glance user in OpenStack (source the admin environment variables first)
source ~/.bashrc
openstack user create --domain default --password GLANCE_DBPASS glance
openstack role add --project service --user glance admin
openstack service create --name glance --description "OpenStack Image" image
Create the Image service API endpoints
openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292
Install the Glance package
yum install openstack-glance -y
Configure the glance-api.conf file
cp -a /etc/glance/glance-api.conf{,.bak}
cp -a /etc/glance/glance-registry.conf{,.bak}
grep -Ev '^$|#' /etc/glance/glance-api.conf.bak > /etc/glance/glance-api.conf
grep -Ev '^$|#' /etc/glance/glance-registry.conf.bak > /etc/glance/glance-registry.conf

openstack-config --set /etc/glance/glance-api.conf database connection mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken www_authenticate_uri http://controller:5000
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_url http://controller:5000
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_type password
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_domain_name Default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken user_domain_name Default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_name service
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken username glance
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken password GLANCE_DBPASS
openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone
openstack-config --set /etc/glance/glance-api.conf glance_store stores file,http
openstack-config --set /etc/glance/glance-api.conf glance_store default_store file
openstack-config --set /etc/glance/glance-api.conf glance_store filesystem_store_datadir /var/lib/glance/images/
Initialize the glance database
su -s /bin/sh -c "glance-manage db_sync" glance

Expected output:

[root@controller ~]# su -s /bin/sh -c "glance-manage db_sync" glance
INFO [alembic.runtime.migration] Context impl MySQLImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
/usr/lib/python2.7/site-packages/pymysql/cursors.py:170: Warning: (1280, u"Name 'alembic_version_pkc' ignored for PRIMARY key.")
result = self._query(query)
INFO [alembic.runtime.migration] Running upgrade -> liberty, liberty initial
INFO [alembic.runtime.migration] Running upgrade liberty -> mitaka01, add index on created_at and updated_at columns of 'images' table
INFO [alembic.runtime.migration] Running upgrade mitaka01 -> mitaka02, update metadef os_nova_server
INFO [alembic.runtime.migration] Running upgrade mitaka02 -> ocata_expand01, add visibility to images
INFO [alembic.runtime.migration] Running upgrade ocata_expand01 -> pike_expand01, empty expand for symmetry with pike_contract01
INFO [alembic.runtime.migration] Running upgrade pike_expand01 -> queens_expand01
INFO [alembic.runtime.migration] Running upgrade queens_expand01 -> rocky_expand01, add os_hidden column to images table
INFO [alembic.runtime.migration] Running upgrade rocky_expand01 -> rocky_expand02, add os_hash_algo and os_hash_value columns to images table
INFO [alembic.runtime.migration] Running upgrade rocky_expand02 -> train_expand01, empty expand for symmetry with train_contract01
INFO [alembic.runtime.migration] Context impl MySQLImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
Upgraded database to: train_expand01, current revision(s): train_expand01
INFO [alembic.runtime.migration] Context impl MySQLImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
INFO [alembic.runtime.migration] Context impl MySQLImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
Database migration is up to date. No migration needed.
INFO [alembic.runtime.migration] Context impl MySQLImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
INFO [alembic.runtime.migration] Context impl MySQLImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
INFO [alembic.runtime.migration] Running upgrade mitaka02 -> ocata_contract01, remove is_public from images
INFO [alembic.runtime.migration] Running upgrade ocata_contract01 -> pike_contract01, drop glare artifacts tables
INFO [alembic.runtime.migration] Running upgrade pike_contract01 -> queens_contract01
INFO [alembic.runtime.migration] Running upgrade queens_contract01 -> rocky_contract01
INFO [alembic.runtime.migration] Running upgrade rocky_contract01 -> rocky_contract02
INFO [alembic.runtime.migration] Running upgrade rocky_contract02 -> train_contract01
INFO [alembic.runtime.migration] Context impl MySQLImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
Upgraded database to: train_contract01, current revision(s): train_contract01
INFO [alembic.runtime.migration] Context impl MySQLImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
Database is synced successfully.
Configure the Glance services
systemctl enable openstack-glance-api.service
systemctl start openstack-glance-api.service
Verify the Glance service

Check whether port 9292 is listening

netstat -nlpt | grep 9292

# Upload the test image, import it into Glance, and check that it was created
openstack image create "centos" --file centos.qcow2 --disk-format qcow2 --container-format bare --public

List the uploaded images

openstack image list

Inspect the image's physical file

ls -lh /var/lib/glance/images/

Expected output:

[root@controller ~]# openstack image create "centos" --file centos.qcow2 --disk-format qcow2 --container-format bare --public
+------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field            | Value                                                                                                                                                      |
+------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------+
| checksum         | 72c3581cc6245773676adadde0085106                                                                                                                           |
| container_format | bare                                                                                                                                                       |
| created_at       | 2021-03-19T02:49:30Z                                                                                                                                       |
| disk_format      | qcow2                                                                                                                                                      |
| file             | /v2/images/47ce186e-fc72-4c96-8c3a-dd56a7e79d6c/file                                                                                                       |
| id               | 47ce186e-fc72-4c96-8c3a-dd56a7e79d6c                                                                                                                       |
| min_disk         | 0                                                                                                                                                          |
| min_ram          | 0                                                                                                                                                          |
| name             | centos                                                                                                                                                     |
| owner            | a7359cd645254a7c88f448cadf48d2bb                                                                                                                           |
| properties       | os_hash_algo='sha512', os_hash_value='3a045b71ed3ca5c8c86bcb9d94e9b6689756816ef1b3817ff31ae27fe2eed65bffff190fd62be26ad1619b563d34f8896e6c8b6446808d1ce30e5948b3d03ac1', os_hidden='False' |
| protected        | False                                                                                                                                                      |
| schema           | /v2/schemas/image                                                                                                                                          |
| size             | 801231872                                                                                                                                                  |
| status           | active                                                                                                                                                     |
| tags             |                                                                                                                                                            |
| updated_at       | 2021-03-19T02:49:49Z                                                                                                                                       |
| virtual_size     | None                                                                                                                                                       |
| visibility       | public                                                                                                                                                     |
+------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------+
[root@controller ~]# openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| 47ce186e-fc72-4c96-8c3a-dd56a7e79d6c | centos | active |
+--------------------------------------+--------+--------+
3.8 Deploy the Placement Service on the controller Node
Create the database
mysql -uroot -p123456 -e "create database placement;"
mysql -uroot -p123456 -e "grant all privileges on placement.* to 'placement'@'localhost' identified by 'PLACEMENT_DBPASS';"
mysql -uroot -p123456 -e "grant all privileges on placement.* to 'placement'@'%' identified by 'PLACEMENT_DBPASS';"
Create the Placement service user and API entity
openstack user create --domain default --password PLACEMENT_DBPASS placement
openstack role add --project service --user placement admin
openstack service create --name placement --description "Placement API" placement
openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778
Install the package
yum install openstack-placement-api -y
Modify the placement configuration file
cp /etc/placement/placement.conf{,.bak}
grep -Ev '^$|#' /etc/placement/placement.conf.bak > /etc/placement/placement.conf

openstack-config --set /etc/placement/placement.conf placement_database connection mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement

Once the config file is backed up, you can instead write it in one shot with the following command; take care to adjust the hostname, port, and credentials

cat > /etc/placement/placement.conf << EOF
[DEFAULT]
[api]
auth_strategy = keystone
[cors]
[keystone_authtoken]
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = placement
password = PLACEMENT_DBPASS

[oslo_policy]
[placement]
[placement_database]
connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement
[profiler]
EOF

Populate the database
su -s /bin/sh -c "placement-manage db sync" placement
Modify 00-placement-api.conf
vi /etc/httpd/conf.d/00-placement-api.conf

Append the following at the end

<Directory /usr/bin>
  <IfVersion >= 2.4>
    Require all granted
  </IfVersion>
  <IfVersion < 2.4>
    Order allow,deny
    Allow from all
  </IfVersion>
</Directory>

Restart the Apache service
systemctl restart httpd
Check the Placement health status
placement-status upgrade check

Expected output:

[root@controller ~]# placement-status upgrade check
+----------------------------------+
| Upgrade Check Results            |
+----------------------------------+
| Check: Missing Root Provider IDs |
| Result: Success                  |
| Details: None                    |
+----------------------------------+
| Check: Incomplete Consumers      |
| Result: Success                  |
| Details: None                    |
+----------------------------------+
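
The Placement API can also be probed directly; it should answer with a JSON document describing the available API versions:

curl http://controller:8778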
3.9 Deploy the Nova Compute Service
The Nova compute service provides computing capacity for the OpenStack cloud; it must be deployed on both the control node and the compute node

Configure the Nova service on controller

Create the databases and grant privileges

mysql -uroot -p123456 -e "create database nova_api;"
mysql -uroot -p123456 -e "create database nova;"
mysql -uroot -p123456 -e "create database nova_cell0;"

mysql -uroot -p123456 -e "grant all privileges on nova_api.* to 'nova'@'localhost' identified by 'NOVA_DBPASS';"
mysql -uroot -p123456 -e "grant all privileges on nova_api.* to 'nova'@'%' identified by 'NOVA_DBPASS';"

mysql -uroot -p123456 -e "grant all privileges on nova.* to 'nova'@'localhost' identified by 'NOVA_DBPASS';"
mysql -uroot -p123456 -e "grant all privileges on nova.* to 'nova'@'%' identified by 'NOVA_DBPASS';"

mysql -uroot -p123456 -e "grant all privileges on nova_cell0.* to 'nova'@'localhost' identified by 'NOVA_DBPASS';"
mysql -uroot -p123456 -e "grant all privileges on nova_cell0.* to 'nova'@'%' identified by 'NOVA_DBPASS';"

Create the nova user and service entity

openstack user create --domain default --password NOVA_DBPASS nova
openstack role add --project service --user nova admin
openstack service create --name nova --description "OpenStack Compute" compute

Create the compute API service endpoints

openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1

Install the packages

yum install openstack-nova-api openstack-nova-conductor openstack-nova-novncproxy openstack-nova-scheduler -y

Modify the configuration file

cp -a /etc/nova/nova.conf{,.bak}
grep -Ev '^$|#' /etc/nova/nova.conf.bak > /etc/nova/nova.conf
openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata

Note: this is the controller's IP

openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 10.1.1.101
openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron true
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf DEFAULT transport_url rabbit://openstack:RABBIT_PASS@controller
openstack-config --set /etc/nova/nova.conf api_database connection mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
openstack-config --set /etc/nova/nova.conf database connection mysql+pymysql://nova:NOVA_DBPASS@controller/nova
openstack-config --set /etc/nova/nova.conf placement_database connection mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement
openstack-config --set /etc/nova/nova.conf api auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://controller:5000/v3
openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name Default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name Default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password NOVA_DBPASS
openstack-config --set /etc/nova/nova.conf vnc enabled true
openstack-config --set /etc/nova/nova.conf vnc server_listen '$my_ip'
openstack-config --set /etc/nova/nova.conf vnc server_proxyclient_address '$my_ip'
openstack-config --set /etc/nova/nova.conf glance api_servers http://controller:9292
openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
openstack-config --set /etc/nova/nova.conf placement region_name RegionOne
openstack-config --set /etc/nova/nova.conf placement project_domain_name Default
openstack-config --set /etc/nova/nova.conf placement project_name service
openstack-config --set /etc/nova/nova.conf placement auth_type password
openstack-config --set /etc/nova/nova.conf placement user_domain_name Default
openstack-config --set /etc/nova/nova.conf placement auth_url http://controller:5000/v3
openstack-config --set /etc/nova/nova.conf placement username placement
openstack-config --set /etc/nova/nova.conf placement password PLACEMENT_DBPASS

Populate the databases

su -s /bin/sh -c "nova-manage api_db sync" nova
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
su -s /bin/sh -c "nova-manage db sync" nova

Verify that nova cell0 and cell1 are registered correctly

su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova

Start the Nova services and enable them at boot

systemctl enable openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl start openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
Configure the Nova service on the compute01 node

Install the package

yum install openstack-nova-compute -y

Modify the configuration file

cp -a /etc/nova/nova.conf{,.bak}
grep -Ev '^$|#' /etc/nova/nova.conf.bak > /etc/nova/nova.conf
openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata
openstack-config --set /etc/nova/nova.conf DEFAULT transport_url rabbit://openstack:RABBIT_PASS@controller

Note: this is compute01's IP

openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 10.1.1.102
openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron true
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf api auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://controller:5000/v3
openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name Default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name Default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password NOVA_DBPASS
openstack-config --set /etc/nova/nova.conf vnc enabled true
openstack-config --set /etc/nova/nova.conf vnc server_listen 0.0.0.0
openstack-config --set /etc/nova/nova.conf vnc server_proxyclient_address '$my_ip'
openstack-config --set /etc/nova/nova.conf vnc novncproxy_base_url http://controller:6080/vnc_auto.html
openstack-config --set /etc/nova/nova.conf glance api_servers http://controller:9292
openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
openstack-config --set /etc/nova/nova.conf placement region_name RegionOne
openstack-config --set /etc/nova/nova.conf placement project_domain_name Default
openstack-config --set /etc/nova/nova.conf placement project_name service
openstack-config --set /etc/nova/nova.conf placement auth_type password
openstack-config --set /etc/nova/nova.conf placement user_domain_name Default
openstack-config --set /etc/nova/nova.conf placement auth_url http://controller:5000/v3
openstack-config --set /etc/nova/nova.conf placement username placement
openstack-config --set /etc/nova/nova.conf placement password PLACEMENT_DBPASS
openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu

Check whether the node supports hardware acceleration for virtual machines

egrep -c '(vmx|svm)' /proc/cpuinfo

If this returns 0, the compute node does not support hardware acceleration, and libvirt must be configured to use QEMU instead of KVM by editing the [libvirt] section of /etc/nova/nova.conf

vi /etc/nova/nova.conf
[libvirt]
virt_type = qemu
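
The two cases can also be folded into one snippet that picks virt_type from the CPU flags; a sketch using the same openstack-config tool as above:

# Use kvm when hardware acceleration is available, qemu otherwise
if [ "$(egrep -c '(vmx|svm)' /proc/cpuinfo)" -eq 0 ]; then
  openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu
else
  openstack-config --set /etc/nova/nova.conf libvirt virt_type kvm
fi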

Start the Nova compute service and enable it at boot

systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service

Note: when there are multiple compute nodes and the config file is copied to them with scp, a service that fails to start with the error "Failed to open some config files: /etc/nova/nova.conf" usually indicates wrong file permissions; change the owner and group of nova.conf back to root

Follow-up steps on the controller node

Add the compute node to the cell database

openstack compute service list --service nova-compute

Discover the compute node

su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

Expected output:

[root@controller ~]# openstack compute service list --service nova-compute
+----+--------------+-----------+------+---------+-------+----------------------------+
| ID | Binary       | Host      | Zone | Status  | State | Updated At                 |
+----+--------------+-----------+------+---------+-------+----------------------------+
| 8  | nova-compute | compute01 | nova | enabled | up    | 2021-03-19T03:53:55.000000 |
+----+--------------+-----------+------+---------+-------+----------------------------+
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting computes from cell 'cell1': cf1e5961-c676-4ff1-80b7-8bbfc8eb4d3f
Checking host mapping for compute host 'compute01': 874c7c95-99a3-405f-b0b3-19cb5b442639
Creating host mapping for compute host 'compute01': 874c7c95-99a3-405f-b0b3-19cb5b442639
Found 1 unmapped computes in cell: cf1e5961-c676-4ff1-80b7-8bbfc8eb4d3f

Whenever a new compute node is added later, you must run su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova on the controller node to register it.

Set a suitable discovery interval (optional)

vi /etc/nova/nova.conf
[scheduler]
discover_hosts_in_cells_interval = 300

Restart the service after changing the config file

systemctl restart openstack-nova-api.service

Verify the compute service by listing the current service components

openstack compute service list
[root@controller ~]# openstack compute service list
+----+----------------+------------+----------+---------+-------+----------------------------+
| ID | Binary         | Host       | Zone     | Status  | State | Updated At                 |
+----+----------------+------------+----------+---------+-------+----------------------------+
| 5  | nova-conductor | controller | internal | enabled | up    | 2021-03-19T03:57:54.000000 |
| 6  | nova-scheduler | controller | internal | enabled | up    | 2021-03-19T03:57:55.000000 |
| 8  | nova-compute   | compute01  | nova     | enabled | up    | 2021-03-19T03:57:55.000000 |
+----+----------------+------------+----------+---------+-------+----------------------------+
4.0 Deploy Neutron (Configure Neutron on controller)
Create the database
mysql -uroot -p123456 -e "create database neutron;"
mysql -uroot -p123456 -e "grant all privileges on neutron.* to 'neutron'@'localhost' identified by 'NEUTRON_DBPASS';"
mysql -uroot -p123456 -e "grant all privileges on neutron.* to 'neutron'@'%' identified by 'NEUTRON_DBPASS';"
Create the user
openstack user create --domain default --password NEUTRON_DBPASS neutron
openstack role add --project service --user neutron admin
Create the Neutron service entity and API endpoints
openstack service create --name neutron --description "OpenStack Networking" network
openstack endpoint create --region RegionOne network public http://controller:9696
openstack endpoint create --region RegionOne network internal http://controller:9696
openstack endpoint create --region RegionOne network admin http://controller:9696
Install the Neutron components
yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables -y
Configure the neutron.conf file
cp -a /etc/neutron/neutron.conf{,.bak}
grep -Ev '^$|#' /etc/neutron/neutron.conf.bak > /etc/neutron/neutron.conf
openstack-config --set /etc/neutron/neutron.conf database connection mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router
openstack-config --set /etc/neutron/neutron.conf DEFAULT transport_url rabbit://openstack:RABBIT_PASS@controller
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_status_changes true
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_data_changes true
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken www_authenticate_uri http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password NEUTRON_DBPASS
openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp

Add the following to the /etc/neutron/neutron.conf file

vi /etc/neutron/neutron.conf
[nova]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_DBPASS

Modify the ML2 plugin configuration file ml2_conf.ini

cp -a /etc/neutron/plugins/ml2/ml2_conf.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/plugins/ml2/ml2_conf.ini.bak > /etc/neutron/plugins/ml2/ml2_conf.ini
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers flat,vlan,vxlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types vxlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers linuxbridge,l2population
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 extension_drivers port_security
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks provider
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_vxlan vni_ranges 1:1000
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset true

Modify the Linux bridge network provider configuration file linuxbridge_agent.ini

cp -a /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak > /etc/neutron/plugins/ml2/linuxbridge_agent.ini

ens33 is the local external NIC; also remember to adjust local_ip (the controller's IP here)

openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings provider:ens33
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan true
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan local_ip 10.1.1.101
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan l2_population true
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group true
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Modify l3_agent.ini
mv /etc/neutron/l3_agent.ini /etc/neutron/l3_agent.ini_bak
cat > /etc/neutron/l3_agent.ini << EOF
[DEFAULT]
interface_driver = linuxbridge
EOF
Tune kernel parameters
echo 'net.bridge.bridge-nf-call-iptables=1' >> /etc/sysctl.conf
echo 'net.bridge.bridge-nf-call-ip6tables=1' >> /etc/sysctl.conf

modprobe br_netfilter
sysctl -p
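
Note that modprobe does not persist across reboots. To have br_netfilter loaded automatically at boot, a sketch using systemd's modules-load mechanism:

echo 'br_netfilter' > /etc/modules-load.d/br_netfilter.conf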
Modify the DHCP agent configuration file dhcp_agent.ini (the settings below follow the standard Train install guide)
cp -a /etc/neutron/dhcp_agent.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/dhcp_agent.ini.bak > /etc/neutron/dhcp_agent.ini
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT interface_driver linuxbridge
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT enable_isolated_metadata true
Modify the metadata agent configuration file metadata_agent.ini
cp -a /etc/neutron/metadata_agent.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/metadata_agent.ini.bak > /etc/neutron/metadata_agent.ini
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_host controller
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret METADATA_SECRET
Modify the nova configuration file for interaction with neutron
openstack-config --set /etc/nova/nova.conf neutron url http://controller:9696
openstack-config --set /etc/nova/nova.conf neutron auth_url http://controller:5000
openstack-config --set /etc/nova/nova.conf neutron auth_type password
openstack-config --set /etc/nova/nova.conf neutron project_domain_name default
openstack-config --set /etc/nova/nova.conf neutron user_domain_name default
openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
openstack-config --set /etc/nova/nova.conf neutron project_name service
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password NEUTRON_DBPASS
openstack-config --set /etc/nova/nova.conf neutron service_metadata_proxy true
openstack-config --set /etc/nova/nova.conf neutron metadata_proxy_shared_secret METADATA_SECRET
Create the ML2 plugin file symbolic link
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
Synchronize the database
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
Restart the nova-api service
systemctl restart openstack-nova-api.service
Start the Neutron services and enable them at boot
systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service

systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
Check whether port 9696 is listening
netstat -nlpt | grep 9696
4.1 Configure Neutron on compute01
Install the Neutron packages
yum install openstack-neutron-linuxbridge ebtables ipset -y
Modify the configuration files (the neutron.conf settings below mirror the controller section, per the standard Train install guide)
cp -a /etc/neutron/neutron.conf{,.bak}
grep -Ev '^$|#' /etc/neutron/neutron.conf.bak > /etc/neutron/neutron.conf
openstack-config --set /etc/neutron/neutron.conf DEFAULT transport_url rabbit://openstack:RABBIT_PASS@controller
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken www_authenticate_uri http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password NEUTRON_DBPASS
openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp

Modify the Linux bridge agent configuration file linuxbridge_agent.ini

cp -a /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak > /etc/neutron/plugins/ml2/linuxbridge_agent.ini

Set the external NIC name here, and remember to adjust local_ip

openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings provider:ens33
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan true
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan local_ip 10.1.1.102
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan l2_population true
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group true
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Tune kernel parameters
echo 'net.bridge.bridge-nf-call-iptables=1' >> /etc/sysctl.conf
echo 'net.bridge.bridge-nf-call-ip6tables=1' >> /etc/sysctl.conf

modprobe br_netfilter
sysctl -p
Modify the nova configuration file
openstack-config --set /etc/nova/nova.conf neutron auth_url http://controller:5000
openstack-config --set /etc/nova/nova.conf neutron auth_type password
openstack-config --set /etc/nova/nova.conf neutron project_domain_name default
openstack-config --set /etc/nova/nova.conf neutron user_domain_name default
openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
openstack-config --set /etc/nova/nova.conf neutron project_name service
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password NEUTRON_DBPASS
Restart the openstack-nova-compute service and configure the network service
systemctl restart openstack-nova-compute.service
systemctl enable neutron-linuxbridge-agent.service
systemctl start neutron-linuxbridge-agent.service
Verification

Run the following on the controller node to verify the Neutron services

openstack extension list --network
openstack network agent list

Expected output:

[root@controller ~]# openstack network agent list
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host       | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| 4b97dfdc-b6f3-49e4-9363-cc1d9a2db7d3 | Linux bridge agent | controller | None              | :-)   | UP    | neutron-linuxbridge-agent |
| 4f8c41c7-d583-4794-bccb-3c30ab87439e | Metadata agent     | controller | None              | :-)   | UP    | neutron-metadata-agent    |
| 9845c192-53d6-483d-be36-1d4953e98de0 | Linux bridge agent | compute01  | None              | :-)   | UP    | neutron-linuxbridge-agent |
| c4bf51b1-f8ea-4305-9b2f-0cd912ecb578 | DHCP agent         | controller | nova              | :-)   | UP    | neutron-dhcp-agent        |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
4.2 Deploy the Dashboard on compute01
Install the Dashboard packages
yum install openstack-dashboard httpd -y
Modify the Dashboard configuration file
vi /etc/openstack-dashboard/local_settings

Change the following settings

ALLOWED_HOSTS = ['*']
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    },
}
OPENSTACK_HOST = "controller"
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
OPENSTACK_NEUTRON_NETWORK = {
    'enable_auto_allocated_network': False,
    'enable_distributed_router': False,
    'enable_fip_topology_check': True,
    'enable_ha_router': False,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,
    'enable_ipv6': True,
    # TODO(amotoki): Drop OPENSTACK_NEUTRON_NETWORK completely from here.
    # enable_quotas has the different default value here.
    'enable_quotas': True,
    'enable_rbac_policy': True,
    'enable_router': True,

    'default_dns_nameservers': [],
    'supported_provider_types': ['*'],
    'segmentation_id_range': {},
    'extra_provider_types': {},
    'supported_vnic_types': ['*'],
    'physical_networks': [],
}

TIME_ZONE = "Asia/Shanghai"
Restart the services

Regenerate openstack-dashboard.conf and restart the Apache service (the dashboard re-copies its code files, so restarting Apache can be slow)

cd /usr/share/openstack-dashboard

python manage.py make_web_conf --apache > /etc/httpd/conf.d/openstack-dashboard.conf
systemctl enable httpd.service
systemctl restart httpd.service
Restart the memcached service

Restart the memcached service on the controller node

systemctl restart memcached.service
Verification
Open a browser and enter "http://10.1.1.102" in the address bar to reach the Dashboard login page. Fill in: domain: default, username: admin, password: ADMIN_PASS. Then click "Sign In"
4.3 Deploy Cinder
Configure Cinder on controller

Create the database and grant privileges

mysql -uroot -p123456 -e "create database cinder;"
mysql -uroot -p123456 -e "grant all privileges on cinder.* to 'cinder'@'localhost' identified by 'CINDER_DBPASS';"
mysql -uroot -p123456 -e "grant all privileges on cinder.* to 'cinder'@'%' identified by 'CINDER_DBPASS';"

Create the Cinder service credentials

openstack user create --domain default --password CINDER_DBPASS cinder
openstack role add --project service --user cinder admin
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s

Install the Cinder package

yum install openstack-cinder -y

Configure Cinder

cp /etc/cinder/cinder.conf{,.bak}
grep -Ev '#|^$' /etc/cinder/cinder.conf.bak > /etc/cinder/cinder.conf
openstack-config --set /etc/cinder/cinder.conf database connection mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
openstack-config --set /etc/cinder/cinder.conf DEFAULT transport_url rabbit://openstack:RABBIT_PASS@controller
openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken www_authenticate_uri http://controller:5000
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_url http://controller:5000
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_type password
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_name service
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken username cinder
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken password CINDER_DBPASS

Set my_ip to the controller's IP

openstack-config --set /etc/cinder/cinder.conf DEFAULT my_ip 10.1.1.101
openstack-config --set /etc/cinder/cinder.conf oslo_concurrency lock_path /var/lib/cinder/tmp

Synchronize the cinder database

su -s /bin/sh -c "cinder-manage db sync" cinder

Configure Nova

vi /etc/nova/nova.conf
[cinder]
os_region_name = RegionOne

Restart the service

systemctl restart openstack-nova-api.service

Configure the Cinder services

systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
Configure Cinder on block01

Configure the YUM repository on the block01 node, install the Cinder packages, and configure the LVM service

yum install centos-release-openstack-train -y
yum install python-openstackclient -y
yum install openstack-selinux -y
yum install openstack-cinder targetcli python-keystone -y
yum install lvm2 device-mapper-persistent-data -y
systemctl enable lvm2-lvmetad.service
systemctl start lvm2-lvmetad.service

Create the LVM physical volume and volume group

pvcreate /dev/sdb
vgcreate cinder-volumes /dev/sdb
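
Verify that the physical volume and volume group exist:

pvs
vgs cinder-volumes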

Modify the LVM configuration file

vi /etc/lvm/lvm.conf
devices {
filter = [ "a/sdb/", "r/.*/" ]
}
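
Note: if the operating system disk of block01 also uses LVM, that disk must be accepted by the filter too, or its logical volumes disappear from LVM's view; an example assuming the OS lives on /dev/sda:

filter = [ "a/sda/", "a/sdb/", "r/.*/" ]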

Configure Cinder

cp /etc/cinder/cinder.conf{,.bak}
grep -Ev '#|^$' /etc/cinder/cinder.conf.bak > /etc/cinder/cinder.conf
vi /etc/cinder/cinder.conf

[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
my_ip = 10.1.1.103
enabled_backends = lvm
glance_api_servers = http://controller:9292
[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = CINDER_DBPASS
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
target_protocol = iscsi
target_helper = lioadm
[nova]
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

Configure the Cinder services

systemctl enable openstack-cinder-volume.service target.service
systemctl start openstack-cinder-volume.service target.service
Verify the Cinder configuration

Run the following on the controller node

openstack volume service list

Expected output is as follows; at this point, the OpenStack core components are installed

[root@controller ~]# openstack volume service list
+------------------+-------------+------+---------+-------+----------------------------+
| Binary           | Host        | Zone | Status  | State | Updated At                 |
+------------------+-------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller  | nova | enabled | up    | 2021-03-19T06:14:40.000000 |
| cinder-volume    | block01@lvm | nova | enabled | up    | 2021-03-19T06:14:45.000000 |
+------------------+-------------+------+---------+-------+----------------------------+
Start the neutron L3 agent service (and enable it at boot, matching the other Neutron services)
systemctl enable neutron-l3-agent
systemctl start neutron-l3-agent
IV. Notes

Create a network

openstack network create --share --external --provider-physical-network provider --provider-network-type flat provider

Create a subnet

openstack subnet create --network provider --allocation-pool start=10.152.35.244,end=10.152.35.250 --dns-nameserver 8.8.4.4 --gateway 10.152.35.1 --subnet-range 10.152.35.0/24 provider-subnet

Create a flavor

openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano
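
With the network, subnet, and flavor above, a test instance can be booted end to end (names follow the earlier steps; m1.nano's 64 MB of RAM suits a tiny image such as CirrOS, so pick a larger flavor for the centos image):

openstack server create --flavor m1.nano --image centos --network provider test-vm
openstack server list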
