OpenStack Stein Deployment

I. Test Environment

Hostname     NICs
controller   ens33: 192.168.100.10 / ens34: no IP address
compute      ens33: 192.168.100.20 / ens34: no IP address

ens33 is the management NIC; ens34 is the external NIC (an IP address on it is optional).

II. Environment Preparation (controller and compute)

1. Set the hostnames

[root@localhost ~]# hostnamectl set-hostname controller   ## on the controller node
[root@localhost ~]# hostnamectl set-hostname compute      ## on the compute node
[root@controller ~]# vi /etc/hosts 
192.168.100.10  controller
192.168.100.20  compute
[root@controller ~]# scp -r /etc/hosts compute:/etc/hosts

2. Disable the firewall, SELinux, and NetworkManager

[root@controller ~]# systemctl stop firewalld 
[root@controller ~]# systemctl disable  firewalld 
[root@controller ~]# vi /etc/selinux/config 
SELINUX=disabled
[root@controller ~]# setenforce 0
[root@controller ~]# systemctl stop NetworkManager.service
[root@controller ~]# systemctl disable  NetworkManager.service
NetworkManager is now stopped and disabled. Repeat all of the steps in this section on the compute node as well.

3. Install the time service (chrony)

3.1 controller node
[root@controller ~]# yum install -y chrony 
[root@controller ~]# vi /etc/chrony.conf 
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server controller iburst 
minsources 2
allow 192.168.100.0/24
local stratum 10
[root@controller ~]# systemctl restart chronyd 
[root@controller ~]# systemctl enable  chronyd        
[root@controller ~]# chronyc sources 
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^- controller                   10   6    17     8  -5366ns[-5366ns] +/-   17us
3.2 compute node
[root@compute ~]# yum install -y chrony
[root@compute ~]# vi /etc/chrony.conf 
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server controller iburst
[root@compute ~]# systemctl restart chronyd 
[root@compute ~]# systemctl enable  chronyd        
[root@compute ~]# chronyc sources 
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^* controller                   10   6    17     5  +1079ns[  +57us] +/- 1519us
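
(Optional) You can additionally confirm that the compute node is really synchronizing against the controller; chronyc tracking should show the controller as the reference source:
[root@compute ~]# chronyc tracking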

4. Install the OpenStack packages (controller and compute)

[root@controller ~]# yum install -y centos-release-openstack-stein.noarch  ## install the OpenStack Stein repository
[root@controller ~]# yum upgrade   ## update the system
[root@controller ~]# yum install python-openstackclient -y  ## install the OpenStack client
[root@controller ~]# yum install -y openstack-utils  ## install the OpenStack quick-configuration utilities
This package provides openstack-config, which lets us edit configuration files quickly from the command line.

III. Install the Database Service (controller node)

Install and configure the database service
[root@controller ~]# yum install mariadb mariadb-server python2-PyMySQL
[root@controller ~]# vi /etc/my.cnf.d/openstack.cnf  
[mysqld]
bind-address = 192.168.100.10        # controller node management IP
default-storage-engine = innodb      # default storage engine
innodb_file_per_table = on           # one tablespace file per table
max_connections = 4096               # maximum number of connections
collation-server = utf8_general_ci   # default collation
character-set-server = utf8

Start the database service
[root@controller ~]# systemctl start mariadb 
[root@controller ~]# systemctl enable  mariadb       
[root@controller ~]# mysql_secure_installation 

IV. Install the Message Queue Service (controller node)

[root@controller ~]#  yum install rabbitmq-server -y
[root@controller ~]# systemctl start rabbitmq-server 
[root@controller ~]# systemctl enable  rabbitmq-server  

Create the openstack user and grant permissions
[root@controller ~]# rabbitmqctl add_user openstack 000000
Creating user "openstack"
[root@controller ~]#  rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/"
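
(Optional) As a sanity check, you can list the user and its permissions with the standard rabbitmqctl commands:
[root@controller ~]# rabbitmqctl list_users             ## the openstack user should be listed
[root@controller ~]# rabbitmqctl list_permissions -p /  ## openstack should have ".*" for configure/write/read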

Enable the management dashboard plugin
[root@controller ~]#  rabbitmq-plugins enable rabbitmq_management

The RabbitMQ management UI is then reachable at http://192.168.100.10:15672.

V. Install the Caching Service (controller node)

[root@controller ~]#  yum install memcached python-memcached
[root@controller ~]# vi /etc/sysconfig/memcached 
OPTIONS="-l 0.0.0.0"  # change this line so memcached listens on all interfaces
[root@controller ~]# systemctl start memcached
[root@controller ~]# systemctl enable  memcached
[root@controller ~]# ss -tan | grep 11211
LISTEN     0      128          *:11211                    *:*   
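
(Optional) To confirm memcached actually responds on that port, memcached-tool (shipped with the memcached package) can dump its statistics:
[root@controller ~]# memcached-tool 127.0.0.1:11211 stats | head -5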

VI. Install the Keystone Identity Service (controller node)

1. Create the keystone database

MariaDB [(none)]> Create database keystone;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost'  IDENTIFIED BY '000000';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%'  IDENTIFIED BY '000000';   

2. Install keystone and related packages

[root@controller ~]# yum install openstack-keystone httpd mod_wsgi -y 

3. Edit the keystone configuration file

Configure the database connection and the token provider
[root@controller ~]# openstack-config --set  /etc/keystone/keystone.conf database connection  mysql+pymysql://keystone:000000@controller/keystone
[root@controller ~]# openstack-config --set  /etc/keystone/keystone.conf token provider  fernet

4. Initialize the keystone database

[root@controller ~]# su -s /bin/sh -c "keystone-manage db_sync" keystone
Verify that the tables were created
[root@controller ~]# mysql -ukeystone -p000000 -e 'use keystone;show tables;'

5. Initialize the Fernet key repositories

[root@controller ~]# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
[root@controller ~]# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

6. Bootstrap the keystone service endpoints

[root@controller ~]# keystone-manage bootstrap --bootstrap-password 000000 \
    --bootstrap-admin-url http://controller:5000/v3/ \
    --bootstrap-internal-url http://controller:5000/v3/ \
    --bootstrap-public-url http://controller:5000/v3/ \
    --bootstrap-region-id RegionOne

7. Configure the httpd service

Set the ServerName
[root@controller ~]# echo "ServerName controller" >> /etc/httpd/conf/httpd.conf
Create a symlink to the keystone WSGI configuration
[root@controller ~]#  ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
Start the Apache service
[root@controller ~]# systemctl start httpd 
[root@controller ~]# systemctl enable  httpd   

8. Create the environment variable file

[root@controller ~]# vi admin.sh 
export OS_USERNAME=admin
export OS_PASSWORD=000000  # the password set during keystone-manage bootstrap
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3

Test
[root@controller ~]# source admin.sh 
[root@controller ~]# openstack token issue 
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field      | Value                                                                                                                                                                                   |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires    | 2021-10-14T07:59:57+0000                                                                                                                                                                |
| id         | gAAAAABhZ9Vt-hB9av_E6IXEej2AjZH9khYOJMvTnZLwMrj1dzd5IuT8OV6ZmuqtVCIxZwA2Y03wcSnV5meLilF0suWbOQNmMzN98MMOVeM7yKRv7Qr0tEZSivkZ06IowdGdZH8nvXBBb97T3uObo5UYmHMjPDV_-pTjyViW8p4QXyVxPKlUiAI |
| project_id | 59f67572d32c406e8e3bb3734c9f125e                                                                                                                                                        |
| user_id    | 24caf092b4ee470287f8a9f535a6e927                                                                                                                                                        |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

9. Create projects and roles

9.1 Create the service project
[root@controller ~]# openstack project create --domain default --description "Service Project" service
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Service Project                  |
| domain_id   | default                          |
| enabled     | True                             |
| id          | d0c06efb87a0452d85bf931c55781596 |
| is_domain   | False                            |
| name        | service                          |
| parent_id   | default                          |
| tags        | []                               |
+-------------+----------------------------------+
9.2 Create the user role
[root@controller ~]# openstack role create user
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | None                             |
| domain_id   | None                             |
| id          | 717fa662a6db4c63822f69301ace64e5 |
| name        | user                             |
+-------------+----------------------------------+
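
(Optional) Verify the new project and role:
[root@controller ~]# openstack project list
[root@controller ~]# openstack role list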

VII. Install the Glance Image Service (controller node)

1. Create the glance database

MariaDB [(none)]> create database glance;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '000000';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '000000';  

2. Install the glance package

[root@controller ~]# yum install openstack-glance -y 

3. Create the glance user

[root@controller ~]# openstack user create --domain default --password 000000 glance
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | f99340b0d913444ab6fd75ed49643bb8 |
| name                | glance                           |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
Assign the admin role
[root@controller ~]# openstack role add --project service --user glance admin

4. Create the image service

[root@controller ~]# openstack service create --name glance --description "OpenStack Image" image
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Image                  |
| enabled     | True                             |
| id          | 9142d06eacb94f31ae6b103a5e6c0f03 |
| name        | glance                           |
| type        | image                            |
+-------------+----------------------------------+

5. Create the image service API endpoints

5.1 Create the public endpoint
[root@controller ~]# openstack endpoint create --region RegionOne image public http://controller:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | bf69d16ede8941478ffbdd417560b1c6 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 9142d06eacb94f31ae6b103a5e6c0f03 |
| service_name | glance                           |
| service_type | image                            |
| url          | http://controller:9292           |
+--------------+----------------------------------+
5.2 Create the internal endpoint
[root@controller ~]# openstack endpoint create --region RegionOne image internal http://controller:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 961e84e20ee04a41909923e53eda865a |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 9142d06eacb94f31ae6b103a5e6c0f03 |
| service_name | glance                           |
| service_type | image                            |
| url          | http://controller:9292           |
+--------------+----------------------------------+
5.3 Create the admin endpoint
[root@controller ~]# openstack endpoint create --region RegionOne image admin http://controller:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | bf670d0cde4f46f68190266b386443ed |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 9142d06eacb94f31ae6b103a5e6c0f03 |
| service_name | glance                           |
| service_type | image                            |
| url          | http://controller:9292           |
+--------------+----------------------------------+

6. Configure glance-api

6.1 Configure the glance-api database connection
[root@controller ~]# openstack-config --set /etc/glance/glance-api.conf database connection mysql+pymysql://glance:000000@controller/glance 
6.2 Configure glance-api authentication and storage
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken www_authenticate_uri   http://controller:5000
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_url  http://controller:5000
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken memcached_servers  controller:11211
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_type  password
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_domain_name  Default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken user_domain_name  Default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_name  service
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken username  glance
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken password  000000
openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor  keystone
openstack-config --set /etc/glance/glance-api.conf glance_store stores  file,http
openstack-config --set /etc/glance/glance-api.conf glance_store default_store  file
openstack-config --set /etc/glance/glance-api.conf glance_store filesystem_store_datadir  /var/lib/glance/images/

7. Configure glance-registry

openstack-config --set /etc/glance/glance-registry.conf database connection mysql+pymysql://glance:000000@controller/glance
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken www_authenticate_uri   http://controller:5000
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_url   http://controller:5000
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken memcached_servers  controller:11211
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_type  password
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_domain_name  Default
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken user_domain_name  Default
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_name  service
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken username  glance
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken password  000000
openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor  keystone

8. Initialize the glance database

[root@controller ~]# su -s /bin/sh -c "glance-manage db_sync" glance

9. Start the glance services

[root@controller ~]# systemctl enable openstack-glance-api.service openstack-glance-registry.service
[root@controller ~]# systemctl start openstack-glance-api.service openstack-glance-registry.service

10. Upload a cirros image as a test

[root@controller ~]# glance image-create --name cirros --disk-format qcow2 --container-format bare --progress < cirros-0.5.1-x86_64-disk.img 
[root@controller ~]# glance image-list 
+--------------------------------------+--------+
| ID                                   | Name   |
+--------------------------------------+--------+
| e694696d-0313-427d-94cc-681bc72853dd | cirros |
+--------------------------------------+--------+
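
(Optional) openstack image show gives more detail about the uploaded image; its status should be active:
[root@controller ~]# openstack image show cirros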

VIII. Install the Placement Service (controller node)

1. Create the placement database

MariaDB [(none)]> create database placement;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY '000000';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY '000000'; 

2. Create the placement user

[root@controller ~]# openstack user create --domain default --password 000000 placement
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 938f1f2abbc0459b9d8b22b03632d4b5 |
| name                | placement                        |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+

Assign the admin role
[root@controller ~]# openstack role add --project service --user placement admin

3. Create the placement service

[root@controller ~]# openstack service create --name placement --description "Placement API" placement
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Placement API                    |
| enabled     | True                             |
| id          | 7f5de6fb71d947ff9349bfaf7dd83c8a |
| name        | placement                        |
| type        | placement                        |
+-------------+----------------------------------+

4. Create the placement service API endpoints

4.1 Create the public endpoint
[root@controller ~]# openstack endpoint create --region RegionOne placement public http://controller:8778
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 2045ca3de7e1420683c979b151435cfb |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 7f5de6fb71d947ff9349bfaf7dd83c8a |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://controller:8778           |
+--------------+----------------------------------+
4.2 Create the internal endpoint
[root@controller ~]# openstack endpoint create --region RegionOne placement internal http://controller:8778
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | ef93114b57a9435a9da9b9eb774e3391 |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 7f5de6fb71d947ff9349bfaf7dd83c8a |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://controller:8778           |
+--------------+----------------------------------+
4.3 Create the admin endpoint
[root@controller ~]# openstack endpoint create --region RegionOne placement admin http://controller:8778
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 7a9b40187d344d22bd233c1393127a61 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 7f5de6fb71d947ff9349bfaf7dd83c8a |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://controller:8778           |
+--------------+----------------------------------+

5. Install the placement package

[root@controller ~]# yum install openstack-placement-api -y

6. Configure the placement service

openstack-config --set  /etc/placement/placement.conf placement_database connection mysql+pymysql://placement:000000@controller/placement
openstack-config --set  /etc/placement/placement.conf api auth_strategy keystone
openstack-config --set  /etc/placement/placement.conf keystone_authtoken auth_url http://controller:5000/v3
openstack-config --set  /etc/placement/placement.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set  /etc/placement/placement.conf keystone_authtoken auth_type password
openstack-config --set  /etc/placement/placement.conf keystone_authtoken project_domain_name Default
openstack-config --set  /etc/placement/placement.conf keystone_authtoken user_domain_name Default
openstack-config --set  /etc/placement/placement.conf keystone_authtoken project_name service
openstack-config --set  /etc/placement/placement.conf keystone_authtoken username placement
openstack-config --set  /etc/placement/placement.conf keystone_authtoken password 000000

7. Initialize the placement database

[root@controller ~]# su -s /bin/sh -c "placement-manage db sync" placement

8. Edit the placement Apache configuration

[root@controller ~]# vi /etc/httpd/conf.d/00-placement-api.conf 
Listen 8778

<VirtualHost *:8778>
  WSGIProcessGroup placement-api
  WSGIApplicationGroup %{GLOBAL}
  WSGIPassAuthorization On
  WSGIDaemonProcess placement-api processes=3 threads=1 user=placement group=placement
  WSGIScriptAlias / /usr/bin/placement-api
  <IfVersion >= 2.4>
    ErrorLogFormat "%M"
  </IfVersion>
  ErrorLog /var/log/placement/placement-api.log
  #SSLEngine On
  #SSLCertificateFile ...
  #SSLCertificateKeyFile ...
  # added - begin
   <Directory /usr/bin>
      <IfVersion >= 2.4>
         Require all granted
      </IfVersion>
      <IfVersion < 2.4>
         Order allow,deny
         Allow from all
      </IfVersion>
   </Directory>
  # added - end
</VirtualHost>

Alias /placement-api /usr/bin/placement-api
<Location /placement-api>
  SetHandler wsgi-script
  Options +ExecCGI
  WSGIProcessGroup placement-api
  WSGIApplicationGroup %{GLOBAL}
  WSGIPassAuthorization On
</Location>

9. Restart the Apache service

[root@controller ~]# systemctl restart httpd 
[root@controller ~]# ss -tan | grep 8778    ## check that the placement port is listening
LISTEN     0      128       [::]:8778                  [::]:*                  
[root@controller ~]# placement-status upgrade check    ## check placement health
+----------------------------------+
| Upgrade Check Results            |
+----------------------------------+
| Check: Missing Root Provider IDs |
| Result: Success                  |
| Details: None                    |
+----------------------------------+
| Check: Incomplete Consumers      |
| Result: Success                  |
| Details: None                    |
+----------------------------------+
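
(Optional) You can also confirm that the Placement API itself is reachable over HTTP; requesting the root URL should return a small JSON version document rather than an error page:
[root@controller ~]# curl http://controller:8778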

IX. Install the Nova Compute Service (controller node)

1. Create the nova databases

MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY '000000';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY '000000';         
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '000000';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '000000';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY '000000';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY '000000'; 

2. Create the nova user

[root@controller ~]# openstack user create --domain default --password 000000 nova
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 8e2aabd78d9c445d8ea763ba726965c2 |
| name                | nova                             |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
Assign the admin role
[root@controller ~]# openstack role add --project service --user nova admin

3. Create the compute service

[root@controller ~]#   openstack service create --name nova --description "OpenStack Compute" compute
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Compute                |
| enabled     | True                             |
| id          | d418f8b5a1dd49f2b2d5b972028b643c |
| name        | nova                             |
| type        | compute                          |
+-------------+----------------------------------+

4. Create the compute service API endpoints

4.1 Create the public endpoint
[root@controller ~]# openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 4db4584505f745fb8f849d7ef16d5fe0 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | d418f8b5a1dd49f2b2d5b972028b643c |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://controller:8774/v2.1      |
+--------------+----------------------------------+
4.2 Create the internal endpoint
[root@controller ~]# openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | a40e229481a24eda90dde6117cf196c6 |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | d418f8b5a1dd49f2b2d5b972028b643c |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://controller:8774/v2.1      |
+--------------+----------------------------------+
4.3 Create the admin endpoint
[root@controller ~]# openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | dbc892aac6d345528638a6b69a422b9d |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | d418f8b5a1dd49f2b2d5b972028b643c |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://controller:8774/v2.1      |
+--------------+----------------------------------+

5. Install the nova packages

[root@controller ~]# yum install openstack-nova-api openstack-nova-conductor   openstack-nova-novncproxy openstack-nova-scheduler -y

6. Configure the nova service

openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis  osapi_compute,metadata
# the controller node's own management IP
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip  192.168.100.10
openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron  true
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver  nova.virt.firewall.NoopFirewallDriver
# password of the RabbitMQ openstack account; it can be managed via http://controller:15672/#/users
openstack-config --set /etc/nova/nova.conf DEFAULT transport_url  rabbit://openstack:000000@controller
# nova_api database password
openstack-config --set /etc/nova/nova.conf api_database connection  mysql+pymysql://nova:000000@controller/nova_api
# nova database password
openstack-config --set /etc/nova/nova.conf database connection  mysql+pymysql://nova:000000@controller/nova
openstack-config --set /etc/nova/nova.conf api auth_strategy  keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url  http://controller:5000/v3
openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers  controller:11211
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type  password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name  Default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name  Default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name  service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username  nova
# nova account password
openstack-config --set /etc/nova/nova.conf keystone_authtoken password  000000
openstack-config --set /etc/nova/nova.conf vnc enabled  true
openstack-config --set /etc/nova/nova.conf vnc server_listen  '$my_ip'
openstack-config --set /etc/nova/nova.conf vnc server_proxyclient_address  '$my_ip'
openstack-config --set /etc/nova/nova.conf glance api_servers  http://controller:9292
openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path  /var/lib/nova/tmp
openstack-config --set /etc/nova/nova.conf placement region_name  RegionOne
openstack-config --set /etc/nova/nova.conf placement project_domain_name  Default
openstack-config --set /etc/nova/nova.conf placement project_name  service
openstack-config --set /etc/nova/nova.conf placement auth_type  password
openstack-config --set /etc/nova/nova.conf placement user_domain_name  Default
openstack-config --set /etc/nova/nova.conf placement auth_url  http://controller:5000/v3
openstack-config --set /etc/nova/nova.conf placement username  placement
# placement account password
openstack-config --set /etc/nova/nova.conf placement password 000000

7. Initialize the nova databases

[root@controller ~]# su -s /bin/sh -c "nova-manage api_db sync" nova
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
f239f825-6c2b-4eb5-a702-d3ca45e2cb31
[root@controller ~]# su -s /bin/sh -c "nova-manage db sync" nova

8. Verify that cell0 and cell1 are registered correctly

[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
+-------+--------------------------------------+------------------------------------+-------------------------------------------------+----------+
|  Name |                 UUID                 |           Transport URL            |               Database Connection               | Disabled |
+-------+--------------------------------------+------------------------------------+-------------------------------------------------+----------+
| cell0 | 00000000-0000-0000-0000-000000000000 |               none:/               | mysql+pymysql://nova:****@controller/nova_cell0 |  False   |
| cell1 | f239f825-6c2b-4eb5-a702-d3ca45e2cb31 | rabbit://openstack:****@controller |    mysql+pymysql://nova:****@controller/nova    |  False   |
+-------+--------------------------------------+------------------------------------+-------------------------------------------------+----------+

9. Start the nova services

[root@controller ~]# systemctl enable openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
[root@controller ~]# systemctl start openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

X. Install the Nova Compute Service (compute node)

1. Install the nova package

[root@compute ~]# yum install openstack-nova-compute -y

2. Configure the nova service

openstack-config --set  /etc/nova/nova.conf DEFAULT enabled_apis  osapi_compute,metadata
# RabbitMQ openstack account password
openstack-config --set  /etc/nova/nova.conf DEFAULT transport_url  rabbit://openstack:000000@controller
# the compute node's own management IP
openstack-config --set  /etc/nova/nova.conf DEFAULT my_ip 192.168.100.20
openstack-config --set  /etc/nova/nova.conf DEFAULT use_neutron  true
openstack-config --set  /etc/nova/nova.conf DEFAULT firewall_driver  nova.virt.firewall.NoopFirewallDriver
openstack-config --set  /etc/nova/nova.conf api auth_strategy  keystone
openstack-config --set  /etc/nova/nova.conf keystone_authtoken auth_url  http://controller:5000/v3
openstack-config --set  /etc/nova/nova.conf keystone_authtoken memcached_servers  controller:11211
openstack-config --set  /etc/nova/nova.conf keystone_authtoken auth_type  password
openstack-config --set  /etc/nova/nova.conf keystone_authtoken project_domain_name  Default
openstack-config --set  /etc/nova/nova.conf keystone_authtoken user_domain_name  Default
openstack-config --set  /etc/nova/nova.conf keystone_authtoken project_name  service
openstack-config --set  /etc/nova/nova.conf keystone_authtoken username  nova
# nova account password
openstack-config --set  /etc/nova/nova.conf keystone_authtoken password  000000
openstack-config --set  /etc/nova/nova.conf vnc enabled  true
openstack-config --set  /etc/nova/nova.conf vnc server_listen  0.0.0.0
openstack-config --set  /etc/nova/nova.conf vnc server_proxyclient_address  '$my_ip'
openstack-config --set  /etc/nova/nova.conf vnc novncproxy_base_url http://controller:6080/vnc_auto.html
openstack-config --set  /etc/nova/nova.conf glance api_servers  http://controller:9292
openstack-config --set  /etc/nova/nova.conf oslo_concurrency lock_path  /var/lib/nova/tmp
openstack-config --set  /etc/nova/nova.conf placement region_name  RegionOne
openstack-config --set  /etc/nova/nova.conf placement project_domain_name  Default
openstack-config --set  /etc/nova/nova.conf placement project_name  service
openstack-config --set  /etc/nova/nova.conf placement auth_type  password
openstack-config --set  /etc/nova/nova.conf placement user_domain_name  Default
openstack-config --set  /etc/nova/nova.conf placement auth_url  http://controller:5000/v3
openstack-config --set  /etc/nova/nova.conf placement username  placement
# placement account password
openstack-config --set  /etc/nova/nova.conf placement password  000000
# if the compute node does not support hardware acceleration, use qemu (recommended when the node is itself a VM)
openstack-config --set  /etc/nova/nova.conf libvirt virt_type  qemu

3. Start the nova services

[root@compute ~]# systemctl enable libvirtd.service openstack-nova-compute.service
[root@compute ~]# systemctl start libvirtd.service openstack-nova-compute.service

4. Verify from the controller node

[root@controller ~]# openstack compute service list --service nova-compute
+----+--------------+---------+------+---------+-------+----------------------------+
| ID | Binary       | Host    | Zone | Status  | State | Updated At                 |
+----+--------------+---------+------+---------+-------+----------------------------+
|  7 | nova-compute | compute | nova | enabled | up    | 2021-10-15T02:22:58.000000 |
+----+--------------+---------+------+---------+-------+----------------------------+

Discover the compute node
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

Configure the automatic discovery interval (seconds)
[root@controller ~]# openstack-config --set  /etc/nova/nova.conf scheduler discover_hosts_in_cells_interval 300

Restart the nova-api service
[root@controller ~]# systemctl restart openstack-nova-api.service
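
(Optional) nova-status offers a further health check after the restart; all checks should report Success:
[root@controller ~]# nova-status upgrade check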

XI. Install the Neutron Networking Service (controller node)

1. Create the neutron database

MariaDB [(none)]> CREATE DATABASE neutron;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '000000';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '000000';

2. Create the neutron user

[root@controller ~]# openstack user create --domain default --password 000000 neutron
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 1089750ebd4642368821f5910b2d6312 |
| name                | neutron                          |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+

Assign the admin role
[root@controller ~]# openstack role add --project service --user neutron admin

3. Create the network service

[root@controller ~]# openstack service create --name neutron --description "OpenStack Networking" network
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Networking             |
| enabled     | True                             |
| id          | ec0cf20e170244f0acd12a3aed67834a |
| name        | neutron                          |
| type        | network                          |
+-------------+----------------------------------+

4. Create the network service API endpoints

4.1 Create the public endpoint
[root@controller ~]# openstack endpoint create --region RegionOne network public http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 4f59b816d610461aafa9a83de034d901 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | ec0cf20e170244f0acd12a3aed67834a |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+
4.2 Create the internal endpoint
[root@controller ~]# openstack endpoint create --region RegionOne network internal http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 6b4b4a7e24fc46e18ab763e6088b07da |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | ec0cf20e170244f0acd12a3aed67834a |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+
4.3 Create the admin endpoint
[root@controller ~]# openstack endpoint create --region RegionOne network admin http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 0c15abd9b57444208486c3e880b63a80 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | ec0cf20e170244f0acd12a3aed67834a |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+

5. Install the neutron packages

[root@controller ~]# yum install openstack-neutron openstack-neutron-ml2  ebtables -y

6. Configure the neutron service

6.1 Install the neutron-linuxbridge package
We use Linux bridge as the networking mechanism.
[root@controller ~]# yum install -y openstack-neutron-linuxbridge
6.2 Configure neutron.conf
openstack-config --set  /etc/neutron/neutron.conf database connection  mysql+pymysql://neutron:000000@controller/neutron
openstack-config --set  /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set  /etc/neutron/neutron.conf DEFAULT service_plugins router,metering
openstack-config --set  /etc/neutron/neutron.conf DEFAULT allow_overlapping_ips true
openstack-config --set  /etc/neutron/neutron.conf DEFAULT rpc_backend rabbit
openstack-config --set  /etc/neutron/neutron.conf DEFAULT transport_url  rabbit://openstack:000000@controller
openstack-config --set  /etc/neutron/neutron.conf DEFAULT auth_strategy  keystone
openstack-config --set  /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_status_changes  true
openstack-config --set  /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_data_changes  true
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken www_authenticate_uri  http://controller:5000
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken auth_url  http://controller:5000
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken memcached_servers  controller:11211
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken auth_type  password
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken project_domain_name  Default
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken user_domain_name  Default
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken project_name  service
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken username  neutron
# neutron account password
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken password  000000
openstack-config --set  /etc/neutron/neutron.conf oslo_concurrency lock_path  /var/lib/neutron/tmp
openstack-config --set  /etc/neutron/neutron.conf nova auth_url http://controller:5000
openstack-config --set  /etc/neutron/neutron.conf nova auth_type password
openstack-config --set  /etc/neutron/neutron.conf nova project_domain_name Default
openstack-config --set  /etc/neutron/neutron.conf nova user_domain_name Default
openstack-config --set  /etc/neutron/neutron.conf nova region_name RegionOne
openstack-config --set  /etc/neutron/neutron.conf nova project_name service
openstack-config --set  /etc/neutron/neutron.conf nova username nova
# nova account password
openstack-config --set  /etc/neutron/neutron.conf nova password 000000
6.3 Configure the Modular Layer 2 (ML2) plug-in
[root@controller ~]# vi /etc/neutron/plugins/ml2/ml2_conf.ini 
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan   # internal (tenant) networks use the vxlan type
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider    # label for the external (provider) network, referenced when creating external networks
[securitygroup]
enable_ipset = true
[ml2_type_vxlan]
vni_ranges = 1:1000
[ml2_type_vlan]
network_vlan_ranges = default:3001:4000
6.4 Configure the Linux bridge agent
[root@controller ~]# vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini          
[linux_bridge]
physical_interface_mappings = provider:ens34   # ens34 is the external NIC
[vxlan]
enable_vxlan = true
local_ip = 192.168.100.10     # management NIC IP, used as the VXLAN tunnel endpoint
l2_population = true
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
6.5 Configure the L3 agent
This runs only on the controller node; if it is skipped, ports on the virtual routers you create will show as DOWN.
[root@controller ~]# vi /etc/neutron/l3_agent.ini 
[DEFAULT]
verbose = true
interface_driver = linuxbridge
6.6 Adjust kernel parameters to enable bridge-nf
[root@controller ~]# vi /etc/sysctl.conf 
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1

Load the br_netfilter module
[root@controller ~]# modprobe br_netfilter
[root@controller ~]# sysctl -p
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
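
Note that modprobe does not persist across reboots. A minimal way to load the module at boot (the file name is arbitrary; the same applies on the compute node later):
[root@controller ~]# echo br_netfilter > /etc/modules-load.d/br_netfilter.conf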
6.7 Configure the DHCP agent
[root@controller ~]# vi /etc/neutron/dhcp_agent.ini 
[DEFAULT]
interface_driver = linuxbridge   
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
6.8 Configure the metadata agent
[root@controller ~]# vi /etc/neutron/metadata_agent.ini                  
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = 000000  # choose your own shared secret
6.9 Configure nova to work with neutron
[root@controller ~]# vi /etc/nova/nova.conf                  
[neutron]
url = http://controller:9696
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = neutron
password = 000000
service_metadata_proxy = true
metadata_proxy_shared_secret = 000000   # must match the metadata agent's shared secret
6.10 Create the ML2 configuration symlink
[root@controller ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
6.11 Initialize the neutron database
[root@controller ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
6.12 Start the services
[root@controller ~]# systemctl restart openstack-nova-api.service  # restart the nova-api service
[root@controller ~]# systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
[root@controller ~]# systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service  

XII. Install the Neutron Networking Service (compute node)

1. Install the neutron packages

[root@compute ~]#  yum install openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables ipset -y

2. Configure neutron.conf

[root@compute ~]# vi /etc/neutron/neutron.conf
[DEFAULT]
transport_url = rabbit://openstack:000000@controller  # RabbitMQ user and password
auth_strategy = keystone
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = 000000
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

3. Configure the Linux bridge agent

[root@compute ~]# vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini             
[linux_bridge]
physical_interface_mappings = provider:ens33   # on the compute node this is ens33
[vxlan]
enable_vxlan = true
local_ip = 192.168.100.20   # VXLAN tunnel endpoint IP
l2_population = true
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

4. Adjust kernel parameters to enable bridge-nf

[root@compute ~]# vi /etc/sysctl.conf 
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1

Load the br_netfilter module
[root@compute ~]# modprobe br_netfilter
[root@compute ~]# sysctl -p 
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1

5. Edit the nova configuration file

[root@compute ~]# vi /etc/nova/nova.conf                 
[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = neutron
password = 000000

6. Start the services

[root@compute ~]# systemctl restart openstack-nova-compute.service  ## restart the nova-compute service
[root@compute ~]# systemctl start neutron-linuxbridge-agent.service
[root@compute ~]# systemctl enable neutron-linuxbridge-agent.service

7. Verify the neutron agents from the controller node

[root@controller ~]# openstack network agent list 
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host       | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| 3adf1be0-6bb2-4407-97a8-4650c853c37a | Linux bridge agent | compute    | None              | :-)   | UP    | neutron-linuxbridge-agent |
| 608949e5-f0de-451e-9c58-d43d47901ed8 | DHCP agent         | controller | nova              | :-)   | UP    | neutron-dhcp-agent        |
| 8bbbd0b9-80ee-49d7-936c-2d357847c0a6 | Linux bridge agent | controller | None              | :-)   | UP    | neutron-linuxbridge-agent |
| b237a133-3ecd-485a-8270-dd5708a175cd | Metadata agent     | controller | None              | :-)   | UP    | neutron-metadata-agent    |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+

XIII. Install the Horizon Dashboard Service (controller node)

1. Install the dashboard package

[root@controller ~]# yum install openstack-dashboard -y

2. Edit the configuration file

[root@controller ~]# vi /etc/openstack-dashboard/local_settings 
25 WEBROOT = '/dashboard/'
39 ALLOWED_HOSTS = ['*']   # allow access from any host
65 OPENSTACK_API_VERSIONS = {
66     "identity": 3,
67     "image": 2,
68     "volume": 3,
69 }   
75 OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
164 CACHES = {
165     'default': {
166         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache'        ,
167         'LOCATION': 'controller:11211',
168     },
169 }
173 SESSION_ENGINE = 'django.contrib.sessions.backends.signed_cookies'
193 OPENSTACK_HOST = "controller"
194 OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
195 OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
196 OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
489 TIME_ZONE = "Asia/Shanghai"

3. Generate the dashboard Apache configuration

[root@controller ~]# cd /usr/share/openstack-dashboard/
[root@controller openstack-dashboard]# python manage.py make_web_conf --apache > /etc/httpd/conf.d/openstack-dashboard.conf

4. Create the configuration symlink

[root@controller ~]#  ln -s /etc/openstack-dashboard /usr/share/openstack-dashboard/openstack_dashboard/conf

5. Edit the openstack-dashboard Apache configuration

[root@controller ~]# vi /etc/httpd/conf.d/openstack-dashboard.conf 
# comment out the original directives and add the following
#WSGIScriptAlias / /usr/share/openstack-dashboard/openstack_dashboard/wsgi.py
#Alias /static /usr/share/openstack-dashboard/static
WSGIScriptAlias /dashboard /usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi
Alias /dashboard/static /usr/share/openstack-dashboard/static

6. Restart the services

[root@controller ~]# systemctl restart httpd 
[root@controller ~]# systemctl restart memcached
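
(Optional) Before opening a browser you can check from the shell that the dashboard responds; expect an HTTP 200 or a redirect to the login page:
[root@controller ~]# curl -I http://controller/dashboard/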

7. Access via a browser

[Screenshot]

XIV. Install the Cinder Service (controller node)

1. Create the cinder database

MariaDB [(none)]> CREATE DATABASE cinder;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY '000000';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY '000000';

2. Create the cinder user

[root@controller ~]# openstack user create --domain default --password 000000 cinder
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 8a7dc9f335594e9c9402cd6112e72eef |
| name                | cinder                           |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
Assign the admin role
[root@controller ~]# openstack role add --project service --user cinder admin

3. Create the cinderv2 and cinderv3 services

[root@controller ~]# openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Block Storage          |
| enabled     | True                             |
| id          | 2cc5ef1b0b084e10984fc97837237b3d |
| name        | cinderv2                         |
| type        | volumev2                         |
+-------------+----------------------------------+
[root@controller ~]# openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Block Storage          |
| enabled     | True                             |
| id          | 50e3585035b54dff9aee9586994efb98 |
| name        | cinderv3                         |
| type        | volumev3                         |
+-------------+----------------------------------+

4. Create the cinderv2 and cinderv3 API endpoints

openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s

5. Install the cinder package

[root@controller ~]# yum install openstack-cinder -y

6. Configure the cinder service

[root@controller ~]# vi /etc/cinder/cinder.conf 
[DEFAULT]
my_ip = 192.168.100.10   # local management NIC IP
transport_url = rabbit://openstack:000000@controller
auth_strategy = keystone
[database]
connection = mysql+pymysql://cinder:000000@controller/cinder
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = cinder
password = 000000
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

7. Initialize the cinder database

[root@controller ~]#  su -s /bin/sh -c "cinder-manage db sync" cinder

8. Configure nova to use block storage

[root@controller ~]# vi /etc/nova/nova.conf 
[cinder]
os_region_name = RegionOne

9. Start the services

Restart the nova-api service
[root@controller ~]# systemctl restart openstack-nova-api.service

Start and enable the cinder services
[root@controller ~]# systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
[root@controller ~]# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service

Verify
[root@controller ~]# cinder service-list 
+------------------+------------+------+---------+-------+----------------------------+-----------------+
| Binary           | Host       | Zone | Status  | State | Updated_at                 | Disabled Reason |
+------------------+------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | controller | nova | enabled | up    | 2021-10-15T07:19:40.000000 | -               |
+------------------+------------+------+---------+-------+----------------------------+-----------------+

XV. Install the Cinder Service (compute node)

1. Install LVM

[root@compute ~]# yum install lvm2 device-mapper-persistent-data

2. Create the LVM physical volume

Check whether a spare disk is available; if not, add one first.
[root@compute ~]# fdisk -l /dev/sdb

Disk /dev/sdb: 107.4 GB, 107374182400 bytes, 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

[root@compute ~]# pvcreate /dev/sdb 
  Physical volume "/dev/sdb" successfully created.

3. Create the LVM volume group

[root@compute ~]# vgcreate cinder-volumes /dev/sdb 
  Volume group "cinder-volumes" successfully created

4. Reconfigure LVM to scan only the device that holds the cinder-volumes volume group

[root@compute ~]# vi /etc/lvm/lvm.conf 
devices {
........
filter = [ "a/sdb/","r/.*/" ]
a means accept a device, r means reject it.
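
(Optional) Confirm that the physical volume and volume group are still visible with the new filter:
[root@compute ~]# pvs   ## /dev/sdb should be listed
[root@compute ~]# vgs   ## cinder-volumes should be listed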

5. Install the cinder packages

[root@compute ~]# yum install openstack-cinder targetcli python-keystone -y

6. Configure the cinder service

[root@compute ~]# vi /etc/cinder/cinder.conf 
[DEFAULT]
transport_url = rabbit://openstack:000000@controller
auth_strategy = keystone
my_ip = 192.168.100.20    # management NIC IP
enabled_backends = lvm
glance_api_servers = http://controller:9292
[database]
connection = mysql+pymysql://cinder:000000@controller/cinder
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = cinder
password = 000000
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes   # name of the LVM volume group
target_protocol = iscsi
target_helper = lioadm

7. Start the services

[root@compute ~]# systemctl start openstack-cinder-volume.service target.service
[root@compute ~]# systemctl enable openstack-cinder-volume.service target.service

8. Verify the cinder services from the controller node

[root@controller ~]# openstack volume service list 
+------------------+-------------+------+---------+-------+----------------------------+
| Binary           | Host        | Zone | Status  | State | Updated At                 |
+------------------+-------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller  | nova | enabled | up    | 2021-10-15T07:37:40.000000 |
| cinder-volume    | compute@lvm | nova | enabled | up    | 2021-10-15T07:37:42.000000 |
+------------------+-------------+------+---------+-------+----------------------------+
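
(Optional) As a quick end-to-end test, create a small volume and check that it becomes available (the name test-vol is arbitrary):
[root@controller ~]# openstack volume create --size 1 test-vol
[root@controller ~]# openstack volume list     ## status should change to available
[root@controller ~]# openstack volume delete test-vol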

XVI. Create an Instance

1. Create an image

[root@controller ~]# glance image-create --name cirros --disk-format qcow2 --container-format bare --progress < cirros-0.5.1-x86_64-disk.img 

2. Create the external network (steps 2-4 can also be done from the CLI; see the sketch after step 4)

2.1 The network type is flat

[Screenshot]

2.2 Create the subnet

The gateway must be the real gateway of the external network (one that can reach the Internet); set the DNS server to 114.114.114.114.

[Screenshot]

3. Create the internal network

3.1 The default type is vxlan

[Screenshot]

3.2 Create the subnet

The subnet range and gateway here can be chosen freely; set the DNS server to 114.114.114.114.
[Screenshot]

4. Create a router

4.1 Attach the external network

[Screenshot]

4.2 Attach the internal network

[Screenshot]
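
Steps 2 through 4 can also be done from the command line instead of the dashboard. A minimal sketch (the names and the 192.168.200.0/24 / 10.0.0.0/24 CIDRs are placeholders; adjust them to your actual external network; "provider" matches the flat_networks label configured earlier):
[root@controller ~]# openstack network create --external --provider-network-type flat --provider-physical-network provider ext-net
[root@controller ~]# openstack subnet create --network ext-net --subnet-range 192.168.200.0/24 --gateway 192.168.200.1 --dns-nameserver 114.114.114.114 ext-subnet
[root@controller ~]# openstack network create int-net
[root@controller ~]# openstack subnet create --network int-net --subnet-range 10.0.0.0/24 --dns-nameserver 114.114.114.114 int-subnet
[root@controller ~]# openstack router create router1
[root@controller ~]# openstack router set --external-gateway ext-net router1
[root@controller ~]# openstack router add subnet router1 int-subnet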

5. Create an instance flavor

[Screenshot]
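
The flavor can likewise be created from the CLI (the name and sizes below are just an example):
[root@controller ~]# openstack flavor create --vcpus 1 --ram 512 --disk 1 m1.tiny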

6. Review the OpenStack resource overview

[Screenshot]

The currently available disk capacity is 35 GB. Instances we create will not consume this 35 GB, because their disks are stored on Cinder volumes instead.

7. Create the instance

7.1 Make sure both of these options are set to Yes

If they are not set to Yes, the instance will use the local 35 GB storage instead.
[Screenshot]

7.2 After creation it looks like this

[Screenshot]

7.3 Test external network access from the instance

[Screenshot]

7.4 If opening the console reports that the controller server's IP address cannot be resolved

Fix it as follows:

On the compute node:
[root@compute ~]# vi /etc/nova/nova.conf 
novncproxy_base_url = http://controller:6080/vnc_auto.html
Change the line above to use the controller node's IP address:
novncproxy_base_url = http://192.168.100.10:6080/vnc_auto.html

Restart the nova services
[root@compute ~]# systemctl restart openstack-nova*