10. Cinder Component on Each Node
Please note which node each installation step applies to.
0x01. Controller Node - Cinder Block Storage Service
Install the cinder block storage service on the controller node.
I. Create the cinder database, service credentials, and API endpoints
1. Create the cinder database and grant appropriate access privileges
mysql -u root -proot
CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
IDENTIFIED BY '111111';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
IDENTIFIED BY '111111';
flush privileges;
show databases;
select user,host from mysql.user;
exit
2. Create the cinder service credentials
(1) Create the cinder user in keystone
cd
source admin-openrc.sh
openstack user create --domain default --password=111111 cinder
openstack user list
(2) Add the admin role to the cinder user in the service project
The following command produces no output.
openstack role add --project service --user cinder admin
(3) Create the cinder service entities
Create the cinderv2 and cinderv3 service entities:
openstack service create --name cinderv2 \
--description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 \
--description "OpenStack Block Storage" volumev3
openstack service list
3. Create the API endpoints for the cinder service
openstack endpoint create --region RegionOne \
volumev2 public http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne \
volumev2 internal http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne \
volumev2 admin http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne \
volumev3 public http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne \
volumev3 internal http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne \
volumev3 admin http://controller:8776/v3/%\(project_id\)s
openstack endpoint list
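The `%(project_id)s` at the end of each URL is a template placeholder that the API substitutes with the caller's project ID at request time (the backslashes in the commands only keep the shell from interpreting the parentheses). A minimal sketch of that expansion, using a made-up project ID:

```shell
# Template as stored in the endpoint record (the parentheses are literal)
template='http://controller:8776/v3/%(project_id)s'
# Hypothetical project ID, for illustration only
project_id='b0f25b74c6f1463fbd8b59f6dd65d0ca'
# Substitute the placeholder the way the service does
url=$(printf '%s\n' "$template" | sed "s/%(project_id)s/$project_id/")
echo "$url"
# → http://controller:8776/v3/b0f25b74c6f1463fbd8b59f6dd65d0ca
```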
II. Install and configure the cinder software
1. Install the packages
dnf install openstack-cinder -y
2. Modify the cinder configuration
(1) Quick configuration
cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.bak
grep -Ev '#|^$' /etc/cinder/cinder.conf.bak>/etc/cinder/cinder.conf
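The `grep -Ev '#|^$'` step keeps only lines that neither contain `#` nor are empty, collapsing the heavily commented stock file down to its active settings. A self-contained demonstration on a throwaway sample file (the path and contents are made up):

```shell
# Create a sample config with a comment, a blank line, and two active lines
printf '# stock comment\n\n[DEFAULT]\nmy_ip = 10.0.0.11\n' > /tmp/sample.conf
# Drop every line containing '#' and every empty line
grep -Ev '#|^$' /tmp/sample.conf
# → [DEFAULT]
# → my_ip = 10.0.0.11
```

Note that this filter also discards settings that carry a trailing `# comment`; that is fine for the stock cinder.conf, where every `#` marks a whole-line comment.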
Note that my_ip must be the management-network IP address of the controller node.
crudini --set /etc/cinder/cinder.conf database connection mysql+pymysql://cinder:111111@controller/cinder
crudini --set /etc/cinder/cinder.conf DEFAULT transport_url rabbit://openstack:111111@controller
crudini --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
crudini --set /etc/cinder/cinder.conf keystone_authtoken www_authenticate_uri http://controller:5000
crudini --set /etc/cinder/cinder.conf keystone_authtoken auth_url http://controller:5000
crudini --set /etc/cinder/cinder.conf keystone_authtoken memcached_servers controller:11211
crudini --set /etc/cinder/cinder.conf keystone_authtoken auth_type password
crudini --set /etc/cinder/cinder.conf keystone_authtoken project_domain_name default
crudini --set /etc/cinder/cinder.conf keystone_authtoken user_domain_name default
crudini --set /etc/cinder/cinder.conf keystone_authtoken project_name service
crudini --set /etc/cinder/cinder.conf keystone_authtoken username cinder
crudini --set /etc/cinder/cinder.conf keystone_authtoken password 111111
crudini --set /etc/cinder/cinder.conf DEFAULT my_ip 10.0.0.11
crudini --set /etc/cinder/cinder.conf oslo_concurrency lock_path /var/lib/cinder/tmp
(2) Check the effective configuration
egrep -v "^#|^$" /etc/cinder/cinder.conf
grep '^[a-z]' /etc/cinder/cinder.conf
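Taken together, the crudini commands above should leave /etc/cinder/cinder.conf with roughly the following sections (values match this tutorial's passwords and addresses; note that `lock_path` belongs under cinder's own state directory, /var/lib/cinder/tmp):

```ini
[DEFAULT]
transport_url = rabbit://openstack:111111@controller
auth_strategy = keystone
my_ip = 10.0.0.11

[database]
connection = mysql+pymysql://cinder:111111@controller/cinder

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = 111111

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
```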
3. Populate the cinder block storage database
The following command produces no output; ignore any warnings it prints.
su -s /bin/sh -c "cinder-manage db sync" cinder
Verify the database:
mysql -ucinder -p111111 -e "use cinder;show tables;"
4. Configure nova to call the cinder service
crudini --set /etc/nova/nova.conf cinder os_region_name RegionOne
Check the effective nova configuration:
grep '^[a-z]' /etc/nova/nova.conf |grep os_region_name
This setting is needed on the controller, storage, and compute nodes alike.
In a three-node deployment, be sure to apply and verify it on all of them:
crudini --set /etc/nova/nova.conf cinder os_region_name RegionOne
grep '^[a-z]' /etc/nova/nova.conf |grep os_region_name
Otherwise attaching a cloud disk fails with a "no session" style error.
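The resulting fragment in /etc/nova/nova.conf is simply:

```ini
[cinder]
os_region_name = RegionOne
```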
5. Restart the nova-api service
systemctl restart openstack-nova-api.service
systemctl status openstack-nova-api.service
6. Start the block storage services and enable them at boot
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl list-unit-files |grep openstack-cinder* |grep enabled
systemctl status openstack-cinder-api.service openstack-cinder-scheduler.service
q  # press q to exit the pager opened by systemctl status
At this point the cinder service on the controller is installed.
A volume service now appears in the project menu of the dashboard; next, set up the block storage node (storage node).
0x02. Storage Node - Cinder Block Storage Service
Install the cinder block storage service on the storage node.
- The storage node is best deployed on a dedicated server (ideally a physical machine); for testing it can also live on the controller or a compute node.
- In this tutorial's walkthrough, the controller node doubles as the storage node.
- The service is backed by LVM logical volumes, so an empty disk is needed for the volume group; in VMware, add a 32 GB disk to the controller VM.
Prepare the storage device before deploying.
I. Prepare the storage device on the storage node
0. Attach the disk
This walkthrough uses VMware Workstation VMs, so a new virtual disk has to be added to the storage node and attached to the VM.
(1) View the disks
List the disks:
fdisk -l
The newly added disk is /dev/sdb; for example:
[root@controller ~]# fdisk -l
Disk /dev/sda: 64 GiB, 68719476736 bytes, 134217728 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x5e782499
Device Boot Start End Sectors Size Id Type
/dev/sda1 * 2048 2099199 2097152 1G 83 Linux
/dev/sda2 2099200 134217727 132118528 63G 8e Linux LVM
Disk /dev/sdb: 32 GiB, 34359738368 bytes, 67108864 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mapper/cl-root: 40.9 GiB, 43943723008 bytes, 85827584 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mapper/cl-swap: 2.1 GiB, 2243952640 bytes, 4382720 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mapper/cl-home: 20 GiB, 21453864960 bytes, 41902080 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
(2) Partition the disk
fdisk /dev/sdb
An example partitioning session:
[root@controller ~]# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.32.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0xd8ae4d18.
Command (m for help): p
Disk /dev/sdb: 32 GiB, 34359738368 bytes, 67108864 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xd8ae4d18
Command (m for help): n
Partition type
p primary (0 primary, 0 extended, 4 free)
e extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1): 3
First sector (2048-67108863, default 2048): # press Enter
Last sector, +sectors or +size{K,M,G,T,P} (2048-67108863, default 67108863): # press Enter
Created a new partition 3 of type 'Linux' and of size 32 GiB.
Command (m for help): t
Selected partition 3
Hex code (type L to list all codes): w
Type 0 means free space to many systems. Having partitions of type 0 is probably unwise.
Changed type of partition 'Linux' to 'unknown'.
Command (m for help): q
[root@controller ~]#
Note: the session above ends with q, which discards all changes, so /dev/sdb remains unpartitioned; the steps below format and use the whole /dev/sdb device directly.
(3) Format the disk
cd /dev/
partprobe
ls sd*
mkfs.ext3 /dev/sdb
cd
An example formatting session:
[root@controller ~]# cd /dev/
[root@controller dev]# partprobe
[root@controller dev]# ls sd*
sda sda1 sda2 sdb
[root@controller dev]# mkfs.ext3 /dev/sdb
mke2fs 1.45.6 (20-Mar-2020)
Creating filesystem with 8388608 4k blocks and 2097152 inodes
Filesystem UUID: ca25b565-a145-45e8-b0e1-a7cb8b6dcc0a
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624
Allocating group tables: done
Writing inode tables: done
Creating journal (65536 blocks): done
Writing superblocks and filesystem accounting information: done
[root@controller dev]#
1. Install the LVM packages
**Note:** some Linux distributions include LVM by default.
Install the LVM packages:
dnf install lvm2 device-mapper-persistent-data -y
Start the LVM metadata service and enable it at boot:
systemctl start lvm2-lvmetad.service
systemctl status lvm2-lvmetad.service
systemctl enable lvm2-lvmetad.service
systemctl list-unit-files |grep lvm2-lvmetad |grep enabled
Note: on CentOS/RHEL 8 the lvmetad daemon has been removed from lvm2, so the lvm2-lvmetad unit may not exist; if systemctl reports the unit is not found, this step can be skipped.
2. Create the LVM physical volume
Create the LVM physical volume /dev/sdb:
fdisk -l
pvcreate /dev/sdb
For example:
# Check the disk status
[root@controller ~]# fdisk -l
# Create the LVM physical volume /dev/sdb
[root@controller dev]# pvcreate /dev/sdb
WARNING: ext3 signature detected on /dev/sdb at offset 1080. Wipe it? [y/n]: y
Wiping ext3 signature on /dev/sdb.
Physical volume "/dev/sdb" successfully created.
[root@controller dev]#
3. Create the LVM volume group cinder-volumes
(1) The block storage service will create logical volumes in this volume group:
vgcreate cinder-volumes /dev/sdb
pvdisplay
4. Configure a filter so that only instances can access the block storage volume group, to prevent system problems
(1) Restrict LVM scanning to the new sdb disk
- By default only OpenStack instances should access the block storage volume group; however, the underlying operating system also manages these devices and may try to associate the logical volumes with the system.
- By default the LVM scanner examines every block device under /dev for LVM volumes. If other services use LVM on another disk (sda, sdc, ...), the scanner may cache those volumes as well, which can prevent the host OS or those services from using their own volume groups and cause assorted problems. LVM therefore needs to be configured so its scanner only examines the device holding the cinder-volumes group, /dev/sdb. In this walkthrough the disks are plain hand-made partitions, so the problem does not actually arise; the configuration is shown for completeness.
Reconfigure LVM to scan only the device that contains the cinder-volumes volume group:
vim /etc/lvm/lvm.conf
-----------------------------
# around line 159
devices {
filter = [ "a/sdb/", "r/.*/"]
}
-----------------------------
In the devices section, add a filter that accepts the /dev/sdb device and rejects all other devices:
- Filter rules:
- a means accept, r means reject.
- Each element in the filter array starts with a (accept) or r (reject), followed by a regular expression matched against the device name.
- The filter array should end with "r/.*/" to reject all remaining devices.
- The filter can be tested with: vgs -vvvv.
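Because evaluation stops at the first matching pattern, order matters: `a/sdb/` must come before the catch-all `r/.*/`. A simplified shell model of that first-match behavior (real LVM matches the regexes against every alias of a device, so this is only an illustration):

```shell
# First-match model of: filter = [ "a/sdb/", "r/.*/" ]
lvm_filter() {
    dev=$1
    if printf '%s\n' "$dev" | grep -q 'sdb'; then  # "a/sdb/": accept on match
        echo accept
    else                                           # "r/.*/": reject everything else
        echo reject
    fi
}
lvm_filter /dev/sdb   # → accept
lvm_filter /dev/sda   # → reject
```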
(2) View the system's volume groups
[root@controller ~]# vgdisplay
- Note:
- If the storage node's operating system disk /dev/sda also uses an LVM volume group, that device must be added to the filter as well; in /etc/lvm/lvm.conf:
devices {
......
filter = [ "a/sda/", "a/sdb/", "r/.*/"]
......
}
- If the compute node's operating system disk /dev/sda uses an LVM volume group, it likewise needs a filter in its /etc/lvm/lvm.conf:
devices {
......
filter = [ "a/sda/", "r/.*/"]
......
}
vgdisplay
II. Install and configure cinder on the storage node
1. Install the packages
yum info python3-keystone
dnf install openstack-cinder targetcli python3-keystone -y
2. Quick cinder configuration on the storage node
Edit /etc/cinder/cinder.conf:
cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.bak
grep -Ev '#|^$' /etc/cinder/cinder.conf.bak>/etc/cinder/cinder.conf
crudini --set /etc/cinder/cinder.conf database connection mysql+pymysql://cinder:111111@controller/cinder
crudini --set /etc/cinder/cinder.conf DEFAULT transport_url rabbit://openstack:111111@controller
crudini --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
crudini --set /etc/cinder/cinder.conf keystone_authtoken www_authenticate_uri http://controller:5000
crudini --set /etc/cinder/cinder.conf keystone_authtoken auth_url http://controller:5000
crudini --set /etc/cinder/cinder.conf keystone_authtoken memcached_servers controller:11211
crudini --set /etc/cinder/cinder.conf keystone_authtoken auth_type password
crudini --set /etc/cinder/cinder.conf keystone_authtoken project_domain_name default
crudini --set /etc/cinder/cinder.conf keystone_authtoken user_domain_name default
crudini --set /etc/cinder/cinder.conf keystone_authtoken project_name service
crudini --set /etc/cinder/cinder.conf keystone_authtoken username cinder
crudini --set /etc/cinder/cinder.conf keystone_authtoken password 111111
crudini --set /etc/cinder/cinder.conf DEFAULT my_ip 10.0.0.11
crudini --set /etc/cinder/cinder.conf DEFAULT enabled_backends lvm
crudini --set /etc/cinder/cinder.conf DEFAULT glance_api_servers http://controller:9292
crudini --set /etc/cinder/cinder.conf lvm volume_driver cinder.volume.drivers.lvm.LVMVolumeDriver
crudini --set /etc/cinder/cinder.conf lvm volume_group cinder-volumes
crudini --set /etc/cinder/cinder.conf lvm target_protocol iscsi
crudini --set /etc/cinder/cinder.conf lvm target_helper lioadm
crudini --set /etc/cinder/cinder.conf oslo_concurrency lock_path /var/lib/cinder/tmp
egrep -v "^#|^$" /etc/cinder/cinder.conf
grep '^[a-z]' /etc/cinder/cinder.conf
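The storage-node-specific part of the resulting cinder.conf is the LVM backend wiring, which should come out roughly as:

```ini
[DEFAULT]
enabled_backends = lvm
glance_api_servers = http://controller:9292

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
target_protocol = iscsi
target_helper = lioadm
```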
3. Start the cinder services on the storage node and enable them at boot
Two services need to be started:
systemctl start openstack-cinder-volume.service target.service
systemctl status openstack-cinder-volume.service target.service
q  # press q to exit the pager opened by systemctl status
systemctl enable openstack-cinder-volume.service target.service
systemctl list-unit-files |grep openstack-cinder |grep enabled
systemctl list-unit-files |grep target.service |grep enabled
At this point the cinder service on the storage node is installed.
III. (Optional) Install and configure the backup service
dnf install openstack-cinder -y
crudini --set /etc/cinder/cinder.conf oslo_concurrency lock_path /var/lib/cinder/tmp
crudini --set /etc/cinder/cinder.conf DEFAULT backup_driver cinder.backup.drivers.swift.SwiftBackupDriver
crudini --set /etc/cinder/cinder.conf DEFAULT backup_swift_url SWIFT_URL
cd
source admin-openrc.sh
openstack catalog show object-store
systemctl start openstack-cinder-backup.service
systemctl status openstack-cinder-backup.service
q  # press q to exit the pager opened by systemctl status
systemctl enable openstack-cinder-backup.service
systemctl list-unit-files |grep openstack-cinder-backup.service |grep enabled
Explanation of the steps:
# Install the package:
dnf install openstack-cinder
# Edit /etc/cinder/cinder.conf and complete the following:
---------------------------------------------------------------
[DEFAULT]
# ...
backup_driver = cinder.backup.drivers.swift.SwiftBackupDriver
backup_swift_url = SWIFT_URL
---------------------------------------------------------------
# Replace SWIFT_URL with the URL of the Object Storage service; it can be found by showing the object-store API endpoint:
openstack catalog show object-store
# Finally, start the block storage backup service and configure it to start at boot:
systemctl start openstack-cinder-backup.service
systemctl status openstack-cinder-backup.service
systemctl enable openstack-cinder-backup.service
systemctl list-unit-files |grep openstack-cinder-backup.service |grep enabled
0x03. Controller Node - Verify the Cinder Service
Verify the cinder block storage service on the controller node.
I. Verify the Cinder service on the controller node
1. Load the environment variables
cd
source admin-openrc.sh
2. Check the volume service list
openstack volume service list
If the above information is returned, the cinder-related nodes are installed correctly.
This completes the cinder service setup on the controller side.
II. Usage recommendations
- 1. Cloud disks support migration, expansion, and shrinking, but trying these in production is not recommended; in a test environment they are fine to experiment with, though the data should still be backed up.
- 2. For important data, prefer local disks over cloud disks; if something goes wrong, at least each disk's data is separate and the disk files are still there.
- 3. In short, for an enterprise private cloud built on OpenStack, local disks are the safe choice in production, while cloud disks can be tried in test environments.
0x04. Compute Node - Cinder Block Storage Service
On every compute node, configure the compute service to use block storage.
I. Configure the compute node to use block storage
crudini --set /etc/nova/nova.conf cinder os_region_name RegionOne
grep '^[a-z]' /etc/nova/nova.conf |grep os_region_name
II. Configure a filter so that only instances can access the block storage volume group
[root@compute1 ~]# vgdisplay
--- Volume group ---
VG Name centos
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 4
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 3
Open LV 3
Max PV 0
Cur PV 1
Act PV 1
VG Size <63.00 GiB
PE Size 4.00 MiB
Total PE 16127
Alloc PE / Size 16126 / 62.99 GiB
Free PE / Size 1 / 4.00 MiB
VG UUID Ws4wQS-K0VZ-vp9q-TB9Y-sLP9-OKyb-SZbTd8
[root@compute1 ~]#
- Note:
- If the compute node's operating system disk /dev/sda uses an LVM volume group (as in the vgdisplay output above), that device must also be added to the filter in /etc/lvm/lvm.conf:
devices {
......
filter = [ "a/sda/", "r/.*/"]
......
}
0x05. Instance Usage - Cinder Block Storage Service
1. Use the block storage service to provide a data disk to an instance
Create a 1 GB volume:
cd
source admin-openrc.sh
openstack volume create --size 1 volume1
openstack volume list
After a short while, the volume status should change from creating to available:
openstack volume list
Volumes can be managed from the command line or from the dashboard.
Example:
[root@controller ~]# cd
[root@controller ~]# source admin-openrc.sh
[root@controller ~]# openstack volume create --size 1 volume1
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2021-05-07T07:36:46.000000 |
| description | None |
| encrypted | False |
| id | e89a3dd3-827a-4443-8bc3-7d2e10e4bb1c |
| migration_status | None |
| multiattach | False |
| name | volume1 |
| properties | |
| replication_status | None |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | creating |
| type | __DEFAULT__ |
| updated_at | None |
| user_id | 6a1fe3049209481fa3a69267de177178 |
+---------------------+--------------------------------------+
[root@controller ~]#
[root@controller ~]# openstack volume list
+--------------------------------------+---------+-----------+------+-------------+
| ID | Name | Status | Size | Attached to |
+--------------------------------------+---------+-----------+------+-------------+
| e89a3dd3-827a-4443-8bc3-7d2e10e4bb1c | volume1 | available | 1 | |
+--------------------------------------+---------+-----------+------+-------------+
[root@controller ~]# openstack volume list
+--------------------------------------+---------+-----------+------+----------------------------+
| ID | Name | Status | Size | Attached to |
+--------------------------------------+---------+-----------+------+----------------------------+
| 2726c6b8-329f-4b55-942a-cd8cfd019798 | volume2 | available | 1 | |
| e89a3dd3-827a-4443-8bc3-7d2e10e4bb1c | volume1 | in-use | 1 | Attached to 1 on /dev/vdb |
+--------------------------------------+---------+-----------+------+----------------------------+
[root@controller ~]# openstack volume list
+--------------------------------------+---------+--------+------+----------------------------------+
| ID | Name | Status | Size | Attached to |
+--------------------------------------+---------+--------+------+----------------------------------+
| 2726c6b8-329f-4b55-942a-cd8cfd019798 | volume2 | in-use | 1 | Attached to cirros2 on /dev/vdb |
| e89a3dd3-827a-4443-8bc3-7d2e10e4bb1c | volume1 | in-use | 1 | Attached to 1 on /dev/vdb |
+--------------------------------------+---------+--------+------+----------------------------------+
[root@controller ~]# openstack volume list
+--------------------------------------+---------+--------+------+----------------------------------+
| ID | Name | Status | Size | Attached to |
+--------------------------------------+---------+--------+------+----------------------------------+
| 2726c6b8-329f-4b55-942a-cd8cfd019798 | volume2 | in-use | 1 | Attached to cirros2 on /dev/vdb |
| e89a3dd3-827a-4443-8bc3-7d2e10e4bb1c | volume1 | in-use | 1 | Attached to 1 on /dev/vdb |
+--------------------------------------+---------+--------+------+----------------------------------+
[root@controller ~]#
The next step is to attach the new volume to an instance, then partition, format, and mount the disk inside the instance.
2. Attach the volume to an instance
openstack server add volume INSTANCE_NAME VOLUME_NAME
Attach volume1 to the instance named cirros2:
[root@controller ~]# openstack server add volume cirros2 volume1
[root@controller ~]# openstack volume list
3. Log in to the instance and use fdisk to verify that the volume appears as the /dev/vdb block device
sudo fdisk -l
Partition and format the newly added /dev/vdb:
$ sudo fdisk /dev/vdb
Command (m for help): n # create a new partition
Partition type
p primary (0 primary, 0 extended, 4 free)
e extended (container for logical partitions)
Select (default p): p # create a primary partition
Partition number (1-4, default 1): # press Enter, default partition number 1
First sector (2048-2097151, default 2048): # press Enter, start at the default first sector
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-2097151, default 2097151): # press Enter, use the whole disk
Command (m for help): w # write the changes and exit
View the new primary partition:
$ ls /dev/vdb*
/dev/vdb /dev/vdb1
Create a filesystem:
mkfs.ext4 /dev/vdb1
Mount it temporarily:
$ sudo mount /dev/vdb1 /mnt/
$ df -h|tail -1
/dev/vdb1 990.9M 2.5M 921.2M 0% /mnt
Mount it permanently (the filesystem type must match the mkfs.ext4 step above):
$ sudo su -
# echo '/dev/vdb1 /mnt/ ext4 defaults 0 0' >>/etc/fstab
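For reference, each fstab entry has six whitespace-separated fields: device, mount point, filesystem type, mount options, dump flag, and fsck pass number. A quick sanity check of such an entry (sample string only; the filesystem is ext4 to match the mkfs.ext4 step):

```shell
# The entry appended to /etc/fstab, as a sample string
entry='/dev/vdb1 /mnt/ ext4 defaults 0 0'
# Split on whitespace into the six fstab fields
set -- $entry
echo "device=$1 mountpoint=$2 fstype=$3 options=$4 dump=$5 pass=$6"
# → device=/dev/vdb1 mountpoint=/mnt/ fstype=ext4 options=defaults dump=0 pass=0
```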
This completes the installation and testing of the cinder service.