OpenStack Victoria Cluster Deployment: Integrating with a Ceph Pacific Cluster - Ubuntu 20.04

1 Preface

Reference: configuring Ceph integration with OpenStack
#The explanatory notes in this article are adapted from Netonline's write-up and describe the configuration involved in connecting the two clusters.
In an OpenStack environment, data storage can be divided into ephemeral storage and persistent storage.

Ephemeral storage: provided mainly by the local filesystem; used chiefly for the local system disks and ephemeral data disks of nova virtual machines, and for storing the system images uploaded to glance;

Persistent storage: mainly the block storage provided by cinder and the object storage provided by swift. Cinder block storage is the most widely used; block volumes are typically attached to virtual machines as cloud disks.

The three OpenStack projects that need to store data are nova (virtual machine image files), glance (shared template images) and cinder (block storage).

The figure below shows the logic by which cinder, glance and nova access the ceph cluster:

The ceph/openstack integration mainly uses ceph's RBD service. Underneath, ceph is the RADOS storage cluster, and ceph accesses the underlying RADOS through the librados library;

The OpenStack project clients call librbd, and librbd in turn calls librados to access the underlying RADOS;
In practice, nova needs the libvirt driver so that librbd is reached through libvirt and qemu, whereas cinder and glance can call librbd directly;

Data written to the ceph cluster is striped into objects; objects are mapped by a hash function into PGs (placement groups, which make up the pool), and the PGs are then mapped approximately evenly onto the physical storage devices, the OSDs, by the CRUSH algorithm (an OSD is a filesystem-backed storage device, e.g. xfs or ext4).
(figure: logical diagram of cinder, glance and nova accessing the ceph cluster)

2 Operations on the OpenStack cluster

#Add the ceph pacific repository on all OpenStack control and compute nodes. Ubuntu 20.04 installs ceph 15 by default, but the client version must match the ceph cluster you are connecting to exactly! This deployment uses ceph 16.

#Since the cluster here runs pacific (16), add the pacific repository:

wget -q -O- 'http://mirrors.ustc.edu.cn/ceph/keys/release.asc' | sudo apt-key add -
echo deb http://mirrors.ustc.edu.cn/ceph/debian-pacific/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
apt-get update

#Install on all control nodes and compute nodes

apt install python3-rbd python3-rados librados-dev ceph-common -y
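
#Since the client version has to match the cluster exactly, it is worth confirming what apt actually installed before moving on:

ceph --version
apt-cache policy ceph-common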

#Install on all cinder nodes; in this deployment both components are placed directly on the compute nodes

apt install cinder-backup cinder-volume -y

3 Operations on the ceph cluster

#Creating the ceph cluster itself is covered in my other guide, "Deploying a ceph 16 (pacific) cluster on Ubuntu 20.04 - a foolproof tutorial"
#Add the host entries on both the ceph cluster and the openstack cluster
#vim /etc/hosts

192.168.1.3 controller003
192.168.1.4 controller004
192.168.1.5 controller005
192.168.1.100 controller100
192.168.1.7 neutron007
192.168.1.8 neutron008
192.168.1.9 ceph009
192.168.1.10 ceph010
192.168.1.11 ceph011
192.168.1.13 node013
192.168.1.14 node014
192.168.1.15 node015

3.1 Create the pools the OpenStack cluster will use

#Ceph stores data in pools. A pool is a logical grouping that organizes a number of PGs; the objects in a PG are mapped to different OSDs, so a pool is spread across the entire cluster.
#Different kinds of data could all be written into one pool, but that makes it hard to separate and manage data per client, so a dedicated pool is usually created for each client.
#Create one pool each for cinder, nova and glance, named volumes, vms and images respectively
#Here the volumes pool is persistent storage, vms is the ephemeral backend for instances, and images stores the images
#The PG count follows a formula; you can use the calculator on the official site!
#Estimating the PG count: for a single pool, Total PGs = (number of OSDs * 100) / max replica count / number of pools, rounded to the nearest power of 2. A worked example follows.
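
#Worked example for this lab (assumptions: the 6 OSDs shown later in ceph -s, 3 replicas, 4 OpenStack pools):

echo $(( 6 * 100 / 3 / 4 ))
# prints 50; rounded to the nearest power of 2 this gives 64 PGs per pool
# the guide creates the pools with 128 PGs, which is also workable for a small test cluster;
# note that the ceph df output further down shows these pools at 32 PGs, because pacific's
# pg_autoscaler (enabled by default on new pools) resizes them automatically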

#Run on the ceph cluster: create the pools

root@ceph009:~/cephcluster# ceph osd pool create volumes 128 128 replicated
pool 'volumes' created
root@ceph009:~/cephcluster# ceph osd pool create vms 128 128 replicated
pool 'vms' created
root@ceph009:~/cephcluster# ceph osd pool create images 128 128 replicated
pool 'images' created
root@ceph009:~/cephcluster# ceph osd pool create backups 128 128 replicated
pool 'backups' created
root@ceph009:~/cephcluster# ceph osd lspools
1 device_health_metrics
2 .rgw.root
3 default.rgw.log
4 default.rgw.control
5 default.rgw.meta
6 cephfs_data
7 cephfs_metadata
8 rbd_storage
11 volumes
12 vms
13 images
14 backups

3.2 Ceph authorization setup

3.2.1 Create users

#ceph enables cephx authentication by default, so new users need to be created and authorized for the nova/cinder and glance clients;
#On the admin node, create the client.glance, client.cinder and client.cinder-backup users for the nodes running the glance-api, cinder-volume and cinder-backup services and set their permissions;
#Permissions are granted per pool; the pool names match the pools created above

root@ceph009:~/cephcluster# ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
[client.cinder]
	key = AQATPHlgyFVJIhAAG2aDGXJn8Kd5pcZk7Ljw4w==
root@ceph009:~/cephcluster# ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
[client.glance]
	key = AQAkPHlgsolOMBAAFVTlH9f8ivWJkbOhqxMZVQ==
root@ceph009:~/cephcluster# ceph auth get-or-create client.cinder-backup mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups'
[client.cinder-backup]
	key = AQBGSnpgsUM5DxAAgHy4g33TqqZxJke4JxcuJQ==
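
#To double-check the capabilities that were just granted, the keys and caps can be listed again at any time:

ceph auth get client.glance
ceph auth get client.cinder
ceph auth get client.cinder-backup
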
3.2.2 Push the client.glance & client.cinder keyrings

#Set up passwordless SSH from the ceph admin node

[cephdeploy@ceph131 ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:CsvXYKm8mRzasMFwgWVLx5LvvfnPrRc5S1wSb6kPytM root@ceph131
The key's randomart image is:
+---[RSA 2048]----+
|  +o.            |
| =oo.       .    |
|. oo         o . |
|   ..  .    . =  |
|. ....+ S  . *   |
| + o.=.+    O    |
|  + * oo.. + *   |
|   B *o  .+.E .  |
|  o *  ...++.    |
+----[SHA256]-----+

#Push the SSH key to every openstack cluster node

ssh-copy-id root@controller003
ssh-copy-id root@controller004
ssh-copy-id root@controller005
ssh-copy-id root@node013
ssh-copy-id root@node014
ssh-copy-id root@node015

#The nova-compute and cinder-volume services run on the same nodes here, so there is no need to repeat the steps.

#Push the keyring generated for the client.glance user to the nodes running the glance-api service
root@ceph009:~/cephcluster# ceph auth get-or-create client.glance | tee /etc/ceph/ceph.client.glance.keyring
[client.glance]
	key = AQAkPHlgsolOMBAAFVTlH9f8ivWJkbOhqxMZVQ==
root@ceph009:~/cephcluster# ceph auth get-or-create client.glance | ssh root@controller003 tee /etc/ceph/ceph.client.glance.keyring
[client.glance]
	key = AQAkPHlgsolOMBAAFVTlH9f8ivWJkbOhqxMZVQ==
root@ceph009:~/cephcluster# ceph auth get-or-create client.glance | ssh root@controller004 tee /etc/ceph/ceph.client.glance.keyring
[client.glance]
	key = AQAkPHlgsolOMBAAFVTlH9f8ivWJkbOhqxMZVQ==
root@ceph009:~/cephcluster# ceph auth get-or-create client.glance | ssh root@controller005 tee /etc/ceph/ceph.client.glance.keyring
[client.glance]
	key = AQAkPHlgsolOMBAAFVTlH9f8ivWJkbOhqxMZVQ==
#Also change the owner and group of the keyring file
#chown glance:glance /etc/ceph/ceph.client.glance.keyring
ssh root@controller003 chown glance:glance /etc/ceph/ceph.client.glance.keyring
ssh root@controller004 chown glance:glance /etc/ceph/ceph.client.glance.keyring
ssh root@controller005 chown glance:glance /etc/ceph/ceph.client.glance.keyring

#Push the keyring generated for the client.cinder user to the nodes running the cinder-volume service
root@ceph009:~/cephcluster# ceph auth get-or-create client.cinder | ssh root@node013 tee /etc/ceph/ceph.client.cinder.keyring
[client.cinder]
	key = AQATPHlgyFVJIhAAG2aDGXJn8Kd5pcZk7Ljw4w==
root@ceph009:~/cephcluster# ceph auth get-or-create client.cinder | ssh root@node014 tee /etc/ceph/ceph.client.cinder.keyring
[client.cinder]
	key = AQATPHlgyFVJIhAAG2aDGXJn8Kd5pcZk7Ljw4w==
root@ceph009:~/cephcluster# ceph auth get-or-create client.cinder | ssh root@node015 tee /etc/ceph/ceph.client.cinder.keyring
[client.cinder]
	key = AQATPHlgyFVJIhAAG2aDGXJn8Kd5pcZk7Ljw4w==
#Also change the owner and group of the keyring file
ssh root@node013 chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
ssh root@node014 chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
ssh root@node015 chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
#Push the keyring generated for the client.cinder-backup user to the nodes running the cinder-backup service
root@ceph009:~/cephcluster# ceph auth get-or-create client.cinder-backup |ssh root@node013 tee /etc/ceph/ceph.client.cinder-backup.keyring
[client.cinder-backup]
	key = AQBGSnpgsUM5DxAAgHy4g33TqqZxJke4JxcuJQ==
root@ceph009:~/cephcluster# ceph auth get-or-create client.cinder-backup |ssh root@node014 tee /etc/ceph/ceph.client.cinder-backup.keyring
[client.cinder-backup]
	key = AQBGSnpgsUM5DxAAgHy4g33TqqZxJke4JxcuJQ==
root@ceph009:~/cephcluster# ceph auth get-or-create client.cinder-backup |ssh root@node015 tee /etc/ceph/ceph.client.cinder-backup.keyring
[client.cinder-backup]
	key = AQBGSnpgsUM5DxAAgHy4g33TqqZxJke4JxcuJQ==
#Also change the owner and group of the keyring file
root@ceph009:~/cephcluster# ssh root@node013 chown cinder:cinder /etc/ceph/ceph.client.cinder-backup.keyring
root@ceph009:~/cephcluster# ssh root@node014 chown cinder:cinder /etc/ceph/ceph.client.cinder-backup.keyring
root@ceph009:~/cephcluster# ssh root@node015 chown cinder:cinder /etc/ceph/ceph.client.cinder-backup.keyring
3.2.3 The libvirt secret

#The nodes running nova-compute need the client.cinder key stored in libvirt; when a ceph-backed cinder volume is attached to an instance, libvirt uses this key to access the ceph cluster;

#From the admin node, push the client.cinder key file to the compute (storage) nodes; the file is only temporary and can be deleted once the key has been added to libvirt

root@ceph009:~/cephcluster# ceph auth get-key client.cinder | ssh root@node013 tee /etc/ceph/client.cinder.key
AQATPHlgyFVJIhAAG2aDGXJn8Kd5pcZk7Ljw4w==
root@ceph009:~/cephcluster# ceph auth get-key client.cinder | ssh root@node014 tee /etc/ceph/client.cinder.key
AQATPHlgyFVJIhAAG2aDGXJn8Kd5pcZk7Ljw4w==
root@ceph009:~/cephcluster# ceph auth get-key client.cinder | ssh root@node015 tee /etc/ceph/client.cinder.key
AQATPHlgyFVJIhAAG2aDGXJn8Kd5pcZk7Ljw4w==

#Add the key to libvirt on the compute (storage) nodes, node013 is used as the example;
#First generate a uuid; all compute (storage) nodes can share this one uuid (the other nodes do not need to repeat this step)
#The uuid is also used later when configuring nova.conf, so keep it consistent

root@node013:~# cd /etc/ceph/
root@node013:/etc/ceph# uuidgen
3dfa5e42-3597-4391-936b-83c72490839f
root@node013:/etc/ceph# touch secret.xml
root@node013:/etc/ceph# vim secret.xml

<secret ephemeral='no' private='no'>
        <uuid>3dfa5e42-3597-4391-936b-83c72490839f</uuid>
        <usage type='ceph'>
                <name>client.cinder secret</name>
        </usage>
</secret>

root@node013:/etc/ceph# virsh secret-define --file secret.xml
Secret 3dfa5e42-3597-4391-936b-83c72490839f created

root@node013:/etc/ceph# virsh secret-set-value --secret 3dfa5e42-3597-4391-936b-83c72490839f --base64 $(cat /etc/ceph/client.cinder.key)
Secret value set
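
#Check that the secret is registered in libvirt, and optionally remove the temporary key file as noted above:

virsh secret-list
rm -f /etc/ceph/client.cinder.key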

#Push ceph.conf to all nodes

root@ceph009:/etc/ceph# scp ceph.conf root@controller003:/etc/ceph/
ceph.conf                                                                                                                                                                                           100%  532     1.1MB/s   00:00
root@ceph009:/etc/ceph# scp ceph.conf root@controller004:/etc/ceph/
ceph.conf                                                                                                                                                                                           100%  532     1.1MB/s   00:00
root@ceph009:/etc/ceph# scp ceph.conf root@controller005:/etc/ceph/
ceph.conf                                                                                                                                                                                           100%  532     1.3MB/s   00:00
root@ceph009:/etc/ceph# scp ceph.conf root@node013:/etc/ceph/
ceph.conf                                                                                                                                                                                           100%  532   745.5KB/s   00:00
root@ceph009:/etc/ceph# scp ceph.conf root@node014:/etc/ceph/
ceph.conf                                                                                                                                                                                           100%  532   604.8KB/s   00:00
root@ceph009:/etc/ceph# scp ceph.conf root@node015:/etc/ceph/
ceph.conf
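
#With ceph.conf and the keyrings in place, every node should now be able to talk to the cluster; a quick check (run as the user whose keyring exists on that node):

ceph -s --id glance     # on the controller nodes
ceph -s --id cinder     # on the compute/cinder nodes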

4 Integrating Glance with Ceph

4.1 Configure glance-api.conf

#Edit glance-api.conf on the nodes running the glance-api service (all 3 controller nodes), controller003 is shown as the example
#Only the settings relevant to the glance/ceph integration are listed below
#vim /etc/glance/glance-api.conf

[DEFAULT]
#enable copy-on-write cloning
show_image_direct_url = True
[glance_store]
stores = rbd
default_store = rbd
rbd_store_chunk_size = 8
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
#stores = file,http
#default_store = file
#filesystem_store_datadir = /var/lib/glance/images/

#After changing the configuration file, restart the service

systemctl restart glance-api.service

#Upload a cirros image

root@controller003:~# glance image-create --name "rbd_cirros-0.5.2-x86_64-disk"   --file cirros-0.5.2-x86_64-disk.img   --disk-format qcow2 --container-format bare   --visibility=public
+------------------+----------------------------------------------------------------------------------+
| Property         | Value                                                                            |
+------------------+----------------------------------------------------------------------------------+
| checksum         | b874c39491a2377b8490f5f1e89761a4                                                 |
| container_format | bare                                                                             |
| created_at       | 2021-04-16T07:56:07Z                                                             |
| direct_url       | rbd://1671d660-cb92-42a6-afb5-fdfcd2a94b43/images/27209d95-2172-47dd-83b5-2f1201 |
|                  | 2feb05/snap                                                                      |
| disk_format      | qcow2                                                                            |
| id               | 27209d95-2172-47dd-83b5-2f12012feb05                                             |
| min_disk         | 0                                                                                |
| min_ram          | 0                                                                                |
| name             | rbd_cirros-0.5.2-x86_64-disk                                                     |
| os_hash_algo     | sha512                                                                           |
| os_hash_value    | 6b813aa46bb90b4da216a4d19376593fa3f4fc7e617f03a92b7fe11e9a3981cbe8f0959dbebe3622 |
|                  | 5e5f53dc4492341a4863cac4ed1ee0909f3fc78ef9c3e869                                 |
| os_hidden        | False                                                                            |
| owner            | 65e780c77cd246128e54aa27115182ad                                                 |
| protected        | False                                                                            |
| size             | 16300544                                                                         |
| status           | active                                                                           |
| tags             | []                                                                               |
| updated_at       | 2021-04-16T07:56:11Z                                                             |
| virtual_size     | 117440512                                                                        |
| visibility       | public                                                                           |
+------------------+----------------------------------------------------------------------------------+
#Check remotely that the images pool contains this image ID; ceph-common must be installed to run the rbd command on the controller nodes
root@controller003:~# rbd -p images --id glance -k /etc/ceph/ceph.client.glance.keyring ls
27209d95-2172-47dd-83b5-2f12012feb05
root@ceph009:/etc/ceph# ceph df
--- RAW STORAGE ---
CLASS     SIZE    AVAIL     USED  RAW USED  %RAW USED
ssd    5.6 TiB  5.6 TiB  371 MiB   371 MiB          0
TOTAL  5.6 TiB  5.6 TiB  371 MiB   371 MiB          0

--- POOLS ---
POOL                   ID  PGS   STORED  OBJECTS     USED  %USED  MAX AVAIL
device_health_metrics   1    1      0 B        6      0 B      0    2.7 TiB
.rgw.root               2   32  1.3 KiB        4   32 KiB      0    2.7 TiB
default.rgw.log         3   32   23 KiB      335  1.2 MiB      0    2.7 TiB
default.rgw.control     4   32      0 B        8      0 B      0    2.7 TiB
default.rgw.meta        5    8    373 B        2   16 KiB      0    2.7 TiB
cephfs_data             6   16      0 B        0      0 B      0    2.7 TiB
cephfs_metadata         7   16  4.7 KiB       22   64 KiB      0    2.7 TiB
rbd_storage             8   16  620 KiB       12  1.2 MiB      0    2.7 TiB
volumes                11   32      0 B        0      0 B      0    2.7 TiB
vms                    12   32      0 B        0      0 B      0    2.7 TiB
images                 13   32   16 MiB        8   31 MiB      0    2.7 TiB
root@ceph009:/etc/ceph# rbd ls images
27209d95-2172-47dd-83b5-2f12012feb05
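
#Glance's rbd backend also creates a protected snapshot named 'snap' on the image (it is what the direct_url above points at), which is what volumes and instances are later cloned from copy-on-write; it can be inspected on the ceph node with plain rbd commands:

rbd info images/27209d95-2172-47dd-83b5-2f12012feb05
rbd snap ls images/27209d95-2172-47dd-83b5-2f12012feb05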

#Checking the ceph cluster now shows a HEALTH_WARN; the reason is that the newly created pool has no application type defined. It can be set to 'cephfs', 'rbd', 'rgw', etc.

root@ceph009:/etc/ceph# ceph -s
  cluster:
    id:     1671d660-cb92-42a6-afb5-fdfcd2a94b43
    health: HEALTH_WARN
            1 pool(s) do not have an application enabled

  services:
    mon: 3 daemons, quorum ceph009,ceph010,ceph011 (age 48m)
    mgr: ceph010(active, since 20h), standbys: ceph011, ceph009
    mds: 1/1 daemons up
    osd: 6 osds: 6 up (since 22h), 6 in (since 22h)
    rgw: 3 daemons active (3 hosts, 1 zones)

  data:
    volumes: 1/1 healthy
    pools:   11 pools, 249 pgs
    objects: 397 objects, 19 MiB
    usage:   371 MiB used, 5.6 TiB / 5.6 TiB avail
    pgs:     249 active+clean

root@ceph009:/etc/ceph# ceph health detail
HEALTH_WARN 1 pool(s) do not have an application enabled
[WRN] POOL_APP_NOT_ENABLED: 1 pool(s) do not have an application enabled
    application not enabled on pool 'images'
    use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.

#Fix: set the pool application type to rbd

root@ceph009:/etc/ceph# ceph osd pool application enable images rbd
enabled application 'rbd' on pool 'images'
root@ceph009:/etc/ceph# ceph osd pool application enable volumes rbd
enabled application 'rbd' on pool 'volumes'
root@ceph009:/etc/ceph# ceph osd pool application enable vms rbd
enabled application 'rbd' on pool 'vms'
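
#The backups pool was left untagged here (the warning above only flagged images because it already holds data); if cinder-backup is going to write to it over RBD, it can be tagged the same way:

ceph osd pool application enable backups rbd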

#Verify as follows:

root@ceph009:/etc/ceph# ceph health detail
HEALTH_OK
root@ceph009:/etc/ceph# ceph osd pool application get images
{
    "rbd": {}
}

5 Integrating Cinder with Ceph

5.1 Configure cinder.conf

#This release carries a known bug that has already been fixed upstream; if you run into it, apply the patch manually: stable/victoria: RBD: Pass bytes type for mon_command inbuf (https://review.opendev.org/c/openstack/cinder/+/773694)
#cinder has a pluggable architecture and can use several storage backends at once; on the cinder-volume nodes it is enough to configure the ceph rbd driver in cinder.conf, node013 is used as the example
#vim /etc/cinder/cinder.conf

# use ceph as the backend storage
[DEFAULT]
#enabled_backends = lvm    #comment out this line
enabled_backends = ceph
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
#make sure to substitute your own uuid
rbd_secret_uuid = 3dfa5e42-3597-4391-936b-83c72490839f
volume_backend_name = ceph

#Edit /etc/ceph/ceph.conf on the cinder-backup nodes and add the following section

[client.cinder-backup]
keyring = /etc/ceph/ceph.client.cinder-backup.keyring

#Also edit cinder.conf (not nova.conf) and add the following cinder-backup options

[DEFAULT]
backup_driver = cinder.backup.drivers.ceph.CephBackupDriver
backup_ceph_conf = /etc/ceph/ceph.conf
backup_ceph_user = cinder-backup
backup_ceph_chunk_size = 134217728
backup_ceph_pool = backups
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
restore_discard_excess_bytes = true

#After changing the configuration files, enable and restart the services

systemctl enable cinder-backup.service
systemctl enable cinder-volume.service
systemctl restart cinder-backup.service
systemctl restart cinder-volume.service
systemctl restart nova-compute.service

#Verify

root@controller003:~# openstack volume service list
+------------------+--------------------+------+---------+-------+----------------------------+
| Binary           | Host               | Zone | Status  | State | Updated At                 |
+------------------+--------------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller003      | nova | enabled | up    | 2021-04-17T03:24:08.000000 |
| cinder-scheduler | controller005      | nova | enabled | up    | 2021-04-17T03:24:12.000000 |
| cinder-scheduler | controller004      | nova | enabled | up    | 2021-04-17T03:24:04.000000 |
| cinder-volume    | node013@ceph       | nova | enabled | up    | 2021-04-17T03:24:10.000000 |
| cinder-backup    | node013            | nova | enabled | up    | 2021-04-17T03:24:08.000000 |
| cinder-volume    | node014@ceph       | nova | enabled | up    | 2021-04-17T03:24:06.000000 |
| cinder-volume    | node015@ceph       | nova | enabled | up    | 2021-04-17T03:24:08.000000 |
| cinder-backup    | node015            | nova | enabled | up    | 2021-04-17T03:24:10.000000 |
| cinder-backup    | node014            | nova | enabled | up    | 2021-04-17T03:24:07.000000 |
+------------------+--------------------+------+---------+-------+----------------------------+

5.2 Create a volume

#Set up a volume type: on a controller node create the type that corresponds to cinder's ceph backend, so that types can be distinguished when multiple backends are configured; it can be listed with "cinder type-list"

root@controller003:/etc/ceph# cinder type-create ceph
+--------------------------------------+------+-------------+-----------+
| ID                                   | Name | Description | Is_Public |
+--------------------------------------+------+-------------+-----------+
| 91fbbe4c-6ca3-4bd9-be58-8f9cd3456116 | ceph | -           | True      |
+--------------------------------------+------+-------------+-----------+

#Set an extra spec on the ceph type: key "volume_backend_name", value "ceph"

root@controller003:~# cinder type-key ceph set volume_backend_name=ceph
root@controller003:~# cinder extra-specs-list
+--------------------------------------+-------------+---------------------------------+
| ID                                   | Name        | extra_specs                     |
+--------------------------------------+-------------+---------------------------------+
| 91fbbe4c-6ca3-4bd9-be58-8f9cd3456116 | ceph        | {'volume_backend_name': 'ceph'} |
| f480c11d-d8b6-473c-bc99-33479b81b53a | __DEFAULT__ | {}                              |
+--------------------------------------+-------------+---------------------------------+

#Create a volume

root@controller003:~# cinder create --volume-type ceph --name ceph-volume 1
+--------------------------------+--------------------------------------+
| Property                       | Value                                |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| cluster_name                   | None                                 |
| consistencygroup_id            | None                                 |
| created_at                     | 2021-04-17T02:34:15.000000           |
| description                    | None                                 |
| encrypted                      | False                                |
| group_id                       | None                                 |
| id                             | b7052e31-03fc-47a9-abca-5494ee189229 |
| metadata                       | {}                                   |
| migration_status               | None                                 |
| multiattach                    | False                                |
| name                           | ceph-volume                          |
| os-vol-host-attr:host          | None                                 |
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | 65e780c77cd246128e54aa27115182ad     |
| provider_id                    | None                                 |
| replication_status             | None                                 |
| service_uuid                   | None                                 |
| shared_targets                 | True                                 |
| size                           | 1                                    |
| snapshot_id                    | None                                 |
| source_volid                   | None                                 |
| status                         | creating                             |
| updated_at                     | None                                 |
| user_id                        | 7d78dad4d4c840abaf159dc7901fcdc9     |
| volume_type                    | ceph                                 |
+--------------------------------+--------------------------------------+

#Verify

root@controller003:~# openstack volume list
+--------------------------------------+-------------+-----------+------+-------------+
| ID                                   | Name        | Status    | Size | Attached to |
+--------------------------------------+-------------+-----------+------+-------------+
| b7052e31-03fc-47a9-abca-5494ee189229 | ceph-volume | available |    1 |             |
+--------------------------------------+-------------+-----------+------+-------------+
root@ceph009:~/cephcluster# rbd ls volumes
volume-b7052e31-03fc-47a9-abca-5494ee189229

6 Integrating Nova with Ceph

6.1 Configure ceph.conf

#To boot virtual machines from ceph rbd, ceph must be configured as nova's ephemeral backend;
#Enabling the rbd cache in the compute nodes' configuration is recommended;
#To simplify troubleshooting, configure the admin socket parameter so that every virtual machine using ceph rbd gets its own socket, which helps with performance analysis and debugging;
#Only the [client] and [client.cinder] sections of ceph.conf on all compute nodes are involved, node013 is used as the example

root@node013:/etc/ceph# vim /etc/ceph/ceph.conf
[client]
rbd cache = true
rbd cache writethrough until flush = true
admin socket = /var/run/ceph/guests/$cluster-$type.$id.$pid.$cctid.asok
log file = /var/log/qemu/qemu-guest-$pid.log
rbd concurrent management ops = 20

[client.cinder]
keyring = /etc/ceph/ceph.client.cinder.keyring

# create the socket and log directories referenced in ceph.conf
root@node013:/etc/ceph#  mkdir -p /var/run/ceph/guests/ /var/log/qemu/
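
#These directories must be writable by the user the qemu processes run as, and /var/run is cleared on every reboot. A sketch of making them persistent and accessible (assumption: on Ubuntu qemu runs as libvirt-qemu with group kvm; adjust to your environment, or simply use chmod 777 as in problem eg4 at the end of this article):

chown libvirt-qemu:kvm /var/run/ceph/guests /var/log/qemu
echo 'd /run/ceph/guests 0770 libvirt-qemu kvm -' > /etc/tmpfiles.d/ceph-guests.conf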

6.2 Configure nova.conf

#On all compute nodes, configure nova to use the ceph cluster's vms pool as its backend, node013 is used as the example

root@node013:~# vim /etc/nova/nova.conf

[DEFAULT]
vif_plugging_is_fatal = False  
vif_plugging_timeout = 0 
[libvirt]
virt_type = kvm
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 3dfa5e42-3597-4391-936b-83c72490839f
disk_cachemodes="network=writeback"
block_migration_flag ="VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_NON_SHARED_INC"
live_migration_flag="VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED"
inject_password = false
inject_key = false
inject_partition = -2
hw_disk_discard = unmap

#After changing the configuration file, restart the compute services

systemctl restart libvirtd.service nova-compute.service
systemctl status libvirtd.service nova-compute.service

6.4 Verify the integration

6.4.1 Create a bootable volume backed by ceph

#When nova boots an instance from rbd, the image must be in raw format, otherwise both glance-api and cinder will report errors when the virtual machine starts;
#First convert the format, turning the *.img file into a *.raw file
#Download the cirros-0.5.2-x86_64-disk.img file from the internet yourself

root@controller003:~# qemu-img convert -f qcow2 -O raw ~/cirros-0.5.2-x86_64-disk.img ~/cirros-0.5.2-x86_64-disk.raw
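
#A quick check that the conversion really produced a raw image:

qemu-img info ~/cirros-0.5.2-x86_64-disk.raw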

# create the raw-format image in glance
root@controller003:~# openstack image create "cirros-raw" \
>  --file ~/cirros-0.5.2-x86_64-disk.raw \
>  --disk-format raw --container-format bare \
>  --public
+------------------+------------------------------------------------------------------------------------------------------------------------------------------------+
| Field            | Value                                                                                                                                          |
+------------------+------------------------------------------------------------------------------------------------------------------------------------------------+
| container_format | bare                                                                                                                                           |
| created_at       | 2021-04-17T06:36:47Z                                                                                                                           |
| disk_format      | raw                                                                                                                                            |
| file             | /v2/images/de642519-8d23-4e41-b5ce-a0234cb94bc3/file                                                                                           |
| id               | de642519-8d23-4e41-b5ce-a0234cb94bc3                                                                                                           |
| min_disk         | 0                                                                                                                                              |
| min_ram          | 0                                                                                                                                              |
| name             | cirros-raw                                                                                                                                     |
| owner            | 65e780c77cd246128e54aa27115182ad                                                                                                               |
| properties       | os_hidden='False', owner_specified.openstack.md5='', owner_specified.openstack.object='images/cirros-raw', owner_specified.openstack.sha256='' |
| protected        | False                                                                                                                                          |
| schema           | /v2/schemas/image                                                                                                                              |
| status           | queued                                                                                                                                         |
| tags             |                                                                                                                                                |
| updated_at       | 2021-04-17T06:36:47Z                                                                                                                           |
| visibility       | public                                                                                                                                         |
+------------------+------------------------------------------------------------------------------------------------------------------------------------------------+
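
#To complete what this subsection sets out to do, a bootable volume can now be created from the raw image; a sketch using the image and volume type defined above (the volume name is arbitrary):

openstack volume create --image cirros-raw --type ceph --size 2 ceph-bootable-volume
openstack volume list
rbd ls volumes    # on a ceph node: the new volume-<id> image should appear in the volumes pool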

X. Problems encountered along the way

eg1. 2020-07-04 00:39:56.394 671959 ERROR glance.common.wsgi rados.ObjectNotFound: [errno 2] error calling conf_read_file
Cause: the ceph.conf configuration file cannot be found
Solution: copy ceph.conf from the ceph cluster into /etc/ceph/ on every node

eg2. 2020-07-04 01:01:27.736 1882718 ERROR glance_store._drivers.rbd [req-fd768a6d-e7e2-476b-b1d3-d405d7a560f2 ec8c820dba1046f6a9d940201cf8cb06 d3dda47e8c354d86b17085f9e382948b - default default] Error connecting to ceph cluster.: rados.ObjectNotFound: [errno 2] error connecting to the cluster

eg3. libvirtd[580770]: --listen parameter not permitted with systemd activation sockets, see 'man libvirtd' for further guidance
Cause: systemd socket activation is used by default; to fall back to the traditional mode, all of the systemd sockets must be masked
Solution:
systemctl mask libvirtd.socket libvirtd-ro.socket \
   libvirtd-admin.socket libvirtd-tls.socket libvirtd-tcp.socket
Then restart with:
service libvirtd restart

eg4. AdminSocketConfigObs::init: failed: AdminSocket::bind_and_listen: failed to bind the UNIX domain socket to '/var/run/ceph/guests/ceph-client.cinder.596406.94105140863224.asok': (13) Permission denied
Cause: the permissions on /var/run/ceph are wrong; the virtual machine instances started by qemu run as the qemu user and group, but /var/run/ceph is owned by ceph:ceph with mode 770
Solution: simply change the permissions on /var/run/ceph to 777, and it is also worth setting /var/log/qemu/ to 777
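
The fix above in command form (777 is the quick-and-dirty option; the tighter alternative is to chown the directories to the qemu user as sketched in section 6.1):

chmod 777 /var/run/ceph /var/run/ceph/guests /var/log/qemu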

eg5. Error on AMQP connection <0.9284.0> (172.16.1.162:55008 -> 172.16.1.162:5672, state: starting):
AMQPLAIN login refused: user 'guest' can only connect via localhost
Cause: since version 3, rabbitmq no longer allows the guest user to log in remotely
Solution: vim /etc/rabbitmq/rabbitmq.config and add the line below (mind the trailing dot!), then restart the rabbitmq service
[{rabbit, [{loopback_users, []}]}].

eg6.Failed to allocate network(s): nova.exception.VirtualInterfaceCreateException: Virtual Interface creation failed
Add the following two lines to /etc/nova/nova.conf on the compute nodes, then restart
vif_plugging_is_fatal = False  
vif_plugging_timeout = 0  

eg7. Resize error: not able to execute ssh command: Unexpected error while running command.
Cause: openstack migration/resize works over ssh, so passwordless ssh must be configured between the compute nodes
Solution: see 6.3.3