Glance with Cinder LVM-backed storage (by quqi99)

Author: Zhang Hua  Published: 2021-12-17
Copyright: this article may be reposted freely, but any repost must credit the original source and author with a hyperlink, together with this copyright notice
( http://blog.csdn.net/quqi99 )

This article stores glance images in cinder's LVM storage, mainly to test: https://review.opendev.org/c/openstack/charm-glance/+/814882

Note: the title 'Glance with Cinder iSCSI/FC-backed storage' would actually fit better than 'Glance with Cinder LVM-backed storage', because:

  • cinder supports iscsi and ceph; although glance can use ceph directly, the glance->cinder->ceph path is not tested here, so to be safe cinder's ceph backend should be excluded
  • lvm is local storage and needs iscsi for remote access, so 'iSCSI/FC' arguably covers lvm as well; nfs can likewise be consumed over iscsi (a quick way to verify the protocol is sketched below)
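
To check which transport protocol a cinder backend actually reports, the scheduler pool details can be queried (a sketch; requires admin credentials, and the storage_protocol field shows e.g. iSCSI or ceph):

cinder get-pools --detail | grep -E 'name|storage_protocol'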

Setup test env

When deploying with juju, "juju config cinder block-device='None'" disables the cinder charm's own local LVM backend so that the LVM-default backend comes from the cinder-lvm subordinate. Besides deploying cinder-lvm, the relation "juju add-relation cinder:cinder-volume-service glance:cinder-volume-service" is also essential, since it is what enables the cinder store in glance.

# First create a common openstack env with cinder and glance, but without ceph
juju deploy glance
juju add-relation glance keystone
juju add-relation glance mysql
juju add-relation glance nova-cloud-controller
juju add-relation cinder:image-service glance:image-service
juju add-relation cinder:cinder-volume-service glance:cinder-volume-service
# Disable the cinder charm's own LVM backend and use the cinder-lvm subordinate instead
juju config cinder block-device='None'
juju deploy cinder-lvm
juju config cinder-lvm block-device='/tmp/vol1|4G'
juju config cinder-lvm overwrite=true
juju config cinder-lvm ephemeral-unmount='/mnt'
juju config cinder-lvm allocation-type='auto'
juju add-relation cinder-lvm:storage-backend cinder:storage-backend
#https://review.opendev.org/c/openstack/charm-glance/+/814882
git clone https://github.com/openstack/charm-glance.git glance
cd glance
git fetch https://review.opendev.org/openstack/charm-glance refs/changes/82/814882/11 && git checkout FETCH_HEAD
juju upgrade-charm glance --path $PWD
#openstack volume type create cinder --property volume_backend_name=LVM-default
cinder type-create cinder && cinder type-key cinder set volume_backend_name=LVM-default
cinder service-list
cinder create --display_name test_volume --volume_type cinder 1
juju config glance cinder-volume-types='cinder'
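
To confirm glance registered the cinder store, the store discovery API can be queried (a sketch; the stores-info subcommand requires a recent python-glanceclient):

glance --os-image-api-version 2 stores-info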

# Glance with Multiple Backend Stores
#http_proxy=http://squid.internal:3128 wget http://download.cirros-cloud.net/0.5.1/cirros-0.5.1-x86_64-disk.img
#openstack image create --disk-format=qcow2 --container-format=bare --public cirros --file ./cirros-0.5.1-x86_64-disk.img
glance --os-image-api-version 2 image-create --name cirros --disk-format qcow2 --container-format bare --file ./cirros-0.5.1-x86_64-disk.img --store cinder
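
If the upload succeeds, the image data is held in a cinder volume named image-<image_id> owned by the glance service account; a quick check (a sketch, assuming admin credentials and the LVM backend shown above):

openstack image list
openstack volume list --all-projects | grep image-
juju ssh cinder/0 -- sudo lvs   # the backing LV lives in the cinder-volumes-default VG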

Verify configuration

cinder.conf
[DEFAULT]
enabled_backends = LVM-default
[LVM-default]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volumes_dir = /var/lib/cinder/volumes
volume_name_template = volume-%s
volume_group = cinder-volumes-default
volume_backend_name = LVM-default
lvm_type = auto
volume_clear = zero
volume_clear_size = 0

glance-api.conf
[DEFAULT]
enabled_backends = local:file, cinder:cinder
[glance_store]
default_backend = cinder
[cinder]
cinder_volume_type = cinder

juju run -u mysql/leader leader-get mysql.passwd
sudo mysql -uroot -p
mysql> use glance;
mysql> select * from images;
mysql> select image_id,value from image_locations;
+--------------------------------------+-----------------------------------------------+
| image_id                             | value                                         |
+--------------------------------------+-----------------------------------------------+
| 7b6d1e53-6bc0-4080-b7d5-5563c306706f | cinder://d6c1d34d-9ae6-40d8-a7ec-448c6c96e1fb |
+--------------------------------------+-----------------------------------------------+
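
The UUID after cinder:// in the location is the ID of the cinder volume that holds the image data; it can be cross-checked against cinder and LVM (a sketch reusing the IDs from the output above):

openstack volume show d6c1d34d-9ae6-40d8-a7ec-448c6c96e1fb
juju ssh cinder/0 -- sudo lvs | grep d6c1d34d   # LV volume-<uuid> in cinder-volumes-default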

20231221 - cinder-ceph backup

Use the cinder-backup component to access ceph volumes for backups. cinder must first be backed by ceph (the cinder-ceph charm) and then related to cinder-backup.

./generate-bundle.sh --name ceph -r yoga -s focal --ceph --cinder-volume --num-compute 1 --run                        
juju add-relation cinder-ceph cinder  #use cinder-ceph instead of LVM as enabled_backends
juju deploy --series focal cinder-backup --channel yoga/stable                  
juju add-relation cinder-backup:backup-backend cinder:backup-backend            
juju add-relation cinder-backup:ceph ceph-mon:client
source ~/novarc && ./configure                                                  
source novarc

Note: remember to run "juju add-relation cinder-ceph cinder" above to switch the default backend from LVM to cinder-ceph. Afterwards 'openstack volume service list' will show the LVM-related services as down.

$ openstack volume service list
+------------------+---------------------------+------+---------+-------+----------------------------+
| Binary           | Host                      | Zone | Status  | State | Updated At                 |
+------------------+---------------------------+------+---------+-------+----------------------------+
| cinder-scheduler | juju-d7934b-ceph-4        | nova | enabled | down  | 2023-12-21T06:22:48.000000 |
| cinder-volume    | juju-d7934b-ceph-5@LVM    | nova | enabled | down  | 2023-12-19T08:39:32.000000 |
| cinder-volume    | cinder-volume@cinder-ceph | nova | enabled | up    | 2023-12-21T06:31:55.000000 |
| cinder-backup    | juju-d7934b-ceph-4        | nova | enabled | down  | 2023-12-21T03:07:46.000000 |
| cinder-scheduler | cinder                    | nova | enabled | up    | 2023-12-21T06:31:57.000000 |
| cinder-backup    | cinder                    | nova | enabled | up    | 2023-12-21T06:31:59.000000 |
+------------------+---------------------------+------+---------+-------+----------------------------+
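
The stale down rows are leftovers from the old LVM topology; they can optionally be removed with cinder-manage (a sketch run on a cinder unit, with the binary and host arguments taken from the table above):

juju ssh cinder/0
sudo cinder-manage service remove cinder-volume juju-d7934b-ceph-5@LVM
sudo cinder-manage service remove cinder-scheduler juju-d7934b-ceph-4
sudo cinder-manage service remove cinder-backup juju-d7934b-ceph-4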

This is the resulting cinder.conf:

[DEFAULT]
backup_driver = cinder.backup.drivers.ceph.CephBackupDriver
backup_ceph_conf = /var/lib/charm/cinder-backup/ceph.conf
backup_ceph_pool = cinder-backup
backup_ceph_user = cinder-backup
host = cinder
enabled_backends = cinder-ceph

[backend_defaults]

[cinder-ceph]
volume_backend_name = cinder-ceph
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = cinder-ceph
rbd_user = cinder-ceph
rbd_secret_uuid = 656eee8f-cab0-467e-b5d7-999e97316289
rbd_ceph_conf = /var/lib/charm/cinder-ceph/ceph.conf
report_discard_supported = True
rbd_exclusive_cinder_pool = True
rbd_flatten_volume_from_snapshot = False
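
The rbd_secret_uuid above must match a libvirt secret that holds the cinder-ceph cephx key on every compute node (the nova-compute/cinder-ceph relation sets this up); a quick hedged check:

juju ssh nova-compute/0 -- sudo virsh secret-list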

Then create a test volume. Note that the volume must be of the ceph type rather than the LVM type, so first create a volume type named ceph_type.

openstack volume type create --public --property volume_backend_name="cinder-ceph" ceph_type
openstack volume type list
openstack volume create --type ceph_type --size 1 ceph_test_vol1
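
Before backing it up, it is worth confirming the volume really landed on the ceph backend rather than LVM (a sketch; the host attribute should end with @cinder-ceph#cinder-ceph):

openstack volume show ceph_test_vol1 | grep -E 'host|type|status'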

Then create a backup:

openstack volume backup create --force ceph_test_vol1
openstack volume backup show 0b6496ee-e3a4-4dd6-81fe-53099d9011c3
openstack volume backup list

Confirm it succeeded:

juju ssh ceph-mon/0
sudo rados lspools
sudo rados -p cinder-backup ls -

20240105 - cinder-backup backup 2

A customer reported problems with the test above, apparently a heartbeat issue around cinder-backup. Their environment showed cinder@cinder-ceph rather than cinder-volume@cinder-ceph, and at first I wrongly suspected this cinder-volume difference was the cause, so I ran the tests below, which prove it is unrelated: cinder-volume is merely an alias, it is the cinder charm as well.
The real problems in that environment were most likely an initially unpatched kernel issue on the compute nodes plus mismatched ovn and ceph versions; after upgrading to remove those issues, adding cinder-backup should work fine.

1, deploy cinder+ceph; we didn't use the cinder-volume alias, so the volume service registers as cinder@cinder-ceph rather than cinder-volume@cinder-ceph

./generate-bundle.sh --name ha -s jammy --ceph --num-compute 1 --cinder-ha --keystone-ha --run
juju config cinder enabled-services
source ~/novarc && ./configure
source novarc

$ openstack volume service list
+------------------+----------------------+------+---------+-------+----------------------------+
| Binary           | Host                 | Zone | Status  | State | Updated At                 |
+------------------+----------------------+------+---------+-------+----------------------------+
| cinder-volume    | juju-deefc6-ha-6@LVM | nova | enabled | down  | 2024-01-03T08:30:13.000000 |
| cinder-scheduler | juju-deefc6-ha-6     | nova | enabled | down  | 2024-01-03T08:30:13.000000 |
| cinder-scheduler | cinder               | nova | enabled | up    | 2024-01-05T02:36:13.000000 |
| cinder-volume    | cinder@cinder-ceph   | nova | enabled | up    | 2024-01-05T02:36:13.000000 |
+------------------+----------------------+------+---------+-------+----------------------------+
$ openstack compute service list
+--------------------------------------+----------------+-----------------------------+----------+---------+-------+----------------------------+
| ID                                   | Binary         | Host                        | Zone     | Status  | State | Updated At                 |
+--------------------------------------+----------------+-----------------------------+----------+---------+-------+----------------------------+
| 663b8dfb-15a0-4e24-a976-04bb28e24085 | nova-scheduler | juju-deefc6-ha-15           | internal | enabled | up    | 2024-01-05T02:36:54.000000 |
| 2652f9a4-8fab-4c2c-997f-4b6566912ca2 | nova-conductor | juju-deefc6-ha-15           | internal | enabled | up    | 2024-01-05T02:36:52.000000 |
| 37764f92-6b81-48d8-ac86-029179aab813 | nova-compute   | juju-deefc6-ha-16.cloud.sts | nova     | enabled | up    | 2024-01-05T02:36:52.000000 |
+--------------------------------------+----------------+-----------------------------+----------+---------+-------+----------------------------+

# vim /etc/cinder/cinder.conf
...
[DEFAULT]
enabled_backends = cinder-ceph
[backend_defaults]
[cinder-ceph]
volume_backend_name = cinder-ceph
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = cinder-ceph
rbd_user = cinder-ceph
rbd_secret_uuid = 0ab553e9-3489-46b6-bc46-b01fb677abf2
rbd_ceph_conf = /var/lib/charm/cinder-ceph/ceph.conf
report_discard_supported = True
rbd_exclusive_cinder_pool = True
rbd_flatten_volume_from_snapshot = False

applications:
  cinder:
    num_units: 3
    constraints: mem=2G
    charm: ch:cinder
    channel: yoga/edge
    series: jammy
    options:
      debug: *debug
      verbose: *verbose
      block-device: None
      glance-api-version: 2
      openstack-origin: *openstack_origin
      ssl_ca: *ssl_ca
      ssl_cert: *ssl_cert
      ssl_key: *ssl_key
  nova-compute:
    options:
      force-raw-images: True
      libvirt-image-backend: rbd
  # note: this second cinder block is an overlay fragment; its options override the block above
  cinder:
    options:
      block-device:  ''
      ephemeral-unmount: ''
      overwrite: 'false'
      glance-api-version: 2
  cinder-ceph:
    charm: ch:cinder-ceph
    channel: yoga/edge
    series: jammy
relations:
  - [ glance, cinder:image-service ]
  - [ cinder:shared-db, cinder-mysql-router:shared-db ]
  - [ cinder, rabbitmq-server ]
  - [ cinder, nova-cloud-controller ]
  - [ cinder:identity-service, keystone ]
  
  - [ glance, ceph-mon ]
  - [ cinder, cinder-ceph ]
  - [ cinder-ceph:ceph, ceph-mon ]
  - [ nova-compute, cinder-ceph ]
  - [ nova-compute, ceph-mon ]

If using the cinder-volume alias (also the cinder charm), cinder must be set to enabled-services: api,scheduler, and the following relations exist:
- [ cinder-volume:shared-db, cinder-volume-mysql-router:shared-db ]
- [ cinder-volume, rabbitmq-server ]
- [ "cinder-volume:identity-credentials", keystone ]

2, create a ceph volume to verify cinder+ceph env

openstack volume type create --public --property volume_backend_name="cinder-ceph" ceph_type
openstack volume create --type ceph_type --size 1 ceph_test_vol1

$ juju ssh ceph-mon/0 -- sudo rados -p cinder-ceph ls |grep volume-
rbd_id.volume-ccf72901-dc47-4115-80a6-0971836e6858
$ openstack volume show ceph_test_vol1 |grep -E 'host|type'
| os-vol-host-attr:host          | cinder@cinder-ceph#cinder-ceph       |
| type                           | ceph_type                            |


3, deploy the cinder-backup subordinate charm

juju deploy --series focal cinder-backup --channel yoga/stable
juju add-relation cinder-backup:backup-backend cinder:backup-backend
juju add-relation cinder-backup:ceph ceph-mon:client

$ openstack volume service list
+------------------+------------------------------+------+---------+-------+----------------------------+
| Binary           | Host                         | Zone | Status  | State | Updated At                 |
+------------------+------------------------------+------+---------+-------+----------------------------+
| cinder-volume    | juju-deefc6-ha-6@LVM         | nova | enabled | down  | 2024-01-03T08:30:13.000000 |
| cinder-scheduler | juju-deefc6-ha-6             | nova | enabled | down  | 2024-01-05T03:02:57.000000 |
| cinder-scheduler | cinder                       | nova | enabled | up    | 2024-01-05T03:05:42.000000 |
| cinder-volume    | cinder@cinder-ceph           | nova | enabled | up    | 2024-01-05T03:05:42.000000 |
| cinder-backup    | cinder                       | nova | enabled | up    | 2024-01-05T03:05:41.000000 |
| cinder-volume    | juju-deefc6-ha-6@cinder-ceph | nova | enabled | down  | 2024-01-05T03:03:16.000000 |
| cinder-backup    | juju-deefc6-ha-6             | nova | enabled | down  | 2024-01-05T03:03:20.000000 |
| cinder-volume    | juju-deefc6-ha-5@cinder-ceph | nova | enabled | down  | 2024-01-05T03:04:12.000000 |
| cinder-backup    | juju-deefc6-ha-5             | nova | enabled | down  | 2024-01-05T03:04:11.000000 |
| cinder-volume    | juju-deefc6-ha-4@cinder-ceph | nova | enabled | down  | 2024-01-05T03:03:41.000000 |
| cinder-scheduler | juju-deefc6-ha-5             | nova | enabled | down  | 2024-01-05T03:04:13.000000 |
| cinder-backup    | juju-deefc6-ha-4             | nova | enabled | down  | 2024-01-05T03:03:44.000000 |
| cinder-scheduler | juju-deefc6-ha-4             | nova | enabled | down  | 2024-01-05T03:03:14.000000 |
+------------------+------------------------------+------+---------+-------+----------------------------+
$ openstack compute service list
+--------------------------------------+----------------+-----------------------------+----------+---------+-------+----------------------------+
| ID                                   | Binary         | Host                        | Zone     | Status  | State | Updated At                 |
+--------------------------------------+----------------+-----------------------------+----------+---------+-------+----------------------------+
| 663b8dfb-15a0-4e24-a976-04bb28e24085 | nova-scheduler | juju-deefc6-ha-15           | internal | enabled | up    | 2024-01-05T03:06:25.000000 |
| 2652f9a4-8fab-4c2c-997f-4b6566912ca2 | nova-conductor | juju-deefc6-ha-15           | internal | enabled | up    | 2024-01-05T03:06:22.000000 |
| 37764f92-6b81-48d8-ac86-029179aab813 | nova-compute   | juju-deefc6-ha-16.cloud.sts | nova     | enabled | up    | 2024-01-05T03:06:22.000000 |
+--------------------------------------+----------------+-----------------------------+----------+---------+-------+----------------------------+


4, test cinder-backup

$ openstack volume backup create --force ceph_test_vol1
+-------+--------------------------------------+
| Field | Value                                |
+-------+--------------------------------------+
| id    | c871e4a2-ca54-4bc3-b292-898ed8ea8672 |
| name  | None                                 |
+-------+--------------------------------------+
$ openstack volume backup list
+--------------------------------------+------+-------------+-----------+------+
| ID                                   | Name | Description | Status    | Size |
+--------------------------------------+------+-------------+-----------+------+
| c871e4a2-ca54-4bc3-b292-898ed8ea8672 | None | None        | available |    1 |
+--------------------------------------+------+-------------+-----------+------+
$ juju ssh ceph-mon/0 -- sudo rados -p cinder-backup ls |grep volume
rbd_id.volume-ccf72901-dc47-4115-80a6-0971836e6858.backup.c871e4a2-ca54-4bc3-b292-898ed8ea8672
Connection to 10.5.2.178 closed.
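
To complete the round trip, the backup can be restored into a fresh volume (a sketch; the backup ID comes from the output above):

openstack volume create --size 1 restore_vol1
openstack volume backup restore c871e4a2-ca54-4bc3-b292-898ed8ea8672 restore_vol1
openstack volume list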

relation output - https://paste.ubuntu.com/p/n3b6Fkr5X7/

However, after this environment sat idle over the weekend, newly created backups stayed stuck in pending status, and the failed pending backups could not be deleted either. We can first use ceph commands to determine whether ceph itself is still healthy:

The difference between a backup and a snapshot: a snapshot depends on its source volume and cannot exist on its own, whereas a backup stands alone and can be restored even after the source volume is gone.
# The function cinder/backup/drivers/ceph.py#backup says if the source volume is an RBD we will attempt
# to do an incremental/differential backup, otherwise a full copy is performed.
juju ssh ceph-mon/leader
# create a volume
sudo rados lspools
sudo rbd create cinder-ceph/vol1 --size 1G
#sudo rados -p cinder-ceph ls |grep vol1
sudo rbd -p cinder-ceph ls
sudo rbd -p cinder-ceph info vol1
# take a snapshot of the volume
sudo rbd snap create cinder-ceph/vol1@snap
sudo rbd -p cinder-ceph snap ls vol1
# export an incremental diff of the snapshot
sudo rbd export-diff cinder-ceph/vol1@snap snap1 && du -h ./snap1
# create a new backup image
sudo rbd create cinder-backup/vol1.backup.base --size 1M
# import to backup image
sudo rbd import-diff ./snap1 cinder-backup/vol1.backup.base
sudo rbd -p cinder-backup info vol1.backup.base
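
If the import-diff round trip succeeds, ceph itself is healthy and the pending-backup problem lies elsewhere (e.g. in the cinder-backup service); the test images can then be cleaned up (a sketch):

sudo rbd snap purge cinder-ceph/vol1
sudo rbd rm cinder-ceph/vol1
sudo rbd rm cinder-backup/vol1.backup.base
rm -f ./snap1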