Preparing glance, cinder and nova for a Ceph storage back end

This article walks through preparing an OpenStack environment to use a Ceph cluster as the back end for glance, cinder and nova: creating the storage pools on the Ceph primary node, initializing them, creating users and permissions, and setting up the keys and configuration files on the OpenStack nodes.

This is the preparation work, practiced on local virtual machines, for pointing glance, cinder and nova at a Ceph cluster as back-end storage.


Preface

Required resources

A fully deployed OpenStack platform
A fully deployed Ceph cluster
ceph-14.2.22.tar.gz


I. Ceph primary node operations

  1. Create the storage pools (images, volumes, vms)
[root@ceph-node1 ~]# ceph osd pool create vms 32
pool 'vms' created
[root@ceph-node1 ~]# ceph osd pool create images 32
pool 'images' created
[root@ceph-node1 ~]# ceph osd pool create volumes 32
pool 'volumes' created
  2. Initialize the pools
[root@ceph-node1 ~]# rbd pool init vms
[root@ceph-node1 ~]# rbd pool init images
[root@ceph-node1 ~]# rbd pool init volumes
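
As an optional sanity check (not part of the original run), the new pools and their placement-group count can be inspected before moving on:

## Optional: confirm the three pools exist and check pg_num
[root@ceph-node1 ~]# ceph osd pool ls
[root@ceph-node1 ~]# ceph osd pool get vms pg_num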

  3. Create the users, grant the clients permissions, and generate and distribute the keyrings

tee writes a command's output into the file you point it at; the commands below pipe the generated keyrings over ssh straight into files on the controller and compute nodes, so there is no need to copy the keys by hand. (A note on tighter, production-style caps follows the transcript.)

[root@ceph-node1 ~]# ceph auth get-or-create client.glance mon 'allow *' osd 'allow *' mgr 'allow *' | ssh 192.168.25.100 tee /etc/ceph/ceph.client.glance.keyring
The authenticity of host '192.168.25.100 (192.168.25.100)' can't be established.
ECDSA key fingerprint is SHA256:nNfbIvQ4JnzKAG0MHAegS0723/jJht3xVf0Nt+/rjBc.
ECDSA key fingerprint is MD5:7c:24:7a:dd:1e:55:e9:fa:c8:65:bb:86:b5:b6:70:73.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.25.100' (ECDSA) to the list of known hosts.
[client.glance]
        key = AQCvk89lesKpHxAAcaj7HJZHQYdmKXK2dlBkQA==
[root@ceph-node1 ceph]# ceph auth get-or-create client.cinder mon 'allow *' osd 'allow *' mgr 'allow *' |ssh 192.168.25.100 tee /etc/ceph/ceph.client.cinder.keyring      
root@192.168.25.100's password: 
[client.cinder]
        key = AQA2lc9lXbSBNxAAwS0z5X5T+iqJ0QJ12le/MA==
[root@ceph-node1 ceph]# ceph auth get-or-create client.cinder mon 'allow *' osd 'allow *' mgr 'allow *' |ssh 192.168.25.200 tee /etc/ceph/ceph.client.cinder.keyring 
The authenticity of host '192.168.25.200 (192.168.25.200)' can't be established.
ECDSA key fingerprint is SHA256:nNfbIvQ4JnzKAG0MHAegS0723/jJht3xVf0Nt+/rjBc.
ECDSA key fingerprint is MD5:7c:24:7a:dd:1e:55:e9:fa:c8:65:bb:86:b5:b6:70:73.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.25.200' (ECDSA) to the list of known hosts.
root@192.168.25.200's password: 
[client.cinder]
        key = AQA2lc9lXbSBNxAAwS0z5X5T+iqJ0QJ12le/MA==
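
A side note rather than a required step: the caps used above ('allow *' on mon, osd and mgr) are wide open, which is fine for a lab exercise. The upstream Ceph documentation for OpenStack integration generally recommends narrower, RBD-profile-based caps scoped to the pools created earlier. A rough sketch of tightening the already-created users would look like the following (adjust pool names to your own setup):

## Sketch only: restrict the caps along the lines of the upstream Ceph/OpenStack docs
[root@ceph-node1 ceph]# ceph auth caps client.glance mon 'profile rbd' osd 'profile rbd pool=images' mgr 'profile rbd pool=images'
[root@ceph-node1 ceph]# ceph auth caps client.cinder mon 'profile rbd' osd 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images' mgr 'profile rbd pool=volumes, profile rbd pool=vms'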

  4. Distribute the main Ceph configuration file

[root@ceph-node1 ceph]# scp ceph.conf 192.168.25.100:/etc/ceph/
root@192.168.25.100's password: 
ceph.conf                                                                                                                                                 100%  202   224.6KB/s   00:00    
[root@ceph-node1 ceph]# scp ceph.conf 192.168.25.200:/etc/ceph/
root@192.168.25.200's password: 
ceph.conf                                                                                                                                                 100%  202   290.0KB/s   00:00    
[root@ceph-node1 ceph]# 
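
To double-check that the files actually landed on the OpenStack nodes, an optional listing over ssh is enough:

## Optional: confirm ceph.conf and the keyrings are in place on the controller
[root@ceph-node1 ceph]# ssh 192.168.25.100 ls -l /etc/ceph/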

II. OpenStack node operations

  1. Define the libvirt secret on the compute node
## Store the generated UUID in a variable; if you lose track of it, just echo it again
[root@compute ceph]# ID=$(uuidgen)
[root@compute ceph]# echo $ID
674e1fe8-23f9-4ada-a2f9-2aac2f01e69c
## The heredoc below writes the UUID straight into secret.xml
[root@compute ceph]# cat >> secret.xml << EOF
> <secret ephemeral='no' private='no'>
>  <uuid>$ID</uuid>
>  <usage type='ceph'>
>   <name>client.cinder secret</name>
>  </usage>
> </secret>
> EOF
[root@compute ceph]# virsh secret-define --file secret.xml 
Secret 674e1fe8-23f9-4ada-a2f9-2aac2f01e69c created
[root@compute ceph]# virsh secret-set-value --secret ${ID} --base64 $(cat ceph.client.cinder.keyring |grep key |awk -F ' ' '{print $3}')  
Secret value set
## Verify
[root@compute ceph]# virsh secret-list
 UUID                                  Usage
--------------------------------------------------------------------------------
 674e1fe8-23f9-4ada-a2f9-2aac2f01e69c  ceph client.cinder secret
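
If you want to be sure the stored value matches the keyring, libvirt can read the secret back; compare it against the key field in ceph.client.cinder.keyring (optional check):

## Optional: read back the stored secret value
[root@compute ceph]# virsh secret-get-value ${ID}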

  2. Upload ceph-14.2.22.tar.gz to both nodes and add the yum repositories
[root@controller ceph]# ls /root
anaconda-ks.cfg  ceph-14.2.22.tar.gz
[root@compute ceph]# ls /root
anaconda-ks.cfg  ceph-14.2.22.tar.gz

[root@controller ceph]# tar -zxvf /root/ceph-14.2.22.tar.gz -C /opt
 ......
[root@controller ceph]# ls /opt
centos  ceph  iaas
[root@controller ceph]# cat /etc/yum.repos.d/httpd.repo
[centos]
name=centos
baseurl=file:///opt/centos
gpgcheck=0
enabled=1
[iaas]
name=iaas
baseurl=file:///opt/iaas/iaas-repo
gpgcheck=0
enabled=1
[ceph]
name=ceph
baseurl=file:///opt/ceph
gpgcheck=0
enabled=1

## Compute node
[root@compute ceph]#  cat /etc/yum.repos.d/httpd.repo
[centos]
name=centos
baseurl=ftp://controller/centos
gpgcheck=0
enabled=1
[iaas]
name=iaas
baseurl=ftp://controller/iaas/iaas-repo
gpgcheck=0
enabled=1
[ceph]
name=ceph
baseurl=ftp://controller/ceph
gpgcheck=0
enabled=1
## If there are no errors, it worked
[root@compute ceph]# yum clean all && yum makecache
Loaded plugins: fastestmirror
Cleaning repos: centos ceph iaas
Cleaning up list of fastest mirrors
Loaded plugins: fastestmirror
Determining fastest mirrors
centos                                                                                                                                                               | 3.6 kB  00:00:00     
ceph                                                                                                                                                                 | 2.9 kB  00:00:00     
iaas                                                                                                                                                                 | 2.9 kB  00:00:00     
(1/10): centos/group_gz                                                                                                                                              | 153 kB  00:00:00     
(2/10): centos/primary_db                                                                                                                                            | 3.3 MB  00:00:00     
(3/10): centos/filelists_db                                                                                                                                          | 3.3 MB  00:00:00     
(4/10): centos/other_db                                                                                                                                              | 1.3 MB  00:00:00     
(5/10): ceph/filelists_db                                                                                                                                            | 183 kB  00:00:00     
(6/10): ceph/primary_db                                                                                                                                              | 265 kB  00:00:00     
(7/10): ceph/other_db                                                                                                                                                | 132 kB  00:00:00     
(8/10): iaas/filelists_db                                                                                                                                            | 768 kB  00:00:00     
(9/10): iaas/primary_db                                                                                                                                              | 597 kB  00:00:00     
(10/10): iaas/other_db                                                                                                                                               | 306 kB  00:00:00     
Metadata Cache Created
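
An optional extra check is to list the enabled repositories and make sure centos, iaas and ceph all show up with a non-zero package count:

## Optional: list the enabled repos
[root@compute ceph]# yum repolist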
  3. Install ceph-common and python-rbd on both nodes
[root@compute ceph]# yum install -y ceph-common python-rbd
[root@controller ceph]# yum install -y ceph-common python-rbd
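
ceph-common provides the ceph and rbd command-line clients, so a quick version check on either node confirms the packages installed cleanly (optional):

## Optional: confirm the client tools are available
[root@controller ceph]# ceph --version
[root@controller ceph]# rbd --version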
  4. Change the ownership of the keyring files on both nodes
## Compute node
[root@compute ceph]# ll
total 16
-rw-r--r-- 1 cinder cinder  64 Feb 17 01:04 ceph.client.cinder.keyring
-rw-r--r-- 1 root   root   202 Feb 17 01:11 ceph.conf
-rw-r--r-- 1 root   root    92 Jun 30  2021 rbdmap
-rw-r--r-- 1 root   root   165 Feb 17 01:19 secret.xml
[root@compute ceph]# chown -R cinder:cinder ceph.client.cinder.keyring
## Controller node
[root@controller ceph]# ll
total 16
-rw-r--r-- 1 root root  64 Feb 17 01:02 ceph.client.cinder.keyring
-rw-r--r-- 1 root root  64 Feb 17 00:58 ceph.client.glance.keyring
-rw-r--r-- 1 root root 202 Feb 17 01:11 ceph.conf
-rw-r--r-- 1 root root  92 Jun 30  2021 rbdmap
[root@controller ceph]# chown -R glance:glance ceph.client.glance.keyring 
[root@controller ceph]# chown -R cinder:cinder ceph.client.cinder.keyring 
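
With ceph.conf and the keyrings in place on both nodes, a simple end-to-end check is to talk to the cluster with the cinder credentials. This is optional and assumes the Ceph monitors are reachable from the OpenStack nodes:

## Optional: verify the cinder keyring can reach the cluster
[root@controller ceph]# ceph -s --id cinder
[root@controller ceph]# rbd ls volumes --id cinder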

Wrapping up

Next, we move on to the OpenStack node operations.
