OpenStack Cinder with Ceph Storage: A Complete Lab Walkthrough

Cinder is the OpenStack component that manages and provides block storage services.
Preparing the Ceph installation environment:

I. Lab environment
ceph_node1   192.168.40.11  192.168.6.81
ceph_node2   192.168.40.12  192.168.6.82
ceph_node3   192.168.40.13  192.168.6.83

Add three disks, 20 GB each, to every one of the three virtual machines.

II. Configure an IP address on each of the three VMs and make sure they can reach the Internet

cd  /etc/sysconfig/network-scripts

vim ifcfg-ens33

TYPE=Ethernet

PROXY_METHOD=none

BROWSER_ONLY=no

BOOTPROTO=none

DEFROUTE=yes

NAME=ens33

DEVICE=ens33

ONBOOT=yes

IPADDR=192.168.40.11

PREFIX=24

GATEWAY=192.168.40.1

DNS1=192.168.40.1

systemctl restart network

Set the other two hosts to 192.168.40.12 and 192.168.40.13 respectively, then verify that the three VMs can reach each other and the Internet.
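A quick connectivity check (my own verification, not part of the original steps); run it from ceph_node1, and any reachable external host works for the Internet test:

ping -c 3 192.168.40.12      # ceph_node2
ping -c 3 192.168.40.13      # ceph_node3
ping -c 3 mirrors.aliyun.com # external reachability, needed later for the yum repositories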

III. Set the hostnames

hostnamectl set-hostname ceph_node1

hostnamectl set-hostname ceph_node2

hostnamectl set-hostname ceph_node3

IV. Disable the firewall and SELinux on all nodes

systemctl disable firewalld

systemctl stop firewalld

vim /etc/selinux/config      # set SELINUX=disabled to make the change permanent

setenforce 0      # switch to permissive mode immediately, without a reboot
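The same SELinux change can be scripted; this is just a non-interactive sketch of the edit above:

sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
getenforce      # reports Permissive after setenforce 0, and Disabled after the next reboot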

V. Configure the /etc/hosts file on all nodes

[root@ceph_node1 ~] cat /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4

::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.6.81  ceph_node1

192.168.6.82  ceph_node2

192.168.6.83  ceph_node3

VI. On ceph_node1, generate an SSH key pair and copy the public key to every node (including ceph_node1 itself) to establish passwordless SSH

ssh-keygen

ssh-copy-id -i /root/.ssh/id_rsa.pub ceph_node1

ssh-copy-id -i /root/.ssh/id_rsa.pub ceph_node2

ssh-copy-id -i /root/.ssh/id_rsa.pub ceph_node3
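Before running ceph-deploy, it is worth confirming that the passwordless login works; a loop like the following (a quick check of my own) should print each hostname without asking for a password:

for node in ceph_node1 ceph_node2 ceph_node3; do ssh $node hostname; done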

VII. Configure a time server

1. Install chrony on ceph_node1; the other nodes will synchronize to it

vim /etc/chrony.conf

# Allow NTP client access from local network.

allow 192.168.40.0/24      # the network that is allowed to sync from this server

# Serve time even if not synchronized to a time source.

local stratum 10

systemctl enable chronyd

systemctl restart chronyd

2. Point ceph_node2 and ceph_node3 at ceph_node1

vim /etc/chrony.conf

server 192.168.40.11  iburst

server 1.centos.pool.ntp.org iburst

server 2.centos.pool.ntp.org iburst

server 3.centos.pool.ntp.org iburst

systemctl restart chronyd
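On ceph_node2 and ceph_node3, chronyc can confirm that ceph_node1 is actually being used as a time source (a verification step, not in the original write-up):

chronyc sources -v      # 192.168.40.11 should appear in the list; '^*' marks the currently selected source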

VIII. Configure the yum repositories on all three nodes

wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo

cat << EOF | tee /etc/yum.repos.d/ceph.repo

[Ceph]

name=Ceph packages for \$basearch

baseurl=http://mirrors.163.com/ceph/rpm-nautilus/el7/\$basearch

enabled=1

gpgcheck=1

type=rpm-md

gpgkey=https://download.ceph.com/keys/release.asc

priority=1

[Ceph-noarch]

name=Ceph noarch packages

baseurl=http://mirrors.163.com/ceph/rpm-nautilus/el7/noarch

enabled=1

gpgcheck=1

type=rpm-md

gpgkey=https://download.ceph.com/keys/release.asc

priority=1

[ceph-source]

name=Ceph source packages

baseurl=http://mirrors.163.com/ceph/rpm-nautilus/el7/SRPMS

enabled=1

gpgcheck=1

type=rpm-md

gpgkey=https://download.ceph.com/keys/release.asc

EOF

yum clean all

yum list all


Deploying the Ceph cluster

I. Install ceph-deploy on ceph_node1

yum -y install ceph-deploy 

II. On ceph_node1, create a cluster working directory; all subsequent commands are run from it (the location and name are arbitrary)

mkdir /cluster

III. Run the deployment from ceph_node1 to create a new Ceph cluster

cd /cluster

ceph-deploy new ceph_node1 ceph_node2 ceph_node3

If the output contains no errors, the step succeeded; otherwise, troubleshoot based on the reported error messages.

IV. Install the ceph packages on all nodes

yum -y install ceph

ceph -v

ceph version 14.2.15 (afdd217ae5fb1ed3f60e16bd62357ca58cc650e5) nautilus (stable)

V. Deploy the daemon roles from ceph_node1

1. Create the mon daemons

ceph-deploy mon create-initial  

2. Push the ceph.client.admin.keyring (together with ceph.conf) from the current directory to /etc/ceph on the remote hosts

ceph-deploy admin ceph_node1 ceph_node2 ceph_node3
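With the admin keyring now on every node, the cluster status can be checked from any of them; this is just a sanity check, and HEALTH_WARN is expected because no OSDs exist yet:

ceph -s      # should report three monitors (ceph_node1, ceph_node2, ceph_node3) in quorum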

3. Deploy the mgr daemons; the mgr dashboard module can provide a web UI for managing Ceph (the dashboard is optional, but Nautilus expects at least one mgr to be running)

ceph-deploy mgr create ceph_node1 ceph_node2 ceph_node3 

4. Install the RADOS gateway (rgw) on ceph_node1

yum -y install ceph-radosgw

ceph-deploy rgw create ceph_node1

5. Deploy the mds daemons, which provide metadata services for CephFS

ceph-deploy mds create ceph_node1 ceph_node2 ceph_node3

6. Initialize the OSD disks

ceph-deploy osd create --data /dev/sdb ceph_node1

ceph-deploy osd create --data /dev/sdc ceph_node1

ceph-deploy osd create --data /dev/sdd ceph_node1

ceph-deploy osd create --data /dev/sdb ceph_node2

ceph-deploy osd create --data /dev/sdc ceph_node2

ceph-deploy osd create --data /dev/sdd ceph_node2

ceph-deploy osd create --data /dev/sdb ceph_node3

ceph-deploy osd create --data /dev/sdc ceph_node3

ceph-deploy osd create --data /dev/sdd ceph_node3
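The nine commands above can also be written as a loop; this is only shorthand for the same ceph-deploy calls, run from /cluster on ceph_node1:

for host in ceph_node1 ceph_node2 ceph_node3; do
  for dev in /dev/sdb /dev/sdc /dev/sdd; do
    ceph-deploy osd create --data $dev $host
  done
done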

7. Check the Ceph cluster status

ceph -s

ceph osd tree

ceph osd df

Managing the Ceph cluster

1. Create a storage pool

ceph osd pool create mypool1 128

2. View details of the pools that have been created

ceph osd pool ls detail

3. View and adjust the replica count and pg_num

ceph osd pool get mypool1 size

ceph osd pool get mypool1 pg_num

ceph osd pool set mypool1 pg_num 120

ceph osd pool set mypool1 pgp_num 120
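For reference, a common rule of thumb for sizing pg_num (not part of the original lab) is roughly (number of OSDs × 100) / replica count, rounded to a power of two: with 9 OSDs and 3 replicas that is 9 × 100 / 3 = 300, i.e. about 256 PGs spread across all pools. Power-of-two values such as 128 or 256 are preferred; 120 is used above only to demonstrate the set command.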

4. Create an erasure-coded (EC) pool: define an EC profile first, then create the pool with it

ceph osd erasure-code-profile set ec001 k=3 m=2 crush-failure-domain=osd

ceph osd pool create mypool2 128 erasure ec001

ceph osd pool ls detail

5. Tag mypool2 for use by rgw

ceph osd pool application enable mypool2 rgw

6. From a client, upload a file to mypool2 and test downloading it

rados -p mypool2 put  userinfo /etc/passwd

rados -p mypool2 ls

rados -p mypool2 stat userinfo

rados -p mypool2 get userinfo  /root/passwd

7. Create mypool3 as an RBD pool, create a volume in it, and map the volume to an application server

ceph osd pool create mypool3 128

ceph osd pool application enable mypool3 rbd

rbd create mypool3/disk01 --size 1G

rbd info mypool3/disk01

rbd map mypool3/disk01      # this fails because the kernel RBD client does not support some of the default image features; run the command suggested in the error message:

rbd feature disable mypool3/disk01 object-map fast-diff deep-flatten

rbd map mypool3/disk01

ll /dev/rbd0

mkfs.ext4 /dev/rbd0

mkdir -p /data/rbd0

mount  /dev/rbd0  /data/rbd0/

blkid

vim /etc/fstab

/dev/rbd0  /data/rbd0   ext4  defaults,_netdev 0 0

mount -a

Configure the image to be mapped automatically at boot

vim /etc/ceph/rbdmap

mypool3/disk01     id=admin,keyring=/etc/ceph/ceph.client.admin.keyring

systemctl enable rbdmap.service

systemctl restart rbdmap.service
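After restarting rbdmap, the mapping can be verified (a quick check):

rbd showmapped      # should list mypool3/disk01 mapped to /dev/rbd0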

8. Authorization (cephx capabilities)

ceph auth get-or-create client.alice mon 'allow r' osd 'allow rw pool=mypool2' -o /etc/ceph/ceph.client.alice.keyring

ceph auth get client.alice

rados -p mypool2 put group /etc/group --id alice

rados -p mypool2 --id alice ls      
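Because client.alice was only granted rw on mypool2, a write to any other pool should be refused; the following negative test (my own addition, with a hypothetical object name) illustrates the capability boundary:

rados -p mypool3 put testobj /etc/hosts --id alice      # expected to fail with 'Operation not permitted'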

Deleting a Ceph pool

1. On all monitor nodes, add the following to /etc/ceph/ceph.conf and restart the monitor service

mon_allow_pool_delete = true

systemctl restart ceph-mon.target

ceph osd pool rm mypool1 mypool1 --yes-i-really-really-mean-it

Integrating Cinder with Ceph

I. Network configuration: add an extra NIC to each of the three OpenStack nodes so they can reach the Internet and install packages

II. Install the Ceph client, i.e. the ceph-common packages, on all OpenStack nodes

[root@ceph_node1 ~]cd /etc/yum.repos.d/

[root@ceph_node1 ~]scp ceph.repo epel.repo root@192.168.6.11:$PWD

[root@ceph_node1 ~]scp ceph.repo epel.repo root@192.168.6.21:$PWD

[root@ceph_node1 ~]scp ceph.repo epel.repo root@192.168.6.22:$PWD

[root@controller ~] yum install ceph-common      # fails due to missing dependencies

[root@ceph_node1 ~]yum provides "*/librabbitmq.so.4"

[root@ceph_node1 ~]yum reinstall --downloadonly --downloaddir=/rpm librabbitmq

[root@ceph_node1 ~] yum provides "*/liblz4.so.1"

[root@ceph_node1 ~] yum update --downloadonly --downloaddir=/rpm lz4

[root@ceph_node1 ~] scp /rpm/* root@192.168.6.11:/root

[root@ceph_node1 ~] scp /rpm/* root@192.168.6.21:/root

[root@ceph_node1 ~]scp /rpm/* root@192.168.6.22:/root

Continue installing the ceph-common packages on the OpenStack nodes; both the controller and the compute nodes need them.

[root@controller ~] rpm -ivh librabbitmq-0.8.0-3.el7.x86_64.rpm

[root@controller ~] rpm -ivh lz4-1.8.3-1.el7.x86_64.rpm

[root@controller ~] yum install ceph-common      # now installs successfully

III. Copy the Ceph configuration file and admin keyring to the OpenStack nodes

[root@ceph_node1 ~] scp /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring  root@192.168.6.11:/etc/ceph/

[root@ceph_node1 ~] scp /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring  root@192.168.6.21:/etc/ceph/

[root@ceph_node1 ~] scp /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring  root@192.168.6.22:/etc/ceph/

IV. Verify that the OpenStack nodes can access the Ceph cluster
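The configuration file and admin keyring copied in step III are enough for a basic check from the controller (my own verification commands, not in the original text):

[root@controller ~] ceph -s
[root@controller ~] ceph osd pool ls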

V. Create an RBD pool for volumes and a dedicated cephx user for Cinder

ceph osd pool create volumes 128

ceph osd pool application enable volumes rbd

ceph auth get-or-create client.volumes mon 'profile rbd' osd 'profile rbd pool=volumes' -o /etc/ceph/ceph.client.volumes.keyring

scp /etc/ceph/ceph.client.volumes.keyring root@192.168.6.11:/etc/ceph

scp /etc/ceph/ceph.client.volumes.keyring root@192.168.6.21:/etc/ceph

scp /etc/ceph/ceph.client.volumes.keyring root@192.168.6.22:/etc/ceph
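Once the keyring is in /etc/ceph on the OpenStack nodes, the restricted user can be tested there too (a sketch; the rbd client finds ceph.client.volumes.keyring automatically by its default name):

rbd ls volumes --id volumes      # empty output is fine, an authentication error is not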

VI. Generate a UUID for the secret shared between the OpenStack nodes and Ceph

uuidgen > /root/uuid.txt      # this value must match rbd_secret_uuid in cinder.conf and the <uuid> in the libvirt secret XML below

VII. Edit the Cinder configuration file on the controller node

[root@controller ~] vim /etc/cinder/cinder.conf

enabled_backends=lvm,glusterfs,ceph

[ceph]

volume_driver=cinder.volume.drivers.rbd.RBDDriver

rbd_ceph_conf=/etc/ceph/ceph.conf

rbd_pool=volumes

rbd_user=volumes

rbd_secret_uuid=fed13d88-3484-4c07-bf19-a36d6e77525c

rbd_max_clone_depth=5

rbd_store_chunk_size=4

rbd_flatten_volume_from_snapshot=false

volume_backend_name=ceph

rados_connect_timeout=-1

[root@controller ~] systemctl restart openstack-cinder-api.service

[root@controller ~] systemctl restart openstack-cinder-volume.service

[root@controller ~] systemctl restart openstack-cinder-scheduler.service

[root@controller ~] source keystonerc_admin

[root@controller ~(keystone_admin)] cinder type-create ceph

[root@controller ~(keystone_admin)] cinder type-key ceph set volume_backend_name=ceph
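To confirm that the new backend and volume type are registered, the standard Cinder CLI checks below (not shown in the original) should list a cinder-volume service whose host ends in @ceph, plus the ceph type with its backend extra spec:

[root@controller ~(keystone_admin)] cinder service-list
[root@controller ~(keystone_admin)] cinder type-list
[root@controller ~(keystone_admin)] cinder extra-specs-list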

Run the following on the controller node and on every compute node; otherwise volumes cannot be attached. It sets up the key that libvirt, the KVM layer on the OpenStack nodes, uses to authenticate to Ceph.

KVM guests need cephx authentication to access the Ceph cluster, so libvirt has to be given the account's credentials here.

vim /root/ceph.xml

<secret ephemeral='no' private='no'>

<uuid>fed13d88-3484-4c07-bf19-a36d6e77525c</uuid>

<usage type="ceph">

<name>client.volumes secret</name>

</usage>

</secret>

ceph auth get-key client.volumes > mykey.txt

scp /root/ceph.xml /root/mykey.txt root@compute01:/root

scp /root/ceph.xml /root/mykey.txt root@compute02:/root

Create the libvirt secret from the XML file

virsh secret-define --file=/root/ceph.xml

Set the secret's value to the account's key

virsh secret-set-value --secret fed13d88-3484-4c07-bf19-a36d6e77525c --base64 $(cat /root/mykey.txt)
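The secret can then be verified on each node (a quick check):

virsh secret-list      # the UUID fed13d88-3484-4c07-bf19-a36d6e77525c should be listed
virsh secret-get-value fed13d88-3484-4c07-bf19-a36d6e77525c      # prints the base64 key set above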

Create a volume and attach it to a cloud instance

source keystonerc_user1

cinder create --volume-type ceph --display-name disk1  1

rbd --id volumes -p volumes ls

volume-35714b93-15ce-44ea-b25e-e3db1b41eff3

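To attach the new volume to a cloud instance, as the section title says, something like the following should work, assuming the openstack client is installed; the instance name myvm is a placeholder for an existing server in the user1 project:

[root@controller ~(keystone_user1)] openstack server add volume myvm disk1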
