There are many ways to deploy a Ceph cluster. In this article we use the ceph-deploy tool: it has been around for a long time, is mature and stable, is integrated by many automation tools, and is suitable for production deployments.

Cluster plan
OS version: CentOS 7.9.2009
Kernel version: 3.10.0-1160.45.1.el7.x86_64
Ceph version: 13.2.10 / mimic (stable)

Host plan:
192.168.81.159 ceph-deploy
192.168.81.158 ceph-mgr01
192.168.81.157 ceph-mgr02
192.168.81.156 ceph-mon01 ceph-mds01
192.168.81.155 ceph-mon02 ceph-mds02
192.168.81.154 ceph-mon03 ceph-mds03
192.168.81.153 ceph-node04
192.168.81.152 ceph-node03
192.168.81.151 ceph-node02
192.168.81.150 ceph-node01
Data-node disk plan: /dev/vdb /dev/vdc /dev/vdd   # 20 GB each

System initialization (a sketch of these steps follows the list):
1. Time synchronization
2. Disable the firewall
3. Disable SELinux
4. Set the hostname and add /etc/hosts entries
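
A minimal sketch of these steps, to be run as root on every host; the hostname and hosts entry below are illustrative and should be adjusted per the plan above.

#yum install chrony -y && systemctl enable --now chronyd      # time synchronization
#systemctl disable --now firewalld                            # disable the firewall
#setenforce 0 && sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
#hostnamectl set-hostname ceph-node01                         # use each host's own name from the plan
#echo "192.168.81.150 ceph-node01" >> /etc/hosts              # repeat for every entry in the plan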

Configure the yum repositories
#rpm -ivh https://mirrors.aliyun.com/ceph/rpm-mimic/el7/noarch/ceph-release-1-1.el7.noarch.rpm
#yum install -y epel-release
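
An optional sanity check that both repositories are now active:

#yum repolist enabled | grep -iE 'ceph|epel'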

Ceph deployment

1. Create a deployment user. The official documentation recommends deploying and running the Ceph cluster as a non-root user; the user only needs to be able to run the required privileged commands non-interactively via sudo.

#useradd cephadmin && echo cephadmin:1qaz2wsx | chpasswd      # on every node
#echo "cephadmin ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers

2. Configure passwordless SSH. ceph-deploy logs in to each node non-interactively as cephadmin to carry out the automated deployment. On the ceph-deploy host, switch to the cephadmin user, run ssh-keygen to generate a key pair, then run ssh-copy-id cephadmin@$IP to distribute the public key to every node; a sketch follows.
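
A minimal sketch of the key generation and distribution, run as cephadmin on the ceph-deploy host; the host list is simply the one from the plan above.

$ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa     # generate the key pair once
$for host in ceph-mon01 ceph-mon02 ceph-mon03 ceph-mgr01 ceph-mgr02 ceph-node01 ceph-node02 ceph-node03 ceph-node04; do ssh-copy-id cephadmin@$host; done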

3. Install the deployment tools

#yum install ceph-deploy ceph-common -y          # only on the deploy node
#yum install ceph-common -y                      # on all nodes

4. Initialize the cluster. On the admin (ceph-deploy) node, initialize the mon node configuration:

#mkdir ceph-cluster && cd ceph-cluster

The following command generates the basic cluster configuration; ceph-mon02 and ceph-mon03 will be added to the cluster later:

ceph-deploy new --cluster-network 192.168.81.0/24 --public-network 192.168.80.0/24 ceph-mon01
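
For reference, the ceph.conf that ceph-deploy new writes into the working directory looks roughly like the following; the fsid and mon address are illustrative and will differ in your cluster.

[global]
fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993      # illustrative UUID
mon_initial_members = ceph-mon01
mon_host = 192.168.80.156                        # assumed public-network address of ceph-mon01
public_network = 192.168.80.0/24
cluster_network = 192.168.81.0/24
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx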

5. Initialize the ceph-node hosts. This command installs the Ceph components each of the listed nodes needs, one node at a time; because of --no-adjust-repos it reuses the EPEL and Ceph repositories configured earlier rather than rewriting them:

ceph-deploy install --no-adjust-repos --nogpgcheck ceph-node01 ceph-node02 ceph-node03 ceph-node04

6. Configure the mon node and generate the keys. Install ceph-mon on each mon node, then initialize the mon with ceph-deploy:

#yum install ceph-mon -y             # on the mon nodes
#ceph-deploy mon create-initial      # on the ceph-deploy node
Verify on ceph-mon01:
#ps -ef |grep ceph-mon
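
If create-initial succeeds it also drops the bootstrap keyrings into the working directory on the ceph-deploy node; a quick check (file names per the upstream defaults):

#ls ~/ceph-cluster/      # expect ceph.client.admin.keyring and ceph.bootstrap-{mds,mgr,osd,rgw}.keyring next to ceph.conf and ceph.mon.keyring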

7. Push the admin key to the node hosts and the admin host. From the ceph-deploy node, copy the configuration file and admin keyring to every host that will run ceph management commands, so that you do not have to specify the mon address and ceph.client.admin.keyring on every ceph command later. The mon nodes also need the cluster configuration and authentication files synchronized to them.

#yum install ceph-common -y      # install on the ceph-deploy node and the ceph-node nodes
#ceph-deploy admin ceph-node01 ceph-node02 ceph-node03 ceph-node04 ceph-deploy      # push the config file and admin keyring; run on the ceph-deploy node

Set permissions on the keyring file:

#setfacl -m u:cephadmin:rw /etc/ceph/ceph.client.admin.keyring      # run on the ceph-node and ceph-deploy nodes

8. Deploy the ceph-mgr node

#yum install ceph-mgr -y                # run on the ceph-mgr node
#ceph-deploy mgr create ceph-mgr01      # run on the ceph-deploy node

9. Check the cluster status from the ceph-deploy node

#ceph -s
#ceph health detail
#ceph config set mon auth_allow_insecure_global_id_reclaim false      # disable the insecure global_id reclaim mode

10. Deploy the OSDs. List the disks on the node hosts:

#ceph-deploy disk list ceph-node01 ceph-node02 ceph-node03 ceph-node04

Wipe the data on the node disks:

#ceph-deploy  disk zap ceph-node01  /dev/vdb
#ceph-deploy disk zap ceph-node01 /dev/vdc
#ceph-deploy disk zap ceph-node01 /dev/vdd
#ceph-deploy disk zap ceph-node02 /dev/vdb
#ceph-deploy disk zap ceph-node02 /dev/vdc
#ceph-deploy disk zap ceph-node02 /dev/vdd
#ceph-deploy disk zap ceph-node03 /dev/vdb
#ceph-deploy disk zap ceph-node03 /dev/vdc
#ceph-deploy disk zap ceph-node03 /dev/vdd
#ceph-deploy disk zap ceph-node04 /dev/vdb
#ceph-deploy disk zap ceph-node04 /dev/vdc
#ceph-deploy disk zap ceph-node04 /dev/vdd

Add the OSDs:

#ceph-deploy osd create ceph-node01 --data /dev/vdb
#ceph-deploy osd create ceph-node01 --data /dev/vdc
#ceph-deploy osd create ceph-node01 --data /dev/vdd
#ceph-deploy osd create ceph-node02 --data /dev/vdb
#ceph-deploy osd create ceph-node02 --data /dev/vdc
#ceph-deploy osd create ceph-node02 --data /dev/vdd
#ceph-deploy osd create ceph-node03 --data /dev/vdb
#ceph-deploy osd create ceph-node03 --data /dev/vdc
#ceph-deploy osd create ceph-node03 --data /dev/vdd
#ceph-deploy osd create ceph-node04 --data /dev/vdb
#ceph-deploy osd create ceph-node04 --data /dev/vdc
#ceph-deploy osd create ceph-node04 --data /dev/vdd
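
The 24 zap/create commands above can equally be driven by a small loop on the ceph-deploy node; a sketch that assumes exactly the hostnames and devices from the plan:

#for node in ceph-node01 ceph-node02 ceph-node03 ceph-node04; do for dev in /dev/vdb /dev/vdc /dev/vdd; do ceph-deploy disk zap $node $dev && ceph-deploy osd create $node --data $dev; done; done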

11. Enable the OSD services to start on boot

Tested on ceph-node01; the other node hosts are handled the same way.
root@ceph-node01:~# ps -ef|grep osd
ceph 15521 1 0 00:08 ? 00:00:03 /usr/bin/ceph-osd -f --cluster ceph --id 3 --setuser ceph --setgroup ceph
ceph 17199 1 0 00:09 ? 00:00:03 /usr/bin/ceph-osd -f --cluster ceph --id 4 --setuser ceph --setgroup ceph
ceph 18874 1 0 00:09 ? 00:00:03 /usr/bin/ceph-osd -f --cluster ceph --id 5 --setuser ceph --setgroup ceph
root@ceph-node01:~# systemctl enable ceph-osd@3 ceph-osd@4 ceph-osd@5
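
A hedged alternative that enables whatever OSD units exist on the local node without typing the IDs by hand; it assumes the default /var/lib/ceph/osd/ceph-<id> directory layout:

#for dir in /var/lib/ceph/osd/ceph-*; do systemctl enable ceph-osd@${dir##*-}; done      # run on each ceph-node host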

Scaling the cluster for high availability

1. Add more ceph-mon nodes

#yum install ceph-mon -y                        # run on the new mon nodes
#ceph-deploy mon add ceph-mon02 ceph-mon03      # run on the admin (ceph-deploy) node

Verify the cluster information:

#ceph -s
ceph-mon quorum status:
#ceph quorum_status --format json-pretty

2. Add another mgr node

#yum install ceph-mgr -y               # run on the new mgr node
#ceph-deploy mgr create ceph-mgr02     # run on the admin (ceph-deploy) node

Verify:

ceph -s

Deploying CephFS

1. Install the ceph-mds package. To use CephFS an MDS service must be deployed; since server resources are limited, I co-locate the MDS daemons with the mon nodes. Install ceph-mds on all mon nodes:

#yum install ceph-mds -y

2. Add the MDS services to the cluster

#ceph-deploy mds create ceph-mds01
#ceph-deploy mds create ceph-mds02
#ceph-deploy mds create ceph-mds03

3. Create the CephFS metadata and data pools

Create cephfs-metadata:
#ceph osd pool create cephfs-metadata 32 32
Create cephfs-data:
#ceph osd pool create cephfs-data 64 64
Create a CephFS file system named mycephfs:
#ceph fs new mycephfs cephfs-metadata cephfs-data

4. Check

#ceph fs ls
#ceph fs status mycephfs
#ceph mds stat
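
A hedged sketch of mounting the new file system from a client with the kernel driver; the mon address, user, and secret-file path are illustrative (the secret file contains only the key value from ceph.client.admin.keyring):

#yum install ceph-common -y
#mkdir -p /mnt/mycephfs
#mount -t ceph 192.168.80.156:6789:/ /mnt/mycephfs -o name=admin,secretfile=/etc/ceph/admin.secret
#df -h /mnt/mycephfs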

Deploying RadosGW

Deploy the ceph-mgr01 and ceph-mgr02 servers as a highly available RadosGW service.

1. Install ceph-radosgw on ceph-mgr01 and ceph-mgr02
#yum install ceph-radosgw -y
2. Initialize the RadosGW services from the ceph-deploy node
#ceph-deploy rgw create ceph-mgr01
#ceph-deploy rgw create ceph-mgr02
3. Check the radosgw status
#ps -ef|grep radosgw
#ceph -s

4. Test

curl http://192.168.81.158:7480
curl http://192.168.81.157:7480
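
To actually use the gateway over S3 you still need a RadosGW user; a minimal sketch (the uid and display name are made up for illustration):

#radosgw-admin user create --uid=testuser --display-name="test user"      # run on a node with the admin keyring; the JSON output contains the access_key and secret_key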

MDS high availability and tuning

To get both high performance and redundancy, suppose we run four MDS daemons and set two ranks. Two of the MDS daemons are then assigned to the two ranks, and the remaining two act as their standbys. For testing, we reuse ceph-mgr02 as an additional MDS.

Install ceph-mds on ceph-mgr02:
root@ceph-mgr02:~# yum install ceph-mds -y
On ceph-deploy, add ceph-mgr02 as an MDS:
cephadmin@ceph-deploy:~/ceph-cluster$ ceph-deploy mds create ceph-mgr02
ceph fs status           # check the CephFS cluster status
ceph fs get mycephfs     # check the file system state

There are now four MDS servers, but only one is active with three standbys. We can improve the layout by running two active and two standby. (Many organizations simply run all of their MDS daemons as active, with no standbys at all.) Set the maximum number of simultaneously active MDS daemons to 2:

ceph fs set mycephfs max_mds 2

At the moment ceph-mds01 and ceph-mds02 are active, while ceph-mgr02 and ceph-mds03 are standby. We can now pair ceph-mds03 with ceph-mds01 and ceph-mgr02 with ceph-mds02, so that each active MDS has a fixed standby partner. Edit the configuration file (ceph.conf in the working directory on ceph-deploy) as follows:

[mds.ceph-mds01]
mds_standby_for_name = ceph-mds03
mds_standby_replay = true
[mds.ceph-mds02]
mds_standby_for_name = ceph-mgr02
mds_standby_replay = true

Push the configuration file to the relevant nodes so that it takes effect when the MDS services restart:

$ceph-deploy --overwrite-conf config push ceph-mon03
$ceph-deploy --overwrite-conf config push ceph-mon02
$ceph-deploy --overwrite-conf config push ceph-mon01
$ceph-deploy --overwrite-conf config push ceph-mgr02

Restart the MDS daemons, the active ones first and then the standbys:

$ systemctl restart ceph-mds@ceph-mds01.service
$ systemctl restart ceph-mds@ceph-mgr02.service
$ systemctl restart ceph-mds@ceph-mds02.service
$ systemctl restart ceph-mds@ceph-mds03.service

Verify:

[cephadmin@ceph-deploy ceph-cluster]$ ceph fs status
mycephfs - 0 clients
========
+------+--------+------------+---------------+-------+-------+
| Rank | State | MDS | Activity | dns | inos |
+------+--------+------------+---------------+-------+-------+
| 0 | active | ceph-mds01 | Reqs: 0 /s | 14 | 13 |
| 1 | active | ceph-mds02 | Reqs: 0 /s | 10 | 13 |
+------+--------+------------+---------------+-------+-------+
+-----------------+----------+-------+-------+
| Pool | type | used | avail |
+-----------------+----------+-------+-------+
| cephfs-metadata | metadata | 21.0k | 31.8G |
| cephfs-data | data | 0 | 31.8G |
+-----------------+----------+-------+-------+
+-------------+
| Standby MDS |
+-------------+
| ceph-mgr02 |
| ceph-mds03 |
+-------------+
MDS version: ceph version 13.2.10 (564bdc4ae87418a232fc901524470e1a0f76d641) mimic (stable)
[cephadmin@ceph-deploy ceph-cluster]$ ceph fs get mycephfs
Filesystem 'mycephfs' (1)
fs_name mycephfs
epoch 17861
flags 12
created 2021-12-03 18:36:45.977768
modified 2021-12-07 16:52:53.484983
tableserver 0
root 0
session_timeout 60
session_autoclose 300
max_file_size 1099511627776
min_compat_client -1 (unspecified)
last_failure 0
last_failure_osd_epoch 211
compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
max_mds 2
in 0,1
up {0=45342,1=24109}
failed
damaged
stopped
data_pools [4]
metadata_pool 3
inline_data disabled
balancer
standby_count_wanted 1
45342: 192.168.80.156:6800/1702341976 'ceph-mds01' mds.0.17850 up:active seq 13 (standby for rank 0 'ceph-mds03')
24109: 192.168.80.155:6800/1604771179 'ceph-mds02' mds.1.17857 up:active seq 37 (standby for rank 1 'ceph-mgr02')