Ceph cluster installation (block devices): 3 PCs (3 mon + 3 osd + 3 mgr) + 1 ceph-deploy machine

This document walks through deploying and managing a Ceph storage cluster: creating a sudo user, disabling SELinux, setting up time synchronization, configuring the yum repositories, creating the deployment configuration, purging an old Ceph installation, installing the Ceph packages, creating and managing monitors, configuring OSDs, adjusting the CRUSH map, creating and managing pools, and installing and using a client. The steps run from basic host preparation through the more involved cluster operations.

Operations on the Ceph nodes (ceph-1, ceph-2, ceph-3):
Create a sudo user, install the deploy node's SSH public key, and allow the deploy node to log in with that user.
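A minimal sketch of that preparation on each node, assuming the deploy user is called fungaming (the name used with --username below) and the deploy node's public key is at hand:
useradd -m fungaming
echo "fungaming ALL = (root) NOPASSWD:ALL" > /etc/sudoers.d/fungaming
chmod 0440 /etc/sudoers.d/fungaming
mkdir -p /home/fungaming/.ssh
echo "<deploy node public key>" >> /home/fungaming/.ssh/authorized_keys
chown -R fungaming:fungaming /home/fungaming/.ssh
chmod 700 /home/fungaming/.ssh; chmod 600 /home/fungaming/.ssh/authorized_keys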
Disable SELinux permanently by setting SELINUX=disabled in /etc/selinux/config, then turn it off for the running system:
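On a stock CentOS 7 /etc/selinux/config the file change can be scripted, e.g.:
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config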
setenforce 0
yum install yum-plugin-priorities -y
Time synchronization:
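chronyc talks to a running chronyd; if the nodes do not have it yet, a minimal setup with the default CentOS chrony configuration is:
yum install -y chrony
systemctl enable chronyd
systemctl start chronyd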
chronyc makestep
Operations on the deploy node:
yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
cat << EOM > /etc/yum.repos.d/ceph.repo
[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-mimic/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
EOM
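The rest of the walkthrough assumes ceph-deploy itself is present on the deploy node; if it is not, it can be installed from the noarch repo just added:
yum update -y
yum install -y ceph-deploy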


cat << EOM >>/etc/hosts
192.168.200.237 ceph-1
192.168.200.238 ceph-2
192.168.200.239 ceph-3
EOM
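If a separate client machine will be used later (the ceph-client node in the client section), it needs a hosts entry as well; the address below is a placeholder:
echo "<client-ip> ceph-client" >> /etc/hosts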


Create a working directory for the deployment configuration files; all subsequent ceph-deploy commands are run from this directory:
mkdir my-cluster
cd my-cluster


Clean up any previous Ceph installation (only needed when redeploying):
ceph-deploy purge {ceph-node} [{ceph-node}]
ceph-deploy purgedata {ceph-node} [{ceph-node}]
ceph-deploy forgetkeys
rm ceph.*


Create the cluster definition (this generates ceph.conf and the monitor keyring in the working directory):
ceph-deploy --username fungaming new ceph-1 ceph-2 ceph-3


Edit ceph.conf and add the following; the first three settings belong in the [global] section that ceph-deploy new created:
public network = 192.168.200.0/24
osd pool default pg num = 128
osd pool default pgp num = 128
[mon]
mon allow pool delete = true
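For reference, after these edits the file should look roughly like the sketch below; the fsid is generated by ceph-deploy new and will differ:
[global]
fsid = <generated-uuid>
mon_initial_members = ceph-1, ceph-2, ceph-3
mon_host = 192.168.200.237,192.168.200.238,192.168.200.239
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public network = 192.168.200.0/24
osd pool default pg num = 128
osd pool default pgp num = 128

[mon]
mon allow pool delete = true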


export CEPH_DEPLOY_REPO_URL=http://mirrors.163.com/ceph/rpm-mimic/el7
export CEPH_DEPLOY_GPG_URL=http://mirrors.163.com/ceph/keys/release.asc
Install the Ceph packages (using the 163 mirror configured above):
ceph-deploy --username fungaming install ceph-1 ceph-2 ceph-3
Deploy the monitors:
ceph-deploy --username fungaming mon create ceph-1 ceph-2 ceph-3
ceph-deploy --username fungaming mon create-initial
Push the configuration and admin keyring to all nodes:
ceph-deploy --username fungaming admin ceph-1 ceph-2 ceph-3
Deploy the managers:
ceph-deploy --username fungaming mgr create ceph-1 ceph-2 ceph-3
List the disks available on each node for OSDs:
ceph-deploy disk list ceph-1 ceph-2 ceph-3
Zap (wipe) the data disks; repeat for every disk that will become an OSD (here /dev/sdb and /dev/sdc on each of the three nodes):
ceph-deploy disk zap ceph-1 /dev/sdb
Create the OSDs:
ceph-deploy --username fungaming osd create --data /dev/sdb ceph-1
ceph-deploy --username fungaming osd create --data /dev/sdb ceph-2
ceph-deploy --username fungaming osd create --data /dev/sdb ceph-3
ceph-deploy --username fungaming osd create --data /dev/sdc ceph-1
ceph-deploy --username fungaming osd create --data /dev/sdc ceph-2
ceph-deploy --username fungaming osd create --data /dev/sdc ceph-3

# Installation complete.


PG count, following the calculator at http://ceph.com/pgcalc/:
Total PGs = ((Total_number_of_OSD * 100) / max_replication_count) / pool_count
With 6 OSDs, a replication count of 3 and 2 pools:
(6 * 100 / 3) / 2 = 100, which is rounded up to the nearest power of two, so pg_num is set to 128.


Push the updated configuration to all nodes:
ceph-deploy --username fungaming --overwrite-conf config push ceph-1 ceph-2 ceph-3
After the push, restart the monitors on each Ceph node:
systemctl restart ceph-mon.target


Check cluster health and status:
ceph health detail
ceph -s
Check the monitor quorum status:
ceph quorum_status --format json-pretty


Create the root and host buckets:
ceph osd crush add-bucket root-nvme root
ceph osd crush add-bucket root-ssd root
ceph osd crush add-bucket host1-nvme host
ceph osd crush add-bucket host2-nvme host
ceph osd crush add-bucket host3-nvme host
ceph osd crush add-bucket host1-ssd host
ceph osd crush add-bucket host2-ssd host
ceph osd crush add-bucket host3-ssd host
Move the host buckets under their root buckets:
ceph osd crush move host1-ssd root=root-ssd
ceph osd crush move host2-ssd root=root-ssd
ceph osd crush move host3-ssd root=root-ssd
ceph osd crush move host3-nvme root=root-nvme
ceph osd crush move host2-nvme root=root-nvme
ceph osd crush move host1-nvme root=root-nvme
Move the OSDs into the host buckets:
ceph osd crush move osd.0 host=host1-nvme
ceph osd crush move osd.1 host=host2-nvme
ceph osd crush move osd.2 host=host3-nvme
ceph osd crush move osd.3 host=host1-ssd
ceph osd crush move osd.4 host=host2-ssd
ceph osd crush move osd.5 host=host3-ssd
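Before touching the rules, the new hierarchy can be sanity-checked; every OSD should now sit under its intended host and root bucket:
ceph osd tree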


Export and decompile the CRUSH map:
ceph osd getcrushmap -o crushmap.txt
crushtool -d crushmap.txt -o crushmap-decompile


Edit the decompiled map (vi crushmap-decompile) and add the following to the # rules section:

rule nvme {
id 1
type replicated
min_size 1
max_size 10
step take root-nvme
step chooseleaf firstn 0 type host
step emit
}

rule ssd {
id 2
type replicated
min_size 1
max_size 10
step take root-ssd
step chooseleaf firstn 0 type host
step emit
}


Recompile the map and inject it back into the cluster:
crushtool -c crushmap-decompile -o crushmap-compiled
ceph osd setcrushmap -i crushmap-compiled


Configure ceph.conf so that OSDs do not update the CRUSH map on start:
[osd]
osd crush update on start = false
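For this to take effect, the edited ceph.conf has to reach the OSD nodes; a sketch using the same push command as above, followed by an OSD restart:
ceph-deploy --username fungaming --overwrite-conf config push ceph-1 ceph-2 ceph-3
systemctl restart ceph-osd.target   # on each Ceph node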


### Manually adjust device class labels
# Show the current cluster layout
ceph osd tree
# List the existing crush device classes
ceph osd crush class ls
# Remove the class from osd.0, osd.1 and osd.2
for i in 0 1 2;do ceph osd crush rm-device-class osd.$i;done
# Set the class of osd.0, osd.1 and osd.2 to nvme
for i in 0 1 2;do ceph osd crush set-device-class nvme osd.$i;done
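The ssd-class OSDs (osd.3 to osd.5 in this layout) may need the same treatment if their auto-detected class is wrong; a sketch following the same pattern:
for i in 3 4 5;do ceph osd crush rm-device-class osd.$i;done
for i in 3 4 5;do ceph osd crush set-device-class ssd osd.$i;done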
# Create a crush rule that prefers nvme devices
ceph osd crush rule create-replicated rule-auto-nvme default host nvme
# Create a crush rule that prefers ssd devices
ceph osd crush rule create-replicated rule-auto-ssd default host ssd
# List the cluster's rules
ceph osd crush rule ls


# Create the pool
ceph osd pool create pool-ssd 128
# Or create it with the CRUSH rule assigned at creation time:
ceph osd pool create pool-ssd 128 128 rule-auto-ssd

Tag the pool with its application type (rbd):

ceph osd pool application enable pool-ssd rbd
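The PG calculation above assumed two pools; a matching pool on the nvme rule can be created the same way (pool-nvme is an illustrative name, not from the original steps):
ceph osd pool create pool-nvme 128 128 rule-auto-nvme
ceph osd pool application enable pool-nvme rbd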


# View and set pg_num / pgp_num (pool-ssd used as the example pool)
ceph osd pool get pool-ssd pg_num
ceph osd pool get pool-ssd pgp_num
ceph osd pool set pool-ssd pg_num 256
ceph osd pool set pool-ssd pgp_num 256
# OSD and pool status
ceph osd tree
rados df
ceph df
ceph osd lspools

# Show the crush rules and the rule assigned to a pool
ceph osd crush rule ls
ceph osd pool get pool-ssd crush_rule
# Show pool details
ceph osd pool ls detail


Test the pool by writing and reading back a rados object:
rados -p pool-ssd ls
echo "hahah" > test.txt
rados -p pool-ssd put test test.txt
rados -p pool-ssd ls
# Show which OSDs hold the object
ceph osd map pool-ssd test
# Delete the object
rados rm -p pool-ssd test


# Delete a pool (requires mon allow pool delete = true, set earlier in ceph.conf)
ceph osd pool delete pool-ssd pool-ssd --yes-i-really-really-mean-it
# Change the crush_rule of a pool
ceph osd pool set pool-ssd crush_rule rule-auto-ssd


Client installation (run from the deploy node; ceph-client needs the same sudo user, SSH key and /etc/hosts entry as the cluster nodes):
export CEPH_DEPLOY_REPO_URL=http://mirrors.163.com/ceph/rpm-mimic/el7
export CEPH_DEPLOY_GPG_URL=http://mirrors.163.com/ceph/keys/release.asc
ceph-deploy install ceph-client
ceph-deploy admin ceph-client


# Create a block device image
rbd create pool-ssd/foo --size 1024 --image-feature layering
# Map the image to a block device
rbd map pool-ssd/foo --name client.admin

mkfs.xfs /dev/rbd0
mkdir -p /mnt/rbdtest
mount /dev/rbd0 /mnt/rbdtest/
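The mapping and mount do not survive a reboot on their own; one common approach (a sketch, not part of the original steps) is the rbdmap service shipped with ceph-common, which maps every image listed in /etc/ceph/rbdmap at boot:
cat << EOM >> /etc/ceph/rbdmap
pool-ssd/foo id=admin,keyring=/etc/ceph/ceph.client.admin.keyring
EOM
systemctl enable rbdmap.service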


# Ceph client operations
Image information:
rbd ls pool-ssd
rbd info pool-ssd/foo
Show mapped block devices:
rbd showmapped
Unmap a block device:
rbd unmap /dev/rbd0
Delete a block device image:
rbd rm pool-ssd/foo
