Check cluster status
# ceph -s
  cluster:
    id:     646270d4-ff81-4196-aabe-a78325f49be7
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum a,b,c (age 105m)
    mgr: a(active, since 37m)
    osd: 3 osds: 3 up (since 23h), 3 in (since 23h)
    rgw: 1 daemon active (dataphin.a)

  data:
    pools:   9 pools, 200 pgs
    objects: 7.04k objects, 21 GiB
    usage:   94 GiB used, 1.4 TiB / 1.5 TiB avail
    pgs:     200 active+clean
health: HEALTH_OK means the cluster is healthy.
If the status is not HEALTH_OK, use the following command to see the cause of the errors:
ceph health detail
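The health check above can gate maintenance scripts. A minimal sketch: on a live cluster you would capture the real status with `status=$(ceph health)`; here a saved sample value stands in so the logic runs without a cluster.

```shell
# Stand-in for: status=$(ceph health)
status="HEALTH_OK"
if [ "$status" = "HEALTH_OK" ]; then
    echo "cluster healthy"
else
    echo "cluster degraded; run: ceph health detail" >&2
fi
```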
Check PG status
# ceph pg stat
200 pgs: 200 active+clean; 21 GiB data, 64 GiB used, 1.4 TiB / 1.5 TiB avail
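A quick way to confirm that every PG is `active+clean` is to compare the total PG count against the `active+clean` count in the `ceph pg stat` line. This sketch parses a saved copy of the output above rather than querying a live cluster.

```shell
# Saved copy of the `ceph pg stat` line shown above
line="200 pgs: 200 active+clean; 21 GiB data, 64 GiB used, 1.4 TiB / 1.5 TiB avail"
total=$(echo "$line" | awk '{print $1}')                          # total PGs
clean=$(echo "$line" | awk -F'[:;]' '{print $2}' | awk '{print $1}')  # active+clean PGs
if [ "$total" = "$clean" ]; then
    echo "all $total PGs active+clean"
fi
```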
Check MON node status
# ceph mon dump
dumped monmap epoch 3
epoch 3
fsid 646270d4-ff81-4196-aabe-a78325f49be7
last_changed 2021-11-23 04:02:55.351979
created 2021-11-23 04:02:30.728295
min_mon_release 14 (nautilus)
0: [v2:10.103.252.87:3300/0,v1:10.103.252.87:6789/0] mon.a
1: [v2:10.103.230.65:3300/0,v1:10.103.230.65:6789/0] mon.b
2: [v2:10.102.72.31:3300/0,v1:10.102.72.31:6789/0] mon.c
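Each monmap entry above pairs a v2 (msgr2, port 3300) and a v1 (legacy, port 6789) address with a monitor name. A small parsing sketch over one saved entry, in the format shown above:

```shell
# Saved copy of the first monmap entry from `ceph mon dump` above
entry="0: [v2:10.103.252.87:3300/0,v1:10.103.252.87:6789/0] mon.a"
name=$(echo "$entry" | awk '{print $3}')                       # monitor name
v1addr=$(echo "$entry" | sed 's/.*v1:\([^]]*\)\].*/\1/')       # legacy address
echo "$name -> $v1addr"
```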
Commands for checking OSDs
ceph osd tree
# Check OSD capacity
# ceph osd df
ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS
0 hdd 0.48729 1.00000 499 GiB 31 GiB 21 GiB 120 KiB 10 GiB 468 GiB 6.31 1.00 200 up
2 hdd 0.48729 1.00000 499 GiB 31 GiB 21 GiB 85 KiB 10 GiB 468 GiB 6.31 1.00 200 up
1 hdd 0.48729 1.00000 499 GiB 31 GiB 21 GiB 120 KiB 10 GiB 468 GiB 6.31 1.00 200 up
TOTAL 1.5 TiB 94 GiB 64 GiB 327 KiB 30 GiB 1.4 TiB 6.31
MIN/MAX VAR: 1.00/1.00 STDDEV: 0
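In the `ceph osd df` table, %USE is the fourth field from the end of each row, which makes it easy to flag OSDs nearing capacity. This sketch runs against saved copies of the rows above; the threshold value is an arbitrary example.

```shell
threshold=5   # example threshold in percent; pick your own alert level
flagged=$(printf '%s\n' \
  "0 hdd 0.48729 1.00000 499 GiB 31 GiB 21 GiB 120 KiB 10 GiB 468 GiB 6.31 1.00 200 up" \
  "2 hdd 0.48729 1.00000 499 GiB 31 GiB 21 GiB 85 KiB 10 GiB 468 GiB 6.31 1.00 200 up" |
  awk -v t="$threshold" '{ use=$(NF-3); if (use+0 > t) print "osd." $1 " at " use "%" }')
echo "$flagged"
```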
# Check OSD pool information
ceph osd lspools
or
ceph df
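`ceph osd lspools` prints one `<id> <name>` pair per pool. This sketch pulls just the names from a saved sample (the `.rgw.root` entry is a placeholder; `dxy` is the pool used in the deletion example below).

```shell
# Stand-in for: names=$(ceph osd lspools | awk '{print $2}')
names=$(printf '1 .rgw.root\n9 dxy\n' | awk '{print $2}')
echo "$names"
```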
# Check pool attributes (here, for the rbd pool)
ceph osd dump | grep rbd
# Set the pool deletion-protection attribute
# ceph osd pool get dxy nodelete
nodelete: false
# ceph osd pool set dxy nodelete 1
set pool 9 nodelete to 1
# ceph osd pool delete dxy dxy --yes-i-really-really-mean-it
Error EPERM: pool deletion is disabled; you must unset nodelete flag for the pool first
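To actually remove a protected pool, the `nodelete` flag must be cleared first, and the monitors must also permit pool deletion via `mon_allow_pool_delete`. A sketch of the full flow, wrapped in a function so nothing executes without a cluster (assumes the ceph CLI and admin keyring):

```shell
delete_pool_safely() {
    pool="$1"
    ceph osd pool set "$pool" nodelete 0              # clear the guard set above
    ceph config set mon mon_allow_pool_delete true    # monitors must also allow deletion
    ceph osd pool delete "$pool" "$pool" --yes-i-really-really-mean-it
}
# On a real cluster: delete_pool_safely dxy
```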
# Set OSD