1. ceph -s: view the cluster status
[root@admin-node ~]# ceph -s
    cluster 99f00338-a334-4f90-a579-496a934f25c0
     health HEALTH_WARN
            109 pgs degraded
            9 pgs recovering
            96 pgs recovery_wait
            109 pgs stuck unclean
            recovery 38892/105044 objects degraded (37.024%)
     monmap e1: 1 mons at {admin-node=192.168.13.171:6789/0}
            election epoch 3, quorum 0 admin-node
     osdmap e146: 20 osds: 20 up, 20 in
            flags sortbitwise
      pgmap v6669: 320 pgs, 3 pools, 205 GB data, 52522 objects
            424 GB used, 74059 GB / 74484 GB avail
            38892/105044 objects degraded (37.024%)
                 211 active+clean
                  96 active+recovery_wait+degraded
                   9 active+recovering+degraded
                   4 active+degraded
recovery io 95113 kB/s, 23 objects/s
  client io 215 MB/s rd, 53 op/s rd, 0 op/s wr
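The most useful number in this output is usually the degraded-object ratio on the recovery line. A minimal Python sketch that pulls it out of saved text (the sample line is copied from the output above; on a live cluster, `ceph -s --format json` would let you skip the regex entirely):

```python
import re

# Sample line copied from the `ceph -s` output above; on a real cluster
# you would capture this from the command's output instead.
line = "recovery 38892/105044 objects degraded (37.024%)"

m = re.search(r"(\d+)/(\d+) objects degraded \(([\d.]+)%\)", line)
degraded, total, pct = int(m.group(1)), int(m.group(2)), float(m.group(3))
print(degraded, total, pct)  # 38892 105044 37.024
```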
2. ceph health: view the cluster health summary
[root@admin-node ~]# ceph health
HEALTH_WARN 104 pgs degraded; 7 pgs recovering; 93 pgs recovery_wait; 104 pgs stuck unclean; recovery 36306/105044 objects degraded (34.563%)
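The percentage in the HEALTH_WARN line is just the degraded/total object ratio, which you can cross-check by hand:

```python
# Cross-check the ratio reported by `ceph health` above.
degraded, total = 36306, 105044
pct = round(degraded / total * 100, 3)
print(pct)  # 34.563, matching the HEALTH_WARN line
```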
3. ceph osd tree: inspect the OSD CRUSH map
[root@admin-node ~]# ceph osd tree
ID WEIGHT   TYPE NAME       UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 20.00000 root default
-2 10.00000     host node2
 0  1.00000         osd.0        up  1.00000          1.00000
 1  1.00000         osd.1        up  1.00000          1.00000
 2  1.00000         osd.2        up  1.00000          1.00000
 3  1.00000         osd.3        up  1.00000          1.00000
 4  1.00000         osd.4        up  1.00000          1.00000
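In a CRUSH map, a bucket's weight is normally the sum of its children's weights. Host node2 reports 10.00000, while the five OSDs visible above contribute 1.00000 each, so the listing presumably continues with more OSDs under node2; a quick check of the visible portion:

```python
# Sum the CRUSH weights of the OSDs visible under node2 above.
# node2's bucket weight is 10.00000, so only half is accounted for
# by osd.0-osd.4; the rest belongs to OSDs not shown in this excerpt.
visible_osd_weights = [1.00000] * 5   # osd.0 .. osd.4
print(sum(visible_osd_weights))  # 5.0 of node2's 10.0
```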