Operating environment
openstack icehouse
ceph giant
Problem description
While checking the ceph status today, the following was reported:
[root@ceph-osd-1 ~]# ceph -s
cluster 8ade9410-0ad8-4dbb-bd56-e1bf2f947009
health HEALTH_ERR 1 full osd(s); 2 near full osd(s)
monmap e1: 1 mons at {ceph-osd-1=10.10.200.163:6789/0}, election epoch 1, quorum 0 ceph-osd-1
osdmap e1514: 11 osds: 11 up, 11 in
flags full
pgmap v177918: 1628 pgs, 6 pools, 2548 GB data, 632 kobjects
7708 GB used, 4196 GB / 11905 GB avail
1625 active+clean
3 active+clean+scrubbing+deep
[root@ceph-osd-1 ~]# ceph health
HEALTH_ERR 1 full osd(s); 2 near full osd(s)
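For a closer look at exactly which osds triggered the warning, ceph health detail lists the affected osd ids together with their usage, and ceph df shows overall and per-pool utilization. A quick check from any node with an admin keyring:
[root@ceph-osd-1 ~]# ceph health detail
[root@ceph-osd-1 ~]# ceph df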
From the output above we can see that, of the 11 osds, 1 has completely run out of space and 2 have reached the near-full threshold. Following the approach recommended by the official Ceph documentation, a new osd should be added to the osd cluster. So a new disk was attached on the osd server and mounted under the /osd4 directory, and that disk was then added to the osd cluster as a new osd.
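Before running ceph-deploy, the new disk needs a filesystem and a mount point on the osd host. A minimal sketch, assuming the disk sits on the host at 10.10.200.164 and shows up as the hypothetical device /dev/sdb (adjust to the actual device name):
[root@ceph-osd-2 ~]# mkfs.xfs /dev/sdb
[root@ceph-osd-2 ~]# mkdir -p /osd4
[root@ceph-osd-2 ~]# mount /dev/sdb /osd4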
[root@ceph-osd-1 ~]# ceph-deploy osd prepare 10.10.200.164:/osd4
[root@ceph-osd-1 ~]# ceph-deploy osd activate 10.10.200.164:/osd4
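A hedged aside: while the cluster carries the full flag, client writes stay blocked until usage drops back under the full ratio. In Giant-era releases the runtime thresholds could be raised temporarily to relieve the pressure while backfill runs, roughly as shown below; the exact commands vary between Ceph releases, so verify them against your version before use. This was not needed for the recovery described here.
[root@ceph-osd-1 ~]# ceph pg set_nearfull_ratio 0.90
[root@ceph-osd-1 ~]# ceph pg set_full_ratio 0.98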
Check the osd information; there should now be 12 osds:
[root@ceph-osd-2 ~]# ceph osd tree
# id weight type name up/down reweight
-1 12.56 root default
-2 5.28 host ceph-osd-1
0 0.98 osd.0 up 1
1 0.98 osd.1 up 1
2 0.98 osd.2 up 1
3 0.98 osd.3 up 1
4 1.36 osd.4 up 1
-3 3.64 host ceph-osd-2
5 0.91 osd.5 up 1
6 0.91 osd.6 up 1
7 0.91 osd.7 up 1
11 0.91 osd.11 up 1
-4 3.64 host ceph-osd-3
8 1.82 osd.8 up 1
9 0.91 osd.9 up 1
10 0.91 osd.10 up 1
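With the new osd in the tree, CRUSH starts moving data onto it. Backfill progress can be followed live with the ceph CLI's watch mode, which streams pgmap and cluster log updates:
[root@ceph-osd-1 ~]# ceph -w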
After the rebalance had run for a while, the ceph status looked like this:
[root@ceph-osd-1 ~]# ceph -s
cluster 8ade9410-0ad8-4dbb-bd56-e1bf2f947009
health HEALTH_OK
monmap e1: 1 mons at {ceph-osd-1=10.10.200.163:6789/0}, election epoch 1, quorum 0 ceph-osd-1
osdmap e1839: 12 osds: 12 up, 12 in
pgmap v201216: 1628 pgs, 6 pools, 2614 GB data, 648 kobjects
7913 GB used, 4922 GB / 12836 GB avail
1628 active+clean
client io 1358 B/s wr, 59 op/s
The ceph cluster status has returned to normal.
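As a final note, the near-full and full warnings are driven by two monitor settings, which default to 85% and 95%. If the hardware profile calls for different headroom they can be set in ceph.conf; the values below are only illustrative:
[global]
mon osd nearfull ratio = .85
mon osd full ratio = .95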