Purpose
Simulate a data disk failure on a Ceph (Luminous) cluster
Repair the resulting failure
Environment
The current Ceph environment, for reference:
ceph -s
  cluster:
    id:     c45b752d-5d4d-4d3a-a3b2-04e73eff4ccd
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum hh-ceph-128040,hh-ceph-128214,hh-ceph-128215
    mgr: openstack(active)
    osd: 36 osds: 36 up, 36 in

  data:
    pools:   1 pools, 2048 pgs
    objects: 28024 objects, 109 GB
    usage:   331 GB used, 196 TB / 196 TB avail
    pgs:     2048 active+clean
osd tree (excerpt)
[root@hh-ceph-128214 ceph]# ceph osd tree
ID  CLASS WEIGHT    TYPE NAME               STATUS REWEIGHT PRI-AFF
 -1       216.00000 root default
-10        72.00000     rack racka07
 -3        72.00000         host hh-ceph-128214
 12   hdd   6.00000             osd.12          up  1.00000 1.00000
 13   hdd   6.00000             osd.13          up  1.00000 1.00000
 14   hdd   6.00000             osd.14          up  1.00000 1.00000
 15   hdd   6.00000             osd.15          up  1.00000 1.00000
 16   hdd   6.00000             osd.16          up  1.00000 1.00000
 17   hdd   6.00000             osd.17          up  1.00000 1.00000
 18   hdd   6.00000             osd.18          up  1.00000 1.00000
 19   hdd   6.00000             osd.19          up  1.00000 1.00000
 20   hdd   6.00000             osd.20          up  1.00000 1.00000
 21   hdd   6.00000             osd.21          up  1.00000 1.00000
 22   hdd   6.00000             osd.22          up  1.00000 1.00000
 23   hdd   6.00000             osd.23          up  1.00000 1.00000
 -9        72.00000     rack racka12
 -2        72.00000         host hh-ceph-128040
  0   hdd   6.00000             osd.0           up  1.00000 0.50000
  1   hdd   6.00000             osd.1           up  1.00000 1.00000
  2   hdd   6.00000             osd.2           up  1.00000 1.00000
  3   hdd   6.00000             osd.3           up  1.00000 1.00000
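Note that osd.0 above runs with a primary affinity (PRI-AFF, the last column) of 0.50000 rather than the default 1.00000. A small awk sketch to flag such OSDs, fed here with two rows pasted from the excerpt above so it can run standalone:

```shell
# Print the name (field 4) of every OSD row whose PRI-AFF (field 7)
# differs from the default 1.00000. Sample rows taken from the osd tree above.
awk '$1 ~ /^[0-9]+$/ && $7 != "1.00000" { print $4 }' <<'EOF'
 12   hdd   6.00000             osd.12          up  1.00000 1.00000
  0   hdd   6.00000             osd.0           up  1.00000 0.50000
EOF
```

On a live cluster the same filter can be piped directly from `ceph osd tree`.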
Failure simulation
Locate osd.14's data and journal partitions, then wipe the data directory to simulate a failed data disk:
[root@hh-ceph-128214 ceph]# df -h | grep ceph-14
/dev/sdc1 5.5T 8.8G 5.5T 1% /var/lib/ceph/osd/ceph-14
/dev/sdn3 4.7G 2.1G 2.7G 44% /var/lib/ceph/journal/ceph-14
[root@hh-ceph-128214 ceph]# rm -rf /var/lib/ceph/osd/ceph-14/*
[root@hh-ceph-128214 ceph]# ls /var/lib/ceph/osd/ceph-14/
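The destructive step can be rehearsed against a scratch directory first (a sketch; the temp directory stands in for the real OSD mount, which you should not practice on):

```shell
# Rehearse the wipe against a throwaway directory.
OSD_DIR=$(mktemp -d)                      # stand-in for /var/lib/ceph/osd/ceph-14
touch "$OSD_DIR/fsid" "$OSD_DIR/keyring"  # fake a couple of OSD files
rm -rf "${OSD_DIR:?}"/*                   # :? aborts if OSD_DIR is unset or empty
ls -A "$OSD_DIR"                          # prints nothing: the directory is empty
rmdir "$OSD_DIR"
```

The `${OSD_DIR:?}` expansion is a useful habit for any scripted `rm -rf`: if the variable is ever unset, the shell errors out instead of expanding to `/*`.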
Check the current status:
  cluster:
    id:     c45b752d-5d4d-4d3a-a3b2-04e73eff4ccd
    health: HEALTH_WARN
            1 osds down
            Degraded data redundancy: 3246/121608 objects degraded (2.669%), 124 pgs unclean, 155 pgs degraded

  services:
    mon: 3 daemons, quorum hh-ceph-128040,hh-ceph-128214,hh-ceph-128215
    mgr: openstack(active)
    osd: 36 osds: 35 up, 36 in

  data:
    pools:   1 pools, 2048 pgs
    objects: 40536 objects, 157 GB
    usage:   493 GB used, 195 TB / 196 TB avail
    pgs:     3246/121608 objects degraded (2.669%)
             1893 active+clean
             155  active+undersized+degraded

  io:
    client: 132 kB/s rd, 177 MB/s wr, 165 op/s rd, 175 op/s wr
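The degraded figure is internally consistent: 121608 is the total number of object copies (40536 objects × 3, which matches a 3-way replicated pool), and 3246 degraded copies over that total gives the reported 2.669%. A quick check of the arithmetic:

```shell
# Verify the degraded percentage reported by `ceph -s`.
# Assumes 3-way replication, which matches 40536 * 3 = 121608 total copies.
degraded=3246
total=$((40536 * 3))
echo "total copies: $total"   # prints 121608
awk -v d="$degraded" -v t="$total" 'BEGIN { printf "%.3f%%\n", 100 * d / t }'
# prints 2.669%
```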
osd tree for reference:
[root@hh-ceph-128214 ceph]# ceph osd tree
ID  CLASS WEIGHT    TYPE NAME               STATUS REWEIGHT PRI-AFF
 -1       216.00000 root default
-10        72.00000     rack racka07
 -3        72.00000         host hh-ceph-128214
 12   hdd   6.00000             osd.12          up  1.00000 1.00000
 13   hdd