How the cluster was installed:
1: Install the Ceph cluster with ceph-deploy, then simulate an OSD disk failure;
Repair the failed OSD using each of the following two methods:
1: Repair the failed OSD with ceph-deploy;
2: Repair the failed OSD manually;
####### Walkthrough: repairing the failed OSD with ceph-deploy ########
1: Stop the OSD
/etc/init.d/ceph stop osd.3
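The command above uses the sysvinit script shipped with older Ceph releases; on a systemd-managed host the equivalent would likely be the ceph-osd unit template (an assumption, depending on how the packages were installed):
systemctl stop ceph-osd@3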
2: Check how the OSD's disk is mounted;
[root@node243 ceph]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 50G 0 disk
├─sda1 8:1 0 500M 0 part /boot
├─sda2 8:2 0 2G 0 part [SWAP]
└─sda3 8:3 0 47.5G 0 part /
sdb 8:16 0 100G 0 disk
├─sdb1 8:17 0 95G 0 part /var/lib/ceph/tmp/mnt.x4MbgI
└─sdb2 8:18 0 5G 0 part /var/lib/ceph/osd/ceph-3
sr0 11:0 1 1024M 0 rom
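As a cross-check, the mount table can be inspected directly to confirm which partitions back osd.3 (a quick sanity check, not part of the original procedure):
mount | grep ceph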
3: Unmount the mounted partitions
umount /var/lib/ceph/osd/ceph-3
umount /var/lib/ceph/tmp/mnt.x4MbgI
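If either umount fails with a "target is busy" error, the processes holding the mount can be listed first; a hedged example, assuming the psmisc package is installed:
fuser -vm /var/lib/ceph/osd/ceph-3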
4: Format the disk to simulate disk damage
mkfs.xfs -f /dev/sdb
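Writing a fresh XFS filesystem over the whole device destroys the OSD's data and metadata, which is what makes the simulated failure realistic. If you also want to clear any leftover partition signatures, one option (assuming util-linux provides wipefs) is:
wipefs -a /dev/sdb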
5: Check the cluster OSD status
[root@node243 ceph]# ceph osd tree
ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY
-6 0 host node01
-1 0.44998 root default
-2 0.09000 host ceph-deploy
0 0.09000 osd.0 up 1.00000 1.00000
-3 0.09000 host node241
1 0.09000 osd.1 up 1.00000 1.00000
-4 0.09000 host node242
2 0.09000 osd.2 up 1.00000 1.00000
-5 0.09000 host node243
3 0.09000 osd.3 down 1.00000 1.00000 <== osd.3 is down
-7 0.09000 host node245
5 0.09000 osd.5 up 1.00000 1.00000
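Before removing anything, it is also worth looking at the overall cluster health; the standard status commands are:
ceph -s
ceph health detail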
6: Mark the OSD out
ceph osd out osd.3
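Marking the OSD out reweights it to 0 and triggers recovery of its placement groups onto the remaining OSDs; the rebalancing can be watched live with:
ceph -w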
7: Remove the OSD from the cluster (ceph osd rm only succeeds once the OSD is marked down, as it is here)
ceph osd rm osd.3
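To confirm the removal took effect, the OSD map can be dumped and searched (a quick verification step, not in the original write-up; no output means osd.3 is gone):
ceph osd dump | grep osd.3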
8: Remove it from the CRUSH map
ceph osd crush rm osd.3
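On clusters with cephx enabled, the old OSD's authentication key usually has to be deleted as well before the same id can be re-deployed, otherwise the later ceph-deploy osd create step may fail with an auth error (an assumption based on the standard OSD-removal procedure):
ceph auth del osd.3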