Purpose
Recover a failed OSD on a ceph 15.2 (Octopus) cluster.
OSD failure recovery
Remove the OSD
Check that the OSD can be removed without data loss, then mark it destroyed (its id stays reserved for re-use):
ceph osd safe-to-destroy osd.1
ceph osd destroy 1 --yes-i-really-mean-it
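safe-to-destroy exits non-zero while PGs still reference the OSD, so the two commands are often wrapped in a retry loop. A minimal POSIX-sh sketch; the function name, the pluggable command arguments, and the 30s default interval are assumptions, kept pluggable so the loop can be exercised without a live cluster:

```shell
# wait_then_destroy CHECK_CMD DESTROY_CMD [SLEEP_SECS]
# Retries CHECK_CMD until it exits 0, then runs DESTROY_CMD.
wait_then_destroy() {
    check="$1"; destroy="$2"; delay="${3:-30}"
    until $check; do
        echo "not yet safe to destroy, retrying in ${delay}s" >&2
        sleep "$delay"
    done
    $destroy
}

# Real usage against the cluster (osd.1 as in the commands above):
# wait_then_destroy "ceph osd safe-to-destroy osd.1" \
#                   "ceph osd destroy 1 --yes-i-really-mean-it"
```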
Remove the disk
Find the failed drive on the RAID controller:
megacli -PDlist -a0 | grep -E "Slot|Error|Firmware state: Failed"
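The grep above shows the matching lines but does not tie a failed state to its slot. A small awk sketch that prints enclosure:slot for each failed drive; the sample output below is illustrative, not from a real controller:

```shell
# Print "enclosure:slot" for every drive whose firmware state is Failed.
# Expects `megacli -PDlist -a0` output on stdin.
failed_slots() {
    awk '/Enclosure Device ID:/   {enc=$NF}
         /Slot Number:/           {slot=$NF}
         /Firmware state: Failed/ {print enc ":" slot}'
}

# Illustrative sample output (hypothetical values):
pd_sample='Enclosure Device ID: 32
Slot Number: 2
Firmware state: Online, Spun Up
Enclosure Device ID: 32
Slot Number: 3
Firmware state: Failed'

echo "$pd_sample" | failed_slots   # prints 32:3
```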
Confirm the disk layout (virtual drive number, slot, RAID level):
megacli -cfgdsply -aALL | grep -v Information | grep -E "Virtual|Slot|RAID Level"
Delete the failed logical drive (replace X in -LX with the virtual-drive number found above):
megacli -CfgLdDel -LX -a0
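To find the X for -LX, the virtual-drive number has to be matched against the failed slot. A sketch that pairs the two from -cfgdsply output; the sample values are hypothetical:

```shell
# Pair each virtual drive with the physical slot(s) behind it, so the right
# -LX can be passed to -CfgLdDel. Expects `megacli -cfgdsply -aALL` on stdin.
vd_slots() {
    awk '/Virtual Drive:/ {vd=$3}
         /Slot Number:/   {print "VD " vd " -> slot " $NF}'
}

# Hypothetical sample:
ld_sample='Virtual Drive: 2 (Target Id: 2)
RAID Level          : Primary-0, Secondary-0
Slot Number: 5'

echo "$ld_sample" | vd_slots   # prints: VD 2 -> slot 5
```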
Recovery
Re-create the RAID 0 logical drive ([0:3] is the enclosure:slot of the replaced drive):
megacli -CfgLdAdd -r0 [0:3] ra wb direct nocachedbadbbu -a0
Partition the disk
parted /dev/sdc mklabel gpt
parted /dev/sdc mkpart primary 1 100%
Wipe the new partition
ceph-volume lvm zap /dev/sdc1
Prepare the OSD
Re-use the reserved id 1. A separate journal device means a filestore layout, so pass --filestore explicitly (bluestore, the 15.2 default, takes --block.db rather than --journal):
ceph-volume lvm prepare --filestore --osd-id 1 --data /dev/sdc1 --journal /dev/sdj2
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd tree -f json
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 7cdbf6fb-64c5-444c-8cea-49c4106a654e 1
Running command: vgcreate --force --yes ceph-6f083737-b3aa-4616-b2c2-d0495ec6b6dc /dev/sdc1
stdout: Physical volume "/dev/sdc1" successfully created.
stdout: Volu
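The prepare log ends here. A prepared OSD still has to be activated before it rejoins the cluster; `ceph-volume lvm activate` takes the OSD id and the OSD fsid reported on the `osd new` line above. A dry-run sketch (the helper name is an assumption) that only prints the command so it can be checked first:

```shell
# Build (but do not run) the activation command for a prepared OSD.
activate_cmd() {
    printf 'ceph-volume lvm activate %s %s\n' "$1" "$2"
}

# id 1 and the fsid reported by `osd new` in the log above:
activate_cmd 1 7cdbf6fb-64c5-444c-8cea-49c4106a654e
# prints: ceph-volume lvm activate 1 7cdbf6fb-64c5-444c-8cea-49c4106a654e
```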