Ceph daily maintenance

Check the cluster status
[root@ceph-admin ~]# ceph health detail
HEALTH_OK

[root@ceph-admin ~]# ceph -s
  cluster:
    id:     523b24a7-9a44-4f04-b98b-c59c1b02a43d
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum ceph-admin (age 23h)
    mgr: ceph-admin(active, since 7m)
    osd: 3 osds: 3 up (since 23h), 3 in (since 3d)

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   3.0 GiB used, 1.6 TiB / 1.6 TiB avail
    pgs:
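Besides ceph health detail and ceph -s, a few other read-only commands give a quick view of capacity and daemon state; a minimal set (standard Ceph CLI, output omitted here):

ceph df          # raw and per-pool capacity usage
ceph osd stat    # OSD up/in counts
ceph mon stat    # monitor quorum summary
ceph -w          # follow cluster events in real time (Ctrl-C to stop)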

Log in to the monitor node and run the following commands
[root@ceph-admin ~]# systemctl list-unit-files|grep enabled|grep ceph
ceph-crash.service enabled
ceph-mgr@.service enabled
ceph-mon@.service enabled
ceph-osd@.service enabled
ceph-mds.target enabled
ceph-mgr.target enabled
ceph-mon.target enabled
ceph-osd.target enabled
ceph-radosgw.target enabled
ceph.target enabled

systemctl list-units --type=service|grep ceph
[root@ceph-admin ~]# systemctl list-units --type=service|grep ceph
ceph-crash.service loaded active running Ceph crash dump collector
ceph-mgr@ceph-admin.service loaded active running Ceph cluster manager daemon
● ceph-mon@0.service loaded failed failed Ceph cluster monitor daemon
● ceph-mon@\x2a.service loaded failed failed Ceph cluster monitor daemon
ceph-mon@ceph-admin.service loaded active running Ceph cluster monitor daemon
● ceph-mon@ip-10-101-200-168.service loaded failed failed Ceph cluster monitor daemon
● ceph-osd@0.service loaded failed failed Ceph object storage daemon osd.0
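The failed ceph-mon@0.service and ceph-mon@\x2a.service entries above are template instances that were started with the wrong instance name (\x2a is the systemd escape for *); the monitor actually configured on this host is ceph-mon@ceph-admin.service, which matches the quorum shown by ceph -s. Assuming those extra instances really are spurious, they can be stopped and their failed state cleared with standard systemctl commands:

systemctl stop 'ceph-mon@0.service' 'ceph-mon@\x2a.service'
systemctl reset-failed 'ceph-mon@0.service' 'ceph-mon@\x2a.service'
systemctl list-units --type=service --state=failed | grep ceph   # should now return nothing for these units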

Handling mon failures
systemctl status ceph-mon@ceph-admin.service

[root@ceph-admin ~]# systemctl status ceph-mon@ceph-admin.service
● ceph-mon@ceph-admin.service - Ceph cluster monitor daemon
Loaded: loaded (/usr/lib/systemd/system/ceph-mon@.service; enabled; vendor preset: disabled)
Active: active (running) since Sun 2020-10-25 22:47:34 CST; 1s ago
Main PID: 26118 (ceph-mon)
CGroup: /system.slice/system-ceph\x2dmon.slice/ceph-mon@ceph-admin.service
└─26118 /usr/bin/ceph-mon -f --cluster ceph --id ceph-admin --setuser ceph --setgroup ceph

Oct 25 22:47:34 ceph-admin systemd[1]: Started Ceph cluster monitor daemon.
[root@ceph-admin ~]# systemctl status ceph-mon@ceph-admin.service
● ceph-mon@ceph-admin.service - Ceph cluster monitor daemon
Loaded: loaded (/usr/lib/systemd/system/ceph-mon@.service; enabled; vendor preset: disabled)
Active: active (running) since Sun 2020-10-25 22:47:34 CST; 36s ago
Main PID: 26118 (ceph-mon)
CGroup: /system.slice/system-ceph\x2dmon.slice/ceph-mon@ceph-admin.service
└─26118 /usr/bin/ceph-mon -f --cluster ceph --id ceph-admin --setuser ceph --setgroup ceph

Oct 25 22:47:34 ceph-admin systemd[1]: Started Ceph cluster monitor daemon.

[root@ceph-admin ~]# systemctl list-units --type=service|grep ceph
ceph-crash.service loaded active running Ceph crash dump collector
ceph-mgr@ceph-admin.service loaded active running Ceph cluster manager daemon
ceph-mon@0.service loaded activating auto-restart Ceph cluster monitor daemon
ceph-mon@\x2a.service loaded activating auto-restart Ceph cluster monitor daemon
ceph-mon@ip-10-101-200-168.service loaded active running Ceph cluster monitor daemon
● ceph-osd@0.service loaded failed failed Ceph object storage daemon osd.0
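In the listing above the stray ceph-mon@0 and ceph-mon@\x2a instances are stuck in activating/auto-restart. If the real monitor (ceph-mon@ceph-admin here) itself goes down, the usual sequence is to restart its unit, confirm it stays up, and check that it is back in quorum; a sketch using the mon id from the output above:

systemctl restart ceph-mon@ceph-admin.service
systemctl status ceph-mon@ceph-admin.service
ceph quorum_status --format json-pretty            # the mon should appear in the quorum list
journalctl -u ceph-mon@ceph-admin.service -n 50    # recent log lines if the unit fails to start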

Handling OSD failures
systemctl status ceph-osd@0.service
systemctl status ceph-osd@1.service
systemctl status ceph-osd@2.service

systemctl start ceph-osd@0.service
[root@node01 ~]# systemctl start ceph-osd@0.service
[root@node01 ~]# systemctl status ceph-osd@0.service
● ceph-osd@0.service - Ceph object storage daemon osd.0
Loaded: loaded (/usr/lib/systemd/system/ceph-osd@.service; enabled-runtime; vendor preset: disabled)
Active: active (running) since Sat 2020-10-24 22:36:34 CST; 23h ago
Process: 33296 ExecStartPre=/usr/lib/ceph/ceph-osd-prestart.sh --cluster ${CLUSTER} --id %i (code=exited, status=0/SUCCESS)
Main PID: 33301 (ceph-osd)
CGroup: /system.slice/system-ceph\x2dosd.slice/ceph-osd@0.service
└─33301 /usr/bin/ceph-osd -f --cluster ceph --id 0 --setuser ceph --setgroup ceph

Oct 24 22:36:34 node01 systemd[1]: Starting Ceph object storage daemon osd.0…
Oct 24 22:36:34 node01 systemd[1]: Started Ceph object storage daemon osd.0.
Oct 24 22:36:34 node01 ceph-osd[33301]: 2020-10-24 22:36:34.750 7fd643920a80 -1 Falling back to public interface
Oct 24 22:36:36 node01 ceph-osd[33301]: 2020-10-24 22:36:36.085 7fd643920a80 -1 osd.0 58 log_to_monitors {default=true}
Oct 25 03:26:01 node01 ceph-osd[33301]: 2020-10-25 03:26:01.433 7fd6397b9700 -1 received signal: Hangup from killall -q -1 ceph-mon ceph-mgr ceph-mds ceph-osd ceph-fuse rados…70474) UID: 0
Oct 25 03:26:01 node01 ceph-osd[33301]: 2020-10-25 03:26:01.449 7fd6397b9700 -1 received signal: Hangup from pkill -1 -x ceph-mon|ceph-mgr|ceph-mds|ceph-osd|ceph-fuse|radosgw…70475) UID: 0
Hint: Some lines were ellipsized, use -l to show in full.
[root@node01 ~]#
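An active systemd unit only means the daemon process is running; it is also worth confirming that the OSD actually rejoined the cluster. A quick follow-up check for osd.0 as above:

ceph osd stat                              # expect all OSDs up and in, e.g. "3 osds: 3 up, 3 in"
ceph osd tree                              # osd.0 should show as up under its host
journalctl -u ceph-osd@0.service -n 50     # recent daemon log if the unit refuses to start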

View the mapping between OSDs and hosts
[root@ceph-admin ~]# ceph osd tree
ID CLASS WEIGHT  TYPE NAME       STATUS REWEIGHT PRI-AFF
-1       1.63170 root default
-3       0.54390     host node01
 0   hdd 0.54390         osd.0       up  1.00000 1.00000
-5       0.54390     host node02
 1   hdd 0.54390         osd.1       up  1.00000 1.00000
-7       0.54390     host node03
 2   hdd 0.54390         osd.2       up  1.00000 1.00000
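To map a single OSD back to its host and physical device (for example before replacing a failed disk), the following read-only commands can be used, shown here for osd.0:

ceph osd find 0        # host name, IP and CRUSH location of osd.0
ceph osd metadata 0    # hostname, devices, objectstore type, version, etc.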

Delete a pool
ceph osd pool rm k8s k8s --yes-i-really-really-mean-it
ceph osd pool rm rbd-test rbd-test --yes-i-really-really-mean-it
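ceph osd pool rm requires the pool name twice plus --yes-i-really-really-mean-it, and the monitors will still refuse unless pool deletion is allowed. If the commands above are rejected, enable mon_allow_pool_delete first; a sketch assuming a Nautilus-era cluster like the one shown above (older releases set the same option via ceph.conf or injectargs):

ceph config set mon mon_allow_pool_delete true
ceph osd pool rm k8s k8s --yes-i-really-really-mean-it
ceph config set mon mon_allow_pool_delete false    # re-enable the safety check afterwards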

Other useful commands:

ceph mgr -h

rados lspools
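rados lspools prints one pool name per line; the ceph CLI offers the same list with pool IDs, plus per-pool usage:

ceph osd lspools    # pool id and name
rados df            # per-pool object count and space usage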
