Ceph Maintenance Series (1): Removing a Ceph Node

0 Revision History

| No. | Revision | Date |
| ---- | ---- | ---- |
| 1 | Initial version | 2021/2/19 |

1 Abstract

This article builds on [Ceph vertical expansion (Nautilus)](https://www.cnblogs.com/weiwei2021/p/14381416.html) and walks through removing the node cephtest003.ceph.kxdigit.com from the cluster.

2 Environment

| Hostname | Status | IP | Disks | Roles |
| ---- | ---- | ---- | ---- | ---- |
| cephtest001.ceph.kxdigit.com | Done | 10.3.176.10 | System disk: /dev/sda; data disks: /dev/sdb /dev/sdc /dev/sdd | ceph-deploy, monitor, mgr, mds, osd |
| cephtest002.ceph.kxdigit.com | Done | 10.3.176.16 | System disk: /dev/sda; data disks: /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf | monitor, mgr, mds, osd |
| cephtest003.ceph.kxdigit.com | Done (node to be removed) | 10.3.176.44 | System disk: /dev/sda; data disks: /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg | monitor, mgr, mds, osd |
| cephtest004.ceph.kxdigit.com | Done | 10.3.176.36 | System disk: /dev/sda; data disks: /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf (pending vertical expansion) | monitor, mgr, mds, osd |

3 Implementation

(1) Current state of the Ceph cluster

3.1.1 Cluster health

As shown below, cephtest003 runs mon, mgr, and osd services.

[cephadmin@cephtest001 ~]$ ceph -s
  cluster:
    id:     6cd05235-66dd-4929-b697-1562d308d5c3
    health: HEALTH_WARN
            1 pools have many more objects per pg than average

  services:
    mon: 4 daemons, quorum cephtest001,cephtest002,cephtest003,cephtest004 (age 13d)
    mgr: cephtest001(active, since 7w), standbys: cephtest002, cephtest003, cephtest004
    osd: 19 osds: 19 up (since 11d), 19 in (since 11d)
    rgw: 1 daemon active (cephtest004)

  task status:

  data:
    pools:   8 pools, 400 pgs
    objects: 24.12k objects, 167 GiB
    usage:   522 GiB used, 70 TiB / 71 TiB avail
    pgs:     400 active+clean

  io:
    client:   77 KiB/s rd, 341 B/s wr, 85 op/s rd, 0 op/s wr

3.1.2 OSD status

As shown below, cephtest003 hosts six OSDs: osd.8, osd.9, osd.10, osd.11, osd.12, and osd.13.

[cephadmin@cephtest001 cephcluster]$ ceph osd tree
ID CLASS WEIGHT   TYPE NAME            STATUS REWEIGHT PRI-AFF
-1       70.57448 root default
-3        3.26669     host cephtest001
 0   hdd  1.08890         osd.0            up  1.00000 1.00000
 1   hdd  1.08890         osd.1            up  1.00000 1.00000
 2   hdd  1.08890         osd.2            up  1.00000 1.00000
-5        5.45547     host cephtest002
 3   hdd  1.09109         osd.3            up  1.00000 1.00000
 4   hdd  1.09109         osd.4            up  1.00000 1.00000
 5   hdd  1.09109         osd.5            up  1.00000 1.00000
 6   hdd  1.09109         osd.6            up  1.00000 1.00000
 7   hdd  1.09109         osd.7            up  1.00000 1.00000
-7       43.66132     host cephtest003
 8   hdd  7.27689         osd.8            up  1.00000 1.00000
 9   hdd  7.27689         osd.9            up  1.00000 1.00000
10   hdd  7.27689         osd.10           up  1.00000 1.00000
11   hdd  7.27689         osd.11           up  1.00000 1.00000
12   hdd  7.27689         osd.12           up  1.00000 1.00000
13   hdd  7.27689         osd.13           up  1.00000 1.00000
-9       18.19099     host cephtest004
14   hdd  3.63820         osd.14           up  1.00000 1.00000
15   hdd  3.63820         osd.15           up  1.00000 1.00000
16   hdd  3.63820         osd.16           up  1.00000 1.00000
17   hdd  3.63820         osd.17           up  1.00000 1.00000
18   hdd  3.63820         osd.18           up  1.00000 1.00000
[cephadmin@cephtest001 cephcluster]$

(2) Method 1: remove all OSDs on cephtest003 by draining them first (run on cephtest003)

Remove the OSDs on this node one at a time: drain the data off each OSD first, then delete it.

3.2.1 Lower the OSD's CRUSH weight (osd.8 as an example)

Log in to cephtest003 as the cephadmin user:

[cephadmin@cephtest003 ~]$ ceph osd crush reweight osd.8 0.1
reweighted item id 8 name 'osd.8' to 0.1 in crush map

After running this, draining the data out of osd.8 was slow, and part of it was rebalanced onto the other OSDs of the same node (cephtest003), which is wasted work when the whole node is being removed. We therefore switched to the second method below to remove cephtest003.
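For reference, a minimal sketch of how method 1 could be completed (not what was actually done here): lowering the CRUSH weight of all six OSDs on the node at once avoids the intra-node rebalancing observed above, because no OSD on cephtest003 is left eligible to receive the data. The loop below is illustrative only.

```bash
# Sketch only: drain every OSD on cephtest003 before removing anything.
# Setting all of the node's CRUSH weights to 0 at once prevents data from
# being shuffled between OSDs on the same host.
for id in 8 9 10 11 12 13; do
    ceph osd crush reweight "osd.${id}" 0
done
# Then watch `ceph -s` (or `ceph -w`) until all PGs are back to active+clean
# before stopping and deleting the OSDs as in method 2 below.
```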

(3) Method 2: remove all OSDs on cephtest003 directly (run on cephtest003)

This method stops the OSD services on the node and deletes each OSD directly; the data is then recovered from the remaining replicas (see the backfilling activity in 3.4.1).
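The per-OSD command sequence used in the following subsections can be summarized as follows (a sketch, with osd.8 standing in for each OSD id on cephtest003):

```bash
sudo systemctl stop ceph-osd@8        # stop the daemon (3.3.1 stops them all via ceph-osd.target)
ceph osd rm 8                         # remove the OSD from the OSD map (3.3.2)
ceph osd crush rm osd.8               # remove it from the CRUSH map (3.3.3)
ceph auth del osd.8                   # delete its cephx key (3.3.4)
sudo umount /var/lib/ceph/osd/ceph-8  # unmount its data directory (3.3.6)
```

Nautilus also provides `ceph osd purge <id> --yes-i-really-mean-it`, which combines the `osd rm`, `osd crush rm`, and `auth del` steps; the walkthrough below keeps the individual commands.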

3.3.1 Stop the OSD services on cephtest003

First, check the OSD services currently running on the node.

[cephadmin@cephtest003 ~]$ systemctl status ceph-osd.target
● ceph-osd.target - ceph target allowing to start/stop all ceph-osd@.service instances at once
   Loaded: loaded (/usr/lib/systemd/system/ceph-osd.target; enabled; vendor preset: enabled)
   Active: active since Thu 2020-12-31 09:09:07 CST; 1 months 19 days ago
[cephadmin@cephtest003 ~]$ ps -ef | grep osd
ceph       20806       1  0  2020 ?        03:43:45 /usr/bin/ceph-osd -f --cluster ceph --id 8 --setuser ceph --setgroup ceph
ceph       20809       1  0  2020 ?        04:00:41 /usr/bin/ceph-osd -f --cluster ceph --id 12 --setuser ceph --setgroup ceph
ceph       20816       1  0  2020 ?        03:45:35 /usr/bin/ceph-osd -f --cluster ceph --id 9 --setuser ceph --setgroup ceph
ceph       20819       1  0  2020 ?        04:04:24 /usr/bin/ceph-osd -f --cluster ceph --id 13 --setuser ceph --setgroup ceph
ceph       20821       1  0  2020 ?        03:28:47 /usr/bin/ceph-osd -f --cluster ceph --id 10 --setuser ceph --setgroup ceph
ceph       20824       1  0  2020 ?        05:05:17 /usr/bin/ceph-osd -f --cluster ceph --id 11 --setuser ceph --setgroup ceph
cephadm+  770980  770940  0 13:54 pts/1    00:00:00 grep --color=auto osd
[cephadmin@cephtest003 ~]$

Stop the services (this requires root privileges):

[cephadmin@cephtest003 ~]$ systemctl stop ceph-osd.target
==== AUTHENTICATING FOR org.freedesktop.systemd1.manage-units ===
Authentication is required to manage system services or units.
Authenticating as: root
Password:
==== AUTHENTICATION COMPLETE ===
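Note that `ceph-osd.target` stops every OSD instance on the host at once; to stop a single OSD instead, the per-instance unit mentioned in the target description above can be used, for example:

```bash
# Stop only osd.8 (requires root); the other OSDs on the host keep running.
sudo systemctl stop ceph-osd@8
```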

You can see the OSDs are now down:


[cephadmin@cephtest003 ~]$ ceph osd tree
ID CLASS WEIGHT   TYPE NAME            STATUS REWEIGHT PRI-AFF
-1       63.39758 root default
-3        3.26669     host cephtest001
 0   hdd  1.08890         osd.0            up  1.00000 1.00000
 1   hdd  1.08890         osd.1            up  1.00000 1.00000
 2   hdd  1.08890         osd.2            up  1.00000 1.00000
-5        5.45547     host cephtest002
 3   hdd  1.09109         osd.3            up  1.00000 1.00000
 4   hdd  1.09109         osd.4            up  1.00000 1.00000
 5   hdd  1.09109         osd.5            up  1.00000 1.00000
 6   hdd  1.09109         osd.6            up  1.00000 1.00000
 7   hdd  1.09109         osd.7            up  1.00000 1.00000
-7       36.48442     host cephtest003
 8   hdd  0.09999         osd.8          down  1.00000 1.00000
 9   hdd  7.27689         osd.9          down  1.00000 1.00000
10   hdd  7.27689         osd.10         down  1.00000 1.00000
11   hdd  7.27689         osd.11         down  1.00000 1.00000
12   hdd  7.27689         osd.12         down  1.00000 1.00000
13   hdd  7.27689         osd.13         down  1.00000 1.00000
-9       18.19099     host cephtest004
14   hdd  3.63820         osd.14           up  1.00000 1.00000
15   hdd  3.63820         osd.15           up  1.00000 1.00000
16   hdd  3.63820         osd.16           up  1.00000 1.00000
17   hdd  3.63820         osd.17           up  1.00000 1.00000
18   hdd  3.63820         osd.18           up  1.00000 1.00000
[cephadmin@cephtest003 ~]$

3.3.2 Remove all OSDs on the node

[cephadmin@cephtest003 ~]$ ceph osd rm 8
removed osd.8
[cephadmin@cephtest003 ~]$ ceph osd rm 9
removed osd.9
[cephadmin@cephtest003 ~]$ ceph osd rm 10
removed osd.10
[cephadmin@cephtest003 ~]$ ceph osd rm 11
removed osd.11
[cephadmin@cephtest003 ~]$ ceph osd rm 12
removed osd.12
[cephadmin@cephtest003 ~]$ ceph osd rm 13
removed osd.13
[cephadmin@cephtest003 ~]$
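The same removals could also be scripted in one loop, for example:

```bash
# Remove osd.8 through osd.13 from the OSD map in one go.
for id in 8 9 10 11 12 13; do
    ceph osd rm "$id"
done
```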

[cephadmin@cephtest003 ~]$ ceph osd tree
ID CLASS WEIGHT   TYPE NAME            STATUS REWEIGHT PRI-AFF
-1       63.39758 root default
-3        3.26669     host cephtest001
 0   hdd  1.08890         osd.0            up  1.00000 1.00000
 1   hdd  1.08890         osd.1            up  1.00000 1.00000
 2   hdd  1.08890         osd.2            up  1.00000 1.00000
-5        5.45547     host cephtest002
 3   hdd  1.09109         osd.3            up  1.00000 1.00000
 4   hdd  1.09109         osd.4            up  1.00000 1.00000
 5   hdd  1.09109         osd.5            up  1.00000 1.00000
 6   hdd  1.09109         osd.6            up  1.00000 1.00000
 7   hdd  1.09109         osd.7            up  1.00000 1.00000
-7       36.48442     host cephtest003
 8   hdd  0.09999         osd.8           DNE        0
 9   hdd  7.27689         osd.9           DNE        0
10   hdd  7.27689         osd.10          DNE        0
11   hdd  7.27689         osd.11          DNE        0
12   hdd  7.27689         osd.12          DNE        0
13   hdd  7.27689         osd.13          DNE        0
-9       18.19099     host cephtest004
14   hdd  3.63820         osd.14           up  1.00000 1.00000
15   hdd  3.63820         osd.15           up  1.00000 1.00000
16   hdd  3.63820         osd.16           up  1.00000 1.00000
17   hdd  3.63820         osd.17           up  1.00000 1.00000
18   hdd  3.63820         osd.18           up  1.00000 1.00000
[cephadmin@cephtest003 ~]$

3.3.3 Remove the node's OSDs from the CRUSH map

[cephadmin@cephtest003 ~]$ ceph osd crush rm osd.8
removed item id 8 name 'osd.8' from crush map
[cephadmin@cephtest003 ~]$ ceph osd crush rm osd.9
removed item id 9 name 'osd.9' from crush map
[cephadmin@cephtest003 ~]$ ceph osd crush rm osd.10
removed item id 10 name 'osd.10' from crush map
[cephadmin@cephtest003 ~]$ ceph osd crush rm osd.11
removed item id 11 name 'osd.11' from crush map
[cephadmin@cephtest003 ~]$ ceph osd crush rm osd.12
removed item id 12 name 'osd.12' from crush map
[cephadmin@cephtest003 ~]$ ceph osd crush rm osd.13
removed item id 13 name 'osd.13' from crush map
[cephadmin@cephtest003 ~]$ ceph osd tree
ID CLASS WEIGHT   TYPE NAME            STATUS REWEIGHT PRI-AFF
-1       26.91316 root default
-3        3.26669     host cephtest001
 0   hdd  1.08890         osd.0            up  1.00000 1.00000
 1   hdd  1.08890         osd.1            up  1.00000 1.00000
 2   hdd  1.08890         osd.2            up  1.00000 1.00000
-5        5.45547     host cephtest002
 3   hdd  1.09109         osd.3            up  1.00000 1.00000
 4   hdd  1.09109         osd.4            up  1.00000 1.00000
 5   hdd  1.09109         osd.5            up  1.00000 1.00000
 6   hdd  1.09109         osd.6            up  1.00000 1.00000
 7   hdd  1.09109         osd.7            up  1.00000 1.00000
-7              0     host cephtest003
-9       18.19099     host cephtest004
14   hdd  3.63820         osd.14           up  1.00000 1.00000
15   hdd  3.63820         osd.15           up  1.00000 1.00000
16   hdd  3.63820         osd.16           up  1.00000 1.00000
17   hdd  3.63820         osd.17           up  1.00000 1.00000
18   hdd  3.63820         osd.18           up  1.00000 1.00000
[cephadmin@cephtest003 ~]$

3.3.4 Delete the auth entries of the node's OSDs

[cephadmin@cephtest003 ~]$ ceph auth list | grep osd.8
installed auth entries:

osd.8
[cephadmin@cephtest003 ~]$ ceph auth del osd.8
updated

[cephadmin@cephtest003 ~]$ ceph auth del osd.9
updated
[cephadmin@cephtest003 ~]$ ceph auth del osd.10
updated
[cephadmin@cephtest003 ~]$ ceph auth del osd.11
updated
[cephadmin@cephtest003 ~]$ ceph auth del osd.12
updated
[cephadmin@cephtest003 ~]$ ceph auth del osd.13
updated
[cephadmin@cephtest003 ~]$

3.3.5 Remove the node's host bucket from the CRUSH map (ceph osd tree)

[cephadmin@cephtest003 ~]$ ceph osd tree
ID CLASS WEIGHT   TYPE NAME            STATUS REWEIGHT PRI-AFF
-1       26.91316 root default
-3        3.26669     host cephtest001
 0   hdd  1.08890         osd.0            up  1.00000 1.00000
 1   hdd  1.08890         osd.1            up  1.00000 1.00000
 2   hdd  1.08890         osd.2            up  1.00000 1.00000
-5        5.45547     host cephtest002
 3   hdd  1.09109         osd.3            up  1.00000 1.00000
 4   hdd  1.09109         osd.4            up  1.00000 1.00000
 5   hdd  1.09109         osd.5            up  1.00000 1.00000
 6   hdd  1.09109         osd.6            up  1.00000 1.00000
 7   hdd  1.09109         osd.7            up  1.00000 1.00000
-7              0     host cephtest003
-9       18.19099     host cephtest004
14   hdd  3.63820         osd.14           up  1.00000 1.00000
15   hdd  3.63820         osd.15           up  1.00000 1.00000
16   hdd  3.63820         osd.16           up  1.00000 1.00000
17   hdd  3.63820         osd.17           up  1.00000 1.00000
18   hdd  3.63820         osd.18           up  1.00000 1.00000
[cephadmin@cephtest003 ~]$ ceph osd crush rm  cephtest003
removed item id -7 name 'cephtest003' from crush map
[cephadmin@cephtest003 ~]$ ceph osd tree
ID CLASS WEIGHT   TYPE NAME            STATUS REWEIGHT PRI-AFF
-1       26.91316 root default
-3        3.26669     host cephtest001
 0   hdd  1.08890         osd.0            up  1.00000 1.00000
 1   hdd  1.08890         osd.1            up  1.00000 1.00000
 2   hdd  1.08890         osd.2            up  1.00000 1.00000
-5        5.45547     host cephtest002
 3   hdd  1.09109         osd.3            up  1.00000 1.00000
 4   hdd  1.09109         osd.4            up  1.00000 1.00000
 5   hdd  1.09109         osd.5            up  1.00000 1.00000
 6   hdd  1.09109         osd.6            up  1.00000 1.00000
 7   hdd  1.09109         osd.7            up  1.00000 1.00000
-9       18.19099     host cephtest004
14   hdd  3.63820         osd.14           up  1.00000 1.00000
15   hdd  3.63820         osd.15           up  1.00000 1.00000
16   hdd  3.63820         osd.16           up  1.00000 1.00000
17   hdd  3.63820         osd.17           up  1.00000 1.00000
18   hdd  3.63820         osd.18           up  1.00000 1.00000
[cephadmin@cephtest003 ~]$

3.3.6 Unmount all OSD mounts on the node (as root)

[root@cephtest003 osd]# ll -ah
total 0
drwxr-x---.  8 ceph ceph  94 Dec 25 09:29 .
drwxr-x---. 15 ceph ceph 222 Dec 24 16:49 ..
drwxrwxrwt   2 ceph ceph 180 Dec 31 09:09 ceph-10
drwxrwxrwt   2 ceph ceph 180 Dec 31 09:09 ceph-11
drwxrwxrwt   2 ceph ceph 180 Dec 31 09:09 ceph-12
drwxrwxrwt   2 ceph ceph 180 Dec 31 09:09 ceph-13
drwxrwxrwt   2 ceph ceph 180 Dec 31 09:09 ceph-8
drwxrwxrwt   2 ceph ceph 180 Dec 31 09:09 ceph-9
[root@cephtest003 osd]# cd ceph-8/
[root@cephtest003 ceph-8]# ll
total 24
lrwxrwxrwx 1 ceph ceph 93 Dec 31 09:09 block -> /dev/ceph-7db02a0b-fe54-46c9-ac16-22272491f5ab/osd-block-60527c79-6c65-4807-bb4b-93b1eafc586b
-rw------- 1 ceph ceph 37 Dec 31 09:09 ceph_fsid
-rw------- 1 ceph ceph 37 Dec 31 09:09 fsid
-rw------- 1 ceph ceph 55 Dec 31 09:09 keyring
-rw------- 1 ceph ceph  6 Dec 31 09:09 ready
-rw------- 1 ceph ceph 10 Dec 31 09:09 type
-rw------- 1 ceph ceph  2 Dec 31 09:09 whoami
[root@cephtest003 ceph-8]# cd ../
[root@cephtest003 osd]# cd ~
[root@cephtest003 ~]# umount /var/lib/ceph/osd/ceph-8
[root@cephtest003 ~]# ll /var/lib/ceph/osd/
total 0
drwxrwxrwt  2 ceph ceph 180 Dec 31 09:09 ceph-10
drwxrwxrwt  2 ceph ceph 180 Dec 31 09:09 ceph-11
drwxrwxrwt  2 ceph ceph 180 Dec 31 09:09 ceph-12
drwxrwxrwt  2 ceph ceph 180 Dec 31 09:09 ceph-13
drwxr-xr-x. 2 ceph ceph   6 Dec 25 09:28 ceph-8
drwxrwxrwt  2 ceph ceph 180 Dec 31 09:09 ceph-9
[root@cephtest003 ~]# umount /var/lib/ceph/osd/ceph-9
[root@cephtest003 ~]# umount /var/lib/ceph/osd/ceph-10
[root@cephtest003 ~]# umount /var/lib/ceph/osd/ceph-11
[root@cephtest003 ~]# umount /var/lib/ceph/osd/ceph-12
[root@cephtest003 ~]# umount /var/lib/ceph/osd/ceph-13
[root@cephtest003 ~]# cd /var/lib/ceph/osd/
[root@cephtest003 osd]# ll
total 0
drwxr-xr-x. 2 ceph ceph 6 Dec 25 09:29 ceph-10
drwxr-xr-x. 2 ceph ceph 6 Dec 25 09:29 ceph-11
drwxr-xr-x. 2 ceph ceph 6 Dec 25 09:29 ceph-12
drwxr-xr-x. 2 ceph ceph 6 Dec 25 09:29 ceph-13
drwxr-xr-x. 2 ceph ceph 6 Dec 25 09:28 ceph-8
drwxr-xr-x. 2 ceph ceph 6 Dec 25 09:28 ceph-9
[root@cephtest003 osd]# cd ceph-8/
[root@cephtest003 ceph-8]# ll
total 0
[root@cephtest003 ceph-8]#
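The unmounts above could equally be done in a loop; a sketch:

```bash
# Unmount the tmpfs data directories of osd.8 through osd.13 (run as root).
for id in 8 9 10 11 12 13; do
    umount "/var/lib/ceph/osd/ceph-${id}"
done
```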

(4) Remove the node's mon

3.4.1 Current cluster state

[root@cephtest003 ceph-8]# su - cephadmin
Last login: Fri Feb 19 13:54:40 CST 2021 on pts/1
[cephadmin@cephtest003 ~]$ ceph -s
  cluster:
    id:     6cd05235-66dd-4929-b697-1562d308d5c3
    health: HEALTH_WARN
            1 pools have many more objects per pg than average
            Degraded data redundancy: 17889/72348 objects degraded (24.726%), 97 pgs degraded, 97 pgs undersized

  services:
    mon: 4 daemons, quorum cephtest001,cephtest002,cephtest003,cephtest004 (age 13d)
    mgr: cephtest001(active, since 7w), standbys: cephtest002, cephtest003, cephtest004
    osd: 13 osds: 13 up (since 28m), 13 in (since 11d); 97 remapped pgs
    rgw: 1 daemon active (cephtest004)

  task status:

  data:
    pools:   8 pools, 400 pgs
    objects: 24.12k objects, 167 GiB
    usage:   406 GiB used, 27 TiB / 27 TiB avail
    pgs:     17889/72348 objects degraded (24.726%)
             16284/72348 objects misplaced (22.508%)
             302 active+clean
             95  active+undersized+degraded+remapped+backfill_wait
             2   active+undersized+degraded+remapped+backfilling
             1   active+remapped+backfill_wait

  io:
    client:   219 KiB/s rd, 243 op/s rd, 0 op/s wr
    recovery: 25 MiB/s, 4 objects/s

[cephadmin@cephtest003 ~]$

3.4.2 Remove the cephtest003 mon

[cephadmin@cephtest003 ~]$ ceph mon stat
e2: 4 mons at {cephtest001=[v2:10.3.176.10:3300/0,v1:10.3.176.10:6789/0],cephtest002=[v2:10.3.176.16:3300/0,v1:10.3.176.16:6789/0],cephtest003=[v2:10.3.176.44:3300/0,v1:10.3.176.44:6789/0],cephtest004=[v2:10.3.176.36:3300/0,v1:10.3.176.36:6789/0]}, election epoch 144, leader 0 cephtest001, quorum 0,1,2,3 cephtest001,cephtest002,cephtest003,cephtest004
[cephadmin@cephtest003 ~]$ ceph mon remove cephtest003
removing mon.cephtest003 at [v2:10.3.176.44:3300/0,v1:10.3.176.44:6789/0], there will be 3 monitors
[cephadmin@cephtest003 ~]$ ceph mon stat
e3: 3 mons at {cephtest001=[v2:10.3.176.10:3300/0,v1:10.3.176.10:6789/0],cephtest002=[v2:10.3.176.16:3300/0,v1:10.3.176.16:6789/0],cephtest004=[v2:10.3.176.36:3300/0,v1:10.3.176.36:6789/0]}, election epoch 150, leader 0 cephtest001, quorum 0,1,2 cephtest001,cephtest002,cephtest004
[cephadmin@cephtest003 ~]$

3.4.3 Update ceph.conf

On the deploy node, edit /home/cephadmin/cephcluster/ceph.conf (the config file that will be pushed out to the nodes) and remove the cephtest003-related entries.

[cephadmin@cephtest001 cephcluster]$ pwd
/home/cephadmin/cephcluster
[cephadmin@cephtest001 cephcluster]$ cat ceph.conf
[global]
fsid = 6cd05235-66dd-4929-b697-1562d308d5c3
mon_initial_members = cephtest001, cephtest002, cephtest004
mon_host = 10.3.176.10,10.3.176.16,10.3.176.36
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

public network = 10.3.176.0/22
cluster network = 10.3.176.0/22

[cephadmin@cephtest001 cephcluster]$
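Editing the file by hand is fine; for reference, a hypothetical sed one-liner that makes the same change on this particular file:

```bash
# Drop cephtest003 from mon_initial_members and its IP from mon_host.
sed -i -e 's/, cephtest003//' -e 's/,10.3.176.44//' /home/cephadmin/cephcluster/ceph.conf
```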

Then push the config file to the nodes in bulk.

3.4.4 Push the updated /home/cephadmin/cephcluster/ceph.conf to the remaining nodes (run on the deploy node)

[cephadmin@cephtest001 cephcluster]$ ceph-deploy --overwrite-conf admin cephtest001 cephtest002  cephtest004

3.4.5 Fix the ownership of /etc/ceph (run on every node)

[cephadmin@cephtest004 ~]$ sudo chown -R cephadmin:cephadmin /etc/ceph
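If passwordless SSH and sudo are set up for cephadmin (typical for a ceph-deploy admin user, though that is an assumption here), the ownership fix can be pushed to all remaining nodes from the deploy node:

```bash
# Run the chown on every remaining node from cephtest001.
for host in cephtest001 cephtest002 cephtest004; do
    ssh "$host" sudo chown -R cephadmin:cephadmin /etc/ceph
done
```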

(5) Stop the mgr

Simply stop the mgr service on the node being removed.

[cephadmin@cephtest003 ~]$ ceph -s
  cluster:
    id:     6cd05235-66dd-4929-b697-1562d308d5c3
    health: HEALTH_WARN
            1 pools have many more objects per pg than average

  services:
    mon: 3 daemons, quorum cephtest001,cephtest002,cephtest004 (age 23m)
    mgr: cephtest001(active, since 7w), standbys: cephtest002, cephtest003, cephtest004
    osd: 13 osds: 13 up (since 71m), 13 in (since 11d); 1 remapped pgs
    rgw: 1 daemon active (cephtest004)

  task status:

  data:
    pools:   8 pools, 400 pgs
    objects: 24.12k objects, 167 GiB
    usage:   517 GiB used, 26 TiB / 27 TiB avail
    pgs:     28/72348 objects misplaced (0.039%)
             399 active+clean
             1   active+clean+remapped

  io:
    client:   77 KiB/s rd, 1.3 KiB/s wr, 86 op/s rd, 0 op/s wr

[cephadmin@cephtest003 ~]$ systemctl status ceph-mgr.target
● ceph-mgr.target - ceph target allowing to start/stop all ceph-mgr@.service instances at once
   Loaded: loaded (/usr/lib/systemd/system/ceph-mgr.target; enabled; vendor preset: enabled)
   Active: active since Thu 2020-12-31 09:09:07 CST; 1 months 19 days ago
[cephadmin@cephtest003 ~]$ systemctl stop ceph-mgr.target
==== AUTHENTICATING FOR org.freedesktop.systemd1.manage-units ===
Authentication is required to manage system services or units.
Authenticating as: root
Password:
==== AUTHENTICATION COMPLETE ===

Wait a moment, then run `ceph -s` again:

[cephadmin@cephtest003 ~]$ ceph -s
  cluster:
    id:     6cd05235-66dd-4929-b697-1562d308d5c3
    health: HEALTH_WARN
            1 pools have many more objects per pg than average

  services:
    mon: 3 daemons, quorum cephtest001,cephtest002,cephtest004 (age 26m)
    mgr: cephtest001(active, since 7w), standbys: cephtest002, cephtest004
    osd: 13 osds: 13 up (since 75m), 13 in (since 11d); 1 remapped pgs
    rgw: 1 daemon active (cephtest004)

  task status:

  data:
    pools:   8 pools, 400 pgs
    objects: 24.12k objects, 167 GiB
    usage:   517 GiB used, 26 TiB / 27 TiB avail
    pgs:     28/72348 objects misplaced (0.039%)
             399 active+clean
             1   active+clean+remapped

  io:
    client:   7.2 KiB/s rd, 27 KiB/s wr, 10 op/s rd, 7 op/s wr

[cephadmin@cephtest003 ~]$
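Optionally (not part of the original steps): if cephtest003 will not rejoin the cluster, the Ceph services on that node can also be disabled so they do not start again after a reboot:

```bash
# On cephtest003: keep the removed daemons from starting at boot.
sudo systemctl disable ceph-osd.target ceph-mon.target ceph-mgr.target
```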
