Deploying Ceph 17.2.0 distributed block storage with cephadm on Anolis OS 8.6 QU1 (Part 5): adding OSDs


This series deploys Ceph 17.2.0 distributed block storage with cephadm on Anolis OS 8.6 QU1, building a storage system on a domestically developed operating system. Earlier parts bootstrapped the first mon node and the dashboard, performed the base installation on the other nodes, and added the remaining hosts and the mgr daemons. With the mgr daemons in place, this part covers step five: adding the OSDs.

I. Check the current cluster status

[root@ceph1 opt]# ceph -s
  cluster:
    id:     58a31a00-bf04-11ed-a192-000e1e99b662
    health: HEALTH_WARN
            clock skew detected on mon.ceph2, mon.ceph3
            OSD count 0 < osd_pool_default_size 3
 
  services:
    mon: 3 daemons, quorum ceph1,ceph2,ceph3 (age 46s)
    mgr: ceph1.biobao(active, since 32h), standbys: ceph2.dojvls, ceph3.hlsirv
    osd: 0 osds: 0 up, 0 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

The output shows clock skew on ceph2 and ceph3. After running "chronyc sources -v" on the affected hosts, check again:

[root@ceph1 ~]# ceph -s
  cluster:
    id:     58a31a00-bf04-11ed-a192-000e1e99b662
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 3
 
  services:
    mon: 3 daemons, quorum ceph1,ceph2,ceph3 (age 8m)
    mgr: ceph3.hlsirv(active, since 8m), standbys: ceph1.biobao, ceph2.dojvls
    osd: 0 osds: 0 up, 0 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs: 

The clock-skew warning is gone. Keeping the clocks of all Ceph hosts in sync is essential; otherwise you may see odd problems such as daemons unexpectedly being marked out or the cluster responding slowly. The chrony configuration is described in the first part of this series.
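The following is a minimal sketch, not taken from the original transcript, of how clock synchronisation can be checked on a host and, if necessary, stepped immediately; whether forcing a step with chronyc makestep is acceptable depends on what else is running on the node:

# show the configured NTP sources; the selected source is marked with '*'
chronyc sources -v
# show the current offset and overall sync state
chronyc tracking
# if the clock is badly skewed, step it immediately (use with care on a busy node)
chronyc makestep
# then confirm on any mon node that the clock-skew warning has cleared
ceph -s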

II. Reset the data on the server disks

  1. Check the disks on the server

[root@ceph1 opt]# cat /proc/partitions 
major minor  #blocks  name

   8       32  878444544 sdc
   8       33     614400 sdc1
   8       34    1048576 sdc2
   8       35  876779520 sdc3
   8        0  878444544 sda
   8        1     614400 sda1
   8        2    1048576 sda2
   8        3  876779520 sda3
   8       16  878444544 sdb
   8       48  878444544 sdd
   8       64  878444544 sde
   8       80  878444544 sdf
   8       96  878444544 sdg
   8      160  878444544 sdk
   8      144  878444544 sdj
   8      176  878444544 sdl
   8      112  878444544 sdh
   8      128  878444544 sdi
   8      224  878444544 sdo
   8      192  878444544 sdm
   8      208  878444544 sdn
   8      240  878444544 sdp
  65        0  878444544 sdq
  65       80  878444544 sdv
  65       48  878444544 sdt
  65       16  878444544 sdr
  65       64  878444544 sdu
  65       96  878444544 sdw
  65       32  878444544 sds
 253        0   73400320 dm-0
 253        1    4194304 dm-1
 253        2  878440448 dm-2
 253        3  878440448 dm-3
 253        4  878440448 dm-4
 253        5  878440448 dm-5
 253        6  878440448 dm-6
 253        8  878440448 dm-8
 253        7  878440448 dm-7
 253        9  878440448 dm-9
 253       10  878440448 dm-10
 253       11  878440448 dm-11
 253       12  878440448 dm-12
 253       14  878440448 dm-14
 253       13  878440448 dm-13
 253       15  878440448 dm-15
 253       16  878440448 dm-16
 253       17  878440448 dm-17
 253       18  878440448 dm-18
 253       19  878440448 dm-19
 253       20  878440448 dm-20
 253       21  878440448 dm-21
 253       22    4194304 dm-22
 253       23  799182848 dm-23
 253       24   73400320 dm-24
 253       25  799182848 dm-25
 253       26  878440448 dm-26

As shown, this host has disks sda through sdw. The operating system is installed on sda, but sdc also carries three partitions, presumably left over from a previous installation, so the data on these disks must be wiped before they can be used.
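Before wiping anything, it can be worth confirming what is actually on a disk. A small sketch, not part of the original procedure; wipefs -n only reports the signatures it would erase and does not modify the disk:

# list filesystems, labels and partitions on the suspect disk
lsblk -f /dev/sdc
# dry run: show which signatures wipefs would remove, without erasing them
wipefs -n /dev/sdc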

  2. Wipe the data on the disks that will become OSDs, in this case sdb through sdw, using a small script

[root@ceph1 opt]# vi bat_wipe_fs
[root@ceph1 opt]# cat bat_wipe_fs 
#!/bin/bash
##__author__='daigjianbing'

# Wipe all filesystem, LVM and partition-table signatures from /dev/sdb .. /dev/sdw
for devstr in {b..w};
do
    dev="/dev/sd$devstr"
    wipefs -af $dev
done
[root@ceph1 opt]# sh bat_wipe_fs 
/dev/sdb:8 个字节已擦除,位置偏移为 0x00000218 (LVM2_member):4c 56 4d 32 20 30 30 31
/dev/sdc:8 个字节已擦除,位置偏移为 0x00000200 (gpt):45 46 49 20 50 41 52 54
/dev/sdc:8 个字节已擦除,位置偏移为 0xd16ffffe00 (gpt):45 46 49 20 50 41 52 54
/dev/sdc:2 个字节已擦除,位置偏移为 0x000001fe (PMBR):55 aa
/dev/sdd:8 个字节已擦除,位置偏移为 0x00000218 (LVM2_member):4c 56 4d 32 20 30 30 31
/dev/sde:8 个字节已擦除,位置偏移为 0x00000218 (LVM2_member):4c 56 4d 32 20 30 30 31
/dev/sdf:8 个字节已擦除,位置偏移为 0x00000218 (LVM2_member):4c 56 4d 32 20 30 30 31
/dev/sdg:8 个字节已擦除,位置偏移为 0x00000218 (LVM2_member):4c 56 4d 32 20 30 30 31
/dev/sdh:8 个字节已擦除,位置偏移为 0x00000218 (LVM2_member):4c 56 4d 32 20 30 30 31
/dev/sdi:8 个字节已擦除,位置偏移为 0x00000218 (LVM2_member):4c 56 4d 32 20 30 30 31
/dev/sdj:8 个字节已擦除,位置偏移为 0x00000218 (LVM2_member):4c 56 4d 32 20 30 30 31
/dev/sdk:8 个字节已擦除,位置偏移为 0x00000218 (LVM2_member):4c 56 4d 32 20 30 30 31
/dev/sdl:8 个字节已擦除,位置偏移为 0x00000218 (LVM2_member):4c 56 4d 32 20 30 30 31
/dev/sdm:8 个字节已擦除,位置偏移为 0x00000218 (LVM2_member):4c 56 4d 32 20 30 30 31
/dev/sdn:8 个字节已擦除,位置偏移为 0x00000218 (LVM2_member):4c 56 4d 32 20 30 30 31
/dev/sdo:8 个字节已擦除,位置偏移为 0x00000218 (LVM2_member):4c 56 4d 32 20 30 30 31
/dev/sdp:8 个字节已擦除,位置偏移为 0x00000218 (LVM2_member):4c 56 4d 32 20 30 30 31
/dev/sdq:8 个字节已擦除,位置偏移为 0x00000218 (LVM2_member):4c 56 4d 32 20 30 30 31
/dev/sdr:8 个字节已擦除,位置偏移为 0x00000218 (LVM2_member):4c 56 4d 32 20 30 30 31
/dev/sds:8 个字节已擦除,位置偏移为 0x00000218 (LVM2_member):4c 56 4d 32 20 30 30 31
/dev/sdt:8 个字节已擦除,位置偏移为 0x00000218 (LVM2_member):4c 56 4d 32 20 30 30 31
/dev/sdu:8 个字节已擦除,位置偏移为 0x00000218 (LVM2_member):4c 56 4d 32 20 30 30 31
/dev/sdv:8 个字节已擦除,位置偏移为 0x00000218 (LVM2_member):4c 56 4d 32 20 30 30 31
/dev/sdw:8 个字节已擦除,位置偏移为 0x00000218 (LVM2_member):4c 56 4d 32 20 30 30 31
  3. After wiping the disks, reboot the server. The wipe-and-reboot procedure has to be repeated on every host, ceph1 through ceph4 (a sketch of driving this over SSH follows below).
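A minimal sketch of driving the wipe from ceph1 for all four hosts, assuming the bat_wipe_fs script has already been copied to /opt on each host and that password-less root SSH between the hosts is available (cephadm itself does not guarantee this):

#!/bin/bash
# run the wipe script on every host in turn, then reboot that host
for hostid in {1..4}; do
    host="ceph$hostid"
    ssh root@"$host" 'sh /opt/bat_wipe_fs && reboot'
done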

III. Batch-add the disks as OSDs

  1. Add them with a script (hosts ceph1-ceph4, disks sdb-sdw in this example):

[root@ceph1 opt]# vi bat_add_osd.sh
[root@ceph1 opt]# cat bat_add_osd.sh 
#!/bin/bash
##__author__='daigjianbing'
# For every disk sdb..sdw, ask the orchestrator to create an OSD on each host ceph1..ceph4
for devstr in {b..w};
do
    dev="/dev/sd$devstr"
    for hostid in {1..4};
    do
        hostname="ceph$hostid"
        echo "ceph orch daemon add osd $hostname:$dev"
        ceph orch daemon add osd $hostname:$dev
    done
done
  2. The script only needs to be run on ceph1:

[root@ceph1 opt]# sh bat_add_osd.sh 
ceph orch daemon add osd ceph1:/dev/sdb
ceph orch daemon add osd ceph2:/dev/sdb
Created osd(s) 0 on host 'ceph2'
ceph orch daemon add osd ceph3:/dev/sdb
Created osd(s) 1 on host 'ceph3'
ceph orch daemon add osd ceph4:/dev/sdb
ceph orch daemon add osd ceph1:/dev/sdc
ceph orch daemon add osd ceph2:/dev/sdc
Created osd(s) 2 on host 'ceph2'
ceph orch daemon add osd ceph3:/dev/sdc
...
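As an aside, and not used in this walkthrough, cephadm can also be told to consume every eligible device declaratively instead of adding each disk by hand; this gives less control over which devices are selected:

# let the orchestrator create an OSD on every available, unused device it finds
ceph orch apply osd --all-available-devices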
  3. After it finished, a problem became apparent:

[root@ceph1 ~]# ceph -s
  cluster:
    id:     58a31a00-bf04-11ed-a192-000e1e99b662
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum ceph1,ceph2,ceph3 (age 64m)
    mgr: ceph3.hlsirv(active, since 64m), standbys: ceph1.biobao, ceph2.dojvls
    osd: 44 osds: 44 up (since 7m), 44 in (since 8m); 1 remapped pgs
 
  data:
    pools:   1 pools, 1 pgs
    objects: 2 objects, 577 KiB
    usage:   1.2 GiB used, 36 TiB / 36 TiB avail
    pgs:     2/6 objects misplaced (33.333%)
             1 active+clean+remapped
 
[root@ceph1 ~]# ceph osd tree
ID  CLASS  WEIGHT    TYPE NAME       STATUS  REWEIGHT  PRI-AFF
-1         35.99640  root default                             
-3         17.99820      host ceph2                           
 0    hdd   0.81810          osd.0       up   1.00000  1.00000
 2    hdd   0.81810          osd.2       up   1.00000  1.00000
 4    hdd   0.81810          osd.4       up   1.00000  1.00000
 6    hdd   0.81810          osd.6       up   1.00000  1.00000
 8    hdd   0.81810          osd.8       up   1.00000  1.00000
10    hdd   0.81810          osd.10      up   1.00000  1.00000
12    hdd   0.81810          osd.12      up   1.00000  1.00000
14    hdd   0.81810          osd.14      up   1.00000  1.00000
16    hdd   0.81810          osd.16      up   1.00000  1.00000
18    hdd   0.81810          osd.18      up   1.00000  1.00000
20    hdd   0.81810          osd.20      up   1.00000  1.00000
22    hdd   0.81810          osd.22      up   1.00000  1.00000
24    hdd   0.81810          osd.24      up   1.00000  1.00000
26    hdd   0.81810          osd.26      up   1.00000  1.00000
28    hdd   0.81810          osd.28      up   1.00000  1.00000
30    hdd   0.81810          osd.30      up   1.00000  1.00000
32    hdd   0.81810          osd.32      up   1.00000  1.00000
34    hdd   0.81810          osd.34      up   1.00000  1.00000
36    hdd   0.81810          osd.36      up   1.00000  1.00000
38    hdd   0.81810          osd.38      up   1.00000  1.00000
40    hdd   0.81810          osd.40      up   1.00000  1.00000
42    hdd   0.81810          osd.42      up   1.00000  1.00000
-5         17.99820      host ceph3                           
 1    hdd   0.81810          osd.1       up   1.00000  1.00000
 3    hdd   0.81810          osd.3       up   1.00000  1.00000
 5    hdd   0.81810          osd.5       up   1.00000  1.00000
 7    hdd   0.81810          osd.7       up   1.00000  1.00000
 9    hdd   0.81810          osd.9       up   1.00000  1.00000
11    hdd   0.81810          osd.11      up   1.00000  1.00000
13    hdd   0.81810          osd.13      up   1.00000  1.00000
15    hdd   0.81810          osd.15      up   1.00000  1.00000
17    hdd   0.81810          osd.17      up   1.00000  1.00000
19    hdd   0.81810          osd.19      up   1.00000  1.00000
21    hdd   0.81810          osd.21      up   1.00000  1.00000
23    hdd   0.81810          osd.23      up   1.00000  1.00000
25    hdd   0.81810          osd.25      up   1.00000  1.00000
27    hdd   0.81810          osd.27      up   1.00000  1.00000
29    hdd   0.81810          osd.29      up   1.00000  1.00000
31    hdd   0.81810          osd.31      up   1.00000  1.00000
33    hdd   0.81810          osd.33      up   1.00000  1.00000
35    hdd   0.81810          osd.35      up   1.00000  1.00000
37    hdd   0.81810          osd.37      up   1.00000  1.00000
39    hdd   0.81810          osd.39      up   1.00000  1.00000
41    hdd   0.81810          osd.41      up   1.00000  1.00000
43    hdd   0.81810          osd.43      up   1.00000  1.00000
[root@ceph1 ~]# 

As shown above, OSDs were created only on ceph2 and ceph3; none were created on ceph1 or ceph4.
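A check worth running at this point, although it does not appear in the original transcript, is the orchestrator's device inventory, which shows which disks each host reports as available and why a device might be rejected:

# list the devices cephadm has discovered on each host, with availability flags
ceph orch device ls
# optionally restrict the output to one host and show more detail
ceph orch device ls ceph1 --wide --refresh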

IV. Troubleshooting

  1. The ceph -s output above, "mgr: ceph3.hlsirv(active, since 69m), standbys: ceph1.biobao, ceph2.dojvls", shows that while the hosts were being rebooted for the disk wipe, the active mgr failed over to ceph3. Try rebooting ceph3 to move the active mgr back to ceph1 (a less disruptive alternative is sketched below).
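A less disruptive way to move the active mgr, instead of rebooting the whole host, is to fail the current active daemon and let a standby take over. This is only a sketch of the alternative, not what was done here:

# ask the cluster to fail over the currently active mgr; a standby is promoted
ceph mgr fail ceph3.hlsirv
# confirm which mgr is active now
ceph -s | grep mgr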

  2. Modify the script to add OSDs only on hosts 1 and 4:

[root@ceph1 opt]# cat bat_add_osd.sh.c1c4 
#!/bin/bash
##__author__='daigjianbing'
for devstr in {b..w};
do
    dev="/dev/sd$devstr"
    for hostid in 1 4;
    do
        hostname="ceph$hostid"
        echo "ceph orch daemon add osd $hostname:$dev"
        ceph orch daemon add osd $hostname:$dev
    done
done
[root@ceph1 opt]# sh bat_add_osd.sh.c1c4 
ceph orch daemon add osd ceph1:/dev/sdb
ceph orch daemon add osd ceph4:/dev/sdb
ceph orch daemon add osd ceph1:/dev/sdc
ceph orch daemon add osd ceph4:/dev/sdc
ceph orch daemon add osd ceph1:/dev/sdd
ceph orch daemon add osd ceph4:/dev/sdd
ceph orch daemon add osd ceph1:/dev/sde
ceph orch daemon add osd ceph4:/dev/sde
ceph orch daemon add osd ceph1:/dev/sdf
  3. The script finished almost immediately, but still no OSDs were created on ceph1 or ceph4. This did not happen when deploying Ceph v17.0.0. Check with ceph orch ps:

[root@ceph1 ceph]# ceph orch ps ceph1   
No daemons reported
[root@ceph1 ceph]# ceph orch ps ceph2
NAME              HOST   PORTS   STATUS    REFRESHED  AGE  MEM USE  MEM LIM  VERSION    IMAGE ID   
mgr.ceph2.dojvls  ceph2  *:8443  starting          -    -        -        -  <unknown>  <unknown>  
mon.ceph2         ceph2          starting          -    -        -    2048M  <unknown>  <unknown>  
osd.0             ceph2          starting          -    -        -    4096M  <unknown>  <unknown>  
osd.10            ceph2          starting          -    -        -    4096M  <unknown>  <unknown>  
osd.12            ceph2          starting          -    -        -    4096M  <unknown>  <unknown>  
osd.14            ceph2          starting          -    -        -    4096M  <unknown>  <unknown>  
osd.16            ceph2          starting          -    -        -    4096M  <unknown>  <unknown>  
osd.18            ceph2          starting          -    -        -    4096M  <unknown>  <unknown>  
osd.2             ceph2          starting          -    -        -    4096M  <unknown>  <unknown>  
osd.20            ceph2          starting          -    -        -    4096M  <unknown>  <unknown>  
osd.22            ceph2          starting          -    -        -    4096M  <unknown>  <unknown>  
osd.24            ceph2          starting          -    -        -    4096M  <unknown>  <unknown>  
osd.26            ceph2          starting          -    -        -    4096M  <unknown>  <unknown>  
osd.28            ceph2          starting          -    -        -    4096M  <unknown>  <unknown>  
osd.30            ceph2          starting          -    -        -    4096M  <unknown>  <unknown>  
osd.32            ceph2          starting          -    -        -    4096M  <unknown>  <unknown>  
osd.34            ceph2          starting          -    -        -    4096M  <unknown>  <unknown>  
osd.36            ceph2          starting          -    -        -    4096M  <unknown>  <unknown>  
osd.38            ceph2          starting          -    -        -    4096M  <unknown>  <unknown>  
osd.4             ceph2          starting          -    -        -    4096M  <unknown>  <unknown>  
osd.40            ceph2          starting          -    -        -    4096M  <unknown>  <unknown>  
osd.42            ceph2          starting          -    -        -    4096M  <unknown>  <unknown>  
osd.6             ceph2          starting          -    -        -    4096M  <unknown>  <unknown>  
osd.8             ceph2          starting          -    -        -    4096M  <unknown>  <unknown>  
[root@ceph1 ceph]# ceph orch ps ceph3
NAME              HOST   PORTS   STATUS    REFRESHED  AGE  MEM USE  MEM LIM  VERSION    IMAGE ID   
mgr.ceph3.hlsirv  ceph3  *:8443  starting          -    -        -        -  <unknown>  <unknown>  
mon.ceph3         ceph3          starting          -    -        -    2048M  <unknown>  <unknown>  
osd.1             ceph3          starting          -    -        -    4096M  <unknown>  <unknown>  
osd.11            ceph3          starting          -    -        -    4096M  <unknown>  <unknown>  
osd.13            ceph3          starting          -    -        -    4096M  <unknown>  <unknown>  
osd.15            ceph3          starting          -    -        -    4096M  <unknown>  <unknown>  
osd.17            ceph3          starting          -    -        -    4096M  <unknown>  <unknown>  
osd.19            ceph3          starting          -    -        -    4096M  <unknown>  <unknown>  
osd.21            ceph3          starting          -    -        -    4096M  <unknown>  <unknown>  
osd.23            ceph3          starting          -    -        -    4096M  <unknown>  <unknown>  
osd.25            ceph3          starting          -    -        -    4096M  <unknown>  <unknown>  
osd.27            ceph3          starting          -    -        -    4096M  <unknown>  <unknown>  
osd.29            ceph3          starting          -    -        -    4096M  <unknown>  <unknown>  
osd.3             ceph3          starting          -    -        -    4096M  <unknown>  <unknown>  
osd.31            ceph3          starting          -    -        -    4096M  <unknown>  <unknown>  
osd.33            ceph3          starting          -    -        -    4096M  <unknown>  <unknown>  
osd.35            ceph3          starting          -    -        -    4096M  <unknown>  <unknown>  
osd.37            ceph3          starting          -    -        -    4096M  <unknown>  <unknown>  
osd.39            ceph3          starting          -    -        -    4096M  <unknown>  <unknown>  
osd.41            ceph3          starting          -    -        -    4096M  <unknown>  <unknown>  
osd.43            ceph3          starting          -    -        -    4096M  <unknown>  <unknown>  
osd.5             ceph3          starting          -    -        -    4096M  <unknown>  <unknown>  
osd.7             ceph3          starting          -    -        -    4096M  <unknown>  <unknown>  
osd.9             ceph3          starting          -    -        -    4096M  <unknown>  <unknown>  
[root@ceph1 ceph]# ceph orch ps ceph4
No daemons reported

So ceph1 and ceph4 report no Ceph daemons at all, which is odd: ceph1 is certainly running at least the mgr and mon services, and their containers are up, so why does the command not see them?
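A few checks that can help narrow this down; this is a sketch and none of these commands appear in the original transcript:

# from any admin node: which hosts does the orchestrator know about, and are any marked offline?
ceph orch host ls
# recent cephadm log messages often explain why a host is not being refreshed
ceph log last cephadm
# on the affected host itself: list the daemons cephadm believes it has deployed there
cephadm ls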

  4. Try deploying an additional mgr on ceph1:

[root@ceph1 ceph]# ceph orch daemon add mgr ceph1:192.168.188.1/24
Deployed mgr.ceph1.gknilc on host 'ceph1'
[root@ceph1 ceph]# ceph orch ps --daemon-type mgr                 
NAME              HOST   PORTS   STATUS    REFRESHED  AGE  MEM USE  MEM LIM  VERSION    IMAGE ID   
mgr.ceph1.gknilc  ceph1  *:8443  starting          -    -        -        -  <unknown>  <unknown>  
mgr.ceph2.dojvls  ceph2  *:8443  starting          -    -        -        -  <unknown>  <unknown>  
mgr.ceph3.hlsirv  ceph3  *:8443  starting          -    -        -        -  <unknown>  <unknown>  
[root@ceph1 ceph]# ceph -s
  cluster:
    id:     58a31a00-bf04-11ed-a192-000e1e99b662
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum ceph1,ceph2,ceph3 (age 66m)
    mgr: ceph1.biobao(active, since 66m), standbys: ceph3.hlsirv, ceph2.dojvls, ceph1.gknilc
    osd: 44 osds: 44 up (since 65m), 44 in (since 91m); 1 remapped pgs
 
  data:
    pools:   1 pools, 1 pgs
    objects: 2 objects, 577 KiB
    usage:   467 MiB used, 36 TiB / 36 TiB avail
    pgs:     2/6 objects misplaced (33.333%)
             1 active+clean+remapped
 
  5. Try adding OSDs on ceph1 again:

[root@ceph1 opt]# vi bat_add_osd.sh.c1
[root@ceph1 opt]# cat bat_add_osd.sh.c1  
#!/bin/bash
##__author__='daigjianbing'
for devstr in {b..w};
do
    dev="/dev/sd$devstr"
    for hostid in 1 ;
    do
        hostname="ceph$hostid"
        echo "ceph orch daemon add osd $hostname:$dev"
        ceph orch daemon add osd $hostname:$dev
    done
done
[root@ceph1 opt]# sh bat_add_osd.sh.c1
ceph orch daemon add osd ceph1:/dev/sdb
Created osd(s) 44 on host 'ceph1'
ceph orch daemon add osd ceph1:/dev/sdc
...
ceph orch daemon add osd ceph1:/dev/sdw
Created osd(s) 65 on host 'ceph1'
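A quick way to confirm that the new OSDs really landed on the intended host is the per-host CRUSH view with utilisation; a sketch, not shown in the original transcript:

# show the CRUSH tree grouped by host, with size and usage per OSD
ceph osd df tree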
  6. Add the OSDs on ceph4 in the same way:

[root@ceph1 ceph]# ceph orch daemon add mgr ceph4:192.168.188.4/24
Deployed mgr.ceph4.nnhlgm on host 'ceph4'
[root@ceph1 opt]# cat bat_add_osd.sh.c4
#!/bin/bash
##__author__='daigjianbing'
for devstr in {b..w};
do
    dev="/dev/sd$devstr"
    for hostid in 4 ;
    do
        hostname="ceph$hostid"
        echo "ceph orch daemon add osd $hostname:$dev"
        ceph orch daemon add osd $hostname:$dev
    done
done
[root@ceph1 opt]# sh bat_add_osd.sh.c4
ceph orch daemon add osd ceph4:/dev/sdb
Created osd(s) 66 on host 'ceph4'
...
ceph orch daemon add osd ceph4:/dev/sdw
Created osd(s) 87 on host 'ceph4'
  7. Check the cluster status:

[root@ceph1 opt]# ceph -s
  cluster:
    id:     58a31a00-bf04-11ed-a192-000e1e99b662
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum ceph1,ceph2,ceph3 (age 100m)
    mgr: ceph1.biobao(active, since 101m), standbys: ceph3.hlsirv, ceph2.dojvls, ceph1.gknilc, ceph4.nnhlgm
    osd: 88 osds: 88 up (since 3m), 88 in (since 3m)
 
  data:
    pools:   1 pools, 1 pgs
    objects: 2 objects, 577 KiB
    usage:   3.8 GiB used, 72 TiB / 72 TiB avail
    pgs:     1 active+clean
 
[root@ceph1 opt]# 

All 88 disks across the four hosts have now been added as OSDs, but the output shows five mgr daemons, :-(.

 
[root@ceph1 opt]# ceph orch ps --daemon-type mgr
NAME              HOST   PORTS   STATUS    REFRESHED  AGE  MEM USE  MEM LIM  VERSION    IMAGE ID   
mgr.ceph1.gknilc  ceph1  *:8443  starting          -    -        -        -  <unknown>  <unknown>  
mgr.ceph2.dojvls  ceph2  *:8443  starting          -    -        -        -  <unknown>  <unknown>  
mgr.ceph3.hlsirv  ceph3  *:8443  starting          -    -        -        -  <unknown>  <unknown>  
mgr.ceph4.nnhlgm  ceph4  *:8443  starting          -    -        -        -  <unknown>  <unknown> 
  8. I would rather go back to three mgrs, so try removing the two mgr daemons that were added later on ceph1 and ceph4:

[root@ceph1 opt]# ceph orch daemon rm mgr.ceph4.nnhlgm
Removed mgr.ceph4.nnhlgm from host 'ceph4'
[root@ceph1 opt]# ceph orch daemon rm mgr.ceph1.gknilc
Removed mgr.ceph1.gknilc from host 'ceph1'
[root@ceph1 opt]# ceph -s
  cluster:
    id:     58a31a00-bf04-11ed-a192-000e1e99b662
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum ceph1,ceph2,ceph3 (age 3h)
    mgr: ceph1.biobao(active, since 3h), standbys: ceph3.hlsirv, ceph2.dojvls
    osd: 88 osds: 88 up (since 98m), 88 in (since 98m)
 
  data:
    pools:   1 pools, 1 pgs
    objects: 2 objects, 577 KiB
    usage:   3.8 GiB used, 72 TiB / 72 TiB avail
    pgs:     1 active+clean
 
[root@ceph1 opt]# 
  9. That seems to do it. Reboot all the hosts and check that everything still runs normally. After two reboots there were no problems, so this should be good enough, :-).
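For the record, a declarative alternative to adding and removing mgr daemons one by one is to set the mgr service placement explicitly, so that the orchestrator keeps exactly the intended set. A sketch, not what was done in this walkthrough:

# tell the orchestrator that exactly these three hosts should run a mgr
ceph orch apply mgr --placement="ceph1 ceph2 ceph3"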

V. Check the dashboard

The dashboard also shows everything running fine, so I will leave it at that. If anyone has a clearer understanding of the mgr listing issue and of the trouble adding OSDs on ceph1/ceph4, advice is welcome (my QQ: 14518215). That concludes this part of the series.
