Ceph Nautilus horizontal scale-out

0 Revision History

No.   Revision          Date
1     Initial version   2021/1/29

1 Abstract

The previous article, on installing Ceph Nautilus on three physical machines running Linux, covered installing Ceph on CentOS 7.6. This article covers horizontally scaling out that Ceph cluster by adding a fourth node.

2 Environment

(1) Physical machine information

2.1.1 Machine information

Brand       Physical machine configuration
DELL R730   CPU: Intel(R) Xeon(R) E5-2650 v4 @ 2.20GHz x 2; RAM: 128G (16G x 8); Disks: 2T SAS x 2, 4T x 4

2.1.2 RAID layout

The system disk uses RAID 1. For the data disks, if the RAID card supports single-disk RAID 0, configure each data disk as its own RAID 0; if it does not, expose each disk as a single-disk volume. Do not combine several data disks into one RAID 0 array.

(2) Deployment plan

Hostname                      Status                   IP           Disks                                                                           Roles
cephtest001.ceph.kxdigit.com  existing                 10.3.176.10  system: /dev/sda; data: /dev/sdb /dev/sdc /dev/sdd                              ceph-deploy, monitor, mgr, mds, osd
cephtest002.ceph.kxdigit.com  existing                 10.3.176.16  system: /dev/sda; data: /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf            monitor, mgr, mds, osd
cephtest003.ceph.kxdigit.com  existing                 10.3.176.44  system: /dev/sda; data: /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg   monitor, mgr, mds, osd
cephtest004.ceph.kxdigit.com  added in this scale-out  10.3.176.36  system: /dev/sda; data: /dev/sdb /dev/sdc /dev/sdd /dev/sde                     monitor, mgr, mds, osd

3 Implementation

(1) Base installation

The base installation covers DNS, yum repositories, firewall, SELinux, the time service, setting the hostname, creating the cephadmin user, and configuring passwordless login from the deploy node.
For this scale-out I did not rename the network interface to eth0, and the node again uses only a single NIC.

3.1.1 DNS, yum, firewall, SELinux, and time service

These are all applied in bulk with Ansible playbooks, so I will not go into the details here.

[dev@10-3-170-32 base]$ ansible-playbook modifydns.yml

[dev@10-3-170-32 base]$ ansible-playbook updateyum.yml

[dev@10-3-170-32 base]$ ansible-playbook closefirewalldandselinux.yml

[dev@10-3-170-32 base]$ ansible-playbook updatecephyum.yml

[dev@10-3-170-32 base]$ ansible-playbook modifychronyclient.yml
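
The playbook names above are specific to my environment. As a rough sketch, what they do on each node is approximately equivalent to the following manual commands (assuming CentOS 7 with systemd and chrony already installed):

sudo systemctl stop firewalld && sudo systemctl disable firewalld
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
# point /etc/chrony.conf at your internal time server, then:
sudo systemctl enable chronyd && sudo systemctl restart chronyd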

3.1.2 Configure DNS or the hosts file

If you have an internal DNS system, be sure to add the record for the new node there:
cephtest004.ceph.kxdigit.com 10.3.176.36
If you do not, update the hosts file on all four Ceph nodes in bulk instead, as shown below.
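
If you go the hosts-file route, the entries (taken from the deployment plan above; the short aliases are my own convenience and optional) would look like this in /etc/hosts on every node:

10.3.176.10 cephtest001.ceph.kxdigit.com cephtest001
10.3.176.16 cephtest002.ceph.kxdigit.com cephtest002
10.3.176.44 cephtest003.ceph.kxdigit.com cephtest003
10.3.176.36 cephtest004.ceph.kxdigit.com cephtest004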

3.1.3 Set the hostname



[dev@10-3-170-32 base]$ ssh root@10.3.176.36
Last login: Mon Feb  1 09:20:08 2021 from 10.3.170.32
[root@localhost ~]#  hostnamectl set-hostname cephtest004.ceph.kxdigit.com
[root@localhost ~]# exit
logout
Connection to 10.3.176.36 closed.
[dev@10-3-170-32 base]$ ssh root@10.3.176.36
Last login: Mon Feb  1 09:24:17 2021 from 10.3.170.32
[root@cephtest004 ~]#

3.1.4 Create the deploy user cephadmin

[root@cephtest004 ~]# useradd cephadmin
[root@cephtest004 ~]# echo "cephnau@2020" | passwd --stdin cephadmin
Changing password for user cephadmin.
passwd: all authentication tokens updated successfully.
[root@cephtest004 ~]# echo "cephadmin ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephadmin
cephadmin ALL = (root) NOPASSWD:ALL
[root@cephtest004 ~]# chmod 0440 /etc/sudoers.d/cephadmin
[root@cephtest004 ~]#

3.1.5 Configure passwordless login for the cephadmin user

Set up passwordless login from the deploy node to the newly added node.
Remember that it is the cephadmin user that needs the passwordless login.
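
The deploy node already has a key pair from the original installation; if it did not, you would generate one as cephadmin first (a standard step, not part of the original procedure):

ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa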

[cephadmin@cephtest001 ~]$ ssh-copy-id cephadmin@cephtest004.ceph.kxdigit.com
/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/cephadmin/.ssh/id_rsa.pub"
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
cephadmin@cephtest004.ceph.kxdigit.com's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'cephadmin@cephtest004.ceph.kxdigit.com'"
and check to make sure that only the key(s) you wanted were added.

[cephadmin@cephtest001 ~]$

At this point the base installation is complete.

(2) Deployment

In a production environment, data backfilling usually should not start immediately after new nodes join the Ceph cluster, because the rebalancing traffic can hurt cluster performance. So when scaling out, we first set flags that hold off data backfilling, and remove them later once the system is idle.

3.2.1 Disable data backfilling

3.2.1.1 Check the overall cluster status
[cephadmin@cephtest001 ~]$ ceph  -s
  cluster:
    id:     6cd05235-66dd-4929-b697-1562d308d5c3
    health: HEALTH_WARN
            1 pools have many more objects per pg than average
            Degraded data redundancy: 169/67158 objects degraded (0.252%), 7 pgs degraded, 15 pgs undersized
            15 pgs not deep-scrubbed in time
            15 pgs not scrubbed in time

  services:
    mon: 3 daemons, quorum cephtest001,cephtest002,cephtest003 (age 4w)
    mgr: cephtest001(active, since 5w), standbys: cephtest002, cephtest003
    osd: 14 osds: 14 up (since 4w), 14 in (since 4w)

  data:
    pools:   4 pools, 272 pgs
    objects: 22.39k objects, 161 GiB
    usage:   497 GiB used, 52 TiB / 52 TiB avail
    pgs:     169/67158 objects degraded (0.252%)
             257 active+clean
             8   active+undersized
             7   active+undersized+degraded

  io:
    client:   507 B/s rd, 6.2 KiB/s wr, 1 op/s rd, 1 op/s wr

[cephadmin@cephtest001 ~]$


[cephadmin@cephtest001 ~]$ ceph osd tree
ID CLASS WEIGHT   TYPE NAME            STATUS REWEIGHT PRI-AFF
-1       52.38348 root default
-3        3.26669     host cephtest001
 0   hdd  1.08890         osd.0            up  1.00000 1.00000
 1   hdd  1.08890         osd.1            up  1.00000 1.00000
 2   hdd  1.08890         osd.2            up  1.00000 1.00000
-5        5.45547     host cephtest002
 3   hdd  1.09109         osd.3            up  1.00000 1.00000
 4   hdd  1.09109         osd.4            up  1.00000 1.00000
 5   hdd  1.09109         osd.5            up  1.00000 1.00000
 6   hdd  1.09109         osd.6            up  1.00000 1.00000
 7   hdd  1.09109         osd.7            up  1.00000 1.00000
-7       43.66132     host cephtest003
 8   hdd  7.27689         osd.8            up  1.00000 1.00000
 9   hdd  7.27689         osd.9            up  1.00000 1.00000
10   hdd  7.27689         osd.10           up  1.00000 1.00000
11   hdd  7.27689         osd.11           up  1.00000 1.00000
12   hdd  7.27689         osd.12           up  1.00000 1.00000
13   hdd  7.27689         osd.13           up  1.00000 1.00000
[cephadmin@cephtest001 ~]$


[cephadmin@cephtest001 ~]$ ceph health
HEALTH_WARN 1 pools have many more objects per pg than average; Degraded data redundancy: 169/67158 objects degraded (0.252%), 7 pgs degraded, 15 pgs undersized; 15 pgs not deep-scrubbed in time; 15 pgs not scrubbed in time
[cephadmin@cephtest001 ~]$

3.2.1.2 Disable data backfilling (set noin and nobackfill)
[cephadmin@cephtest001 ~]$ ceph osd set noin
noin is set
[cephadmin@cephtest001 ~]$ ceph osd set nobackfill
nobackfill is set
[cephadmin@cephtest001 ~]$


[cephadmin@cephtest001 ~]$ ceph  -s
  cluster:
    id:     6cd05235-66dd-4929-b697-1562d308d5c3
    health: HEALTH_WARN
            noin,nobackfill flag(s) set
            1 pools have many more objects per pg than average
            Degraded data redundancy: 169/67158 objects degraded (0.252%), 7 pgs degraded, 15 pgs undersized
            15 pgs not deep-scrubbed in time
            15 pgs not scrubbed in time

  services:
    mon: 3 daemons, quorum cephtest001,cephtest002,cephtest003 (age 4w)
    mgr: cephtest001(active, since 5w), standbys: cephtest002, cephtest003
    osd: 14 osds: 14 up (since 4w), 14 in (since 4w)
         flags noin,nobackfill

  data:
    pools:   4 pools, 272 pgs
    objects: 22.39k objects, 161 GiB
    usage:   497 GiB used, 52 TiB / 52 TiB avail
    pgs:     169/67158 objects degraded (0.252%)
             257 active+clean
             8   active+undersized
             7   active+undersized+degraded

  io:
    client:   0 B/s rd, 2.2 KiB/s wr, 0 op/s rd, 0 op/s wr

[cephadmin@cephtest001 ~]$

Note the extra lines that now appear in the osd section:
osd: 14 osds: 14 up (since 4w), 14 in (since 4w)
flags noin,nobackfill
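
You can also confirm the flags straight from the OSD map; the grep is only a convenience:

ceph osd dump | grep flags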

3.2.2 Install Ceph on the new node

[cephadmin@cephtest004 ~]$ sudo yum -y install ceph ceph-radosgw
[cephadmin@cephtest004 ~]$ rpm -qa |grep ceph
libcephfs2-14.2.15-0.el7.x86_64
ceph-mds-14.2.15-0.el7.x86_64
python-ceph-argparse-14.2.15-0.el7.x86_64
ceph-selinux-14.2.15-0.el7.x86_64
ceph-radosgw-14.2.15-0.el7.x86_64
ceph-mon-14.2.15-0.el7.x86_64
python-cephfs-14.2.15-0.el7.x86_64
ceph-common-14.2.15-0.el7.x86_64
ceph-osd-14.2.15-0.el7.x86_64
ceph-mgr-14.2.15-0.el7.x86_64
ceph-base-14.2.15-0.el7.x86_64
ceph-14.2.15-0.el7.x86_64
[cephadmin@cephtest004 ~]$

Check the installed version:

[cephadmin@cephtest004 ~]$ ceph -v
ceph version 14.2.15 (afdd217ae5fb1ed3f60e16bd62357ca58cc650e5) nautilus (stable)
[cephadmin@cephtest004 ~]$
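
Optionally, compare this against the daemons already running in the cluster; ceph versions (run from a node with the admin keyring, e.g. the deploy node) lists the version of every daemon:

ceph versions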

3.2.3 Check the current cluster status

You can see the cluster still has only 3 OSD hosts:

[cephadmin@cephtest001 ~]$ ceph -s
  cluster:
    id:     6cd05235-66dd-4929-b697-1562d308d5c3
    health: HEALTH_WARN
            noin,nobackfill flag(s) set
            1 pools have many more objects per pg than average
            Degraded data redundancy: 169/67158 objects degraded (0.252%), 7 pgs degraded, 15 pgs undersized
            15 pgs not deep-scrubbed in time
            15 pgs not scrubbed in time

  services:
    mon: 3 daemons, quorum cephtest001,cephtest002,cephtest003 (age 4w)
    mgr: cephtest001(active, since 5w), standbys: cephtest002, cephtest003
    osd: 14 osds: 14 up (since 4w), 14 in (since 4w)
         flags noin,nobackfill

  data:
    pools:   4 pools, 272 pgs
    objects: 22.39k objects, 161 GiB
    usage:   497 GiB used, 52 TiB / 52 TiB avail
    pgs:     169/67158 objects degraded (0.252%)
             257 active+clean
             8   active+undersized
             7   active+undersized+degraded

  io:
    client:   0 B/s rd, 3.3 KiB/s wr, 0 op/s rd, 0 op/s wr

[cephadmin@cephtest001 ~]$


[cephadmin@cephtest001 ~]$ ceph osd tree
ID CLASS WEIGHT   TYPE NAME            STATUS REWEIGHT PRI-AFF
-1       52.38348 root default
-3        3.26669     host cephtest001
 0   hdd  1.08890         osd.0            up  1.00000 1.00000
 1   hdd  1.08890         osd.1            up  1.00000 1.00000
 2   hdd  1.08890         osd.2            up  1.00000 1.00000
-5        5.45547     host cephtest002
 3   hdd  1.09109         osd.3            up  1.00000 1.00000
 4   hdd  1.09109         osd.4            up  1.00000 1.00000
 5   hdd  1.09109         osd.5            up  1.00000 1.00000
 6   hdd  1.09109         osd.6            up  1.00000 1.00000
 7   hdd  1.09109         osd.7            up  1.00000 1.00000
-7       43.66132     host cephtest003
 8   hdd  7.27689         osd.8            up  1.00000 1.00000
 9   hdd  7.27689         osd.9            up  1.00000 1.00000
10   hdd  7.27689         osd.10           up  1.00000 1.00000
11   hdd  7.27689         osd.11           up  1.00000 1.00000
12   hdd  7.27689         osd.12           up  1.00000 1.00000
13   hdd  7.27689         osd.13           up  1.00000 1.00000
[cephadmin@cephtest001 ~]$

3.2.4 Add a monitor on the new node to the existing cluster (run on the deploy node)

[cephadmin@cephtest001 cephcluster]$ ceph-deploy --overwrite-conf mon add cephtest004 --address 10.3.176.36

Check:


[cephadmin@cephtest001 cephcluster]$ ceph -s
  cluster:
    id:     6cd05235-66dd-4929-b697-1562d308d5c3
    health: HEALTH_WARN
            noin,nobackfill flag(s) set
            1 pools have many more objects per pg than average
            Degraded data redundancy: 169/67158 objects degraded (0.252%), 7 pgs degraded, 15 pgs undersized
            15 pgs not deep-scrubbed in time
            15 pgs not scrubbed in time

  services:
    mon: 4 daemons, quorum cephtest001,cephtest002,cephtest003,cephtest004 (age 78s)
    mgr: cephtest001(active, since 5w), standbys: cephtest002, cephtest003
    osd: 14 osds: 14 up (since 4w), 14 in (since 4w)
         flags noin,nobackfill

  data:
    pools:   4 pools, 272 pgs
    objects: 22.39k objects, 161 GiB
    usage:   497 GiB used, 52 TiB / 52 TiB avail
    pgs:     169/67158 objects degraded (0.252%)
             257 active+clean
             8   active+undersized
             7   active+undersized+degraded

  io:
    client:   53 KiB/s rd, 59 op/s rd, 0 op/s wr

[cephadmin@cephtest001 cephcluster]$

mon: 4 daemons, quorum cephtest001,cephtest002,cephtest003,cephtest004 (age 78s)

cephtest004 has joined the monitor quorum.
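
If you want more detail than ceph -s, either of the following standard monitor commands shows the quorum membership:

ceph mon stat
ceph quorum_status --format json-pretty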

3.2.5 Deploy an rgw on the new node (run on the deploy node)

[cephadmin@cephtest001 cephcluster]$  ceph-deploy --overwrite-conf rgw create cephtest004

3.2.6 Deploy a mgr on the new node (run on the deploy node)

[cephadmin@cephtest001 cephcluster]$ ceph-deploy --overwrite-conf mgr create cephtest004

Check the result:

[cephadmin@cephtest001 cephcluster]$ ceph -s
  cluster:
    id:     6cd05235-66dd-4929-b697-1562d308d5c3
    health: HEALTH_WARN
            noin,nobackfill flag(s) set
            1 pools have many more objects per pg than average
            Degraded data redundancy: 170/67719 objects degraded (0.251%), 8 pgs degraded, 20 pgs undersized
            15 pgs not deep-scrubbed in time
            15 pgs not scrubbed in time

  services:
    mon: 4 daemons, quorum cephtest001,cephtest002,cephtest003,cephtest004 (age 15m)
    mgr: cephtest001(active, since 5w), standbys: cephtest002, cephtest003, cephtest004
    osd: 14 osds: 14 up (since 4w), 14 in (since 4w)
         flags noin,nobackfill
    rgw: 1 daemon active (cephtest004)

  task status:

  data:
    pools:   8 pools, 400 pgs
    objects: 22.57k objects, 161 GiB
    usage:   497 GiB used, 52 TiB / 52 TiB avail
    pgs:     170/67719 objects degraded (0.251%)
             380 active+clean
             12  active+undersized
             8   active+undersized+degraded

  io:
    client:   58 KiB/s rd, 2.1 KiB/s wr, 65 op/s rd, 0 op/s wr

[cephadmin@cephtest001 cephcluster]$

You can see that cephtest004 now shows up under mon, mgr, and rgw.
Note: my original cluster had no rgw; this is something I need to follow up on later. Creating the rgw also created its default pools, which is why the pool count went from 4 to 8.
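
As a quick sanity check of the new rgw you can hit its HTTP endpoint; by default (unless rgw_frontends was changed) it listens on port 7480, and an anonymous request returns an empty bucket listing:

curl http://cephtest004.ceph.kxdigit.com:7480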

3.2.7 Update /home/cephadmin/cephcluster/ceph.conf

Back up the file first, then edit it:

 cp ceph.conf ceph.conf.bak.3node.20210201
[cephadmin@cephtest001 cephcluster]$ cat ceph.conf
[global]
fsid = 6cd05235-66dd-4929-b697-1562d308d5c3
mon_initial_members = cephtest001, cephtest002, cephtest003, cephtest004
mon_host = 10.3.176.10,10.3.176.16,10.3.176.44,10.3.176.36
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

public network = 10.3.176.0/22
cluster network = 10.3.176.0/22


Only the mon_initial_members and mon_host lines were changed, appending cephtest004 and 10.3.176.36.
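
To see exactly what changed relative to the backup taken above:

diff ceph.conf.bak.3node.20210201 ceph.conf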

3.2.8 Push the updated /home/cephadmin/cephcluster/ceph.conf to all four nodes (run on the deploy node)

ceph-deploy --overwrite-conf admin cephtest001 cephtest002 cephtest003 cephtest004

3.2.9 Fix ownership of the /etc/ceph directory (run on all nodes)

sudo chown -R cephadmin:cephadmin /etc/ceph
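
If you prefer not to log in to each machine, a small loop from the deploy node does the same thing (a sketch; the hostnames come from the deployment plan, and -t gives sudo a tty):

for host in cephtest001 cephtest002 cephtest003 cephtest004; do
    ssh -t cephadmin@$host "sudo chown -R cephadmin:cephadmin /etc/ceph"
done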

3.2.10 List all available disks on the new node

There are two ways to do this.
Method 1: run on the deploy node


[cephadmin@cephtest001 ceph]$ ceph-deploy disk list cephtest004
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy disk list cephtest004
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : list
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7ff5ca4b6c20>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  host                          : ['cephtest004']
[ceph_deploy.cli][INFO  ]  func                          : <function disk at 0x7ff5ca909cf8>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[cephtest004][DEBUG ] connection detected need for sudo
[cephtest004][DEBUG ] connected to host: cephtest004
[cephtest004][DEBUG ] detect platform information from remote host
[cephtest004][DEBUG ] detect machine type
[cephtest004][DEBUG ] find the location of an executable
[cephtest004][INFO  ] Running command: sudo fdisk -l
[cephtest004][INFO  ] Disk /dev/sda: 1999.8 GB, 1999844147200 bytes, 3905945600 sectors
[cephtest004][INFO  ] Disk /dev/sdb: 4000.2 GB, 4000225165312 bytes, 7812939776 sectors
[cephtest004][INFO  ] Disk /dev/sdd: 4000.2 GB, 4000225165312 bytes, 7812939776 sectors
[cephtest004][INFO  ] Disk /dev/sdc: 4000.2 GB, 4000225165312 bytes, 7812939776 sectors
[cephtest004][INFO  ] Disk /dev/sde: 4000.2 GB, 4000225165312 bytes, 7812939776 sectors
[cephtest004][INFO  ] Disk /dev/mapper/centos-root: 1879.0 GB, 1879048192000 bytes, 3670016000 sectors
[cephtest004][INFO  ] Disk /dev/mapper/centos-swap: 4294 MB, 4294967296 bytes, 8388608 sectors
[cephtest004][INFO  ] Disk /dev/mapper/centos-home: 107.4 GB, 107374182400 bytes, 209715200 sectors
[cephadmin@cephtest001 ceph]$

Method 2: run on the new node

[root@cephtest004 ceph]# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0  1.8T  0 disk
├─sda1            8:1    0    1G  0 part /boot
└─sda2            8:2    0  1.8T  0 part
  ├─centos-root 253:0    0  1.7T  0 lvm  /
  ├─centos-swap 253:1    0    4G  0 lvm  [SWAP]
  └─centos-home 253:2    0  100G  0 lvm  /home
sdb               8:16   0  3.7T  0 disk
sdc               8:32   0  3.7T  0 disk
sdd               8:48   0  3.7T  0 disk
sde               8:64   0  3.7T  0 disk
[root@cephtest004 ceph]#

3.2.11 Add the new OSDs (run on the deploy node)

[cephadmin@cephtest001 cephcluster]$ pwd
/home/cephadmin/cephcluster
[cephadmin@cephtest001 cephcluster]$ for dev in /dev/sdb /dev/sdc  /dev/sdd  /dev/sde
> do
> ceph-deploy disk zap cephtest004 $dev
> ceph-deploy osd create cephtest004 --data $dev
> done

Check:

[cephadmin@cephtest001 cephcluster]$ ceph -s
  cluster:
    id:     6cd05235-66dd-4929-b697-1562d308d5c3
    health: HEALTH_WARN
            noin,nobackfill flag(s) set
            1 pools have many more objects per pg than average
            Degraded data redundancy: 1597/67719 objects degraded (2.358%), 4 pgs degraded, 9 pgs undersized
            15 pgs not deep-scrubbed in time
            15 pgs not scrubbed in time

  services:
    mon: 4 daemons, quorum cephtest001,cephtest002,cephtest003,cephtest004 (age 43m)
    mgr: cephtest001(active, since 5w), standbys: cephtest002, cephtest003, cephtest004
    osd: 18 osds: 18 up (since 9s), 14 in (since 4w); 67 remapped pgs
         flags noin,nobackfill
    rgw: 1 daemon active (cephtest004)

  task status:

  data:
    pools:   8 pools, 400 pgs
    objects: 22.57k objects, 161 GiB
    usage:   499 GiB used, 59 TiB / 60 TiB avail
    pgs:     1597/67719 objects degraded (2.358%)
             9434/67719 objects misplaced (13.931%)
             325 active+clean
             32  active+clean+remapped
             24  active+remapped+backfill_wait
             7   active+undersized
             7   active+remapped+backfilling
             2   active+undersized+degraded
             1   active+recovering+undersized+remapped
             1   active+recovery_wait+undersized+degraded+remapped
             1   active+undersized+degraded+remapped+backfill_wait

  io:
    client:   681 B/s wr, 0 op/s rd, 0 op/s wr
    recovery: 24 MiB/s, 10 objects/s

[cephadmin@cephtest001 cephcluster]$


osd: 18 osds: 18 up (since 9s), 14 in (since 4w); 67 remapped pgs
flags noin,nobackfill
The 4 new OSDs have been created and are up, but because the noin flag is set they have not been marked in yet.
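
To confirm the new host bucket and the weights of its OSDs in the CRUSH map, you can filter ceph osd tree for the new host:

ceph osd tree | grep -A 4 cephtest004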

3.2.12 Remove the flags during off-peak hours (run on the deploy node)

Once user traffic is off-peak, remove these flags and the cluster will begin rebalancing.

[cephadmin@cephtest001 cephcluster]$ ceph osd unset noin
noin is unset
[cephadmin@cephtest001 cephcluster]$ ceph osd unset nobackfill
nobackfill is unset

Check:

[cephadmin@cephtest001 cephcluster]$ ceph -s
  cluster:
    id:     6cd05235-66dd-4929-b697-1562d308d5c3
    health: HEALTH_WARN
            1 pools have many more objects per pg than average
            Degraded data redundancy: 75/67719 objects degraded (0.111%), 3 pgs degraded, 10 pgs undersized
            15 pgs not deep-scrubbed in time
            15 pgs not scrubbed in time

  services:
    mon: 4 daemons, quorum cephtest001,cephtest002,cephtest003,cephtest004 (age 2h)
    mgr: cephtest001(active, since 5w), standbys: cephtest002, cephtest003, cephtest004
    osd: 18 osds: 18 up (since 102m), 14 in (since 4w); 65 remapped pgs
    rgw: 1 daemon active (cephtest004)

  task status:

  data:
    pools:   8 pools, 400 pgs
    objects: 22.57k objects, 161 GiB
    usage:   499 GiB used, 59 TiB / 60 TiB avail
    pgs:     75/67719 objects degraded (0.111%)
             9415/67719 objects misplaced (13.903%)
             327 active+clean
             32  active+clean+remapped
             21  active+remapped+backfill_wait
             10  active+remapped+backfilling
             7   active+undersized
             2   active+undersized+degraded
             1   active+undersized+degraded+remapped+backfilling

  io:
    client:   60 KiB/s rd, 66 op/s rd, 0 op/s wr
    recovery: 2.0 MiB/s, 0 objects/s

[cephadmin@cephtest001 cephcluster]$

3.2.13 Bring the new OSDs into the cluster

osd: 18 osds: 18 up (since 102m), 14 in (since 4w); 65 remapped pgs
You can see there are now 18 OSDs up but still only 14 in: the 4 newly added OSDs are running but have not yet been brought into the cluster.

To bring an OSD into the cluster (i.e. put it online), run ceph osd in <id>; for example, for osd.14:
ceph osd in 14

[cephadmin@cephtest001 cephcluster]$ ceph osd in 14
marked in osd.14.
[cephadmin@cephtest001 cephcluster]$ ceph osd in 15
marked in osd.15.
[cephadmin@cephtest001 cephcluster]$ ceph osd in 16
marked in osd.16.
[cephadmin@cephtest001 cephcluster]$ ceph osd in 17
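
Equivalently, the four new OSDs can be brought in with a single loop; it runs the same commands as above, just batched:

for id in 14 15 16 17; do ceph osd in $id; done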
