Linux (CentOS 7.6): scaling out a production Ceph cluster

0. Revision history

| No. | Revision | Date |
| ---- | ---- | ---- |
| 1 | Initial version | 2022-01-25 |

1. Abstract

I previously built a three-node Ceph cluster. As time went on, the available storage kept shrinking, so I decided to scale out by adding one more node. This post documents the scale-out procedure.

2. Environment

(1) Hardware

2.1.1 Server

| Hostname | Brand / Model | Configuration | Qty |
| ---- | ---- | ---- | ---- |
| proceph04.pro.kxdigit.com | Inspur SA5212M5 | Xeon 4210 ×2 / 128 GB RAM / SSD: 240 GB ×2, 960 GB ×2 / SAS: 8 TB 7.2K ×6 / 10G X710 ×2 / 1G PHY NIC ×1 / RAID card SAS3108 2GB | 1 |

2.1.2 Switches

Two identically configured switches are stacked.

| Switch name | Brand | Model | Configuration | Qty |
| ---- | ---- | ---- | ---- | ---- |
| A3_1F_DC_openstack_test_jieru_train-irf_b02&b03 | H3C | LS-6860-54HF | 48× 10G optical ports, 6× 40G optical ports | 2 |

(2) Operating system

The operating system is CentOS 7.6.1810, 64-bit.

[root@localhost vlan]# cat /etc/centos-release
CentOS Linux release 7.6.1810 (Core)
[root@localhost vlan]#

(3) Ceph information

The existing cluster runs Ceph Nautilus 14.2.15 with three nodes and 18 OSDs (see the status output in 3.3.1).

3. Implementation

(1) Deployment planning

3.1.1 Network plan

| Host | Physical port | NIC | Bond mode | Bond / IP address | Switch port | Aggregation | VLAN | Purpose |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| proceph04 | 10G optical port 1 | enp59s0f1 | mode4 | bond0: 10.3.140.34 | B02.40U16 | BAGG16/LACP | access 140 | API / management |
| proceph04 | 10G optical port 3 | enp175s0f1 | mode4 | bond0 | B03.40U16 | BAGG16/LACP | access 140 | API / management |
| proceph04 | 10G optical port 2 | enp59s0f0 | mode4 | bond1: 10.3.141.34 | B02.40U40 | BAGG31/LACP | access 141 | Dedicated storage network |
| proceph04 | 10G optical port 4 | enp175s0f0 | mode4 | bond1 | B03.40U40 | BAGG40/LACP | access 141 | Dedicated storage network |

3.1.2 Node roles

| Hostname | IP | Disks | Roles |
| ---- | ---- | ---- | ---- |
| proceph04.pro.kxdigit.com | 10.3.140.34 | System disk: /dev/sda; data disks: /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg | monitor, mgr, mds, osd |

3.1.3 RAID notes

The system disks are configured as RAID 1. Each data disk is configured as its own RAID 0 volume; with six data disks, that means six separate RAID 0 volumes.
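After the RAID volumes are created, each single-disk RAID 0 shows up in the OS as its own block device. A quick sanity check (a minimal sketch; the device names match the plan in 3.1.2):

# the six data volumes should appear as ~7.3T devices sdb..sdg
lsblk -d -o NAME,SIZE,TYPE,MODEL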

(2) Deployment preparation (performed on the new node)

For the detailed operations behind 3.2.1-3.2.5, refer to the earlier posts "linux 基于三台物理机安装ceph nautilus" and "linux (centos7) 使用ceph-deploy 安装ceph".

3.2.1 Configure bond0

[root@localhost network-scripts]# cat ifcfg-bond0
DEVICE=bond0
TYPE=Bond
ONBOOT=yes
BOOTPROTO=static
BONDING_MASTER=yes
BONDING_OPTS="xmit_hash_policy=layer3+4 mode=4 miimon=80"

IPADDR=10.3.140.34
PREFIX=24
GATEWAY=10.3.140.1
[root@localhost network-scripts]#


ifcfg-enp59s0f1

[root@localhost network-scripts]# cat ifcfg-enp59s0f1
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=enp59s0f1
UUID=e6ad78eb-da8f-461d-b4da-51e34c65ffce
DEVICE=enp59s0f1
ONBOOT=yes

MASTER=bond0
SLAVE=yes
[root@localhost network-scripts]#

[root@localhost network-scripts]# cat ifcfg-enp175s0f1
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=enp175s0f1
UUID=74419515-b8af-449c-9991-d11b71bcfd25
DEVICE=enp175s0f1
ONBOOT=yes

MASTER=bond0
SLAVE=yes
[root@localhost network-scripts]#
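After restarting the network (systemctl restart network, or a reboot), it is worth confirming that bond0 actually negotiated 802.3ad and carries its address. A minimal check:

# both slaves should show "MII Status: up" and the mode should be 802.3ad
grep -E "Bonding Mode|MII Status|Slave Interface" /proc/net/bonding/bond0

# bond0 should carry 10.3.140.34/24
ip -br addr show bond0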

3.2.2 Disable reverse-path filtering (rp_filter)

With two addresses configured on the machine, if rp_filter is not disabled, only one address is reachable from outside: the one on the interface behind the first default route in the routing table.

echo 0 > /proc/sys/net/ipv4/conf/all/rp_filter

echo 0 > /proc/sys/net/ipv4/conf/bond0/rp_filter

echo 0 > /proc/sys/net/ipv4/conf/bond1/rp_filter

Make the change persistent in /etc/sysctl.conf:

[root@localhost etc]# cp /etc/sysctl.conf /etc/sysctl.conf.bak.orig
[root@localhost etc]# vim /etc/sysctl.conf


# close dynamic route for 2 IP

net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.bond0.rp_filter = 0
net.ipv4.conf.bond1.rp_filter = 0

Then reboot the system, or apply the settings immediately as shown below.
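If a reboot is inconvenient, the same result can be achieved by reloading the file and verifying the values. A sketch:

# reload /etc/sysctl.conf without rebooting
sysctl -p

# all three values should print 0
sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.bond0.rp_filter net.ipv4.conf.bond1.rp_filter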

3.2.3 Configure bond1

[root@localhost network-scripts]# cat ifcfg-bond1
DEVICE=bond1
TYPE=Bond
ONBOOT=yes
BOOTPROTO=static
BONDING_MASTER=yes
BONDING_OPTS="xmit_hash_policy=layer3+4 mode=4 miimon=80"

IPADDR=10.3.141.34
PREFIX=24
GATEWAY=10.3.141.1
[root@localhost network-scripts]#

[root@localhost network-scripts]# cat ifcfg-enp59s0f0
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=enp59s0f0
UUID=3383a4ce-5eb6-4b4d-a33c-86e7c0e097bd
DEVICE=enp59s0f0
ONBOOT=yes

MASTER=bond1
SLAVE=yes
[root@localhost network-scripts]#

[root@localhost network-scripts]# cat ifcfg-enp175s0f0
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=enp175s0f0
UUID=88e3d46e-ffd8-4e74-ba5d-01a4ee413da0
DEVICE=enp175s0f0
ONBOOT=yes

MASTER=bond1
SLAVE=yes
[root@localhost network-scripts]#

3.2.4 Configure DNS

Done with an ansible-playbook:

[dev@10-3-170-32 base]$ ansible-playbook modifydns.yml

Add the record on the DNS server:

| Domain name | Resolves to |
| ---- | ---- |
| proceph04.pro.kxdigit.com | 10.3.140.34 |
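A quick way to confirm the record is live from the new node (assuming bind-utils is installed for dig):

# should return 10.3.140.34
dig +short proceph04.pro.kxdigit.com

# alternative that goes through the normal resolver path
getent hosts proceph04.pro.kxdigit.com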

3.2.5 Adjust the sshd configuration

Because DNS is now configured, sshd performs reverse DNS lookups on login by default, which makes SSH logins very slow.

[root@localhost ssh]# cp sshd_config sshd_config.bak.orig
[root@localhost ssh]# vim sshd_config
[root@localhost ssh]# systemctl restart sshd
[root@localhost ssh]#

Disabling the lookup fixes it:

#UseDNS yes
UseDNS no
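To confirm the running daemon picked up the change, the effective configuration can be dumped. A sketch:

# should print "usedns no" after the sshd restart
sudo sshd -T | grep -i usedns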

3.2.6 Configure yum repositories

Done with ansible-playbooks.
Update the OS repositories:

[dev@10-3-170-32 base]$ ansible-playbook updateyum.yml

Update the Ceph repository:

[dev@10-3-170-32 base]$ ansible-playbook updatecephyum.yml
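Before installing anything, it is worth confirming the repositories are visible from the new node; a rough check (the repository names depend on the playbooks):

# rebuild the cache and list the configured repositories
yum clean all
yum repolist

# the ceph packages should now resolve from the ceph repository
yum info ceph | head -n 20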

3.2.7 Configure time synchronization (chrony)

Done with an ansible-playbook:

[dev@10-3-170-32 base]$ ansible-playbook modifychronyclient.yml
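Verification on the new node; a minimal sketch:

# the configured time server should be marked with '*' (current sync source)
chronyc sources -v

# offset, stratum and leap status
chronyc tracking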

3.2.8 Configure the hosts file

On every node, add the following entries to /etc/hosts:

10.3.140.31 proceph01 proceph01.pro.kxdigit.com
10.3.140.32 proceph02 proceph02.pro.kxdigit.com
10.3.140.33 proceph03 proceph03.pro.kxdigit.com
10.3.140.34 proceph04 proceph04.pro.kxdigit.com
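A quick check that every name resolves locally on the new node; a sketch:

for h in proceph01 proceph02 proceph03 proceph04
do
getent hosts $h
done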

3.2.9 Disable the firewall and SELinux

[dev@10-3-170-32 base]$ ansible-playbook closefirewalldandselinux.yml
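Verification on the new node; a sketch:

# should report inactive / disabled
systemctl is-active firewalld
systemctl is-enabled firewalld

# should report Disabled (or Permissive if the node has not been rebooted since the playbook ran)
getenforce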

3.2.10 Set the hostname

[dev@10-3-170-32 base]$ ssh root@10.3.140.34
Last login: Tue Jan 25 11:56:52 2022 from 10.3.170.32
[root@localhost ~]# hostnamectl set-hostname proceph04.pro.kxdigit.com
[root@localhost ~]# exit
logout
Connection to 10.3.140.34 closed.
[dev@10-3-170-32 base]$ ssh root@10.3.140.34
Last login: Tue Jan 25 11:57:21 2022 from 10.3.170.32
[root@proceph04 ~]#

3.2.11 Create the deployment user cephadmin

This user must exist on every node with passwordless sudo; create it on the new node:

[root@proceph04 ~]# useradd cephadmin
[root@proceph04 ~]# echo "cephnau@2020" | passwd --stdin cephadmin
Changing password for user cephadmin.
passwd: all authentication tokens updated successfully.
[root@proceph04 ~]# echo "cephadmin ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephadmin
cephadmin ALL = (root) NOPASSWD:ALL
[root@proceph04 ~]# chmod 0440 /etc/sudoers.d/cephadmin
[root@proceph04 ~]#


3.2.12 Set up passwordless SSH for the cephadmin user

The deploy node must be able to log in to the node being added without a password:

[cephadmin@proceph01 ~]$ ssh-copy-id cephadmin@proceph04.pro.kxdigit.com
/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/cephadmin/.ssh/id_rsa.pub"
The authenticity of host 'proceph04.pro.kxdigit.com (10.3.140.34)' can't be established.
ECDSA key fingerprint is SHA256:PjAYw+ImEPNcJYSKDYuWQSN52x1HaCVif7u0W9eAzYk.
ECDSA key fingerprint is MD5:87:d6:06:e4:e8:8b:8b:bb:97:6c:c8:03:75:10:2f:04.
Are you sure you want to continue connecting (yes/no)? yes
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
cephadmin@proceph04.pro.kxdigit.com's password:
Permission denied, please try again.
cephadmin@proceph04.pro.kxdigit.com's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'cephadmin@proceph04.pro.kxdigit.com'"
and check to make sure that only the key(s) you wanted were added.

(3) Scaling out

3.3.1 Check cluster status before scaling out

Check the current state of the Ceph cluster before making any changes:

[cephadmin@proceph01 ~]$ ceph -s
  cluster:
    id:     9cdee1f8-f168-4151-82cd-f6591855ccbe
    health: HEALTH_WARN
            3 pgs not deep-scrubbed in time
            5 pgs not scrubbed in time

  services:
    mon: 3 daemons, quorum proceph01,proceph02,proceph03 (age 4d)
    mgr: proceph01(active, since 7w), standbys: proceph03, proceph02
    osd: 18 osds: 18 up (since 4d), 18 in (since 7M)

  data:
    pools:   1 pools, 512 pgs
    objects: 7.86M objects, 30 TiB
    usage:   88 TiB used, 42 TiB / 131 TiB avail
    pgs:     508 active+clean
             4   active+clean+scrubbing+deep

  io:
    client:   950 KiB/s rd, 22 MiB/s wr, 4 op/s rd, 1.01k op/s wr

[cephadmin@proceph01 ~]$
[cephadmin@proceph01 ~]$ ceph health detail
HEALTH_WARN 3 pgs not deep-scrubbed in time; 5 pgs not scrubbed in time
PG_NOT_DEEP_SCRUBBED 3 pgs not deep-scrubbed in time
    pg 1.d8 not deep-scrubbed since 2022-01-12 11:18:40.401541
    pg 1.119 not deep-scrubbed since 2022-01-13 05:06:35.160832
    pg 1.178 not deep-scrubbed since 2022-01-12 23:06:13.053718
PG_NOT_SCRUBBED 5 pgs not scrubbed in time
    pg 1.1a7 not scrubbed since 2022-01-14 10:36:48.319171
    pg 1.6 not scrubbed since 2022-01-15 02:14:01.505718
    pg 1.cc not scrubbed since 2022-01-14 17:17:48.107377
    pg 1.16b not scrubbed since 2022-01-14 07:06:36.964203
    pg 1.183 not scrubbed since 2022-01-14 12:58:02.770705
[cephadmin@proceph01 ~]$

3.3.2 Disable data backfilling on the existing cluster

Setting noin keeps the new OSDs from being marked in as soon as they are created, and nobackfill holds off data migration; both are undone later during a low-traffic window (see 3.3.10 and 3.3.11).

[cephadmin@proceph01 ~]$ ceph osd set noin
noin is set
[cephadmin@proceph01 ~]$ ceph osd set nobackfill
nobackfill is set
[cephadmin@proceph01 ~]$

Check:

[cephadmin@proceph01 ~]$ ceph -s
  cluster:
    id:     9cdee1f8-f168-4151-82cd-f6591855ccbe
    health: HEALTH_WARN
            noin,nobackfill flag(s) set
            3 pgs not deep-scrubbed in time
            5 pgs not scrubbed in time

  services:
    mon: 3 daemons, quorum proceph01,proceph02,proceph03 (age 4d)
    mgr: proceph01(active, since 7w), standbys: proceph03, proceph02
    osd: 18 osds: 18 up (since 4d), 18 in (since 7M)
         flags noin,nobackfill

  data:
    pools:   1 pools, 512 pgs
    objects: 7.86M objects, 30 TiB
    usage:   88 TiB used, 42 TiB / 131 TiB avail
    pgs:     508 active+clean
             4   active+clean+scrubbing+deep

  io:
    client:   1.2 MiB/s rd, 20 MiB/s wr, 4 op/s rd, 1.04k op/s wr

[cephadmin@proceph01 ~]$

3.3.3 Install Ceph on the new node

Install on the node being added:

[cephadmin@proceph04 ~]$ sudo yum -y install ceph ceph-radosgw
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
Resolving Dependencies
--> Running transaction check
---> Package ceph.x86_64 2:14.2.15-0.el7 will be installed

Check:

[cephadmin@proceph04 ~]$ ceph -v
ceph version 14.2.15 (afdd217ae5fb1ed3f60e16bd62357ca58cc650e5) nautilus (stable)
[cephadmin@proceph04 ~]$

3.3.4 Add the new node's monitor to the existing cluster (run on the deploy node)

Run the commands from the working directory /home/cephadmin/cephcluster:

[cephadmin@proceph01 cephcluster]$ ceph-deploy --overwrite-conf mon add proceph04 --address 10.3.140.34
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy --overwrite-conf mon add proceph04 --address 10.3.140.34

Check:

[cephadmin@proceph01 cephcluster]$ ceph -s
  cluster:
    id:     9cdee1f8-f168-4151-82cd-f6591855ccbe
    health: HEALTH_WARN
            noin,nobackfill flag(s) set
            3 pgs not deep-scrubbed in time
            5 pgs not scrubbed in time

  services:
    mon: 4 daemons, quorum proceph01,proceph02,proceph03,proceph04 (age 27s)
    mgr: proceph01(active, since 7w), standbys: proceph03, proceph02
    osd: 18 osds: 18 up (since 4d), 18 in (since 7M)
         flags noin,nobackfill

  data:
    pools:   1 pools, 512 pgs
    objects: 7.86M objects, 30 TiB
    usage:   88 TiB used, 42 TiB / 131 TiB avail
    pgs:     508 active+clean
             4   active+clean+scrubbing+deep

  io:
    client:   838 KiB/s rd, 41 MiB/s wr, 3 op/s rd, 1.16k op/s wr

[cephadmin@proceph01 cephcluster]$
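Besides ceph -s, the monitor map itself can be inspected to confirm proceph04 joined the quorum; a sketch:

# should list 4 monitors with proceph04 in the quorum
ceph mon stat

# detailed quorum information
ceph quorum_status --format json-pretty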

3.3.5 Add a mgr on the new node to the existing cluster (run on the deploy node)

[cephadmin@proceph01 cephcluster]$ ceph-deploy --overwrite-conf mgr create proceph04

Check:

[cephadmin@proceph01 cephcluster]$ ceph -s
  cluster:
    id:     9cdee1f8-f168-4151-82cd-f6591855ccbe
    health: HEALTH_WARN
            noin,nobackfill flag(s) set
            3 pgs not deep-scrubbed in time
            5 pgs not scrubbed in time

  services:
    mon: 4 daemons, quorum proceph01,proceph02,proceph03,proceph04 (age 2m)
    mgr: proceph01(active, since 7w), standbys: proceph03, proceph02, proceph04
    osd: 18 osds: 18 up (since 4d), 18 in (since 7M)
         flags noin,nobackfill

  data:
    pools:   1 pools, 512 pgs
    objects: 7.86M objects, 30 TiB
    usage:   88 TiB used, 42 TiB / 131 TiB avail
    pgs:     508 active+clean
             4   active+clean+scrubbing+deep

  io:
    client:   1.0 MiB/s rd, 43 MiB/s wr, 3 op/s rd, 997 op/s wr

[cephadmin@proceph01 cephcluster]$

3.3.6 Modify /home/cephadmin/cephcluster/ceph.conf

[cephadmin@proceph01 cephcluster]$ cp ceph.conf ceph.conf.bak.3node.20220125
[cephadmin@proceph01 cephcluster]$ vi ceph.conf
[cephadmin@proceph01 cephcluster]$ cat ceph.conf
[global]
fsid = 9cdee1f8-f168-4151-82cd-f6591855ccbe
mon_initial_members = proceph01, proceph02, proceph03, proceph04
mon_host = 10.3.140.31,10.3.140.32,10.3.140.33,10.3.140.34
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx


public network = 10.3.140.0/24
cluster network = 10.3.141.0/24
[cephadmin@proceph01 cephcluster]$

3.3.7 Push the modified /home/cephadmin/cephcluster/ceph.conf to all four nodes (run on the deploy node)

[cephadmin@proceph01 cephcluster]$ ceph-deploy --overwrite-conf admin proceph01 proceph02 proceph03 proceph04
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf

3.3.8 Fix ownership of /etc/ceph (run on all nodes)

Example from one node:

[cephadmin@proceph01 cephcluster]$ ssh proceph04
Last login: Tue Jan 25 14:26:34 2022 from 10.3.140.31
[cephadmin@proceph04 ~]$ sudo chown -R cephadmin:cephadmin /etc/ceph
[cephadmin@proceph04 ~]$ ceph -s
  cluster:
    id:     9cdee1f8-f168-4151-82cd-f6591855ccbe
    health: HEALTH_WARN
            noin,nobackfill flag(s) set
            3 pgs not deep-scrubbed in time
            5 pgs not scrubbed in time

  services:
    mon: 4 daemons, quorum proceph01,proceph02,proceph03,proceph04 (age 10m)
    mgr: proceph01(active, since 7w), standbys: proceph03, proceph02, proceph04
    osd: 18 osds: 18 up (since 4d), 18 in (since 7M)
         flags noin,nobackfill

  data:
    pools:   1 pools, 512 pgs
    objects: 7.86M objects, 30 TiB
    usage:   88 TiB used, 42 TiB / 131 TiB avail
    pgs:     508 active+clean
             4   active+clean+scrubbing+deep

  io:
    client:   15 MiB/s rd, 27 MiB/s wr, 115 op/s rd, 965 op/s wr

3.3.9 Add the new node's OSDs

First check the disks on the new node with lsblk, then add the OSDs with the following loop:

for dev in /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
do
ceph-deploy disk zap proceph04 $dev
ceph-deploy osd create proceph04 --data $dev
done

3.3.9.1 Check the disks on the new node
[cephadmin@proceph04 ~]$ lsblk
NAME            MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda               8:0    0 223.1G  0 disk
├─sda1            8:1    0   200M  0 part /boot/efi
├─sda2            8:2    0     1G  0 part /boot
└─sda3            8:3    0 221.9G  0 part
  ├─centos-root 253:0    0 217.9G  0 lvm  /
  └─centos-swap 253:1    0     4G  0 lvm  [SWAP]
sdb               8:16   0   7.3T  0 disk
sdc               8:32   0   7.3T  0 disk
sdd               8:48   0   7.3T  0 disk
sde               8:64   0   7.3T  0 disk
sdf               8:80   0   7.3T  0 disk
sdg               8:96   0   7.3T  0 disk
[cephadmin@proceph04 ~]$

3.3.9.2 Add the new OSDs (run on the deploy node)
[cephadmin@proceph01 cephcluster]$ for dev in /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
> do
> ceph-deploy disk zap proceph04 $dev
> ceph-deploy osd create proceph04 --data $dev
> done
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy disk zap proceph04 /dev/sdb

Check:

[cephadmin@proceph01 cephcluster]$ ceph -s
  cluster:
    id:     9cdee1f8-f168-4151-82cd-f6591855ccbe
    health: HEALTH_WARN
            noin,nobackfill flag(s) set
            3 pgs not deep-scrubbed in time
            5 pgs not scrubbed in time

  services:
    mon: 4 daemons, quorum proceph01,proceph02,proceph03,proceph04 (age 17m)
    mgr: proceph01(active, since 7w), standbys: proceph03, proceph02, proceph04
    osd: 24 osds: 24 up (since 75s), 18 in (since 7M); 239 remapped pgs
         flags noin,nobackfill

  data:
    pools:   1 pools, 512 pgs
    objects: 7.86M objects, 30 TiB
    usage:   88 TiB used, 86 TiB / 175 TiB avail
    pgs:     4472732/23594691 objects misplaced (18.957%)
             270 active+clean
             226 active+remapped+backfill_wait
             13  active+remapped+backfilling
             3   active+clean+scrubbing+deep

  io:
    client:   1017 KiB/s rd, 43 MiB/s wr, 2 op/s rd, 1.40k op/s wr

[cephadmin@proceph01 cephcluster]$

3.3.10 Unset noin and nobackfill (run on the deploy node during off-peak hours)

[cephadmin@proceph01 cephcluster]$ ceph osd unset noin
noin is unset
[cephadmin@proceph01 cephcluster]$ ceph osd unset nobackfill
nobackfill is unset
[cephadmin@proceph01 cephcluster]$ ceph -s
  cluster:
    id:     9cdee1f8-f168-4151-82cd-f6591855ccbe
    health: HEALTH_WARN
            3 pgs not deep-scrubbed in time
            5 pgs not scrubbed in time

  services:
    mon: 4 daemons, quorum proceph01,proceph02,proceph03,proceph04 (age 19m)
    mgr: proceph01(active, since 7w), standbys: proceph03, proceph02, proceph04
    osd: 24 osds: 24 up (since 2m), 18 in (since 7M); 239 remapped pgs

  data:
    pools:   1 pools, 512 pgs
    objects: 7.86M objects, 30 TiB
    usage:   88 TiB used, 50 TiB / 138 TiB avail
    pgs:     4472775/23594934 objects misplaced (18.957%)
             270 active+clean
             226 active+remapped+backfill_wait
             13  active+remapped+backfilling
             3   active+clean+scrubbing+deep

  io:
    client:   773 KiB/s rd, 51 MiB/s wr, 2 op/s rd, 1.09k op/s wr

[cephadmin@proceph01 cephcluster]$

3.3.11 Mark the new OSDs in

[cephadmin@proceph01 cephcluster]$ ceph osd tree
ID CLASS WEIGHT    TYPE NAME          STATUS REWEIGHT PRI-AFF
-1       174.64526 root default
-3        43.66132     host proceph01
 0   hdd   7.27689         osd.0          up  1.00000 1.00000
 1   hdd   7.27689         osd.1          up  1.00000 1.00000
 2   hdd   7.27689         osd.2          up  1.00000 1.00000
 3   hdd   7.27689         osd.3          up  1.00000 1.00000
 4   hdd   7.27689         osd.4          up  1.00000 1.00000
 5   hdd   7.27689         osd.5          up  1.00000 1.00000
-5        43.66132     host proceph02
 6   hdd   7.27689         osd.6          up  1.00000 1.00000
 7   hdd   7.27689         osd.7          up  1.00000 1.00000
 8   hdd   7.27689         osd.8          up  1.00000 1.00000
 9   hdd   7.27689         osd.9          up  1.00000 1.00000
10   hdd   7.27689         osd.10         up  1.00000 1.00000
11   hdd   7.27689         osd.11         up  1.00000 1.00000
-7        43.66132     host proceph03
12   hdd   7.27689         osd.12         up  1.00000 1.00000
13   hdd   7.27689         osd.13         up  1.00000 1.00000
14   hdd   7.27689         osd.14         up  1.00000 1.00000
15   hdd   7.27689         osd.15         up  1.00000 1.00000
16   hdd   7.27689         osd.16         up  1.00000 1.00000
17   hdd   7.27689         osd.17         up  1.00000 1.00000
-9        43.66132     host proceph04
18   hdd   7.27689         osd.18         up        0 1.00000
19   hdd   7.27689         osd.19         up        0 1.00000
20   hdd   7.27689         osd.20         up        0 1.00000
21   hdd   7.27689         osd.21         up        0 1.00000
22   hdd   7.27689         osd.22         up        0 1.00000
23   hdd   7.27689         osd.23         up        0 1.00000
[cephadmin@proceph01 cephcluster]$ ceph osd in 18
marked in osd.18.
[cephadmin@proceph01 cephcluster]$ ceph osd in 19
marked in osd.19.
[cephadmin@proceph01 cephcluster]$ ceph osd in 20
marked in osd.20.
[cephadmin@proceph01 cephcluster]$ ceph osd in 21
marked in osd.21.
[cephadmin@proceph01 cephcluster]$ ceph osd in 22
marked in osd.22.
[cephadmin@proceph01 cephcluster]$ ceph osd in 23
marked in osd.23.
[cephadmin@proceph01 cephcluster]$
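The same thing can be done with a small loop instead of marking each OSD in by hand; a sketch for this cluster's new OSD ids (18-23):

for id in 18 19 20 21 22 23
do
ceph osd in $id
done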

Check:

[cephadmin@proceph01 cephcluster]$ ceph -s
  cluster:
    id:     9cdee1f8-f168-4151-82cd-f6591855ccbe
    health: HEALTH_WARN
            3 pgs not deep-scrubbed in time
            5 pgs not scrubbed in time

  services:
    mon: 4 daemons, quorum proceph01,proceph02,proceph03,proceph04 (age 21m)
    mgr: proceph01(active, since 7w), standbys: proceph03, proceph02, proceph04
    osd: 24 osds: 24 up (since 4m), 24 in (since 28s); 373 remapped pgs

  data:
    pools:   1 pools, 512 pgs
    objects: 7.86M objects, 30 TiB
    usage:   89 TiB used, 86 TiB / 175 TiB avail
    pgs:     7654051/23594952 objects misplaced (32.439%)
             370 active+remapped+backfill_wait
             139 active+clean
             2   active+remapped+backfilling
             1   active+remapped

  io:
    client:   1.1 MiB/s rd, 25 MiB/s wr, 3 op/s rd, 1.63k op/s wr
    recovery: 34 MiB/s, 8 objects/s

[cephadmin@proceph01 cephcluster]$
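From here the cluster rebalances on its own. Progress can be watched until all PGs return to active+clean, for example:

# refresh the cluster status every 10 seconds
watch -n 10 ceph -s

# per-OSD utilisation, useful for watching data flow onto osd.18-23
ceph osd df tree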
