Ceph Test Environment Setup Notes


This article mainly records the operations that were performed; for detailed explanations, please refer to other references.

1. Install the Operating System

All machines get a minimal installation of CentOS 7.6.
The system disks use RAID 1; each data disk is configured as a single-disk RAID 0 (for example, with two data disks, each disk is set up as its own RAID 0).

2. Deployment Plan

cephtest001, cephtest002, and cephtest003 are installed in the first round.
cephtest004 is reserved for verifying the scale-out expansion test.

3 System Initialization

3.1 Modify DNS

ansible-playbook modifydns.yml
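
The playbook itself is not shown; as a rough manual equivalent of what modifydns.yml presumably does on each node (the nameserver address below is a placeholder, not the real internal DNS server):

# Point the node at the internal DNS server (10.3.0.1 is a placeholder address).
cat > /etc/resolv.conf <<'EOF'
search testceph.kxdigit.com
nameserver 10.3.0.1
EOF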

3.2 Modify SSH

Disable DNS lookups in sshd (the original omits the steps; see the sketch below).
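
A minimal sketch of the usual way to do this, assuming the intent is to stop sshd from doing reverse DNS lookups on incoming logins:

# Turn off reverse DNS lookups in sshd and restart the service.
sudo sed -i 's/^#\?UseDNS.*/UseDNS no/' /etc/ssh/sshd_config
sudo systemctl restart sshd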

3.3 Configure yum

[dev@10-3-170-32 base]$ ansible-playbook updateyum.yml
[dev@10-3-170-32 base]$ ansible-playbook updatecephyum.yml
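
The playbooks and the internal mirrors they point to are not shown; as a sketch, a Ceph Nautilus repository definition against the upstream mirrors would look roughly like this (the environment here uses an internal mirror instead):

# Sketch of /etc/yum.repos.d/ceph.repo; replace baseurl with the internal mirror where applicable.
cat > /etc/yum.repos.d/ceph.repo <<'EOF'
[ceph]
name=Ceph x86_64 packages
baseurl=https://download.ceph.com/rpm-nautilus/el7/x86_64/
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc

[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-nautilus/el7/noarch/
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
EOF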

3.4 Configure Domain Names in the DNS System

If the internal network has a DNS system, the host names can be configured there.

3.5 Add the Domain Names to the Hosts File

This was not done here; internal DNS resolution is used instead.
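
For reference, if internal DNS were not available, the equivalent /etc/hosts entries (using the addresses that appear later in this document) would be roughly:

# Append name resolution for all four test nodes to /etc/hosts.
cat >> /etc/hosts <<'EOF'
10.3.163.196 cephtest001.testceph.kxdigit.com cephtest001
10.3.163.92  cephtest002.testceph.kxdigit.com cephtest002
10.3.163.115 cephtest003.testceph.kxdigit.com cephtest003
10.3.163.113 cephtest004.testceph.kxdigit.com cephtest004
EOF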

3.6 Set the Hostname

This must be done on every machine.

[dev@10-3-170-32 base]$ ssh root@10.3.163.196
Last login: Mon Jan 24 14:13:27 2022 from 10.3.170.32
[root@localhost ~]# ping cephtest001.testceph.kxdigit.com
PING cephtest001.testceph.kxdigit.com (10.3.163.196) 56(84) bytes of data.
64 bytes from localhost.localdomain (10.3.163.196): icmp_seq=1 ttl=64 time=0.053 ms
64 bytes from localhost.localdomain (10.3.163.196): icmp_seq=2 ttl=64 time=0.038 ms
64 bytes from localhost.localdomain (10.3.163.196): icmp_seq=3 ttl=64 time=0.023 ms
^C
--- cephtest001.testceph.kxdigit.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 11010ms
rtt min/avg/max/mdev = 0.023/0.038/0.053/0.012 ms
[root@localhost ~]# hostnamectl set-hostname cephtest001.testceph.kxdigit.com
[root@localhost ~]#

3.7 Disable the Firewall and SELinux

[dev@10-3-170-32 base]$ ansible-playbook closefirewalldandselinux.yml
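
The playbook content is not included; a manual sketch of what closefirewalldandselinux.yml presumably does on each node:

# Stop and disable firewalld, then disable SELinux (the permanent change takes effect after a reboot).
sudo systemctl stop firewalld
sudo systemctl disable firewalld
sudo setenforce 0
sudo sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config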

3.8 Configure the Time Server

[dev@10-3-170-32 base]$ ansible-playbook modifychronyclient.yml
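
The playbook is not shown; a rough sketch of a chrony client configuration, where 10.3.0.2 stands in for the internal time server:

# Point chrony at the internal NTP server and make sure the service is running.
cat > /etc/chrony.conf <<'EOF'
server 10.3.0.2 iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
logdir /var/log/chrony
EOF
systemctl enable --now chronyd
chronyc sources -v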

4 Deploy Ceph

This article only deploys the first three nodes.

4.1 Create the Deployment User cephadmin

Run this on all three nodes: cephtest001, cephtest002, and cephtest003.

[root@cephtest001 ~]# useradd cephadmin
[root@cephtest001 ~]# echo "cephnau@2020" | passwd --stdin cephadmin
Changing password for user cephadmin.
passwd: all authentication tokens updated successfully.
[root@cephtest001 ~]# echo "cephadmin ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephadmin
cephadmin ALL = (root) NOPASSWD:ALL
[root@cephtest001 ~]# chmod 0440 /etc/sudoers.d/cephadmin
[root@cephtest001 ~]#

4.2 Configure Passwordless Login for the cephadmin User

Set up passwordless SSH from the deployment node to all Ceph nodes.
Here the deployment node is cephtest001, so passwordless access is configured from that machine to cephtest001, cephtest002, and cephtest003.

[dev@10-3-170-32 base]$ ssh root@cephtest001.testceph.kxdigit.com
Last login: Mon Jan 24 14:28:58 2022 from 10.3.170.32
[root@cephtest001 ~]# su - cehpadmin
su: user cehpadmin does not exist
[root@cephtest001 ~]# su - cephadmin
[cephadmin@cephtest001 ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/cephadmin/.ssh/id_rsa):
Created directory '/home/cephadmin/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/cephadmin/.ssh/id_rsa.
Your public key has been saved in /home/cephadmin/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:SPulxv/s4m0rzUqH5MOY6GKWUtfVtucgcrSw/cbf684 cephadmin@cephtest001.testceph.kxdigit.com
The key's randomart image is:
+---[RSA 2048]----+
|                 |
|                 |
|      .    .     |
|     . o. o o    |
|      o.S*oo .   |
|    . .++O=.o .  |
|   . o. Bo**.+   |
|  . =. . oo=B o. |
|   + ..  .+O*o+Eo|
+----[SHA256]-----+
[cephadmin@cephtest001 ~]$ ssh-copy-id cephadmin@cephtest001.testceph.kxdigit.com
/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/cephadmin/.ssh/id_rsa.pub"
The authenticity of host 'cephtest001.testceph.kxdigit.com (10.3.163.196)' can't be established.
ECDSA key fingerprint is SHA256:0Qn8GSu3aNyB0QDDF1c59yLfvsonB1Vp8/jg057MC5A.
ECDSA key fingerprint is MD5:9c:68:35:21:ed:0c:1e:66:d2:3c:1c:80:6b:2e:56:40.
Are you sure you want to continue connecting (yes/no)? yes
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
cephadmin@cephtest001.testceph.kxdigit.com's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'cephadmin@cephtest001.testceph.kxdigit.com'"
and check to make sure that only the key(s) you wanted were added.

[cephadmin@cephtest001 ~]$ ssh-copy-id cephadmin@cephtest002.testceph.kxdigit.com
/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/cephadmin/.ssh/id_rsa.pub"
The authenticity of host 'cephtest002.testceph.kxdigit.com (10.3.163.92)' can't be established.
ECDSA key fingerprint is SHA256:8JoLUtcEJj2kXNZoll3F0+vPdSax5rNOYOAfAaB7Cu4.
ECDSA key fingerprint is MD5:f4:14:ca:bc:77:15:7f:d8:ea:66:d8:d8:14:2d:ef:a9.
Are you sure you want to continue connecting (yes/no)? yes
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
cephadmin@cephtest002.testceph.kxdigit.com's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'cephadmin@cephtest002.testceph.kxdigit.com'"
and check to make sure that only the key(s) you wanted were added.

[cephadmin@cephtest001 ~]$ ssh-copy-id cephadmin@cephtest003.testceph.kxdigit.com
/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/cephadmin/.ssh/id_rsa.pub"
The authenticity of host 'cephtest003.testceph.kxdigit.com (10.3.163.115)' can't be established.
ECDSA key fingerprint is SHA256:PAX2oaSYBHb+UhDHcB5ZeEHPaVn+gRljSIzYxG+FIh4.
ECDSA key fingerprint is MD5:a4:65:1f:7c:5a:ec:66:ad:d2:91:b3:37:b4:7a:6d:0d.
Are you sure you want to continue connecting (yes/no)? yes
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
cephadmin@cephtest003.testceph.kxdigit.com's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'cephadmin@cephtest003.testceph.kxdigit.com'"
and check to make sure that only the key(s) you wanted were added.

[cephadmin@cephtest001 ~]$

4.3 Deploy Ceph

4.3.1 Install ceph-deploy on the Deployment Node

Install ceph-deploy as the cephadmin user on the deployment node cephtest001.

[cephadmin@cephtest001 ~]$ sudo yum -y install ceph-deploy python-pip
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile

[cephadmin@cephtest001 ~]$ ceph-deploy --version
2.0.1
[cephadmin@cephtest001 ~]$

4.3.2 Install Ceph on All Nodes

The installation command is shown below, using cephtest001 as the example.

[cephadmin@cephtest001 ~]$ sudo yum -y install ceph ceph-radosgw
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile

[cephadmin@cephtest001 ~]$ ceph -v
ceph version 14.2.15 (afdd217ae5fb1ed3f60e16bd62357ca58cc650e5) nautilus (stable)
[cephadmin@cephtest001 ~]$

4.4 Create the Cluster

Run the following on the ceph-deploy deployment node.

4.4.1 Create the Cluster

[cephadmin@cephtest001 ~]$ mkdir /home/cephadmin/cephcluster
[cephadmin@cephtest001 ~]$ cd cephcluster/
[cephadmin@cephtest001 cephcluster]$ ceph-deploy new cephtest001 cephtest002 cephtest003
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy new cephtest001 cephtest002 cephtest003
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  func                          : <function new at 0x7f8bf669fd70>
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f8bf601d368>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  ssh_copykey                   : True
[ceph_deploy.cli][INFO  ]  mon                           : ['cephtest001', 'cephtest002', 'cephtest003']
[ceph_deploy.cli][INFO  ]  public_network                : None
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  cluster_network               : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  fsid                          : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[cephtest001][DEBUG ] connection detected need for sudo
[cephtest001][DEBUG ] connected to host: cephtest001
[cephtest001][DEBUG ] detect platform information from remote host
[cephtest001][DEBUG ] detect machine type
[cephtest001][DEBUG ] find the location of an executable
[cephtest001][INFO  ] Running command: sudo /usr/sbin/ip link show
[cephtest001][INFO  ] Running command: sudo /usr/sbin/ip addr show
[cephtest001][DEBUG ] IP addresses found: [u'10.3.163.196']

The following configuration files are generated:

[cephadmin@cephtest001 cephcluster]$ ll
total 16
-rw-rw-r--. 1 cephadmin cephadmin  252 Jan 24 15:14 ceph.conf
-rw-rw-r--. 1 cephadmin cephadmin 6586 Jan 24 15:14 ceph-deploy-ceph.log
-rw-------. 1 cephadmin cephadmin   73 Jan 24 15:14 ceph.mon.keyring
[cephadmin@cephtest001 cephcluster]$

Edit ceph.conf to add the network configuration.
The test environment uses only one NIC, so no bonding or similar setup is done.

[global]
fsid = 76d33994-3ef3-477d-876d-ea2a23eadd17
mon_initial_members = cephtest001, cephtest002, cephtest003
mon_host = 10.3.163.196,10.3.163.92,10.3.163.115
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

public network = 10.3.163.0/24
cluster network = 10.3.163.0/24

4.4.2 Initialize the Cluster Configuration and Generate All Keyrings

[cephadmin@cephtest001 cephcluster]$ ceph-deploy mon create-initial
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy mon create-initial

The above command generates the keyrings:

[cephadmin@cephtest001 cephcluster]$ ls -al
total 76
drwxrwxr-x. 2 cephadmin cephadmin   244 Jan 24 15:25 .
drwx------. 4 cephadmin cephadmin   154 Jan 24 15:23 ..
-rw-------. 1 cephadmin cephadmin   113 Jan 24 15:25 ceph.bootstrap-mds.keyring
-rw-------. 1 cephadmin cephadmin   113 Jan 24 15:25 ceph.bootstrap-mgr.keyring
-rw-------. 1 cephadmin cephadmin   113 Jan 24 15:25 ceph.bootstrap-osd.keyring
-rw-------. 1 cephadmin cephadmin   113 Jan 24 15:25 ceph.bootstrap-rgw.keyring
-rw-------. 1 cephadmin cephadmin   151 Jan 24 15:25 ceph.client.admin.keyring
-rw-rw-r--. 1 cephadmin cephadmin   315 Jan 24 15:23 ceph.conf
-rw-rw-r--. 1 cephadmin cephadmin 48719 Jan 24 15:25 ceph-deploy-ceph.log
-rw-------. 1 cephadmin cephadmin    73 Jan 24 15:14 ceph.mon.keyring
[cephadmin@cephtest001 cephcluster]$

4.4.3 Distribute the Configuration to All Nodes


[cephadmin@cephtest001 cephcluster]$ ceph-deploy admin cephtest001 cephtest002 cephtest003
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy admin cephtest001 cephtest002 cephtest003

Switch to the root account:

[cephadmin@cephtest001 cephcluster]$ su -
Password:
Last login: Mon Jan 24 14:47:39 CST 2022 from 10.3.170.32 on pts/0
[root@cephtest001 ~]# ceph -s
  cluster:
    id:     76d33994-3ef3-477d-876d-ea2a23eadd17
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum cephtest002,cephtest001,cephtest003 (age 7m)
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

[root@cephtest001 ~]#

To run ceph -s with the cephadmin account, the permissions of the /etc/ceph directory must be changed.

Run this on all nodes:

[cephadmin@cephtest001 cephcluster]$ ssh cephtest003.testceph.kxdigit.com
Last login: Mon Jan 24 14:49:50 2022 from 10.3.163.196
[cephadmin@cephtest003 ~]$ sudo chown -R cephadmin:cephadmin /etc/ceph
[cephadmin@cephtest003 ~]$ ceph -s
  cluster:
    id:     76d33994-3ef3-477d-876d-ea2a23eadd17
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum cephtest002,cephtest001,cephtest003 (age 9m)
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

[cephadmin@cephtest003 ~]$

4.4.4 Configure OSDs

All three nodes need this step; the commands can be run directly from the deployment node.
First check the disks on each node with lsblk, then add the OSDs with a loop of the following form:
for dev in /dev/vdb
do
ceph-deploy disk zap ceph001 $dev
ceph-deploy osd create ceph001 --data $dev
done
(generic template; adjust the hostname and device list for each node)

4.4.4.1 Configure OSDs on cephtest001
[cephadmin@cephtest001 cephcluster]$ lsblk
NAME            MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda               8:0    0 558.4G  0 disk
├─sda1            8:1    0   200M  0 part /boot/efi
├─sda2            8:2    0     1G  0 part /boot
└─sda3            8:3    0 557.2G  0 part
  ├─centos-root 253:0    0 553.2G  0 lvm  /
  └─centos-swap 253:1    0     4G  0 lvm  [SWAP]
sdb               8:16   0   1.1T  0 disk
sdc               8:32   0   1.1T  0 disk
sdd               8:48   0   1.1T  0 disk
sde               8:64   0   1.1T  0 disk
sdf               8:80   0   1.1T  0 disk
[cephadmin@cephtest001 cephcluster]$ for dev in /dev/sdb /dev/sdc /dev/sdd  /dev/sde /dev/sdf
> do
> ceph-deploy disk zap cephtest001 $dev
> ceph-deploy osd create cephtest001 --data $dev
> done
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy disk zap cephtest001 /dev/sdb

Check:

[cephadmin@cephtest001 cephcluster]$ ceph -s
  cluster:
    id:     76d33994-3ef3-477d-876d-ea2a23eadd17
    health: HEALTH_WARN
            no active mgr

  services:
    mon: 3 daemons, quorum cephtest002,cephtest001,cephtest003 (age 18m)
    mgr: no daemons active
    osd: 5 osds: 5 up (since 7s), 5 in (since 7s)

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

[cephadmin@cephtest001 cephcluster]$

4.4.4.2 Configure OSDs on cephtest002

Same method as above.
Note that this is executed on the deployment node, in the /home/cephadmin/cephcluster directory.

for dev in /dev/sdb 
do
ceph-deploy disk zap cephtest002 $dev
ceph-deploy osd create cephtest002 --data $dev
done
4.4.4.3 Configure OSDs on cephtest003

Same method as above; a sketch for cephtest003 follows.
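
The corresponding loop for cephtest003, run from /home/cephadmin/cephcluster on the deployment node; /dev/sdb is an assumption here, so check lsblk on cephtest003 first:

for dev in /dev/sdb
do
ceph-deploy disk zap cephtest003 $dev
ceph-deploy osd create cephtest003 --data $dev
done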

4.4.5 Deploy mgr

[cephadmin@cephtest001 cephcluster]$ ceph-deploy mgr create cephtest001 cephtest002 cephtest003
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf

4.4.6 Install mgr-dashboard (Required on All Three Nodes)

Install it on all three nodes, but for now enable it only on the primary node.
Install it directly with yum; the command below (shown here on cephtest003) must be run on cephtest001, cephtest002, and cephtest003.

[cephadmin@cephtest003 ~]$ sudo yum install ceph-mgr-dashboard

4.4.7 Enable mgr-dashboard (On the Primary Node)

[cephadmin@cephtest001 cephcluster]$ ceph -s
  cluster:
    id:     76d33994-3ef3-477d-876d-ea2a23eadd17
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum cephtest002,cephtest001,cephtest003 (age 27m)
    mgr: cephtest001(active, since 97s), standbys: cephtest002, cephtest003
    osd: 7 osds: 7 up (since 4m), 7 in (since 4m)

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   7.0 GiB used, 9.1 TiB / 9.1 TiB avail
    pgs:

[cephadmin@cephtest001 cephcluster]$ ceph mgr module enable dashboard
[cephadmin@cephtest001 cephcluster]$ ceph dashboard create-self-signed-cert
Self-signed certificate created
[cephadmin@cephtest001 cephcluster]$ ceph dashboard set-login-credentials admin admin
******************************************************************
***          WARNING: this command is deprecated.              ***
*** Please use the ac-user-* related commands to manage users. ***
******************************************************************
Username and password updated
[cephadmin@cephtest001 cephcluster]$
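
To confirm where the dashboard is listening, the mgr service map can be queried; by default the Nautilus dashboard uses HTTPS on port 8443 of the active mgr:

# Shows the dashboard URL exposed by the active mgr.
ceph mgr services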

5 Scale-Out Expansion

Now add cephtest004 to the cluster.

5.1 Create the Deployment User cephadmin
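
This mirrors section 4.1, now applied to cephtest004; a sketch, run as root on cephtest004 with the same password as before:

useradd cephadmin
echo "cephnau@2020" | passwd --stdin cephadmin
echo "cephadmin ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephadmin
chmod 0440 /etc/sudoers.d/cephadmin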

5.2 Configure Passwordless Login for the cephadmin User
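
Same as section 4.2, extended to the new node; from the deployment node, as the cephadmin user:

ssh-copy-id cephadmin@cephtest004.testceph.kxdigit.com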

5.3 Prepare for the Expansion

5.3.1 Pre-expansion Check

Before expanding, first check the status of the existing Ceph cluster.

[cephadmin@cephtest001 ~]$ ceph -s
  cluster:
    id:     76d33994-3ef3-477d-876d-ea2a23eadd17
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum cephtest002,cephtest001,cephtest003 (age 47m)
    mgr: cephtest001(active, since 20m), standbys: cephtest002, cephtest003
    osd: 7 osds: 7 up (since 24m), 7 in (since 24m)

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   7.0 GiB used, 9.1 TiB / 9.1 TiB avail
    pgs:

[cephadmin@cephtest001 ~]$ ceph osd tree
ID CLASS WEIGHT  TYPE NAME            STATUS REWEIGHT PRI-AFF
-1       9.09305 root default
-3       5.45547     host cephtest001
 0   hdd 1.09109         osd.0            up  1.00000 1.00000
 1   hdd 1.09109         osd.1            up  1.00000 1.00000
 2   hdd 1.09109         osd.2            up  1.00000 1.00000
 3   hdd 1.09109         osd.3            up  1.00000 1.00000
 4   hdd 1.09109         osd.4            up  1.00000 1.00000
-5       1.81879     host cephtest002
 5   hdd 1.81879         osd.5            up  1.00000 1.00000
-7       1.81879     host cephtest003
 6   hdd 1.81879         osd.6            up  1.00000 1.00000
[cephadmin@cephtest001 ~]$ ceph health
HEALTH_OK
[cephadmin@cephtest001 ~]$

5.3.2 Disable Data Backfilling

Run on the deployment node:

[cephadmin@cephtest001 ~]$ ceph osd set noin
noin is set
[cephadmin@cephtest001 ~]$ ceph osd set nobackfill
nobackfill is set
[cephadmin@cephtest001 ~]$ ceph -s
  cluster:
    id:     76d33994-3ef3-477d-876d-ea2a23eadd17
    health: HEALTH_WARN
            noin,nobackfill flag(s) set

  services:
    mon: 3 daemons, quorum cephtest002,cephtest001,cephtest003 (age 49m)
    mgr: cephtest001(active, since 21m), standbys: cephtest002, cephtest003
    osd: 7 osds: 7 up (since 26m), 7 in (since 26m)
         flags noin,nobackfill

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   7.0 GiB used, 9.1 TiB / 9.1 TiB avail
    pgs:

[cephadmin@cephtest001 ~]$

5.4 Install Ceph on the Expansion Node

[cephadmin@cephtest004 ~]$ sudo yum -y install ceph ceph-radosgw
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
Resolving Dependencies
--> Running transaction check
---> Package ceph.x86_64 2:14.2.15-0.el7 will be installed

[cephadmin@cephtest004 ~]$ ceph -v
ceph version 14.2.15 (afdd217ae5fb1ed3f60e16bd62357ca58cc650e5) nautilus (stable)
[cephadmin@cephtest004 ~]$

5.5 Add the New Node's Monitor to the Existing Cluster (Run on the Deployment Node)

[cephadmin@cephtest001 cephcluster]$ ceph-deploy --overwrite-conf mon add cephtest004 --address 10.3.163.113
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf

Check:

[cephadmin@cephtest001 cephcluster]$ ceph -s
  cluster:
    id:     76d33994-3ef3-477d-876d-ea2a23eadd17
    health: HEALTH_WARN
            noin,nobackfill flag(s) set

  services:
    mon: 4 daemons, quorum cephtest002,cephtest001,cephtest003,cephtest004 (age 48s)
    mgr: cephtest001(active, since 30m), standbys: cephtest002, cephtest003
    osd: 7 osds: 7 up (since 35m), 7 in (since 35m)
         flags noin,nobackfill

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   7.0 GiB used, 9.1 TiB / 9.1 TiB avail
    pgs:

[cephadmin@cephtest001 cephcluster]$

5.6 Extend RGW to the New Node (Run on the Deployment Node; Not Strictly Needed Here, Since the Original Cluster Never Installed RGW)

Looking at the Proxmox website, their own Ceph deployment does not enable the Ceph RGW either, so this step could be skipped here.


[cephadmin@cephtest001 cephcluster]$ ceph-deploy --overwrite-conf rgw create cephtest004
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf

5.7 Extend mgr to the New Node (Run on the Deployment Node)

[cephadmin@cephtest001 cephcluster]$ ceph-deploy --overwrite-conf mgr create cephtest004
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy --overwrite-conf mgr create cephtest004

Check the result:

[cephadmin@cephtest001 cephcluster]$ ceph -s
  cluster:
    id:     76d33994-3ef3-477d-876d-ea2a23eadd17
    health: HEALTH_WARN
            noin,nobackfill flag(s) set

  services:
    mon: 4 daemons, quorum cephtest002,cephtest001,cephtest003,cephtest004 (age 4m)
    mgr: cephtest001(active, since 34m), standbys: cephtest002, cephtest003, cephtest004
    osd: 7 osds: 7 up (since 38m), 7 in (since 38m)
         flags noin,nobackfill
    rgw: 1 daemon active (cephtest004)

  task status:

  data:
    pools:   4 pools, 128 pgs
    objects: 187 objects, 1.2 KiB
    usage:   7.0 GiB used, 9.1 TiB / 9.1 TiB avail
    pgs:     128 active+clean

[cephadmin@cephtest001 cephcluster]$

5.8 Edit /home/cephadmin/cephcluster/ceph.conf


[cephadmin@cephtest001 cephcluster]$ cp ceph.conf ceph.conf.bak.3node.20220124
[cephadmin@cephtest001 cephcluster]$ vim ceph.conf
[cephadmin@cephtest001 cephcluster]$ cat ceph.conf
[global]
fsid = 76d33994-3ef3-477d-876d-ea2a23eadd17
mon_initial_members = cephtest001, cephtest002, cephtest003, cephtest004
mon_host = 10.3.163.196,10.3.163.92,10.3.163.115,10.3.163.113
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

public network = 10.3.163.0/24
cluster network = 10.3.163.0/24
[cephadmin@cephtest001 cephcluster]$

5.9 Push the Updated /home/cephadmin/cephcluster/ceph.conf to All Four Nodes (Run on the Deployment Node)

[cephadmin@cephtest001 cephcluster]$ ceph-deploy --overwrite-conf admin cephtest001 cephtest002 cephtest003 cephtest004
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy --overwrite-conf admin cephtest001 cephtest002 cephtest003 cephtest004
[ceph_deploy.cli][INFO  ] ceph-deploy options:

5.10 Change the /etc/ceph Directory Permissions (Run on All Nodes)

[cephadmin@cephtest004 ~]$ sudo chown -R cephadmin:cephadmin /etc/ceph

5.11 Add OSDs on the New Node

Same method as for a single node; a sketch for cephtest004 follows.
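
The loop below is a sketch run from /home/cephadmin/cephcluster on the deployment node; cephtest004 ends up with three OSDs, but the device names are assumptions, so check lsblk on cephtest004 first:

for dev in /dev/sdb /dev/sdc /dev/sdd
do
ceph-deploy disk zap cephtest004 $dev
ceph-deploy osd create cephtest004 --data $dev
done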

Check:

[cephadmin@cephtest001 cephcluster]$ ceph -s
  cluster:
    id:     76d33994-3ef3-477d-876d-ea2a23eadd17
    health: HEALTH_WARN
            noin,nobackfill flag(s) set

  services:
    mon: 4 daemons, quorum cephtest002,cephtest001,cephtest003,cephtest004 (age 27m)
    mgr: cephtest001(active, since 57m), standbys: cephtest002, cephtest003, cephtest004
    osd: 10 osds: 10 up (since 12s), 7 in (since 61m); 1 remapped pgs
         flags noin,nobackfill
    rgw: 1 daemon active (cephtest004)

  task status:

  data:
    pools:   4 pools, 128 pgs
    objects: 187 objects, 1.2 KiB
    usage:   10 GiB used, 20 TiB / 20 TiB avail
    pgs:     127 active+clean
             1   active+clean+remapped

  io:
    recovery: 0 B/s, 2 objects/s

[cephadmin@cephtest001 cephcluster]$

5.12 Unset the noin and nobackfill Flags (Run on the Deployment Node, During Off-Peak Hours)

[cephadmin@cephtest001 cephcluster]$ ceph osd unset noin
noin is unset
[cephadmin@cephtest001 cephcluster]$ ceph osd unset nobackfill
nobackfill is unset
[cephadmin@cephtest001 cephcluster]$ ceph -s
  cluster:
    id:     76d33994-3ef3-477d-876d-ea2a23eadd17
    health: HEALTH_OK

  services:
    mon: 4 daemons, quorum cephtest002,cephtest001,cephtest003,cephtest004 (age 30m)
    mgr: cephtest001(active, since 60m), standbys: cephtest002, cephtest003, cephtest004
    osd: 10 osds: 10 up (since 3m), 7 in (since 64m); 1 remapped pgs
    rgw: 1 daemon active (cephtest004)

  task status:

  data:
    pools:   4 pools, 128 pgs
    objects: 187 objects, 1.2 KiB
    usage:   9.1 GiB used, 16 TiB / 16 TiB avail
    pgs:     127 active+clean
             1   active+clean+remapped


5.13 Bring the New OSDs Into the Cluster

[cephadmin@cephtest001 cephcluster]$ ceph osd tree
ID CLASS WEIGHT   TYPE NAME            STATUS REWEIGHT PRI-AFF
-1       20.00764 root default
-3        5.45547     host cephtest001
 0   hdd  1.09109         osd.0            up  1.00000 1.00000
 1   hdd  1.09109         osd.1            up  1.00000 1.00000
 2   hdd  1.09109         osd.2            up  1.00000 1.00000
 3   hdd  1.09109         osd.3            up  1.00000 1.00000
 4   hdd  1.09109         osd.4            up  1.00000 1.00000
-5        1.81879     host cephtest002
 5   hdd  1.81879         osd.5            up  1.00000 1.00000
-7        1.81879     host cephtest003
 6   hdd  1.81879         osd.6            up  1.00000 1.00000
-9       10.91460     host cephtest004
 7   hdd  3.63820         osd.7            up        0 1.00000
 8   hdd  3.63820         osd.8            up        0 1.00000
 9   hdd  3.63820         osd.9            up        0 1.00000
[cephadmin@cephtest001 cephcluster]$ ceph osd in 7
marked in osd.7.
[cephadmin@cephtest001 cephcluster]$ ceph osd in 8
marked in osd.8.
[cephadmin@cephtest001 cephcluster]$ ceph osd in 9
marked in osd.9.
[cephadmin@cephtest001 cephcluster]$ ceph -s
  cluster:
    id:     76d33994-3ef3-477d-876d-ea2a23eadd17
    health: HEALTH_WARN
            Degraded data redundancy: 55/561 objects degraded (9.804%), 12 pgs degraded

  services:
    mon: 4 daemons, quorum cephtest002,cephtest001,cephtest003,cephtest004 (age 37m)
    mgr: cephtest001(active, since 67m), standbys: cephtest002, cephtest003, cephtest004
    osd: 10 osds: 10 up (since 11m), 10 in (since 2s)
    rgw: 1 daemon active (cephtest004)

  task status:

  data:
    pools:   4 pools, 128 pgs
    objects: 187 objects, 1.2 KiB
    usage:   9.1 GiB used, 16 TiB / 16 TiB avail
    pgs:     25.781% pgs not active
             55/561 objects degraded (9.804%)
             5/561 objects misplaced (0.891%)
             80 active+clean
             33 peering
             11 active+recovery_wait+degraded
             3  active+recovery_wait
             1  active+recovering+degraded

  io:
    recovery: 0 B/s, 3 objects/s

[cephadmin@cephtest001 cephcluster]$
