Installing a Ceph Cluster on Three Physical Machines in a Production Environment (Linux, CentOS 7.6)

0 Revision history

| No. | Revision | Date |
| --- | --- | --- |
| 1 | Initial version | 2021-04-23 |

1 Abstract

This article describes installing the Ceph Nautilus release on CentOS 7.6 with ceph-deploy. It focuses on a production deployment, with redundancy configured at the network level in particular.

2 Environment

(1) Hardware

2.1.1 Servers

| Hostname | Brand / model | Configuration | Qty |
| --- | --- | --- | --- |
| proceph01.pro.kxdigit.com | Inspur SA5212M5 | 4210 ×2 / 128 GB RAM / SSD: 240 GB ×2, 960 GB ×2 / SAS: 8 TB 7.2K ×6 / 10G X710 ×2 / 1G PHY NIC ×1 / RAID card SAS3108 2GB ×1 | 1 |
| proceph02.pro.kxdigit.com | Inspur SA5212M5 | 4210 ×2 / 128 GB RAM / SSD: 240 GB ×2, 960 GB ×2 / SAS: 8 TB 7.2K ×6 / 10G X710 ×2 / 1G PHY NIC ×1 / RAID card SAS3108 2GB ×1 | 1 |
| proceph03.pro.kxdigit.com | Inspur SA5212M5 | 4210 ×2 / 128 GB RAM / SSD: 240 GB ×2, 960 GB ×2 / SAS: 8 TB 7.2K ×6 / 10G X710 ×2 / 1G PHY NIC ×1 / RAID card SAS3108 2GB ×1 | 1 |

2.1.2 Switches

Two identically configured switches are stacked (IRF).

| Switch name | Brand | Model | Configuration | Qty |
| --- | --- | --- | --- | --- |
| A3_1F_DC_openstack_test_jieru_train-irf_b02&b03 | H3C | LS-6860-54HF | 48 × 10G optical ports, 6 × 40G optical ports | 2 |

(2) Operating system

The operating system is CentOS 7.6.1810, 64-bit:

[root@localhost vlan]# cat /etc/centos-release
CentOS Linux release 7.6.1810 (Core)
[root@localhost vlan]#

(3) Ceph information

Ceph Nautilus, installed with ceph-deploy (version 2.0.1, as shown in section 3.3.2).

3 Implementation

(1) Deployment plan

3.1.1 Network plan

| Host | Physical port | NIC | Bond mode | Bond / IP address | Switch port | Switch aggregation | VLAN | Purpose |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| proceph01 | 10G optical port 1 | enp59s0f1 | mode4 | bond0: 10.3.140.31 | B02.40U7 | BAGG7 / LACP | access 140 | API / management |
| proceph01 | 10G optical port 3 | enp175s0f1 | mode4 | | B03.40U7 | BAGG7 / LACP | access 140 | API / management |
| proceph01 | 10G optical port 2 | enp59s0f0 | mode4 | bond1: 10.3.141.31 | B02.40U31 | BAGG31 / LACP | access 141 | dedicated storage network |
| proceph01 | 10G optical port 4 | enp175s0f0 | mode4 | | B03.40U31 | BAGG31 / LACP | access 141 | dedicated storage network |
| proceph02 | 10G optical port 1 | enp59s0f1 | mode4 | bond0: 10.3.140.32 | B02.40U8 | BAGG8 / LACP | access 140 | API / management |
| proceph02 | 10G optical port 3 | enp175s0f1 | mode4 | | B03.40U8 | BAGG8 / LACP | access 140 | API / management |
| proceph02 | 10G optical port 2 | enp59s0f0 | mode4 | bond1: 10.3.141.32 | B02.40U32 | BAGG32 / LACP | access 141 | dedicated storage network |
| proceph02 | 10G optical port 4 | enp175s0f0 | mode4 | | B03.40U32 | BAGG32 / LACP | access 141 | dedicated storage network |
| proceph03 | 10G optical port 1 | enp59s0f1 | mode4 | bond0: 10.3.140.33 | B02.40U9 | BAGG9 / LACP | access 140 | API / management |
| proceph03 | 10G optical port 3 | enp175s0f1 | mode4 | | B03.40U9 | BAGG9 / LACP | access 140 | API / management |
| proceph03 | 10G optical port 2 | enp59s0f0 | mode4 | bond1: 10.3.141.33 | B02.40U33 | BAGG33 / LACP | access 141 | dedicated storage network |
| proceph03 | 10G optical port 4 | enp175s0f0 | mode4 | | B03.40U33 | BAGG33 / LACP | access 141 | dedicated storage network |

3.1.2 Node roles

| Hostname | IP | Disks | Roles |
| --- | --- | --- | --- |
| proceph01.pro.kxdigit.com | 10.3.140.31 | system disk: /dev/sda; data disks: /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg | ceph-deploy, monitor, mgr, mds, osd |
| proceph02.pro.kxdigit.com | 10.3.140.32 | system disk: /dev/sda; data disks: /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg | monitor, mgr, mds, osd |
| proceph03.pro.kxdigit.com | 10.3.140.33 | system disk: /dev/sda; data disks: /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg | monitor, mgr, mds, osd |

3.1.3 RAID notes

The system disks are configured as RAID 1. Each data disk is configured as its own single-disk RAID 0: six data disks, six separate RAID 0 virtual drives.

(2) Deployment preparation (on all three nodes)

For the detailed steps of 3.2.1-3.2.5, see the earlier articles "linux 基于三台物理机安装ceph nautilus" and "linux (centos7) 使用ceph-deploy 安装ceph".

3.2.1 Configure bond0

Refer to [this article](https://www.cnblogs.com/weiwei2021/p/14690254.html); a sketch of the resulting configuration is shown below.
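
For reference, a minimal sketch of what the mode4 (802.3ad/LACP) bond for proceph01 looks like in the CentOS 7 network-scripts. The gateway value and the exact BONDING_OPTS are placeholders/assumptions; adjust them to your environment, then restart the network service.

```
# /etc/sysconfig/network-scripts/ifcfg-bond0  (sketch)
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
ONBOOT=yes
BOOTPROTO=none
IPADDR=10.3.140.31
PREFIX=24
GATEWAY=10.3.140.254   # placeholder, use the real gateway of VLAN 140
BONDING_OPTS="mode=4 miimon=100 lacp_rate=fast xmit_hash_policy=layer3+4"

# /etc/sysconfig/network-scripts/ifcfg-enp59s0f1  (repeat for enp175s0f1)
DEVICE=enp59s0f1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
```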

3.2.2 Configure bond1

Same as above, using the bond1 addresses and member NICs.

3.2.3 Disable reverse path filtering (rp_filter)

With two addresses configured on a machine, leaving rp_filter enabled means only one address is usable from outside: the one reached via the first default route in the routing table.

echo 0 > /proc/sys/net/ipv4/conf/all/rp_filter

echo 0 > /proc/sys/net/ipv4/conf/bond0/rp_filter

echo 0 > /proc/sys/net/ipv4/conf/bond1/rp_filter

Make the change persistent:

[root@localhost etc]# cp /etc/sysctl.conf /etc/sysctl.conf.bak.orig
[root@localhost etc]# vim /etc/sysctl.conf


# close dynamic route for 2 IP

net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.bond0.rp_filter = 0
net.ipv4.conf.bond1.rp_filter = 0
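
To apply the persistent settings immediately, without waiting for a reboot:

```
sudo sysctl -p
```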

3.2.4 Configure DNS

Done with an ansible-playbook:

[dev@10-3-170-32 base]$ ansible-playbook modifydns.yml

On the DNS server, add the following records:

| Domain name | Resolves to |
| --- | --- |
| proceph01.pro.kxdigit.com | 10.3.140.31 |
| proceph02.pro.kxdigit.com | 10.3.140.32 |
| proceph03.pro.kxdigit.com | 10.3.140.33 |
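
As a quick check (assuming bind-utils is installed), resolution can be verified from any node:

```
for h in proceph01 proceph02 proceph03; do
    nslookup ${h}.pro.kxdigit.com
done
```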

3.2.5 Adjust the sshd configuration

With DNS configured, sshd performs DNS lookups on login by default, which makes SSH logins noticeably slow.

[root@localhost ssh]# cp sshd_config sshd_config.bak.orig
[root@localhost ssh]# vim sshd_config
[root@localhost ssh]# systemctl restart sshd
[root@localhost ssh]#

Just disable the default:

#UseDNS yes
UseDNS no

3.2.6 Configure yum repositories

Done with ansible-playbooks.
Update the operating system repositories:

[dev@10-3-170-32 base]$ ansible-playbook updateyum.yml

Update the Ceph repository:

[dev@10-3-170-32 base]$ ansible-playbook updatecephyum.yml
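
The playbook's repository definition is not shown here; a sketch of a Nautilus ceph.repo, pointing at the upstream repository (swap in an internal mirror if you have one), would look roughly like:

```
# /etc/yum.repos.d/ceph.repo  (sketch)
[ceph]
name=Ceph x86_64 packages
baseurl=https://download.ceph.com/rpm-nautilus/el7/x86_64
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc

[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-nautilus/el7/noarch
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
```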

3.2.7 Configure the time service (chrony)

Done with an ansible-playbook:

[dev@10-3-170-32 base]$ ansible-playbook modifychronyclient.yml
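
The playbook body is not shown; on each node it presumably boils down to pointing chrony at the local NTP server (the server address below is a placeholder) and verifying the sync state:

```
# /etc/chrony.conf -- replace <ntp-server> with the actual time server
server <ntp-server> iburst

# apply and verify
systemctl enable --now chronyd
chronyc sources -v    # the selected source should be marked with ^*
```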

3.2.8 Configure the hosts file

Append the following entries to /etc/hosts:

10.3.140.31 proceph01
10.3.140.32 proceph02
10.3.140.33 proceph03

3.2.9 Disable the firewall and SELinux

[dev@10-3-170-32 base]$ ansible-playbook closefirewalldandselinux.yml
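
The playbook essentially runs the following on each node (a sketch of the equivalent manual commands):

```
systemctl disable --now firewalld
setenforce 0                                                         # takes effect immediately
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config  # persists across reboots
```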

3.2.10 Set the hostnames

[root@localhost ~]#  hostnamectl set-hostname proceph01.pro.kxdigit.com
[root@localhost ~]# exit
登出
Connection to 10.3.140.31 closed.
[dev@10-3-170-32 base]$ ssh root@10.3.140.32
Last login: Fri Apr 23 16:37:32 2021 from 10.3.170.32
[root@localhost ~]# hostnamectl set-hostname proceph02.pro.kxdigit.com
[root@localhost ~]# exit
登出
Connection to 10.3.140.32 closed.
[dev@10-3-170-32 base]$ ssh root@10.3.140.33
Last login: Fri Apr 23 16:37:32 2021 from 10.3.170.32
[root@localhost ~]# hostnamectl set-hostname proceph03.pro.kxdigit.com
[root@localhost ~]# exit

3.2.11 Create the deployment user cephadmin

Create this user on all three nodes and grant it passwordless sudo:

[root@proceph01 ~]# useradd cephadmin
[root@proceph01 ~]# echo "cephnau@2020" | passwd --stdin cephadmin
更改用户 cephadmin 的密码 。
passwd:所有的身份验证令牌已经成功更新。
[root@proceph01 ~]# echo "cephadmin ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephadmin
cephadmin ALL = (root) NOPASSWD:ALL
[root@proceph01 ~]# chmod 0440 /etc/sudoers.d/cephadmin
[root@proceph01 ~]#

3.2.12 Set up passwordless SSH for the cephadmin user

The deploy node must be able to log in to all three nodes without a password. Here the deploy node is the same machine as proceph01, so it is included as well.

[root@proceph01 ~]# su - cephadmin
[cephadmin@proceph01 ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/cephadmin/.ssh/id_rsa):
Created directory '/home/cephadmin/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/cephadmin/.ssh/id_rsa.
Your public key has been saved in /home/cephadmin/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:/N1IGwJzKLKEEvnIqbnz4BaVMqSe2jx3SsfBaCHSDG4 cephadmin@proceph01.pro.kxdigit.com
The key's randomart image is:
+---[RSA 2048]----+
|o.               |
|o* .     .       |
|*E* = . + .      |
|+B.= * o +       |
|o.= + o S . o    |
|o+ . . . . + =   |
|o+. . o   . + .  |
|=o+....          |
|.+.o.o           |
+----[SHA256]-----+
[cephadmin@proceph01 ~]$ ssh-copy-id proceph01.pro.kxdigit.com
/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/cephadmin/.ssh/id_rsa.pub"
The authenticity of host 'proceph01.pro.kxdigit.com (10.3.140.31)' can't be established.
ECDSA key fingerprint is SHA256:IDIkIjgVg6mimwePYirWVtNu6XN34kDpeWhcUqLn7bo.
ECDSA key fingerprint is MD5:6a:2c:8e:d3:57:32:57:7e:10:4c:2f:84:c5:a2:5e:ab.
Are you sure you want to continue connecting (yes/no)? yes
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
cephadmin@proceph01.pro.kxdigit.com's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'proceph01.pro.kxdigit.com'"
and check to make sure that only the key(s) you wanted were added.

[cephadmin@proceph01 ~]$ ssh-copy-id cephadmin@proceph01.pro.kxdigit.com
/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/cephadmin/.ssh/id_rsa.pub"
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed

/bin/ssh-copy-id: WARNING: All keys were skipped because they already exist on the remote system.
                (if you think this is a mistake, you may want to use -f option)

[cephadmin@proceph01 ~]$ ssh-copy-id cephadmin@proceph02.pro.kxdigit.com
/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/cephadmin/.ssh/id_rsa.pub"
The authenticity of host 'proceph02.pro.kxdigit.com (10.3.140.32)' can't be established.
ECDSA key fingerprint is SHA256:0UefKLdjPASb5QOcZtvQ0P0ed1nxlwJL9tVqjalBKO8.
ECDSA key fingerprint is MD5:15:1d:05:62:f3:1e:38:71:1a:f8:58:56:08:bf:39:b9.
Are you sure you want to continue connecting (yes/no)? yes
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
cephadmin@proceph02.pro.kxdigit.com's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'cephadmin@proceph02.pro.kxdigit.com'"
and check to make sure that only the key(s) you wanted were added.

[cephadmin@proceph01 ~]$ ssh-copy-id cephadmin@proceph03.pro.kxdigit.com
/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/cephadmin/.ssh/id_rsa.pub"
The authenticity of host 'proceph03.pro.kxdigit.com (10.3.140.33)' can't be established.
ECDSA key fingerprint is SHA256:fkkrIhBYdiU2YixiBKQn6f8cr72F4MdlydFk7o5luNU.
ECDSA key fingerprint is MD5:e8:9c:85:bb:01:e5:3e:d8:20:86:50:5f:5a:f2:f9:80.
Are you sure you want to continue connecting (yes/no)? yes
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
cephadmin@proceph03.pro.kxdigit.com's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'cephadmin@proceph03.pro.kxdigit.com'"
and check to make sure that only the key(s) you wanted were added.

[cephadmin@proceph01 ~]$
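
A quick way to confirm that passwordless login works from the deploy node (no password prompts should appear):

```
for h in proceph01 proceph02 proceph03; do
    ssh cephadmin@${h}.pro.kxdigit.com hostname
done
```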

(3) Deploy Ceph

3.3.1 Install Ceph on all nodes

All three nodes need the packages installed:

[cephadmin@proceph02 ~]$ sudo yum -y install ceph ceph-radosgw
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
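
Once the installation finishes, it is worth confirming that the same Nautilus release landed on every node, for example:

```
for h in proceph01 proceph02 proceph03; do
    ssh cephadmin@$h "ceph --version"
done
```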

3.3.2 Install ceph-deploy on the deploy node

On the deploy node proceph01, install ceph-deploy as the cephadmin user:

[root@proceph01 ~]# su - cephadmin
上一次登录:五 4月 23 16:59:30 CST 2021pts/0 上
[cephadmin@proceph01 ~]$ sudo yum -y install ceph-deploy python-pip
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
ceph                                                                                                      | 2.9 kB  00:00:00
ceph/primary_db                                                                                           |  87 kB  00:00:00
Resolving Dependencies
--> Running transaction check
---> Package ceph-deploy.noarch 0:2.0.1-0 will be installed
---> Package python2-pip.noarch 0:8.1.2-12.el7 will be installed

[cephadmin@proceph01 ~]$ ceph-deploy --version
2.0.1
[cephadmin@proceph01 ~]$

3.3.3 Deploy the Ceph cluster

All of the following is performed on the ceph-deploy node.

3.3.4 Create the cluster (ceph-deploy new)

Run as the cephadmin user on the deploy node:

[cephadmin@proceph01 cephcluster]$ ceph-deploy new proceph01 proceph02 proceph03
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy new proceph01 proceph02 proceph03
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  func                          : <function new at 0x7f665c92b230>
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f665c947e18>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  ssh_copykey                   : True
[ceph_deploy.cli][INFO  ]  mon                           : ['proceph01', 'proceph02', 'proceph03']
[ceph_deploy.cli][INFO  ]  public_network                : None
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  cluster_network               : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  fsid                          : None

This generates the following configuration files:

[cephadmin@proceph01 cephcluster]$ ll
total 20
-rw-rw-r--. 1 cephadmin cephadmin  244 Apr 23 17:44 ceph.conf
-rw-rw-r--. 1 cephadmin cephadmin 9268 Apr 23 17:44 ceph-deploy-ceph.log
-rw-------. 1 cephadmin cephadmin   73 Apr 23 17:44 ceph.mon.keyring
[cephadmin@proceph01 cephcluster]$

PS: `ceph-deploy --cluster {cluster-name} new node1 node2` creates a Ceph cluster with a custom cluster name; the default name is ceph.

Edit ceph.conf and add the network settings:

[global]
fsid = ad0bf159-1b6f-472b-94de-83f713c339a3
mon_initial_members = proceph01, proceph02, proceph03
mon_host = 10.3.140.31,10.3.140.32,10.3.140.33
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

public network = 10.3.140.0/24

cluster network = 10.3.141.0/24

The cluster network should preferably run over the fiber (10G) network.

3.3.5 Initialize the monitors and generate all keys

Run on the deploy node:

[cephadmin@proceph01 cephcluster]$  ceph-deploy mon create-initial
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy mon create-initial

The generated keyrings:

[cephadmin@proceph01 cephcluster]$ ls -al
total 88
drwxrwxr-x. 2 cephadmin cephadmin   270 Apr 23 17:58 .
drwx------. 7 cephadmin cephadmin   199 Apr 23 17:49 ..
-rw-------. 1 cephadmin cephadmin   113 Apr 23 17:58 ceph.bootstrap-mds.keyring
-rw-------. 1 cephadmin cephadmin   113 Apr 23 17:58 ceph.bootstrap-mgr.keyring
-rw-------. 1 cephadmin cephadmin   113 Apr 23 17:58 ceph.bootstrap-osd.keyring
-rw-------. 1 cephadmin cephadmin   113 Apr 23 17:58 ceph.bootstrap-rgw.keyring
-rw-------. 1 cephadmin cephadmin   151 Apr 23 17:58 ceph.client.admin.keyring
-rw-rw-r--. 1 cephadmin cephadmin   308 Apr 23 17:49 ceph.conf
-rw-rw-r--. 1 cephadmin cephadmin   244 Apr 23 17:47 ceph.conf.bak.orig
-rw-rw-r--. 1 cephadmin cephadmin 56416 Apr 23 17:58 ceph-deploy-ceph.log
-rw-------. 1 cephadmin cephadmin    73 Apr 23 17:44 ceph.mon.keyring
[cephadmin@proceph01 cephcluster]$

3.3.6 Distribute the configuration to all nodes

[cephadmin@proceph01 cephcluster]$ ceph-deploy admin proceph01 proceph02 proceph03
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy admin proceph01 proceph02 proceph03

Switch to root and check the cluster status:

[cephadmin@proceph01 cephcluster]$ su -
Password:
Last login: Fri Apr 23 17:11:56 CST 2021 from 10.3.170.32 on pts/0
Last failed login: Fri Apr 23 18:01:55 CST 2021 on pts/0
There was 1 failed login attempt since the last successful login.
[root@proceph01 ~]# ceph -s
  cluster:
    id:     ad0bf159-1b6f-472b-94de-83f713c339a3
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum proceph01,proceph02,proceph03 (age 3m)
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

[root@proceph01 ~]#


[root@proceph02 ~]# ceph -s
  cluster:
    id:     ad0bf159-1b6f-472b-94de-83f713c339a3
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum proceph01,proceph02,proceph03 (age 4m)
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

[root@proceph02 ~]# exit
logout
Connection to proceph02 closed.
[root@proceph01 ~]# exit
logout
[cephadmin@proceph01 cephcluster]$ ssh proceph03
Last login: Fri Apr 23 17:56:35 2021 from 10.3.140.31
[cephadmin@proceph03 ~]$ sudo ceph -s
  cluster:
    id:     ad0bf159-1b6f-472b-94de-83f713c339a3
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum proceph01,proceph02,proceph03 (age 5m)
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

[cephadmin@proceph03 ~]$

To run ceph -s as the cephadmin user, the ownership of the /etc/ceph directory must be changed:

[cephadmin@proceph01 cephcluster]$ sudo chown -R cephadmin:cephadmin /etc/ceph
[cephadmin@proceph01 cephcluster]$ ceph -s
  cluster:
    id:     ad0bf159-1b6f-472b-94de-83f713c339a3
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum proceph01,proceph02,proceph03 (age 7m)
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

[cephadmin@proceph01 cephcluster]$

sudo chown -R cephadmin:cephadmin /etc/ceph must be run on all three nodes (see the loop below).
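
A minimal way to do that from the deploy node, assuming passwordless SSH and sudo are set up as above:

```
for h in proceph01 proceph02 proceph03; do
    ssh cephadmin@$h "sudo chown -R cephadmin:cephadmin /etc/ceph"
done
```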

3.3.7 Configure the OSDs

Run as the cephadmin user on the deploy node.

OSDs are added on all three nodes, and all commands can be run from the deploy node. First check the disks on each node with lsblk, then add the OSDs with a loop of the form:

for dev in /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
do
ceph-deploy disk zap proceph01 $dev
ceph-deploy osd create proceph01 --data $dev
done
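
Since the three hosts have identical disk layouts, the per-host loops in the following subsections could equally be collapsed into a single outer loop over the hosts (a sketch; the subsections below show what was actually run here):

```
for host in proceph01 proceph02 proceph03; do
    for dev in /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg; do
        ceph-deploy disk zap $host $dev
        ceph-deploy osd create $host --data $dev
    done
done
```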

3.3.7.1 Add OSDs on proceph01

3.3.7.1.1 Check the disk names
[cephadmin@proceph01 ~]$ lsblk
NAME            MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda               8:0    0 223.1G  0 disk
├─sda1            8:1    0   200M  0 part /boot/efi
├─sda2            8:2    0     1G  0 part /boot
└─sda3            8:3    0   221G  0 part
  ├─centos-root 253:0    0   175G  0 lvm  /
  ├─centos-swap 253:1    0    16G  0 lvm  [SWAP]
  └─centos-home 253:2    0    30G  0 lvm  /home
sdb               8:16   0   7.3T  0 disk
sdc               8:32   0   7.3T  0 disk
sdd               8:48   0   7.3T  0 disk
sde               8:64   0   7.3T  0 disk
sdf               8:80   0   7.3T  0 disk
sdg               8:96   0   7.3T  0 disk
[cephadmin@proceph01 ~]$

3.3.7.1.2 Add the OSDs on proceph01

Run in the /home/cephadmin/cephcluster directory:

[cephadmin@proceph01 cephcluster]$ pwd
/home/cephadmin/cephcluster
[cephadmin@proceph01 cephcluster]$ for dev in /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
> do
> ceph-deploy disk zap proceph01 $dev
> ceph-deploy osd create proceph01 --data $dev
> done
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy disk zap proceph01 /dev/sdb
[ceph_deploy.cli][INFO  ] ceph-deploy options:

Check: six new OSDs are now present.

[cephadmin@proceph01 cephcluster]$ ceph -s
  cluster:
    id:     ad0bf159-1b6f-472b-94de-83f713c339a3
    health: HEALTH_WARN
            no active mgr

  services:
    mon: 3 daemons, quorum proceph01,proceph02,proceph03 (age 2h)
    mgr: no daemons active
    osd: 6 osds: 6 up (since 51s), 6 in (since 51s)

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

[cephadmin@proceph01 cephcluster]$

3.3.7.2 Add OSDs on proceph02

Run from the deploy node.

First log in to proceph02 and check the disks:

[cephadmin@proceph02 ~]$ lsblk
NAME            MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda               8:0    0 223.1G  0 disk
├─sda1            8:1    0   200M  0 part /boot/efi
├─sda2            8:2    0     1G  0 part /boot
└─sda3            8:3    0   221G  0 part
  ├─centos-root 253:0    0   175G  0 lvm  /
  ├─centos-swap 253:1    0    16G  0 lvm  [SWAP]
  └─centos-home 253:2    0    30G  0 lvm  /home
sdb               8:16   0   7.3T  0 disk
sdc               8:32   0   7.3T  0 disk
sdd               8:48   0   7.3T  0 disk
sde               8:64   0   7.3T  0 disk
sdf               8:80   0   7.3T  0 disk
sdg               8:96   0   7.3T  0 disk
[cephadmin@proceph02 ~]$

Then, on the deploy node, run the same loop from /home/cephadmin/cephcluster:

[cephadmin@proceph01 cephcluster]$ pwd
/home/cephadmin/cephcluster
[cephadmin@proceph01 cephcluster]$ for dev in /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
> do
> ceph-deploy disk zap proceph02 $dev
> ceph-deploy osd create proceph02 --data $dev
> done
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy disk zap proceph02 /dev/sdb

Check:

[cephadmin@proceph01 cephcluster]$ ceph -s
  cluster:
    id:     ad0bf159-1b6f-472b-94de-83f713c339a3
    health: HEALTH_WARN
            no active mgr

  services:
    mon: 3 daemons, quorum proceph01,proceph02,proceph03 (age 5h)
    mgr: no daemons active
    osd: 12 osds: 12 up (since 25m), 12 in (since 25m)

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

[cephadmin@proceph01 cephcluster]$

3.3.7.3 Add OSDs on proceph03

On node three, check the new data disks:

[cephadmin@proceph03 ~]$ lsblk
NAME            MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda               8:0    0 223.1G  0 disk
├─sda1            8:1    0   200M  0 part /boot/efi
├─sda2            8:2    0     1G  0 part /boot
└─sda3            8:3    0   221G  0 part
  ├─centos-root 253:0    0   175G  0 lvm  /
  ├─centos-swap 253:1    0    16G  0 lvm  [SWAP]
  └─centos-home 253:2    0    30G  0 lvm  /home
sdb               8:16   0   7.3T  0 disk
sdc               8:32   0   7.3T  0 disk
sdd               8:48   0   7.3T  0 disk
sde               8:64   0   7.3T  0 disk
sdf               8:80   0   7.3T  0 disk
sdg               8:96   0   7.3T  0 disk
[cephadmin@proceph03 ~]$

Back on the deploy node, add the OSDs:

[cephadmin@proceph01 cephcluster]$ for dev in /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
> do
> ceph-deploy disk zap proceph03 $dev
> ceph-deploy osd create proceph03 --data $dev
> done
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy disk zap proceph03 /dev/sdb
[ceph_deploy.cli][INFO  ] ceph-deploy options:

Check:

[cephadmin@proceph01 cephcluster]$ ceph -s
  cluster:
    id:     ad0bf159-1b6f-472b-94de-83f713c339a3
    health: HEALTH_WARN
            no active mgr

  services:
    mon: 3 daemons, quorum proceph01,proceph02,proceph03 (age 5h)
    mgr: no daemons active
    osd: 18 osds: 18 up (since 18s), 18 in (since 18s)

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

[cephadmin@proceph01 cephcluster]$
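
Besides ceph -s, the OSD layout can be sanity-checked with:

```
ceph osd tree    # should list 18 OSDs, 6 under each of the three hosts, all up
ceph osd df      # per-OSD size and utilization
```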

3.3.8 Deploy mgr

Run on the deploy node:


[cephadmin@proceph01 cephcluster]$ ceph-deploy mgr create proceph01 proceph02 proceph03
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy mgr create proceph01 proceph02 proceph03

Check:

[cephadmin@proceph01 cephcluster]$ ceph -s
  cluster:
    id:     ad0bf159-1b6f-472b-94de-83f713c339a3
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum proceph01,proceph02,proceph03 (age 5h)
    mgr: proceph01(active, since 24s), standbys: proceph02, proceph03
    osd: 18 osds: 18 up (since 2m), 18 in (since 2m)

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   18 GiB used, 131 TiB / 131 TiB avail
    pgs:

[cephadmin@proceph01 cephcluster]$

3.3.9 Install mgr-dashboard (on all three nodes)

Install it on all three nodes, but enable it only on the active mgr node for now.
Install directly with yum; the example below is from proceph01, and proceph02 and proceph03 need the same package.

[cephadmin@proceph01 cephcluster]$ sudo yum install ceph-mgr-dashboard
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile

3.3.10 Enable mgr-dashboard (on the active mgr node)

[cephadmin@proceph01 cephcluster]$ ceph -s
  cluster:
    id:     ad0bf159-1b6f-472b-94de-83f713c339a3
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum proceph01,proceph02,proceph03 (age 5h)
    mgr: proceph01(active, since 94s), standbys: proceph02, proceph03
    osd: 18 osds: 18 up (since 6m), 18 in (since 6m)

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   18 GiB used, 131 TiB / 131 TiB avail
    pgs:

[cephadmin@proceph01 cephcluster]$

mgr: proceph01(active, since 94s), standbys: proceph02, proceph03

proceph01 is the active mgr, so enable the dashboard there:

[cephadmin@proceph01 cephcluster]$ ceph mgr module enable dashboard
[cephadmin@proceph01 cephcluster]$ ceph dashboard create-self-signed-cert
Self-signed certificate created
[cephadmin@proceph01 cephcluster]$ ceph dashboard set-login-credentials admin admin
******************************************************************
***          WARNING: this command is deprecated.              ***
*** Please use the ac-user-* related commands to manage users. ***
******************************************************************
Username and password updated
[cephadmin@proceph01 cephcluster]$

Then log in at https://10.3.140.31:8443 with username admin and password admin.
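
If in doubt about the exact dashboard URL (it follows the active mgr), it can be read from the cluster itself:

```
ceph mgr services
# typically returns something like: {"dashboard": "https://proceph01.pro.kxdigit.com:8443/"}
```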
