Ceph (2) - Installing a Ceph Cluster, Method 2: Deploying an Octopus Ceph Cluster with cephadm



Note: Starting with the Octopus release, community Ceph can no longer be installed with ceph-deploy. The new installation tool is cephadm, which deploys and runs the Ceph cluster's services in containers. This article describes how to install an Octopus Ceph cluster with cephadm.

Installation Environment

Ceph Cluster Nodes

This article uses the cephadm tool to deploy a community Ceph cluster running the Octopus release. The cluster's mon, mgr, and osd services are deployed on the following 3 VM nodes.

  • ceph-node1 (roles: mon/mgr/osd)
  • ceph-node2 (roles: mon/mgr/osd)
  • ceph-node3 (roles: mon/osd)

Ceph Cluster Host Environment

This article uses VirtualBox VMs to simulate physical servers. When configuring the VMs, give each of the 3 VMs that will run the Ceph cluster 2 additional storage disks (the sdb and sdc disks shown in the figure below), allocating 50GB to each disk.
[Figure: VM storage settings with the two additional 50GB disks (sdb and sdc)]
Give each VM 2 network adapters, one configured as Bridged and one as Host-Only. They carry, respectively, the public subnet used for external access to the Ceph cluster (this article uses 192.168.1.0) and the private subnet used for internal cluster traffic (this article uses 192.168.99.0).
[Figure: VM network adapter settings (Bridged and Host-Only)]
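If you prefer to script the VM preparation instead of using the VirtualBox GUI, the sketch below shows the idea for one node. The storage controller name "SATA", the host NIC name enp3s0, the host-only adapter vboxnet0, and the disk file names are assumptions that depend on how your VM was created, so adjust them before running.

$ VM=ceph-node1
$ VBoxManage createmedium disk --filename ${VM}-sdb.vdi --size 51200     ## 50GB data disk -> sdb
$ VBoxManage createmedium disk --filename ${VM}-sdc.vdi --size 51200     ## 50GB data disk -> sdc
$ VBoxManage storageattach ${VM} --storagectl "SATA" --port 1 --device 0 --type hdd --medium ${VM}-sdb.vdi
$ VBoxManage storageattach ${VM} --storagectl "SATA" --port 2 --device 0 --type hdd --medium ${VM}-sdc.vdi
$ VBoxManage modifyvm ${VM} --nic1 bridged --bridgeadapter1 enp3s0       ## public 192.168.1.0 network
$ VBoxManage modifyvm ${VM} --nic2 hostonly --hostonlyadapter2 vboxnet0  ## private 192.168.99.0 network

Repeat the same commands for ceph-node2 and ceph-node3 (the VM must be powered off while disks are attached).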
After creating the VMs, perform a minimal installation of CentOS 7.x or RHEL 7.x on all of the nodes above. During installation, assign each node a hostname and static IP addresses according to the following list.

  • ceph-node1: 192.168.1.201/24, 192.168.99.201/24
  • ceph-node2: 192.168.1.202/24, 192.168.99.202/24
  • ceph-node3: 192.168.1.203/24, 192.168.99.203/24

Since the 2 network adapters in a VirtualBox VM are named "enp0s3" and "enp0s8" by default, the following commands can be used to set the VM's 2 IP addresses (replace XXX with 201, 202, or 203 for each node).

nmcli con modify enp0s3 ipv4.method manual ipv4.addresses 192.168.1.XXX/24
nmcli con modify enp0s8 ipv4.method manual ipv4.addresses 192.168.99.XXX/24
systemctl restart network
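For the nodes to reach the yum mirrors and docker.io they also need a default route and DNS on the bridged interface. The gateway 192.168.1.1 and the DNS servers below are assumptions for a typical lab bridge; replace them with whatever your bridged network actually uses.

nmcli con modify enp0s3 ipv4.gateway 192.168.1.1 ipv4.dns "192.168.1.1 223.5.5.5"
nmcli con up enp0s3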

Deploying the Ceph Cluster with cephadm

Note: Unless otherwise stated, the commands below are executed as the root user on the ceph-node1 node.

Preparing the Node Environment

Setting Environment Variables

Run the following commands to store the commonly used parameters in environment variables.

$ cat >> ~/.bashrc << EOF
CEPH_VER=octopus					## Ceph release to install
PUBLIC_SUBNET=192.168.1				## public (external access) subnet
PRIVATE_SUBNET=192.168.99			## private (cluster-internal) subnet
CEPH_NODE1=ceph-node1				## Ceph node 1
CEPH_NODE2=ceph-node2				## Ceph node 2
CEPH_NODE3=ceph-node3				## Ceph node 3
CEPH_NODE_LIST="ceph-node1 ceph-node2 ceph-node3"  	## all Ceph cluster nodes
EOF

$ source ~/.bashrc

Setting Up the hosts File

To keep things simple, this article does not use DNS; the 3 virtual hosts are resolved through the hosts file instead. If your environment has a DNS service, you can skip this step.
Run the following commands to write the hostnames into the hosts file and copy it to the other nodes.

$ cat >> /etc/hosts << EOF
$PUBLIC_SUBNET.201   $CEPH_NODE1
$PUBLIC_SUBNET.202   $CEPH_NODE2
$PUBLIC_SUBNET.203   $CEPH_NODE3
EOF

$ for i in $CEPH_NODE_LIST; do scp /etc/hosts root@${i}:/etc/hosts; done
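An optional sanity check that all 3 names now resolve through the hosts file:

$ getent hosts $CEPH_NODE_LIST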

Generating SSH Keys for Passwordless Login

Run the following commands to generate an SSH key pair and copy the public key to all nodes.

$ ssh-keygen -t rsa -b 2048 -P '' -f ~/.ssh/id_rsa
$ for i in $CEPH_NODE_LIST; do ssh-copy-id root@${i} -f; done
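An optional check that passwordless login now works on every node (BatchMode makes ssh fail instead of prompting if the key was not copied):

$ for i in $CEPH_NODE_LIST; do ssh -o BatchMode=yes root@${i} date; done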

Setting the Node Hostnames

Run the following command to set the hostnames of the 3 nodes.

$ for i in $CEPH_NODE_LIST; do ssh ${i} hostnamectl set-hostname ${i}; ssh ${i} hostname; done

After this completes, log out of all nodes and log back in so the new hostnames take effect.

Disabling the Firewall and SELinux

To avoid network access restrictions in this lab, run the following commands to stop the firewall and disable SELinux on all nodes (for production it is recommended to keep both enabled).

$ for i in $CEPH_NODE_LIST; do ssh ${i} systemctl stop firewalld; ssh ${i} systemctl disable firewalld; done
$ for i in $CEPH_NODE_LIST; do ssh ${i} sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config; ssh ${i} setenforce 0; done

Configuring the Yum Repos

Run the following commands to create the repo files and distribute them to all nodes.

$ cat > /etc/yum.repos.d/CentOS-Base.repo << EOF
[base]
name=CentOS Base
baseurl=http://mirrors.aliyun.com/centos/7/os/x86_64/
gpgcheck=0

[updates]
name=CentOS Updates
baseurl=http://mirrors.aliyun.com/centos/7/updates/x86_64/
gpgcheck=0

[extras]
name=CentOS Extras
baseurl=http://mirrors.aliyun.com/centos/7/extras/x86_64/
gpgcheck=0
EOF

$ for i in $CEPH_NODE_LIST; do scp /etc/yum.repos.d/CentOS-Base.repo root@${i}:/etc/yum.repos.d/; done 
 
$ cat > /etc/yum.repos.d/epel.repo << EOF
[epel]
name=Extra Packages for Enterprise Linux 7
baseurl=http://mirrors.aliyun.com/epel/7/x86_64
gpgcheck=0
EOF
 
$ for i in $CEPH_NODE_LIST; do scp /etc/yum.repos.d/epel.repo root@${i}:/etc/yum.repos.d/; done 
 
$ cat > /etc/yum.repos.d/ceph.repo << EOF
[Ceph]
name=Ceph packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-$CEPH_VER/el7/x86_64/
gpgcheck=0

[Ceph-noarch]
name=Ceph noarch packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-$CEPH_VER/el7/noarch/
gpgcheck=0
EOF

$ for i in $CEPH_NODE_LIST; do scp /etc/yum.repos.d/ceph.repo root@${i}:/etc/yum.repos.d/; done
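Optionally refresh the yum metadata on every node to confirm that the base, epel, and ceph repos are all reachable:

$ for i in $CEPH_NODE_LIST; do ssh ${i} 'yum clean all && yum repolist'; done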

Installing and Configuring the NTP Service

Run the following commands to install the NTP service on the 3 VM nodes, using ceph-node1 as the primary NTP server.

$ for i in $CEPH_NODE_LIST;  do ssh ${i} yum install ntp -y; done

$ cat > ~/ntp.conf << EOF
restrict default nomodify notrap nopeer noquery
restrict 127.0.0.1
restrict ::1
server $CEPH_NODE1 iburst
includefile /etc/ntp/crypto/pw
keys /etc/ntp/keys
disable monitor
EOF

$ for i in $CEPH_NODE_LIST; do scp ~/ntp.conf root@${i}:/etc/; ssh ${i} systemctl restart ntpd; ssh ${i} systemctl enable ntpd; ssh ${i} ntpq -pn; done
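Note that the ntp.conf above points every node, including ceph-node1 itself, at ceph-node1. For ceph-node1 to serve usable time it helps to give it an upstream source and a local-clock fallback. The sketch below is one possible ceph-node1-only configuration; the upstream server ntp.aliyun.com and the restrict line for the public subnet are assumptions, so adapt them to your environment.

$ cat > ~/ntp-node1.conf << EOF
restrict default nomodify notrap nopeer noquery
restrict 127.0.0.1
restrict ::1
restrict $PUBLIC_SUBNET.0 mask 255.255.255.0 nomodify notrap   ## allow the other nodes to query
server ntp.aliyun.com iburst   ## upstream time source (assumption)
server 127.127.1.0             ## local clock as fallback
fudge  127.127.1.0 stratum 10
includefile /etc/ntp/crypto/pw
keys /etc/ntp/keys
disable monitor
EOF

$ scp ~/ntp-node1.conf root@$CEPH_NODE1:/etc/ntp.conf
$ ssh $CEPH_NODE1 systemctl restart ntpd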

Installing podman and python

Run the following command to install podman and python3 on all nodes.

$ for i in $CEPH_NODE_LIST;  do ssh ${i} yum install podman python3 -y; done

Creating the Ceph Cluster

Installing cephadm and the ceph CLI

Run the following commands to install the cephadm and ceph-common packages; ceph-common contains the ceph command-line tools.

$ yum install cephadm -y
$ cephadm install ceph-common
$ ceph -v
ceph version 15.2.5 (2c93eff00150f0cc5f106a559557a58d3d7b6f1f) octopus (stable)

Bootstrapping the Ceph Cluster

Bootstrap the Ceph cluster using ceph-node1's IP address for the cluster's first mon. This process pulls the docker.io/ceph/ceph:v15 container image and deploys the mon and mgr services on ceph-node1.

[root@ceph-node1 ceph]$ cephadm bootstrap --mon-ip 192.168.1.201
INFO:cephadm:Verifying podman|docker is present...
INFO:cephadm:Verifying lvm2 is present...
INFO:cephadm:Verifying time synchronization is in place...
INFO:cephadm:Unit chronyd.service is enabled and running
INFO:cephadm:Repeating the final host check...
INFO:cephadm:podman|docker (/usr/bin/podman) is present
INFO:cephadm:systemctl is present
INFO:cephadm:lvcreate is present
INFO:cephadm:Unit chronyd.service is enabled and running
INFO:cephadm:Host looks OK
INFO:root:Cluster fsid: fb4db3f0-165a-11eb-ace6-080027fa4db3
INFO:cephadm:Verifying IP 192.168.1.201 port 3300 ...
INFO:cephadm:Verifying IP 192.168.1.201 port 6789 ...
INFO:cephadm:Mon IP 192.168.1.201 is in CIDR network 192.168.1.0/24
INFO:cephadm:Pulling container image docker.io/ceph/ceph:v15...
INFO:cephadm:Extracting ceph user uid/gid from container image...
INFO:cephadm:Creating initial keys...
INFO:cephadm:Creating initial monmap...
INFO:cephadm:Creating mon...
INFO:cephadm:Waiting for mon to start...
INFO:cephadm:Waiting for mon...
INFO:cephadm:mon is available
INFO:cephadm:Assimilating anything we can from ceph.conf...
INFO:cephadm:Generating new minimal ceph.conf...
INFO:cephadm:Restarting the monitor...
INFO:cephadm:Setting mon public_network...
INFO:cephadm:Creating mgr...
INFO:cephadm:Verifying port 9283 ...
INFO:cephadm:Wrote keyring to /etc/ceph/ceph.client.admin.keyring
INFO:cephadm:Wrote config to /etc/ceph/ceph.conf
INFO:cephadm:Waiting for mgr to start...
INFO:cephadm:Waiting for mgr...
INFO:cephadm:mgr not available, waiting (1/10)...
INFO:cephadm:mgr not available, waiting (2/10)...
INFO:cephadm:mgr not available, waiting (3/10)...
INFO:cephadm:mgr not available, waiting (4/10)...
INFO:cephadm:mgr is available
INFO:cephadm:Enabling cephadm module...
INFO:cephadm:Waiting for the mgr to restart...
INFO:cephadm:Waiting for Mgr epoch 5...
INFO:cephadm:Mgr epoch 5 is available
INFO:cephadm:Setting orchestrator backend to cephadm...
INFO:cephadm:Generating ssh key...
INFO:cephadm:Wrote public SSH key to to /etc/ceph/ceph.pub
INFO:cephadm:Adding key to root@localhost's authorized_keys...
INFO:cephadm:Adding host ceph-node1...
INFO:cephadm:Deploying mon service with default placement...
INFO:cephadm:Deploying mgr service with default placement...
INFO:cephadm:Deploying crash service with default placement...
INFO:cephadm:Enabling mgr prometheus module...
INFO:cephadm:Deploying prometheus service with default placement...
INFO:cephadm:Deploying grafana service with default placement...
INFO:cephadm:Deploying node-exporter service with default placement...
INFO:cephadm:Deploying alertmanager service with default placement...
INFO:cephadm:Enabling the dashboard module...
INFO:cephadm:Waiting for the mgr to restart...
INFO:cephadm:Waiting for Mgr epoch 13...
INFO:cephadm:Mgr epoch 13 is available
INFO:cephadm:Generating a dashboard self-signed certificate...
INFO:cephadm:Creating initial admin user...
INFO:cephadm:Fetching dashboard port number...
INFO:cephadm:Ceph Dashboard is now available at:

             URL: https://ceph-node1:8443/
            User: admin
        Password: mgf25mieq5

INFO:cephadm:You can access the Ceph CLI with:

        sudo /usr/sbin/cephadm shell --fsid 155b61f2-16a1-11eb-92ea-080027f49613 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring

INFO:cephadm:Please consider enabling telemetry to help improve Ceph:

        ceph telemetry on

For more information see:

        https://docs.ceph.com/docs/master/mgr/telemetry/

INFO:cephadm:Bootstrap complete.

Use the URL, user, and password generated in the previous step to access the Ceph Dashboard; after logging in you are required to change the initial password. The console now looks like this:
[Figure: Ceph Dashboard after bootstrap]
Run the following command to view the containers running on the ceph-node1 node.

[root@bogon ceph]# podman ps
CONTAINER ID  IMAGE                                COMMAND               CREATED         STATUS             PORTS  NAMES
0bd45319e177  docker.io/ceph/ceph:v15              -n client.crash.c...  33 seconds ago  Up 32 seconds ago         ceph-155b61f2-16a1-11eb-92ea-080027f49613-crash.ceph-node1
5d8a75882aee  docker.io/prom/alertmanager:v0.20.0  --config.file=/et...  34 seconds ago  Up 34 seconds ago         ceph-155b61f2-16a1-11eb-92ea-080027f49613-alertmanager.ceph-node1
c4185c3f4727  docker.io/ceph/ceph:v15              -n mgr.ceph-node1...  3 minutes ago   Up 3 minutes ago          ceph-155b61f2-16a1-11eb-92ea-080027f49613-mgr.ceph-node1.eviqce
488c1ae77ddb  docker.io/ceph/ceph:v15              -n mon.ceph-node1...  3 minutes ago   Up 3 minutes ago          ceph-155b61f2-16a1-11eb-92ea-080027f49613-mon.ceph-node1

Get this Ceph cluster's ID (FSID) and confirm that it appears in the names of the containers running the Ceph cluster.

$ FSID=$(ceph fsid)
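An optional way to confirm this is to list the container names and filter on the FSID:

$ podman ps --format '{{.Names}}' | grep ${FSID}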

Run the following command to view the contents of the /etc/ceph/ceph.conf file inside the container running the mon service on ceph-node1.

$ podman exec -it ceph-${FSID}-mon.ceph-node1 cat /etc/ceph/ceph.conf
[global]
        fsid = 155b61f2-16a1-11eb-92ea-080027f49613
        mon_host = [v2:192.168.1.201:3300/0,v1:192.168.1.201:6789/0]

View the /etc/ceph/ceph.conf file on the ceph-node1 node and confirm that it matches the file inside the mon container. This shows that the container running the mon service mounts ceph-node1's /etc/ceph/ceph.conf file.

$ ls -al /etc/ceph
total 24
drwxr-xr-x   2 root root   72 Oct 24 23:17 .
drwxr-xr-x. 82 root root 8192 Oct 24 23:13 ..
-rw-------   1 root root   63 Oct 24 23:17 ceph.client.admin.keyring
-rw-r--r--   1 root root  177 Oct 24 23:17 ceph.conf
-rw-r--r--   1 root root  595 Oct 24 23:17 ceph.pub

$ cat ceph.conf
[global]
        fsid = 155b61f2-16a1-11eb-92ea-080027f49613
        mon_host = [v2:192.168.1.201:3300/0,v1:192.168.1.201:6789/0]

Run the following command to check the Ceph cluster status. Confirm that the cluster currently has 1 mon service and 1 mgr service, both running on ceph-node1, plus 1 pool with 1 PG.

$ ceph -s
  cluster:
    id:     155b61f2-16a1-11eb-92ea-080027f49613
    health: HEALTH_WARN
            Reduced data availability: 1 pg inactive
            OSD count 0 < osd_pool_default_size 3

  services:
    mon: 1 daemons, quorum ceph-node1 (age 5m)
    mgr: ceph-node1.eviqce(active, since 4m)
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:     100.000% pgs unknown
             1 unknown

View the host nodes currently included in the Ceph cluster.

$ ceph orch host ls
HOST        ADDR        LABELS  STATUS
ceph-node1  ceph-node1

View the storage devices currently visible to the Ceph cluster. Since the cluster only contains the ceph-node1 node at this point, the only storage available to it (AVAIL is True) is ceph-node1's /dev/sdb and /dev/sdc.

$ ceph orch device ls
HOST        PATH      TYPE   SIZE  DEVICE_ID                          MODEL          VENDOR  ROTATIONAL  AVAIL  REJECT REASONS
ceph-node1  /dev/sdb  hdd   50.0G  VBOX_HARDDISK_VB62450b24-d6839103  VBOX HARDDISK  ATA     1           True
ceph-node1  /dev/sdc  hdd   50.0G  VBOX_HARDDISK_VB96532a30-2ba438fb  VBOX HARDDISK  ATA     1           True
ceph-node1  /dev/sda  hdd    100G  VBOX_HARDDISK_VBfed79a0b-57e4d6b6  VBOX HARDDISK  ATA     1           False  Insufficient space (<5GB) on vgs, LVM detected, locked
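The private 192.168.99.0/24 subnet prepared earlier is not used by cephadm automatically. If you want OSD replication traffic to run over it, one optional tweak (not part of the original walkthrough) is to set cluster_network before any OSDs are created:

$ ceph config set global cluster_network $PRIVATE_SUBNET.0/24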

Adding mon Nodes to the Ceph Cluster

Run the following commands to copy the public key that the cluster generated for communication between cluster nodes to ceph-node2 and ceph-node3.

$ ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-node2
$ ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-node3

Disable the automatic deployment of mon services onto nodes newly added to the Ceph cluster.

$ ceph orch apply mon --unmanaged

Add the ceph-node2 and ceph-node3 nodes to the Ceph cluster.

$ ceph orch host add ceph-node2
$ ceph orch host add ceph-node3
$ ceph orch host ls
HOST        ADDR        LABELS  STATUS
ceph-node1  ceph-node1
ceph-node2  ceph-node2
ceph-node3  ceph-node3

Apply the mon label to the newly added ceph-node2 and ceph-node3 nodes.

$ ceph orch host label add ceph-node2 mon
$ ceph orch host label add ceph-node3 mon
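With mon placement set to unmanaged, labeling by itself may not trigger new mon daemons. If the additional mons do not appear, one possible follow-up (based on cephadm's label-based placement; these commands are not part of the original run) is to label ceph-node1 as well and then apply the mon service to the label:

$ ceph orch host label add ceph-node1 mon
$ ceph orch apply mon label:mon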

View the host nodes in the Ceph cluster again.

$ ceph orch host ls
HOST        ADDR        LABELS  STATUS
ceph-node1  ceph-node1  
ceph-node2  ceph-node2  mon
ceph-node3  ceph-node3  mon

Check the mon services in the Ceph cluster and confirm there are 3 of them, with the one on ceph-node1 acting as the leader.

$ ceph mon stat
e3: 3 mons at {ceph-node1=[v2:192.168.1.201:3300/0,v1:192.168.1.201:6789/0],ceph-node2=[v2:192.168.1.202:3300/0,v1:192.168.1.202:6789/0],ceph-node3=[v2:192.168.1.203:3300/0,v1:192.168.1.203:6789/0]}, election epoch 14, leader 0 ceph-node1, quorum 0,1,2 ceph-node1,ceph-node3,ceph-node2 

$ ceph mon dump
dumped monmap epoch 3
epoch 3
fsid 155b61f2-16a1-11eb-92ea-080027f49613
last_changed 2020-10-25T09:30:44.346289+0000
created 2020-10-25T09:06:38.086579+0000
min_mon_release 15 (octopus)
0: [v2:192.168.1.201:3300/0,v1:192.168.1.201:6789/0] mon.ceph-node1
1: [v2:192.168.1.203:3300/0,v1:192.168.1.203:6789/0] mon.ceph-node3
2: [v2:192.168.1.202:3300/0,v1:192.168.1.202:6789/0] mon.ceph-node2

Adding osd Services to the Ceph Cluster

Run the following command to check the cluster status; at this point the cluster has 0 osd services.

$ ceph -s
  cluster:
    id:     155b61f2-16a1-11eb-92ea-080027f49613
    health: HEALTH_WARN
            Reduced data availability: 1 pg inactive
            OSD count 0 < osd_pool_default_size 3

  services:
    mon: 3 daemons, quorum ceph-node1,ceph-node3,ceph-node2 (age 15m)
    mgr: ceph-node1.eviqce(active, since 38m), standbys: ceph-node2.tdlmhu
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:     100.000% pgs unknown
             1 unknown

Run the following command to view the storage devices on the cluster's nodes and confirm that each of the 3 host nodes now shows 2 available devices.

$ ceph orch device ls
HOST        PATH      TYPE   SIZE  DEVICE_ID                          MODEL          VENDOR  ROTATIONAL  AVAIL  REJECT REASONS
ceph-node1  /dev/sdb  hdd   50.0G  VBOX_HARDDISK_VB62450b24-d6839103  VBOX HARDDISK  ATA     1           True
ceph-node1  /dev/sdc  hdd   50.0G  VBOX_HARDDISK_VB96532a30-2ba438fb  VBOX HARDDISK  ATA     1           True
ceph-node1  /dev/sda  hdd    100G  VBOX_HARDDISK_VBfed79a0b-57e4d6b6  VBOX HARDDISK  ATA     1           False  locked, Insufficient space (<5GB) on vgs, LVM detected
ceph-node3  /dev/sdb  hdd   50.0G  VBOX_HARDDISK_VB580c3f8e-b59de230  VBOX HARDDISK  ATA     1           True
ceph-node3  /dev/sdc  hdd   50.0G  VBOX_HARDDISK_VBcc4f662f-c53df24c  VBOX HARDDISK  ATA     1           True
ceph-node3  /dev/sda  hdd    100G  VBOX_HARDDISK_VB7a23ba58-f4a395cc  VBOX HARDDISK  ATA     1           False  locked, LVM detected, Insufficient space (<5GB) on vgs
ceph-node2  /dev/sdb  hdd   50.0G  VBOX_HARDDISK_VB8cf40977-795b5594  VBOX HARDDISK  ATA     1           True
ceph-node2  /dev/sdc  hdd   50.0G  VBOX_HARDDISK_VB1245f3a7-4fea7b3c  VBOX HARDDISK  ATA     1           True
ceph-node2  /dev/sda  hdd    100G  VBOX_HARDDISK_VB46d390cb-d78a2fa1  VBOX HARDDISK  ATA     1           False  LVM detected, Insufficient space (<5GB) on vgs, locked

Run the following command to inspect the block devices on all cluster nodes; every node should have the two storage disks sdb and sdc.

$ for i in $CEPH_NODE_LIST; do ssh $i lsblk; done
NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda             8:0    0  100G  0 disk
├─sda1          8:1    0    1G  0 part /boot
└─sda2          8:2    0   99G  0 part
  ├─rhel-root 253:0    0   97G  0 lvm  /
  └─rhel-swap 253:1    0    2G  0 lvm  [SWAP]
sdb             8:16   0   50G  0 disk
sdc             8:32   0   50G  0 disk
sr0            11:0    1 1024M  0 rom
...

Run the following command to add the sdb and sdc disks of the cluster's 3 nodes as osd services.

$ for i in $CEPH_NODE_LIST; do ceph orch daemon add osd ${i}:/dev/sdb,/dev/sdc; done
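Alternatively (an equivalent shortcut, not used in this article), cephadm can consume every eligible device on all managed hosts in a single step:

$ ceph orch apply osd --all-available-devices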

Run the following command to check the block devices on each cluster node again; note that sdb and sdc are now backed by Ceph LVM volumes.

$ for i in $CEPH_NODE_LIST; do ssh $i lsblk; done
NAME                                                                                                  MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                                                                                                     8:0    0  100G  0 disk
├─sda1                                                                                                  8:1    0    1G  0 part /boot
└─sda2                                                                                                  8:2    0   99G  0 part
  ├─rhel-root                                                                                         253:0    0   97G  0 lvm  /
  └─rhel-swap                                                                                         253:1    0    2G  0 lvm  [SWAP]
sdb                                                                                                     8:16   0   50G  0 disk
└─ceph--d16739ba--8373--484e--85df--8e75ae62c8ec-osd--block--50b96cba--c256--4672--8a66--ae7252622c04 253:2    0   50G  0 lvm
sdc                                                                                                     8:32   0   50G  0 disk
└─ceph--44692265--8cef--4c6d--b391--ad3098113c4c-osd--block--7d6a97f9--330d--45dd--94b2--485040c1d27f 253:3    0   50G  0 lvm
sr0                                                                                                    11:0    1 1024M  0 rom
...

Run the following command to view the osd tree. It shows that the Ceph cluster is now running 6 osd services, 2 on each host.

$ ceph osd tree
ID CLASS WEIGHT  TYPE NAME           STATUS REWEIGHT PRI-AFF
-1       0.29279 root default
-3       0.09760     host ceph-node1
 0   hdd 0.04880         osd.0           up  1.00000 1.00000
 1   hdd 0.04880         osd.1           up  1.00000 1.00000
-5       0.09760     host ceph-node2
 2   hdd 0.04880         osd.2           up  1.00000 1.00000
 3   hdd 0.04880         osd.3           up  1.00000 1.00000
-7       0.09760     host ceph-node3
 4   hdd 0.04880         osd.4           up  1.00000 1.00000
 5   hdd 0.04880         osd.5           up  1.00000 1.00000

Run the following command to check the disks on the cluster nodes again. The AVAIL status of the sdb and sdc disks on all 3 nodes is now False, because they are in use and no longer empty.

[root@ceph-node1 ceph]# ceph orch device ls
HOST        PATH      TYPE   SIZE  DEVICE_ID                          MODEL          VENDOR  ROTATIONAL  AVAIL  REJECT REASONS
ceph-node1  /dev/sda  hdd    100G  VBOX_HARDDISK_VBfed79a0b-57e4d6b6  VBOX HARDDISK  ATA     1           False  Insufficient space (<5GB) on vgs, LVM detected, locked
ceph-node1  /dev/sdb  hdd   50.0G  VBOX HARDDISK_VB62450b24-d6839103  VBOX HARDDISK  ATA     1           False  Insufficient space (<5GB) on vgs, LVM detected, locked
ceph-node1  /dev/sdc  hdd   50.0G  VBOX HARDDISK_VB96532a30-2ba438fb  VBOX HARDDISK  ATA     1           False  Insufficient space (<5GB) on vgs, LVM detected, locked
ceph-node3  /dev/sda  hdd    100G  VBOX_HARDDISK_VB7a23ba58-f4a395cc  VBOX HARDDISK  ATA     1           False  Insufficient space (<5GB) on vgs, LVM detected, locked
ceph-node3  /dev/sdb  hdd   50.0G  VBOX HARDDISK_VB580c3f8e-b59de230  VBOX HARDDISK  ATA     1           False  Insufficient space (<5GB) on vgs, LVM detected, locked
ceph-node3  /dev/sdc  hdd   50.0G  VBOX HARDDISK_VBcc4f662f-c53df24c  VBOX HARDDISK  ATA     1           False  Insufficient space (<5GB) on vgs, LVM detected, locked
ceph-node2  /dev/sda  hdd    100G  VBOX_HARDDISK_VB46d390cb-d78a2fa1  VBOX HARDDISK  ATA     1           False  Insufficient space (<5GB) on vgs, LVM detected, locked
ceph-node2  /dev/sdb  hdd   50.0G  VBOX HARDDISK_VB8cf40977-795b5594  VBOX HARDDISK  ATA     1           False  Insufficient space (<5GB) on vgs, LVM detected, locked
ceph-node2  /dev/sdc  hdd   50.0G  VBOX HARDDISK_VB1245f3a7-4fea7b3c  VBOX HARDDISK  ATA     1           False  Insufficient space (<5GB) on vgs, LVM detected, locked

Check the Ceph cluster status and confirm that the cluster now includes 6 running osd services.

[root@ceph-node1 ceph]# ceph -s
  cluster:
    id:     155b61f2-16a1-11eb-92ea-080027f49613
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph-node1,ceph-node3,ceph-node2 (age 53m)
    mgr: ceph-node1.eviqce(active, since 76m), standbys: ceph-node2.tdlmhu
    osd: 6 osds: 6 up (since 20m), 6 in (since 20m)

  data:
    pools:   1 pools, 1 pgs
    objects: 1 objects, 0 B
    usage:   6.0 GiB used, 294 GiB / 300 GiB avail
    pgs:     1 active+clean
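To see how the roughly 300 GiB of raw capacity (6 × 50 GB disks) is accounted for per pool, you can optionally run:

$ ceph df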

Checking the Ceph Cluster Status

Open the Ceph Dashboard in a browser and confirm that the cluster is now in the HEALTH_OK state.
[Figure: Ceph Dashboard showing HEALTH_OK]
Run the following commands on the ceph-node1 node to view the images and containers on that node.

[root@ceph-node1 ceph]$ podman image list
REPOSITORY                     TAG       IMAGE ID       CREATED         SIZE
docker.io/ceph/ceph            v15       4405f6339e35   5 weeks ago     1 GB
docker.io/ceph/ceph-grafana    6.6.2     a0dce381714a   4 months ago    519 MB
docker.io/prom/prometheus      v2.18.1   de242295e225   5 months ago    141 MB
docker.io/prom/alertmanager    v0.20.0   0881eb8f169f   10 months ago   53.5 MB
docker.io/prom/node-exporter   v0.18.1   e5a616e4b9cf   16 months ago   24.3 MB

[root@ceph-node1 ceph]$ podman ps
CONTAINER ID  IMAGE                                 COMMAND               CREATED            STATUS                PORTS  NAMES
ea8527650839  docker.io/ceph/ceph:v15               -n osd.1 -f --set...  32 minutes ago     Up 32 minutes ago            ceph-155b61f2-16a1-11eb-92ea-080027f49613-osd.1
ecfb79724d44  docker.io/ceph/ceph:v15               -n osd.0 -f --set...  32 minutes ago     Up 32 minutes ago            ceph-155b61f2-16a1-11eb-92ea-080027f49613-osd.0
ecdb6180f768  docker.io/prom/prometheus:v2.18.1     --config.file=/et...  54 minutes ago     Up 54 minutes ago            ceph-155b61f2-16a1-11eb-92ea-080027f49613-prometheus.ceph-node1
8a8b95151307  docker.io/prom/node-exporter:v0.18.1  --no-collector.ti...  About an hour ago  Up About an hour ago         ceph-155b61f2-16a1-11eb-92ea-080027f49613-node-exporter.ceph-node1
0bd45319e177  docker.io/ceph/ceph:v15               -n client.crash.c...  About an hour ago  Up About an hour ago         ceph-155b61f2-16a1-11eb-92ea-080027f49613-crash.ceph-node1
c4185c3f4727  docker.io/ceph/ceph:v15               -n mgr.ceph-node1...  About an hour ago  Up About an hour ago         ceph-155b61f2-16a1-11eb-92ea-080027f49613-mgr.ceph-node1.eviqce
488c1ae77ddb  docker.io/ceph/ceph:v15               -n mon.ceph-node1...  About an hour ago  Up About an hour ago         ceph-155b61f2-16a1-11eb-92ea-080027f49613-mon.ceph-node1

Run the following commands on the ceph-node2 and ceph-node3 nodes to view the images and containers on those nodes.

[root@ceph-node2 ~]$ podman ps
CONTAINER ID  IMAGE                                 COMMAND               CREATED      STATUS          PORTS  NAMES
a3d23e846aea  docker.io/ceph/ceph:v15               -n osd.5 -f --set...  2 hours ago  Up 2 hours ago         ceph-155b61f2-16a1-11eb-92ea-080027f49613-osd.5
83dd75b5acd7  docker.io/ceph/ceph:v15               -n osd.4 -f --set...  2 hours ago  Up 2 hours ago         ceph-155b61f2-16a1-11eb-92ea-080027f49613-osd.4
1819609f0f18  docker.io/prom/node-exporter:v0.18.1  --no-collector.ti...  2 hours ago  Up 2 hours ago         ceph-155b61f2-16a1-11eb-92ea-080027f49613-node-exporter.ceph-node2
3fd7e556f6e0  docker.io/ceph/ceph:v15               -n mon.ceph-node2...  3 hours ago  Up 3 hours ago         ceph-155b61f2-16a1-11eb-92ea-080027f49613-mon.ceph-node2
b46fea043180  docker.io/ceph/ceph:v15               -n client.crash.c...  3 hours ago  Up 3 hours ago         ceph-155b61f2-16a1-11eb-92ea-080027f49613-crash.ceph-node2
 
[root@ceph-node2 ~]$ podman image ls
REPOSITORY                     TAG       IMAGE ID       CREATED         SIZE
docker.io/ceph/ceph            v15       4405f6339e35   5 weeks ago     1 GB
docker.io/prom/node-exporter   v0.18.1   e5a616e4b9cf   16 months ago   24.3 MB
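Instead of running podman ps on each node, the orchestrator can also list every daemon and service cluster-wide from ceph-node1 (an optional cross-check):

$ ceph orch ps
$ ceph orch ls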

References

https://www.bilibili.com/video/BV1xV411f7Pw
