Deploying a Ceph v17.2.0 Quincy cluster with cephadm on AlmaLinux 8.6

I. Initialize the nodes
1. References
https://docs.ceph.com/en/quincy/
https://github.com/ceph/ceph
https://github.com/ceph/ceph-container/tree/master/src/daemon
https://docs.ceph.com/docs/master/cephadm/
https://docs.ceph.com/docs/master/cephadm/install/

2. Environment

OS                                 Hostname  IP address    Role
AlmaLinux release 8.6 (Sky Tiger)  cephadm   192.168.3.40  bootstrap/admin node (Dashboard, mon, mgr)
AlmaLinux release 8.6 (Sky Tiger)  node01    192.168.3.41  cluster node (Dashboard, mon, mds, rgw, mgr, osd)
AlmaLinux release 8.6 (Sky Tiger)  node02    192.168.3.42  cluster node (Dashboard, mon, mds, rgw, mgr, osd)
AlmaLinux release 8.6 (Sky Tiger)  node03    192.168.3.43  cluster node (Dashboard, mon, mds, rgw, mgr, osd)

3. Initialize each node with the following script

root@x1cg9:~/ceph# cat init.sh 
#!/bin/sh

#Install basic tools
yum install vim net-tools wget lsof python3 -y

#Disable the firewall and SELinux
systemctl stop firewalld && systemctl disable firewalld
sed -i '/^SELINUX=/c SELINUX=disabled' /etc/selinux/config
setenforce 0


#Configure hostname resolution
if [ -z "`cat /etc/hosts | grep \`ip route ls | grep 192.168.3.0/24 | awk '{print $9}'\` | awk '{print $2}'`" ]; then
cat << EOF >> /etc/hosts
192.168.3.40 cephadm
192.168.3.41 node01
192.168.3.42 node02
192.168.3.43 node03
EOF
fi

#Look up this host's name in /etc/hosts by its IP address and write it to /etc/hostname
echo `cat /etc/hosts | grep \`ip route ls | grep 192.168.3.0/24 | awk '{print $9}'\` | awk '{print $2}'` >/etc/hostname

#Set the hostname with hostnamectl so it takes effect immediately
hostnamectl set-hostname `cat /etc/hosts | grep \`ip route ls | grep 192.168.3.0/24 | awk '{print $9}'\` | awk '{print $2}'`

# Kernel parameter tuning: raise pid_max and keep swapping to a minimum
cat << EOF > /etc/sysctl.d/ceph.conf 
kernel.pid_max = 4194303
vm.swappiness = 0
EOF

# Apply immediately
sysctl --system

#Configure cluster time synchronization
yum install -y chrony
cat > /etc/chrony.conf << EOF
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
logdir /var/log/chrony
server ntp1.aliyun.com iburst
local stratum 10
allow 192.168.3.0/24
EOF
systemctl restart chronyd
systemctl enable chronyd
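
After running the script on every node, it is worth confirming that time synchronization has actually converged; a quick check (chronyc and timedatectl are standard on AlmaLinux 8):

# List the configured NTP sources and whether we are syncing from one
chronyc sources -v

# Confirm the system clock is reported as synchronized
timedatectl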

II. Initialize the cephadm environment on each of the 4 nodes
1. Install podman

yum install podman -y

2. Download the Quincy version of the cephadm script

curl -O https://raw.githubusercontent.com/ceph/ceph/v17.2.0/src/cephadm/cephadm

3. Make the script executable

[root@cephadm ~]# chmod +x cephadm 

4. Configure the Ceph repository for this release and install cephadm

./cephadm add-repo --release quincy
./cephadm install

#Verify the installation
[root@cephadm ~]# which cephadm
/usr/sbin/cephadm
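
As an extra sanity check, cephadm can report the Ceph version of the default container image it will use (a sketch; the image is pulled on first use):

# Report the Ceph version of the default container image used by cephadm
cephadm version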

III. Deploy the first MON (bootstrap)
1. Run the bootstrap command on the cephadm node

[root@cephadm ~]# cephadm bootstrap --mon-ip 192.168.3.40

This command does the following for us:

  • Create a mon on the local host
  • Generate an SSH key and add it to /root/.ssh/authorized_keys
  • Write a minimal configuration needed for cluster communication to /etc/ceph/ceph.conf
  • Write a copy of the client.admin secret key to /etc/ceph/ceph.client.admin.keyring
  • Write a copy of the public key to /etc/ceph/ceph.pub

2. The output looks like this:

[root@cephadm ~]# cephadm bootstrap --mon-ip 192.168.3.40
Creating directory /etc/ceph for ceph.conf
Verifying podman|docker is present...
Verifying lvm2 is present...
Verifying time synchronization is in place...
Unit chronyd.service is enabled and running
Repeating the final host check...
podman (/usr/bin/podman) version 4.0.2 is present
systemctl is present
lvcreate is present
Unit chronyd.service is enabled and running
Host looks OK
Cluster fsid: 884b19fc-dd09-11ec-9b70-000c293bb4ab
Verifying IP 192.168.3.40 port 3300 ...
Verifying IP 192.168.3.40 port 6789 ...
Mon IP `192.168.3.40` is in CIDR network `192.168.3.0/24`
- internal network (--cluster-network) has not been provided, OSD replication will default to the public_network
Pulling container image quay.io/ceph/ceph:v17...
Ceph version: ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)
Extracting ceph user uid/gid from container image...
Creating initial keys...
Creating initial monmap...
Creating mon...
Waiting for mon to start...
Waiting for mon...
mon is available
Assimilating anything we can from ceph.conf...
Generating new minimal ceph.conf...
Restarting the monitor...
Setting mon public_network to 192.168.3.0/24
Wrote config to /etc/ceph/ceph.conf
Wrote keyring to /etc/ceph/ceph.client.admin.keyring
Creating mgr...
Verifying port 9283 ...
Waiting for mgr to start...
Waiting for mgr...
mgr not available, waiting (1/15)...
mgr not available, waiting (2/15)...
mgr not available, waiting (3/15)...
mgr is available
Enabling cephadm module...
Waiting for the mgr to restart...
Waiting for mgr epoch 5...
mgr epoch 5 is available
Setting orchestrator backend to cephadm...
Generating ssh key...
Wrote public SSH key to /etc/ceph/ceph.pub
Adding key to root@localhost authorized_keys...
Adding host cephadm...
Deploying mon service with default placement...
Deploying mgr service with default placement...
Deploying crash service with default placement...
Deploying prometheus service with default placement...
Deploying grafana service with default placement...
Deploying node-exporter service with default placement...
Deploying alertmanager service with default placement...
Enabling the dashboard module...
Waiting for the mgr to restart...
Waiting for mgr epoch 9...
mgr epoch 9 is available
Generating a dashboard self-signed certificate...
Creating initial admin user...
Fetching dashboard port number...
Ceph Dashboard is now available at:

             URL: https://cephadm:8443/
            User: admin
        Password: vj36w5o4lk

Enabling client.admin keyring and conf on hosts with "admin" label
Enabling autotune for osd_memory_target
You can access the Ceph CLI with:

        sudo /usr/sbin/cephadm shell --fsid 884b19fc-dd09-11ec-9b70-000c293bb4ab -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring

Please consider enabling telemetry to help improve Ceph:

        ceph telemetry on

For more information see:

        https://docs.ceph.com/docs/master/mgr/telemetry/

Bootstrap complete.

  
3. The cephadm shell command launches a bash shell inside a container that has all the Ceph packages installed. By default, if configuration and keyring files are found in /etc/ceph on the host, they are passed into the container environment so the shell works normally. When run on a MON host, cephadm shell uses the MON container's configuration instead of the default one. If --mount is given, the host file or directory appears under /mnt inside the container.

#For convenience, alias the cephadm shell invocation to ceph
[root@cephadm ~]# alias ceph='cephadm shell -- ceph'

To make the alias permanent, add it to /root/.bashrc:
[root@cephadm ~]# echo "alias ceph='cephadm shell -- ceph'" >>/root/.bashrc
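
Alternatively, instead of the alias, the native CLI packages can be installed on the host via cephadm (this relies on the Quincy repo added earlier with cephadm add-repo):

# Install ceph-common (provides the ceph, rados and rbd commands) on the host
cephadm install ceph-common

# Verify the native CLI
ceph -v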

4. Open https://192.168.3.40:8443/ to access the dashboard and change the default password.

IV. Add new nodes to the cluster
1. To add new nodes to the cluster, push the cluster's SSH public key to each new node's authorized_keys file

[root@cephadm ~]# ssh-copy-id -f -i /etc/ceph/ceph.pub root@node01
[root@cephadm ~]# ssh-copy-id -f -i /etc/ceph/ceph.pub root@node02
[root@cephadm ~]# ssh-copy-id -f -i /etc/ceph/ceph.pub root@node03

2. Tell Ceph that the new nodes are part of the cluster

[root@cephadm ~]# ceph orch host add node01
[root@cephadm ~]# ceph orch host add node02
[root@cephadm ~]# ceph orch host add node03
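
Optionally, label hosts as you go; for example, the special _admin label tells cephadm to keep a copy of ceph.conf and the admin keyring on that host (a sketch, not required for the steps below):

# Let node01 act as an additional admin host
ceph orch host label add node01 _admin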

3. Add MONs
  A typical Ceph cluster has three or five mon daemons spread across different hosts. If the cluster has five or more nodes, deploying five monitors is recommended.
  Once Ceph knows which IP subnet the monitors should use, it can deploy and scale mons automatically as the cluster grows (or shrinks). By default, Ceph assumes the other mons should use the same subnet as the first mon's IP.
  If your mons (or the whole cluster) sit on a single subnet, cephadm will by default automatically add up to 5 monitors as new hosts join the cluster; no extra steps are needed.
  This deployment has 4 nodes, so set the default number of mons to 4:

[root@cephadm ~]# ceph orch apply mon 4

4. Place the mons on specific nodes

[root@cephadm ~]# ceph orch apply mon cephadm,node01,node02,node03

#Check the mon map
[root@cephadm ~]# ceph mon dump
Inferring fsid 884b19fc-dd09-11ec-9b70-000c293bb4ab
Using recent ceph image quay.io/ceph/ceph@sha256:629fda3151cdca8163e2b8b47af23c69b41b829689d9e085bbd2539c046b0eac
epoch 4
fsid 884b19fc-dd09-11ec-9b70-000c293bb4ab
last_changed 2022-05-26T16:25:29.985665+0000
created 2022-05-26T15:36:22.803454+0000
min_mon_release 17 (quincy)
election_strategy: 1
0: [v2:192.168.3.40:3300/0,v1:192.168.3.40:6789/0] mon.cephadm
1: [v2:192.168.3.41:3300/0,v1:192.168.3.41:6789/0] mon.node01
2: [v2:192.168.3.42:3300/0,v1:192.168.3.42:6789/0] mon.node02
3: [v2:192.168.3.43:3300/0,v1:192.168.3.43:6789/0] mon.node03
dumped monmap epoch 4

V. Deploy MGRs

# Deploy 4 mgr daemons, pinned to the four hosts
ceph orch apply mgr 4
ceph orch apply mgr cephadm,node01,node02,node03

# Check the mgr deployment status
ceph -s

VI. Deploy OSDs
1. A storage device is considered available for an OSD only if it meets all of the following conditions:

  • The device has no partitions
  • The device has no LVM state
  • The device is not mounted
  • The device does not contain a file system
  • The device does not contain a Ceph BlueStore OSD
  • The device is larger than 5 GB
  • Ceph will not create OSDs on devices that fail these checks (a bulk-provisioning alternative is sketched right after this list).
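If you simply want cephadm to consume every device that passes the checks above, the orchestrator also supports bulk provisioning; a minimal sketch (it assumes you really do want OSDs on all eligible disks on all hosts):

# Tell the orchestrator to create an OSD on every available device (use with care)
ceph orch apply osd --all-available-devices

# Preview the result without creating anything
ceph orch apply osd --all-available-devices --dry-run
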
# List the disks on node01 with fdisk
[root@node01 ~]# fdisk -l | grep Disk
Disk /dev/nvme0n1: 20 GiB, 21474836480 bytes, 41943040 sectors
Disk /dev/nvme0n2: 10 GiB, 10737418240 bytes, 20971520 sectors
Disk /dev/nvme0n3: 10 GiB, 10737418240 bytes, 20971520 sectors
Disk /dev/mapper/almalinux-root: 17 GiB, 18249416704 bytes, 35643392 sectors
Disk /dev/mapper/almalinux-swap: 2 GiB, 2147483648 bytes, 4194304 sectors

#Add the first data disk on node01
[root@cephadm ~]# ceph orch daemon add osd node01:/dev/nvme0n2

#Check the OSD tree to confirm the OSD was added
[root@cephadm ~]#  ceph osd tree
Inferring fsid 884b19fc-dd09-11ec-9b70-000c293bb4ab
Using recent ceph image quay.io/ceph/ceph@sha256:629fda3151cdca8163e2b8b47af23c69b41b829689d9e085bbd2539c046b0eac
ID  CLASS  WEIGHT   TYPE NAME        STATUS  REWEIGHT  PRI-AFF
-1         0.00980  root default                              
-3         0.00980      host node02                           
 0    ssd  0.00980          osd.0        up   1.00000  1.00000
[root@cephadm ~]# 

#Add the remaining 5 disks on node01, node02 and node03
[root@cephadm ~]# ceph orch daemon add osd node01:/dev/nvme0n3
[root@cephadm ~]# ceph orch daemon add osd node02:/dev/nvme0n2
[root@cephadm ~]# ceph orch daemon add osd node02:/dev/nvme0n3
[root@cephadm ~]# ceph orch daemon add osd node03:/dev/nvme0n2
[root@cephadm ~]# ceph orch daemon add osd node03:/dev/nvme0n3


#Check the OSD tree again
[root@cephadm ~]# ceph osd tree
Inferring fsid 406bb2b6-dd5b-11ec-bff5-000c293bb4ab
Using recent ceph image quay.io/ceph/ceph@sha256:629fda3151cdca8163e2b8b47af23c69b41b829689d9e085bbd2539c046b0eac
ID  CLASS  WEIGHT   TYPE NAME        STATUS  REWEIGHT  PRI-AFF
-1         0.05878  root default                              
-3         0.01959      host node01                           
 0    ssd  0.00980          osd.0        up   1.00000  1.00000
 1    ssd  0.00980          osd.1        up   1.00000  1.00000
-5         0.01959      host node02                           
 2    ssd  0.00980          osd.2        up   1.00000  1.00000
 3    ssd  0.00980          osd.3        up   1.00000  1.00000
-7         0.01959      host node03                           
 4    ssd  0.00980          osd.4        up   1.00000  1.00000
 5    ssd  0.00980          osd.5      down   1.00000  1.00000

2. Check the cluster status

#The cluster should now report HEALTH_OK
[root@cephadm ~]# ceph -s
Inferring fsid 406bb2b6-dd5b-11ec-bff5-000c293bb4ab
Using recent ceph image quay.io/ceph/ceph@sha256:629fda3151cdca8163e2b8b47af23c69b41b829689d9e085bbd2539c046b0eac
  cluster:
    id:     406bb2b6-dd5b-11ec-bff5-000c293bb4ab
    health: HEALTH_OK
 
  services:
    mon: 4 daemons, quorum cephadm,node01,node02,node03 (age 7m)
    mgr: cephadm.rxroyy(active, since 12m), standbys: node01.zwjwka, node02.evfxxx, node03.rxzaak
    osd: 6 osds: 6 up (since 4m), 6 in (since 4m)
 
  data:
    pools:   1 pools, 1 pgs
    objects: 2 objects, 449 KiB
    usage:   49 MiB used, 60 GiB / 60 GiB avail
    pgs:     1 active+clean

VII. Deploy MDS
  To use the CephFS file system, one or more MDS daemons are required. If you use the newer volumes interface to create a new file system, these daemons are created automatically (a sketch follows right after the link).
See: https://docs.ceph.com/en/latest/cephfs/fs-volumes/#fs-volumes-and-subvolumes
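
As an alternative to the manual pool/fs/mds steps below, the volumes interface does all of it in one step; a minimal sketch (the file system name cephfs matches the one used below):

# Create a CephFS volume; the data/metadata pools are created and MDS daemons are scheduled automatically
ceph fs volume create cephfs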

1. Deploy the metadata servers

[root@cephadm ~]# ceph osd pool create cephfs_data 64 64
Inferring fsid 884b19fc-dd09-11ec-9b70-000c293bb4ab
Using recent ceph image quay.io/ceph/ceph@sha256:629fda3151cdca8163e2b8b47af23c69b41b829689d9e085bbd2539c046b0eac
pool 'cephfs_data' created

[root@cephadm ~]# ceph osd pool create cephfs_metadata 64 64
Inferring fsid 884b19fc-dd09-11ec-9b70-000c293bb4ab
Using recent ceph image quay.io/ceph/ceph@sha256:629fda3151cdca8163e2b8b47af23c69b41b829689d9e085bbd2539c046b0eac
pool 'cephfs_metadata' created

[root@cephadm ~]# ceph fs new cephfs cephfs_metadata cephfs_data
Inferring fsid 884b19fc-dd09-11ec-9b70-000c293bb4ab
Using recent ceph image quay.io/ceph/ceph@sha256:629fda3151cdca8163e2b8b47af23c69b41b829689d9e085bbd2539c046b0eac
new fs with metadata pool 3 and data pool 2

[root@cephadm ~]# ceph fs ls
Inferring fsid 884b19fc-dd09-11ec-9b70-000c293bb4ab
Using recent ceph image quay.io/ceph/ceph@sha256:629fda3151cdca8163e2b8b47af23c69b41b829689d9e085bbd2539c046b0eac
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]

[root@cephadm ~]# ceph orch apply mds cephfs --placement="3 node01 node02 node03"
Inferring fsid 884b19fc-dd09-11ec-9b70-000c293bb4ab
Using recent ceph image quay.io/ceph/ceph@sha256:629fda3151cdca8163e2b8b47af23c69b41b829689d9e085bbd2539c046b0eac
Scheduled mds.cephfs update...

2. Verify that each node started an mds container

[root@node01 ~]# podman ps | grep mds
776fa6f88fc0  quay.io/ceph/ceph@sha256:629fda3151cdca8163e2b8b47af23c69b41b829689d9e085bbd2539c046b0eac  -n mds.cephfs.nod...  About a minute ago  Up About a minute ago              ceph-884b19fc-dd09-11ec-9b70-000c293bb4ab-mds-cephfs-node01-jieqka

3. Check the cluster status

[root@cephadm ~]# ceph -s
Inferring fsid 884b19fc-dd09-11ec-9b70-000c293bb4ab
Using recent ceph image quay.io/ceph/ceph@sha256:629fda3151cdca8163e2b8b47af23c69b41b829689d9e085bbd2539c046b0eac
  cluster:
    id:     884b19fc-dd09-11ec-9b70-000c293bb4ab
    health: HEALTH_OK
 
  services:
    mon: 4 daemons, quorum cephadm,node01,node02,node03 (age 75m)
    mgr: cephadm.klqtwm(active, since 2h), standbys: node01.znvytm
    mds: 1/1 daemons up, 2 standby
    osd: 6 osds: 6 up (since 70m), 6 in (since 71m)
 
  data:
    volumes: 1/1 healthy
    pools:   3 pools, 100 pgs
    objects: 24 objects, 451 KiB
    usage:   97 MiB used, 60 GiB / 60 GiB avail
    pgs:     100 active+clean
 
  progress:
    Global Recovery Event (20s)
      [===========================.] 

VIII. Deploy RGWs
  cephadm deploys radosgw as a collection of daemons that manage a particular realm and zone. For details see: https://docs.ceph.com/en/latest/radosgw/multisite/#multisite

  Note that with cephadm, radosgw daemons are configured via the monitor configuration database rather than via ceph.conf or the command line. If that configuration is not already in place (usually in the client.rgw.… section), the radosgw daemons will start with default settings (for example, binding to port 80).
  For example, to deploy 3 rgw daemons serving the myrealm realm and the cn-east-8 zone on node01, node02 and node03:

#Install the radosgw-admin tool
yum install -y ceph-common-2:17.2.0

#If a realm has not been created yet, create one first:
[root@cephadm ~]# radosgw-admin realm create --rgw-realm=myrealm --default

#Next, create a new zonegroup:
[root@cephadm ~]# radosgw-admin zonegroup create --rgw-zonegroup=default --master --default

#Next, create a zone:
[root@cephadm ~]# radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=cn-east-8 --master --default

#Deploy a set of radosgw daemons for the given realm and zone:
[root@cephadm ~]# ceph orch apply rgw myrealm cn-east-8 --placement="3 node01 node02 node03"
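
Once the rgw daemons are running, a quick way to exercise them is to create an S3 user and note its keys; a minimal sketch (the uid and display name are arbitrary examples):

# Create a test S3 user; the output contains the access_key and secret_key
radosgw-admin user create --uid=testuser --display-name="Test User"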

IX. View the Prometheus monitoring stack

Grafana:   https://192.168.3.40:3000/
Dashboard: https://192.168.3.40:8443/

X. Common commands
1. Show the current orchestrator backend and high-level status

[root@cephadm ~]# ceph orch status
Inferring fsid 884b19fc-dd09-11ec-9b70-000c293bb4ab
Using recent ceph image quay.io/ceph/ceph@sha256:629fda3151cdca8163e2b8b47af23c69b41b829689d9e085bbd2539c046b0eac
Backend: cephadm
Available: Yes
Paused: No

2. List the hosts in the cluster

[root@cephadm ~]# ceph orch host ls
Inferring fsid 884b19fc-dd09-11ec-9b70-000c293bb4ab
Using recent ceph image quay.io/ceph/ceph@sha256:629fda3151cdca8163e2b8b47af23c69b41b829689d9e085bbd2539c046b0eac
HOST     ADDR          LABELS  STATUS  
cephadm  192.168.3.40  _admin          
node01   192.168.3.41                  
node02   192.168.3.42                  
node03   192.168.3.43                  
4 hosts in cluster

3. Add/remove hosts

ceph orch host add <hostname> [<addr>] [<labels>...]
ceph orch host rm <hostname>

Hosts can also be managed in batch from a YAML file applied with ceph orch apply -i, for example:

---
service_type: host
addr: node-00
hostname: node-00
labels:
- example1
- example2
---
service_type: host
addr: node-01
hostname: node-01
labels:
- grafana
---
service_type: host
addr: node-02
hostname: node-02
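
Assuming the specification above is saved as hosts.yml (an example filename), apply it with:

ceph orch apply -i hosts.yml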

4. Show discovered devices

[root@cephadm ~]# ceph orch device ls
Inferring fsid 884b19fc-dd09-11ec-9b70-000c293bb4ab
Using recent ceph image quay.io/ceph/ceph@sha256:629fda3151cdca8163e2b8b47af23c69b41b829689d9e085bbd2539c046b0eac
HOST     PATH          TYPE  DEVICE ID                                   SIZE  AVAILABLE  REJECT REASONS                                                 
cephadm  /dev/nvme0n2  ssd   VMware_Virtual_NVMe_Disk_VMware_NVME_0000  10.7G  Yes                                                                       
cephadm  /dev/nvme0n3  ssd   VMware_Virtual_NVMe_Disk_VMware_NVME_0000  10.7G  Yes                                                                       
node01   /dev/nvme0n2  ssd   VMware_Virtual_NVMe_Disk_VMware_NVME_0000  10.7G             Insufficient space (<10 extents) on vgs, LVM detected, locked  
node01   /dev/nvme0n3  ssd   VMware_Virtual_NVMe_Disk_VMware_NVME_0000  10.7G             Insufficient space (<10 extents) on vgs, LVM detected, locked  
node02   /dev/nvme0n2  ssd   VMware_Virtual_NVMe_Disk_VMware_NVME_0000  10.7G             Insufficient space (<10 extents) on vgs, LVM detected, locked  
node02   /dev/nvme0n3  ssd   VMware_Virtual_NVMe_Disk_VMware_NVME_0000  10.7G             Insufficient space (<10 extents) on vgs, LVM detected, locked  
node03   /dev/nvme0n2  ssd   VMware_Virtual_NVMe_Disk_VMware_NVME_0000  10.7G             Insufficient space (<10 extents) on vgs, LVM detected, locked  
node03   /dev/nvme0n3  ssd   VMware_Virtual_NVMe_Disk_VMware_NVME_0000  10.7G             Insufficient space (<10 extents) on vgs, LVM detected, locked  

5. Create OSDs

ceph orch daemon add osd <host>:<device1>,<device2>

6. Remove OSDs

ceph orch osd rm <svc_id>... [--replace] [--force]
For example:
ceph orch osd rm 4
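
OSD removal drains data first and runs asynchronously, so it helps to check its progress; a minimal sketch:

# Show OSDs currently scheduled for removal and their drain progress
ceph orch osd rm status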

7. Show service status

[root@cephadm ~]# ceph orch ls
Inferring fsid 884b19fc-dd09-11ec-9b70-000c293bb4ab
Using recent ceph image quay.io/ceph/ceph@sha256:629fda3151cdca8163e2b8b47af23c69b41b829689d9e085bbd2539c046b0eac
NAME           PORTS        RUNNING  REFRESHED  AGE  PLACEMENT                     
alertmanager   ?:9093,9094      1/1  3m ago     83m  count:1                       
crash                           4/4  9m ago     84m  *                             
grafana        ?:3000           1/1  3m ago     83m  count:1                       
mgr                             2/2  9m ago     84m  count:2                       
mon                             4/4  9m ago     33m  cephadm;node01;node02;node03  
node-exporter  ?:9100           4/4  9m ago     83m  *                             
osd                               6  9m ago     -    <unmanaged>                   
prometheus     ?:9095           1/1  3m ago     83m  count:1                       

8. Show daemon status

[root@cephadm ~]# ceph orch ps
Inferring fsid 884b19fc-dd09-11ec-9b70-000c293bb4ab
Using recent ceph image quay.io/ceph/ceph@sha256:629fda3151cdca8163e2b8b47af23c69b41b829689d9e085bbd2539c046b0eac
NAME                   HOST     PORTS        STATUS         REFRESHED  AGE  MEM USE  MEM LIM  VERSION  IMAGE ID      CONTAINER ID  
alertmanager.cephadm   cephadm  *:9093,9094  running (36m)     4m ago  84m    25.4M        -           ba2b418f427c  4622f3758f67  
crash.cephadm          cephadm               running (84m)     4m ago  84m    6995k        -  17.2.0   132aa60d26c9  31e35384f2c5  
crash.node01           node01                running (38m)    19s ago  38m    7159k        -  17.2.0   132aa60d26c9  abd42f3ececf  
crash.node02           node02                running (37m)    81s ago  37m    7155k        -  17.2.0   132aa60d26c9  3125834e40b1  
crash.node03           node03                running (36m)    19s ago  36m    7142k        -  17.2.0   132aa60d26c9  d38b88559744  
grafana.cephadm        cephadm  *:3000       running (83m)     4m ago  84m    52.9M        -  8.3.5    dad864ee21e9  671437284a2a  
mgr.cephadm.klqtwm     cephadm  *:9283       running (85m)     4m ago  85m     481M        -  17.2.0   132aa60d26c9  705d70d3d618  
mgr.node01.znvytm      node01   *:8443,9283  running (38m)    19s ago  38m     424M        -  17.2.0   132aa60d26c9  ac9586483557  
mon.cephadm            cephadm               running (85m)     4m ago  85m    67.0M    2048M  17.2.0   132aa60d26c9  3618ec74b082  
mon.node01             node01                running (38m)    19s ago  38m    49.3M    2048M  17.2.0   132aa60d26c9  3e73b1390c82  
mon.node02             node02                running (36m)    81s ago  36m    50.7M    2048M  17.2.0   132aa60d26c9  6ed40af0cc4b  
mon.node03             node03                running (36m)    19s ago  36m    50.2M    2048M  17.2.0   132aa60d26c9  b257745af316  
node-exporter.cephadm  cephadm  *:9100       running (83m)     4m ago  83m    23.6M        -           1dbe0e931976  70b4e06776ff  
node-exporter.node01   node01   *:9100       running (38m)    19s ago  38m    24.2M        -           1dbe0e931976  8fcbcdfaa656  
node-exporter.node02   node02   *:9100       running (36m)    81s ago  36m    21.8M        -           1dbe0e931976  5c33b34f2665  
node-exporter.node03   node03   *:9100       running (36m)    19s ago  36m    24.0M        -           1dbe0e931976  40ea79713dc5  
osd.0                  node02                running (25m)    81s ago  25m    51.4M    1619M  17.2.0   132aa60d26c9  52af2ebc9400  
osd.1                  node02                running (21m)    81s ago  21m    51.0M    1619M  17.2.0   132aa60d26c9  f861e2fcbd38  
osd.2                  node01                running (21m)    19s ago  21m    48.4M    4096M  17.2.0   132aa60d26c9  0aef2053fa4e  
osd.3                  node01                running (21m)    19s ago  21m    48.2M    4096M  17.2.0   132aa60d26c9  12909ffc8a14  
osd.4                  node03                running (21m)    19s ago  21m    50.2M    1619M  17.2.0   132aa60d26c9  a1ad664059db  
osd.5                  node03                running (20m)    19s ago  20m    48.9M    1619M  17.2.0   132aa60d26c9  c651940e3783  
prometheus.cephadm     cephadm  *:9095       running (36m)     4m ago  83m    73.9M        -           514e6a882f6e  bde27d0e9293  

9. Show the status of a specific daemon

[root@cephadm ~]# ceph orch ps --daemon_type osd --daemon_id 0
Inferring fsid 884b19fc-dd09-11ec-9b70-000c293bb4ab
Using recent ceph image quay.io/ceph/ceph@sha256:629fda3151cdca8163e2b8b47af23c69b41b829689d9e085bbd2539c046b0eac
NAME   HOST    PORTS  STATUS         REFRESHED  AGE  MEM USE  MEM LIM  VERSION  IMAGE ID      CONTAINER ID  
osd.0  node02         running (26m)     2m ago  26m    51.4M    1619M  17.2.0   132aa60d26c9  52af2ebc9400  

10. Start/stop/restart a service or daemon

ceph orch {start,stop,restart} <service_name>
ceph orch daemon {start,stop,restart} <daemon_name>
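
For example, using names from the ceph orch ls / ceph orch ps output above (adjust to your own cluster):

# Restart all daemons of the node-exporter service
ceph orch restart node-exporter

# Restart a single daemon
ceph orch daemon restart osd.0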

11. Access the dashboard
  When the deployment finishes, a dashboard URL of the form https://<host-ip>:8443/ is printed along with a username and password. If you have forgotten the password, you can reset the admin password as follows.

# Enter the cephadm shell
cephadm shell

# Create the dashboard_password.yml file
touch dashboard_password.yml

# Edit the file and enter the new dashboard password
vi dashboard_password.yml

#Run the password-change command
[root@cephadm ~]# ceph dashboard ac-user-set-password admin -i dashboard_password.yml
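
The same change can be scripted without an editor; a sketch (NewDashPass2022! is only a placeholder, pick your own password):

# Write the new password to a file and apply it
echo 'NewDashPass2022!' > dashboard_password.yml
ceph dashboard ac-user-set-password admin -i dashboard_password.yml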

12. If a deployment runs into problems, the following command shows detailed information.

ceph log last cephadm

13. You can also inspect service-level or daemon-level details directly

ceph orch ls --service_name=alertmanager --format yaml
ceph orch ps --service-name <service-name> --daemon-id <daemon-id> --format yaml
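
It can also help to follow the cephadm log in real time while the orchestrator works; a minimal sketch:

# Stream cephadm log messages as they are generated
ceph -W cephadm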