Deploying a Ceph v17.2.0 (Quincy) cluster with cephadm on AlmaLinux 8.6

I. Initialize the nodes
1. References
https://docs.ceph.com/en/quincy/
https://github.com/ceph/ceph
https://github.com/ceph/ceph-container/tree/master/src/daemon
https://docs.ceph.com/docs/master/cephadm/
https://docs.ceph.com/docs/master/cephadm/install/

2. Host environment

OS                                 Hostname  Host configuration                              Ceph roles
AlmaLinux release 8.6 (Sky Tiger)  ceph01    192.168.3.51, system disk                       container node (Dashboard, mon, mgr)
AlmaLinux release 8.6 (Sky Tiger)  ceph02    192.168.3.52, system disk                       container node (Dashboard, mon, mgr)
AlmaLinux release 8.6 (Sky Tiger)  ceph03    192.168.3.53, system disk                       container node (Dashboard, mon, mgr)
AlmaLinux release 8.6 (Sky Tiger)  ceph04    192.168.3.54, system disk + 2 x 10G data disks  container node (Dashboard, mon, mds, rgw, mgr, osd)
AlmaLinux release 8.6 (Sky Tiger)  ceph05    192.168.3.55, system disk + 2 x 10G data disks  container node (Dashboard, mon, mds, rgw, mgr, osd)
AlmaLinux release 8.6 (Sky Tiger)  ceph06    192.168.3.56, system disk + 2 x 10G data disks  container node (Dashboard, mon, mds, rgw, mgr, osd)

3. Initialize each node as a template machine

# Install utility packages
yum install vim net-tools wget lsof python3 -y

# Disable the firewall and SELinux
systemctl stop firewalld && systemctl disable firewalld
sed -i '/^SELINUX=/c SELINUX=disabled' /etc/selinux/config
setenforce 0


# Configure hostname resolution for all nodes
if [ -z "`cat /etc/hosts | grep \`ip route ls | grep 192.168.3.0/24 | awk '{print $9}'\` | awk '{print $2}'`" ]; then
cat << EOF >> /etc/hosts
192.168.3.51 ceph01
192.168.3.52 ceph02
192.168.3.53 ceph03
192.168.3.54 ceph04
192.168.3.55 ceph05
192.168.3.56 ceph06
EOF
fi

# Derive the hostname from the node's IP address and write it to /etc/hostname
echo `cat /etc/hosts | grep \`ip route ls | grep 192.168.3.0/24 | awk '{print $9}'\` | awk '{print $2}'` >/etc/hostname

# Set the hostname so it takes effect immediately (re-login to see the new prompt)
hostnamectl set-hostname `cat /etc/hosts | grep \`ip route ls | grep 192.168.3.0/24 | awk '{print $9}'\` | awk '{print $2}'`

# Kernel parameters: raise the PID limit and minimize swapping
cat << EOF > /etc/sysctl.d/ceph.conf 
kernel.pid_max = 4194303
vm.swappiness = 0
EOF

# Apply immediately
sysctl --system

# Set the time zone to Asia/Shanghai (UTC+8)
rm -f /etc/localtime
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime

# Configure cluster time synchronization
yum install -y chrony
cat > /etc/chrony.conf << EOF
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
logdir /var/log/chrony
server ntp1.aliyun.com iburst
local stratum 10
allow 192.168.3.0/24
EOF
systemctl restart chronyd
systemctl enable chronyd

4. Install and configure docker-ce on all nodes

# Install Docker
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce
systemctl enable docker && systemctl start docker

II. Initialize the cephadm environment
1. Fetch the Quincy version of the cephadm script

curl -O https://raw.githubusercontent.com/ceph/ceph/v17.2.0/src/cephadm/cephadm

2. Make the script executable

[root@cephadm ~]# chmod +x cephadm 

3. Configure the Ceph repositories for this release

./cephadm add-repo --release quincy
./cephadm install

# Verify the installation
[root@cephadm ~]# which cephadm
/usr/sbin/cephadm

# Install ceph-common and ceph-fuse
cephadm install ceph-common ceph-fuse

III. Deploy the MON (bootstrap the cluster)
1. Run the bootstrap command on ceph01

[root@ceph01 ~]# cephadm bootstrap --mon-ip 192.168.3.51

The command above does the following for us:

  • Creates a mon daemon
  • Creates an SSH key and adds it to /root/.ssh/authorized_keys
  • Writes a minimal configuration needed for inter-cluster communication to /etc/ceph/ceph.conf
  • Writes a copy of the client.admin secret key to /etc/ceph/ceph.client.admin.keyring
  • Writes a copy of the public key to /etc/ceph/ceph.pub

2. The output looks like this:

[root@ceph01 ~]# cephadm bootstrap --mon-ip 192.168.3.51
Verifying podman|docker is present...
Verifying lvm2 is present...
Verifying time synchronization is in place...
Unit chronyd.service is enabled and running
Repeating the final host check...
docker (/usr/bin/docker) is present
systemctl is present
lvcreate is present
Unit chronyd.service is enabled and running
Host looks OK
Cluster fsid: 2765f0bc-e461-11ec-8881-525400062554
Verifying IP 192.168.3.51 port 3300 ...
Verifying IP 192.168.3.51 port 6789 ...
Mon IP `192.168.3.51` is in CIDR network `192.168.3.0/24`
- internal network (--cluster-network) has not been provided, OSD replication will default to the public_network
Pulling container image quay.io/ceph/ceph:v17...
Ceph version: ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)
Extracting ceph user uid/gid from container image...
Creating initial keys...
Creating initial monmap...
Creating mon...
Waiting for mon to start...
Waiting for mon...
mon is available
Assimilating anything we can from ceph.conf...
Generating new minimal ceph.conf...
Restarting the monitor...
Setting mon public_network to 192.168.3.0/24
Wrote config to /etc/ceph/ceph.conf
Wrote keyring to /etc/ceph/ceph.client.admin.keyring
Creating mgr...
Verifying port 9283 ...
Waiting for mgr to start...
Waiting for mgr...
mgr not available, waiting (1/15)...
mgr not available, waiting (2/15)...
mgr not available, waiting (3/15)...
mgr is available
Enabling cephadm module...
Waiting for the mgr to restart...
Waiting for mgr epoch 5...
mgr epoch 5 is available
Setting orchestrator backend to cephadm...
Generating ssh key...
Wrote public SSH key to /etc/ceph/ceph.pub
Adding key to root@localhost authorized_keys...
Adding host ceph01...
Deploying mon service with default placement...
Deploying mgr service with default placement...
Deploying crash service with default placement...
Deploying prometheus service with default placement...
Deploying grafana service with default placement...
Deploying node-exporter service with default placement...
Deploying alertmanager service with default placement...
Enabling the dashboard module...
Waiting for the mgr to restart...
Waiting for mgr epoch 9...
mgr epoch 9 is available
Generating a dashboard self-signed certificate...
Creating initial admin user...
Fetching dashboard port number...
Ceph Dashboard is now available at:

             URL: https://ceph01:8443/
            User: admin
        Password: e0jg4mikb4

Enabling client.admin keyring and conf on hosts with "admin" label
Enabling autotune for osd_memory_target
You can access the Ceph CLI with:

        sudo /usr/sbin/cephadm shell --fsid 2765f0bc-e461-11ec-8881-525400062554 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring

Please consider enabling telemetry to help improve Ceph:

        ceph telemetry on

For more information see:

        https://docs.ceph.com/docs/master/mgr/telemetry/

Bootstrap complete.

  
3. The cephadm shell command launches a bash shell inside a container that has all the Ceph packages installed. By default, if configuration and keyring files are found in /etc/ceph on the host, they are passed into the container environment so the shell works out of the box. When run on a MON host, cephadm shell uses the MON container's configuration rather than the default one. If --mount is given, the host file or directory appears under /mnt inside the container.
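
For example (the mount path here is only an illustration), a host directory can be exposed inside the shell container, or a single command can be run without entering an interactive shell:

# expose /root/specs from the host as /mnt inside the container
cephadm shell --mount /root/specs
# run one command and exit
cephadm shell -- ceph -s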

# For convenience, create an alias for the cephadm shell invocation
[root@cephadm ~]# alias ceph='cephadm shell -- ceph'

To make it permanent, append it to /root/.bashrc:
[root@cephadm ~]# echo "alias ceph='cephadm shell -- ceph'" >>/root/.bashrc

4. Access the cluster at https://192.168.3.51:8443/ and change the default password

IV. Add new nodes to the cluster
1. To add new nodes, push the cluster's SSH public key into each new node's authorized_keys file

[root@ceph01 ~]# ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph02
[root@ceph01 ~]# ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph03
[root@ceph01 ~]# ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph04
[root@ceph01 ~]# ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph05
[root@ceph01 ~]# ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph06

2. Tell Ceph that the new nodes are part of the cluster

[root@ceph01 ~]# ceph orch host add ceph02
[root@ceph01 ~]# ceph orch host add ceph03
[root@ceph01 ~]# ceph orch host add ceph04
[root@ceph01 ~]# ceph orch host add ceph05
[root@ceph01 ~]# ceph orch host add ceph06

3. Add mons
  A typical Ceph cluster has three or five mon daemons spread across different hosts. If the cluster has five or more nodes, deploying five monitors is recommended.
  Once Ceph knows which IP subnet the monitors should use, it can automatically deploy and scale mons as the cluster grows (or shrinks). By default, Ceph assumes that additional mons should use the same subnet as the first mon's IP.
  If the mons (or the whole cluster) sit on a single subnet, cephadm will by default automatically add up to five monitors as new hosts join the cluster, with no extra steps required.
  This deployment has six nodes; we set the mon count to three.

[root@ceph01 ~]# ceph orch apply mon 3
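
The mon subnet that bootstrap detected (192.168.3.0/24 here) can also be set or changed explicitly in the mon config database; a sketch:

ceph config set mon public_network 192.168.3.0/24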

4. Pin the mons to specific nodes

[root@ceph01 ~]# ceph orch apply mon ceph01,ceph02,ceph03

# Check the mon map
[root@ceph01 ~]# ceph mon dump
Inferring fsid 2765f0bc-e461-11ec-8881-525400062554
Using recent ceph image quay.io/ceph/ceph@sha256:629fda3151cdca8163e2b8b47af23c69b41b829689d9e085bbd2539c046b0eac
epoch 7
fsid 2765f0bc-e461-11ec-8881-525400062554
last_changed 2022-06-05T00:22:16.501413+0000
created 2022-06-04T23:51:12.070622+0000
min_mon_release 17 (quincy)
election_strategy: 1
0: [v2:192.168.3.51:3300/0,v1:192.168.3.51:6789/0] mon.ceph01
1: [v2:192.168.3.52:3300/0,v1:192.168.3.52:6789/0] mon.ceph02
2: [v2:192.168.3.53:3300/0,v1:192.168.3.53:6789/0] mon.ceph03
dumped monmap epoch 7

V. Deploy mgr

# Deploy
ceph orch apply mgr 3
ceph orch apply mgr ceph01,ceph02,ceph03

# Check the mgr deployment status
[root@ceph01 ~]# ceph -s
Inferring fsid 89f1e29e-e46c-11ec-a0d7-525400062554
Using recent ceph image quay.io/ceph/ceph@sha256:629fda3151cdca8163e2b8b47af23c69b41b829689d9e085bbd2539c046b0eac
  cluster:
    id:     89f1e29e-e46c-11ec-a0d7-525400062554
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 3
 
  services:
    mon: 3 daemons, quorum ceph01,ceph02,ceph03 (age 84s)
    mgr: ceph01.qeypru(active, since 5m), standbys: ceph02.qfjxfj
    osd: 0 osds: 0 up, 0 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:     

VI. Deploy OSDs
1. A storage device is considered available only if it meets all of the following conditions (see the zap command after this list for reusing a previously used disk):

  • The device has no partitions
  • The device has no LVM state
  • The device is not mounted
  • The device does not contain a file system
  • The device does not contain a Ceph BlueStore OSD
  • The device is larger than 5 GB
  • Ceph will not create OSDs on devices that are not available.
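
If a disk has been used before and is reported as unavailable, it can be wiped so that cephadm can reuse it (this destroys all data on the device; the host and device below match this cluster's layout):

# Wipe /dev/vdb on ceph04 so it can be used for an OSD
ceph orch device zap ceph04 /dev/vdb --force
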
# Inspect the disks on ceph04 with lsblk
[root@ceph04 ~]# lsblk
NAME               MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda                252:0    0  20G  0 disk 
├─vda1             252:1    0   1G  0 part /boot
└─vda2             252:2    0  19G  0 part 
  ├─almalinux-root 253:0    0  17G  0 lvm  /
  └─almalinux-swap 253:1    0   2G  0 lvm  [SWAP]
vdb                252:16   0  10G  0 disk 
vdc                252:32   0  10G  0 disk 

# Add the first data disk on ceph04
[root@ceph01 ~]# ceph orch daemon add osd ceph04:/dev/vdb

# Check the OSD tree to confirm it was added
[root@ceph01 ~]# ceph osd tree
Inferring fsid 89f1e29e-e46c-11ec-a0d7-525400062554
Using recent ceph image quay.io/ceph/ceph@sha256:629fda3151cdca8163e2b8b47af23c69b41b829689d9e085bbd2539c046b0eac
ID  CLASS  WEIGHT   TYPE NAME        STATUS  REWEIGHT  PRI-AFF
-1         0.00980  root default                              
-3         0.00980      host ceph04                           
 0    hdd  0.00980          osd.0        up   1.00000  1.00000

# Add the remaining 5 data disks on ceph04, ceph05 and ceph06
ceph orch daemon add osd ceph04:/dev/vdc
ceph orch daemon add osd ceph05:/dev/vdb
ceph orch daemon add osd ceph05:/dev/vdc
ceph orch daemon add osd ceph06:/dev/vdb
ceph orch daemon add osd ceph06:/dev/vdc

# Check the OSD tree again to confirm all OSDs were added
[root@ceph01 ~]# ceph osd tree
Inferring fsid 89f1e29e-e46c-11ec-a0d7-525400062554
Using recent ceph image quay.io/ceph/ceph@sha256:629fda3151cdca8163e2b8b47af23c69b41b829689d9e085bbd2539c046b0eac
ID  CLASS  WEIGHT   TYPE NAME        STATUS  REWEIGHT  PRI-AFF
-1         0.05878  root default                              
-3         0.01959      host ceph04                           
 0    hdd  0.00980          osd.0        up   1.00000  1.00000
 1    hdd  0.00980          osd.1        up   1.00000  1.00000
-5         0.01959      host ceph05                           
 2    hdd  0.00980          osd.2        up   1.00000  1.00000
 3    hdd  0.00980          osd.3        up   1.00000  1.00000
-7         0.01959      host ceph06                           
 4    hdd  0.00980          osd.4        up   1.00000  1.00000
 5    hdd  0.00980          osd.5        up   1.00000  1.00000

2. Check the cluster status

# At this point the cluster health should be OK
[root@ceph01 ~]# ceph -s
Inferring fsid 89f1e29e-e46c-11ec-a0d7-525400062554
Using recent ceph image quay.io/ceph/ceph@sha256:629fda3151cdca8163e2b8b47af23c69b41b829689d9e085bbd2539c046b0eac
  cluster:
    id:     89f1e29e-e46c-11ec-a0d7-525400062554
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum ceph01,ceph02,ceph03 (age 5m)
    mgr: ceph01.qeypru(active, since 9m), standbys: ceph02.qfjxfj, ceph03.twzhrp
    osd: 6 osds: 6 up (since 40s), 6 in (since 56s)
 
  data:
    pools:   1 pools, 1 pgs
    objects: 2 objects, 449 KiB
    usage:   122 MiB used, 60 GiB / 60 GiB avail
    pgs:     1 active+clean

VII. Deploy MDS
  To use the CephFS file system, one or more MDS daemons are required. They are created automatically when the newer volume interface is used to create a new file system.
See: https://docs.ceph.com/en/latest/cephfs/fs-volumes/#fs-volumes-and-subvolumes

1. Deploy the metadata servers and create a CephFS named cephfs

# Create a CephFS volume named cephfs (this also creates its data and metadata pools)
ceph fs volume create cephfs
ceph fs volume ls
# Place 3 MDS daemons for cephfs on ceph04, ceph05 and ceph06
ceph orch apply mds cephfs --placement="3 ceph04 ceph05 ceph06"
# Check the file system, MDS, pool and daemon status
ceph fs status cephfs
ceph mds stat
ceph osd dump |grep pool | awk '{print $1,$3,$4,$5":"$6,$13":"$14}'
ceph orch ps --daemon-type mds
ceph osd pool autoscale-status

2. Check the status and verify that at least one MDS is active

[root@ceph01 ~]# ceph fs status cephfs
Inferring fsid 89f1e29e-e46c-11ec-a0d7-525400062554
Using recent ceph image quay.io/ceph/ceph@sha256:629fda3151cdca8163e2b8b47af23c69b41b829689d9e085bbd2539c046b0eac
cephfs - 0 clients
======
RANK  STATE           MDS              ACTIVITY     DNS    INOS   DIRS   CAPS  
 0    active  cephfs.ceph05.ednqfo  Reqs:    0 /s    10     13     12      0   
       POOL           TYPE     USED  AVAIL  
cephfs.cephfs.meta  metadata  96.0k  18.9G  
cephfs.cephfs.data    data       0   18.9G  
    STANDBY MDS       
cephfs.ceph04.hostvc  
cephfs.ceph06.vfwdti  
MDS version: ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)

3. Create a user for accessing CephFS

# Create the client.cephfs user
ceph auth get-or-create client.cephfs \
    mon 'allow r' \
    mds 'allow r, allow rw path=/' \
    osd 'allow rw pool=cephfs' \
    -o ceph.client.cephfs.keyring

# Output:
[client.cephfs]
        key = AQAxCpxil+ZqCxAAlBnleCuNAxslP/7aPlZYzA==

# Check the user's capabilities
[root@ceph01 ~]# ceph auth get client.cephfs
Inferring fsid 89f1e29e-e46c-11ec-a0d7-525400062554
Using recent ceph image quay.io/ceph/ceph@sha256:629fda3151cdca8163e2b8b47af23c69b41b829689d9e085bbd2539c046b0eac
[client.cephfs]
        key = AQAxCpxil+ZqCxAAlBnleCuNAxslP/7aPlZYzA==
        caps mds = "allow r, allow rw path=/"
        caps mon = "allow r"
        caps osd = "allow rw pool=cephfs"
exported keyring for client.cephfs
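
As an aside, when the client does not already exist, the recommended helper for generating this kind of key is ceph fs authorize; a minimal sketch (the resulting capabilities differ slightly from the manual command above):

# grant read/write access to the root of the cephfs file system
ceph fs authorize cephfs client.cephfs / rw -o ceph.client.cephfs.keyring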

4. Mount CephFS with the kernel client (recommended)

# Export the client keyring
ceph auth get client.cephfs -o ceph.client.cephfs.keyring

# Copy the keyring to the machine that will mount CephFS, in this example 192.168.3.40
scp ceph.client.cephfs.keyring 192.168.3.40:/etc/ceph/

# On 192.168.3.40
mkdir -p /cephfs
mount -t ceph \
    192.168.3.51:6789,192.168.3.52:6789,192.168.3.53:6789:/ \
    /cephfs -o name=cephfs,secret=AQAxCpxil+ZqCxAAlBnleCuNAxslP/7aPlZYzA==

# Verify the mount
[root@cephtest cephfs]# stat -f /cephfs
  File: "/cephfs"
    ID: 72844ccdffffffff Namelen: 255     Type: ceph
Block size: 4194304    Fundamental block size: 4194304
Blocks: Total: 4850       Free: 4850       Available: 4850
Inodes: Total: 0          Free: -1
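
To keep the key off the command line, the secret can instead be stored in a file and referenced with the secretfile= mount option (the file path is only an illustration; generate the file wherever the cluster keyrings are available and copy it to the client):

# extract only the key into a file
ceph auth get-key client.cephfs > /etc/ceph/cephfs.secret
# mount using the secret file instead of an inline secret
mount -t ceph 192.168.3.51:6789,192.168.3.52:6789,192.168.3.53:6789:/ /cephfs -o name=cephfs,secretfile=/etc/ceph/cephfs.secret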

5. On older kernels, mount with the FUSE client instead

# Install the client
cephadm install ceph-fuse

# Mount with ceph-fuse
ceph-fuse --id cephfs -m 192.168.3.51:6789,192.168.3.52:6789,192.168.3.53:6789 /cephfs -o nonempty

# Verify the mount
stat -f /cephfs

6. Mount automatically at boot

# Add an entry to /etc/fstab
echo "192.168.3.51:6789,192.168.3.52:6789,192.168.3.53:6789:/     /cephfs ceph name=cephfs,secret=AQAxCpxil+ZqCxAAlBnleCuNAxslP/7aPlZYzA==,_netdev,noatime 0 0" |tee -a /etc/fstab

# Mount
mount -a

# Check the mount
[root@cephtest ~]# df -TH
Filesystem                                              Type      Size  Used Avail Use% Mounted on
devtmpfs                                                devtmpfs  2.0G     0  2.0G    0% /dev
tmpfs                                                   tmpfs     2.0G     0  2.0G    0% /dev/shm
tmpfs                                                   tmpfs     2.0G   34M  2.0G    2% /run
tmpfs                                                   tmpfs     2.0G     0  2.0G    0% /sys/fs/cgroup
/dev/mapper/almalinux-root                              xfs        19G  7.5G   11G   41% /
/dev/nvme0n1p1                                          xfs       1.1G  221M  843M   21% /boot
overlay                                                 overlay    19G  7.5G   11G   41% /var/lib/docker/overlay2/ac0569d1cb257e3d0ba2feaeb8bf2fd867b29e1af6a3355cc6253bf9ff0ed891/merged
tmpfs                                                   tmpfs     389M     0  389M    0% /run/user/0
192.168.3.51:6789,192.168.3.52:6789,192.168.3.53:6789:/ ceph       21G     0   21G    0% /cephfs

7. Delete the cephfs file system

ceph config set mon mon_allow_pool_delete true
ceph fs volume rm cephfs --yes-i-really-mean-it
ceph config set mon mon_allow_pool_delete false

# List the pools again
[root@ceph01 ~]# ceph osd pool ls
Inferring fsid 89f1e29e-e46c-11ec-a0d7-525400062554
Using recent ceph image quay.io/ceph/ceph@sha256:629fda3151cdca8163e2b8b47af23c69b41b829689d9e085bbd2539c046b0eac
.mgr

VIII. Deploy RGWs
  cephadm deploys radosgw as a collection of daemons that serve a particular realm and zone. See: https://docs.ceph.com/en/latest/radosgw/multisite/#multisite

  Note that with cephadm, the radosgw daemons are configured through the monitor configuration database rather than through ceph.conf or the command line. If that configuration is not yet in place (usually in the client.rgw.* section), the radosgw daemons start with default settings (for example, binding to port 80).
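
As an illustration only (this deployment does not require it), RGW options live in that config database and can be set and inspected like this:

# set an RGW option for all rgw daemons
ceph config set client.rgw rgw_enable_usage_log true
# see what is currently configured
ceph config dump | grep rgw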

1. Deploy 3 rgw daemons on ceph01, ceph02 and ceph03 serving the myorg realm and the cn-beijing zone:

# If a realm has not been created yet, create one first:
radosgw-admin realm create --rgw-realm=myorg --default

# Next, create a new zonegroup:
radosgw-admin zonegroup create --rgw-zonegroup=default --master --default

# Next, create a zone:
radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=cn-beijing  --master --default

# Deploy a set of radosgw daemons for the realm and zone:
ceph orch apply rgw myorg cn-beijing --placement="3 ceph01 ceph02 ceph03"

# Verify the rgw daemons are running on each node
[root@ceph01 ~]# ceph orch ps --daemon-type rgw
Inferring fsid 89f1e29e-e46c-11ec-a0d7-525400062554
Using recent ceph image quay.io/ceph/ceph@sha256:629fda3151cdca8163e2b8b47af23c69b41b829689d9e085bbd2539c046b0eac
NAME                     HOST    PORTS  STATUS         REFRESHED  AGE  MEM USE  MEM LIM  VERSION  IMAGE ID      CONTAINER ID  
rgw.myorg.ceph01.sshton  ceph01  *:80   running (43s)    37s ago  43s    45.1M        -  17.2.0   132aa60d26c9  815f60287dbf  
rgw.myorg.ceph02.xyivtz  ceph02  *:80   running (42s)    38s ago  42s    19.0M        -  17.2.0   132aa60d26c9  e792b4c6b480  
rgw.myorg.ceph03.zoxgbu  ceph03  *:80   running (41s)    38s ago  41s    19.2M        -  17.2.0   132aa60d26c9  4b538b7286da  

# Create a radosgw user
radosgw-admin user create --uid="admin" --display-name="admin user"

# Save the access_key and secret_key after creation; they can also be shown later with:
radosgw-admin user info --uid=admin
"keys": [
        {
            "user": "admin",
            "access_key": "GSETKTJC5FULSI9W8Y2M",
            "secret_key": "Kbso9DYuWzqMVsmAYgVxb8HOKdPibUE0g6yhCBVk"
        }
    ],

2. Test the rgw

# Install s3cmd on the client
yum install s3cmd -y

# Generate the configuration file
[root@cephtest ~]# s3cmd --configure

Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.

Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables.
Access Key: GSETKTJC5FULSI9W8Y2M                        # paste the Access Key generated on the server
Secret Key: Kbso9DYuWzqMVsmAYgVxb8HOKdPibUE0g6yhCBVk    # paste the Secret Key generated on the server
Default Region [US]:                                    # press Enter to accept the default

Use "s3.amazonaws.com" for S3 Endpoint and not modify it to the target Amazon S3.
S3 Endpoint [s3.amazonaws.com]: 192.168.3.51            # enter the object storage IP address

Use "%(bucket)s.s3.amazonaws.com" to the target Amazon S3. "%(bucket)s" and "%(location)s" vars can be used
if the target S3 system supports dns based buckets.
DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s.s3.amazonaws.com]: 192.168.3.51

Encryption password is used to protect your files from reading
by unauthorized persons while in transfer to S3
Encryption password:                                  # leave empty and press Enter
Path to GPG program [/usr/bin/gpg]:                   # press Enter to accept /usr/bin/gpg

When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP, and can only be proxied with Python 2.7 or newer
Use HTTPS protocol [Yes]: no                          # whether to use HTTPS; answer no

On some networks all internet access must go through a HTTP proxy.
Try setting it here if you can't connect to S3 directly
HTTP Proxy server name:                               # leave empty and press Enter

New settings:
  Access Key: GSETKTJC5FULSI9W8Y2M
  Secret Key: Kbso9DYuWzqMVsmAYgVxb8HOKdPibUE0g6yhCBVk
  Default Region: US
  S3 Endpoint: 192.168.3.51
  DNS-style bucket+hostname:port template for accessing a bucket: 192.168.3.51
  Encryption password: 
  Path to GPG program: /usr/bin/gpg
  Use HTTPS protocol: False
  HTTP Proxy server name: 
  HTTP Proxy server port: 0

Test access with supplied credentials? [Y/n] y
Please wait, attempting to list all buckets...
Success. Your access key and secret key worked fine :-)

Now verifying that encryption works...
Not configured. Never mind.

Save settings? [y/N] y                          # y to save the configuration
Configuration saved to '/root/.s3cfg'           # the configuration is saved to /root/.s3cfg
[root@cephtest ~]# 

# Create a bucket named my-bucket
[root@cephtest ~]# s3cmd mb s3://my-bucket
Bucket 's3://my-bucket/' created

# List all buckets
[root@cephtest ~]# s3cmd ls  
2022-06-05 03:42  s3://my-bucket

# Upload /etc/hosts to the bucket
[root@cephtest ~]# s3cmd put /etc/hosts s3://my-bucket 
upload: '/etc/hosts' -> 's3://my-bucket/hosts'  [1 of 1]
 270 of 270   100% in    2s   116.81 B/s  done

# List the objects in my-bucket
[root@cephtest ~]# s3cmd ls s3://my-bucket       
2022-06-05 03:44          270  s3://my-bucket/hosts

# Delete the hosts object from my-bucket
[root@cephtest ~]# s3cmd del s3://my-bucket/hosts 
delete: 's3://my-bucket/hosts'

# Remove my-bucket
[root@cephtest ~]# s3cmd rb s3://my-bucket
Bucket 's3://my-bucket/' removed

IX. Deploy RBD
1. Configure RBD

# Create the rbd pool
[root@ceph01 ~]# ceph osd pool create rbd 16
Inferring fsid 89f1e29e-e46c-11ec-a0d7-525400062554
Using recent ceph image quay.io/ceph/ceph@sha256:629fda3151cdca8163e2b8b47af23c69b41b829689d9e085bbd2539c046b0eac
pool 'rbd' created

# Enable the rbd application on the pool
[root@ceph01 ~]# ceph osd pool application enable rbd rbd

Inferring fsid 89f1e29e-e46c-11ec-a0d7-525400062554
Using recent ceph image quay.io/ceph/ceph@sha256:629fda3151cdca8163e2b8b47af23c69b41b829689d9e085bbd2539c046b0eac
enabled application 'rbd' on pool 'rbd'

# Create an RBD image named rbd1 with a size of 10 GB
[root@ceph01 ~]# rbd create rbd1 --size 10240

# Inspect the image
[root@ceph01 ~]# rbd --image rbd1 info
rbd image 'rbd1':
        size 10 GiB in 2560 objects
        order 22 (4 MiB objects)
        snapshot_count: 0
        id: 39e420f709ce
        block_name_prefix: rbd_data.39e420f709ce
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
        op_features: 
        flags: 
        create_timestamp: Sun Jun  5 10:30:58 2022
        access_timestamp: Sun Jun  5 10:30:58 2022
        modify_timestamp: Sun Jun  5 10:30:58 2022

# Lower the CRUSH tunables so that older kernel clients can map the image
[root@ceph01 ~]# ceph osd crush tunables hammer
adjusted tunables profile to hammer

[root@ceph01 ~]# ceph osd crush reweight-all
reweighted crush hierarchy

# Disable features that older kernels do not support by default
[root@ceph01 ~]# rbd feature disable rbd1 exclusive-lock object-map fast-diff deep-flatten

# Verify the features have been disabled
[root@ceph01 ~]# rbd --image rbd1 info | grep features
        features: layering
        op_features: 
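
Alternatively (rbd2 is just an illustrative name), an image can be created with only the layering feature enabled from the start, so nothing needs to be disabled afterwards:

rbd create rbd2 --size 10240 --image-feature layering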

2. Map and mount the RBD image on a client

# Copy the config and keyring to the client
[root@ceph01 ceph]# scp /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 192.168.3.40:/etc/ceph

# Map the image (run on the client that will mount it)
[root@cephtest ~]# rbd map --image rbd1
/dev/rbd0

# Check the mapping
[root@cephtest ~]# rbd showmapped 
id  pool  namespace  image  snap  device   
0   rbd              rbd1   -     /dev/rbd0

# Format the device
[root@cephtest ~]# mkfs.xfs /dev/rbd0
meta-data=/dev/rbd0              isize=512    agcount=16, agsize=163840 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1    bigtime=0 inobtcount=0
data     =                       bsize=4096   blocks=2621440, imaxpct=25
         =                       sunit=16     swidth=16 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=16 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
Discarding blocks...Done.

# Create a mount point and mount the RBD device
[root@cephtest ~]# mkdir /mnt/rbd
[root@cephtest ~]# mount /dev/rbd0 /mnt/rbd/

# Check the mount
[root@cephtest ~]# df -hl | grep rbd
/dev/rbd0                    10G  105M  9.9G    2% /mnt/rbd
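
To re-map and mount the image automatically at boot, the rbdmap service can be used; a sketch following the rbdmap man page conventions (it assumes the admin keyring copied above):

# declare the image in /etc/ceph/rbdmap
echo "rbd/rbd1 id=admin,keyring=/etc/ceph/ceph.client.admin.keyring" >> /etc/ceph/rbdmap
# add a noauto fstab entry and enable the mapping service
echo "/dev/rbd/rbd/rbd1 /mnt/rbd xfs noauto 0 0" >> /etc/fstab
systemctl enable rbdmap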

X. Deploy NFS Ganesha
1. Configure the nfs service

# Enable the nfs mgr module
[root@ceph01 ~]# ceph mgr module enable nfs
Inferring fsid 89f1e29e-e46c-11ec-a0d7-525400062554
Using recent ceph image quay.io/ceph/ceph@sha256:629fda3151cdca8163e2b8b47af23c69b41b829689d9e085bbd2539c046b0eac
module 'nfs' is already enabled

# Create the pool needed by NFS.
[root@ceph01 ~]# ceph osd pool create mynfs_data
Inferring fsid 89f1e29e-e46c-11ec-a0d7-525400062554
Using recent ceph image quay.io/ceph/ceph@sha256:629fda3151cdca8163e2b8b47af23c69b41b829689d9e085bbd2539c046b0eac
pool 'mynfs_data' created

# Enable the nfs application on the pool
[root@ceph01 ~]# ceph osd pool application enable mynfs_data nfs
Inferring fsid 89f1e29e-e46c-11ec-a0d7-525400062554
Using recent ceph image quay.io/ceph/ceph@sha256:629fda3151cdca8163e2b8b47af23c69b41b829689d9e085bbd2539c046b0eac
enabled application 'nfs' on pool 'mynfs_data'

# Create the NFS cluster with ceph nfs cluster create, using the virtual IP 192.168.3.50
[root@ceph01 ~]# ceph nfs cluster create mynfs 3 --ingress --virtual_ip 192.168.3.50
Inferring fsid 89f1e29e-e46c-11ec-a0d7-525400062554
Using recent ceph image quay.io/ceph/ceph@sha256:629fda3151cdca8163e2b8b47af23c69b41b829689d9e085bbd2539c046b0eac
NFS Cluster Created Successfully

# List the NFS clusters
[root@ceph01 ~]# ceph nfs cluster ls
Inferring fsid 89f1e29e-e46c-11ec-a0d7-525400062554
Using recent ceph image quay.io/ceph/ceph@sha256:629fda3151cdca8163e2b8b47af23c69b41b829689d9e085bbd2539c046b0eac
mynfs

2. Create an NFS export of a CephFS

# An NFS export can also be created from the Ceph Dashboard; exports created there show a few more parameters than ones created from the CLI.
# The ceph fs volume create command below automatically creates the corresponding pools
[root@ceph01 ~]# ceph fs volume create myfs --placement=3
Inferring fsid 89f1e29e-e46c-11ec-a0d7-525400062554
Using recent ceph image quay.io/ceph/ceph@sha256:629fda3151cdca8163e2b8b47af23c69b41b829689d9e085bbd2539c046b0eac

# Create the NFS export of the CephFS
[root@ceph01 ~]# ceph nfs export create cephfs --cluster-id mynfs --pseudo-path /data --fsname myfs --squash no_root_squash
Inferring fsid 89f1e29e-e46c-11ec-a0d7-525400062554
Using recent ceph image quay.io/ceph/ceph@sha256:629fda3151cdca8163e2b8b47af23c69b41b829689d9e085bbd2539c046b0eac
{
    "bind": "/data",
    "fs": "myfs",
    "path": "/",
    "cluster": "mynfs",
    "mode": "RW"
}

# Show the export details
[root@ceph01 ~]# ceph nfs export info mynfs /data
Inferring fsid 89f1e29e-e46c-11ec-a0d7-525400062554
Using recent ceph image quay.io/ceph/ceph@sha256:629fda3151cdca8163e2b8b47af23c69b41b829689d9e085bbd2539c046b0eac
{
  "export_id": 1,
  "path": "/",
  "cluster_id": "mynfs",
  "pseudo": "/data",
  "access_type": "RW",
  "squash": "no_root_squash",
  "security_label": true,
  "protocols": [
    4
  ],
  "transports": [
    "TCP"
  ],
  "fsal": {
    "name": "CEPH",
    "user_id": "nfs.mynfs.1",
    "fs_name": "myfs"
  },
  "clients": []
}

3. Client configuration

# Note: only NFS v4.0+ is supported
# Mount command syntax
mount -t nfs -o nfsvers=4.1,proto=tcp <ganesha-host-name>:<ganesha-pseudo-path> <mount-point>

# Install the NFS client package
yum install -y nfs-utils

# Mount the export /data onto the local /mnt directory on the client
[root@cephtest ~]# mount -t nfs -o nfsvers=4.1,proto=tcp 192.168.3.50:/data /mnt

# Unmount
[root@cephtest ~]# umount /mnt
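
For a persistent mount, an fstab entry can be added on the client instead (a sketch mirroring the mount command above):

echo "192.168.3.50:/data  /mnt  nfs  nfsvers=4.1,proto=tcp,_netdev  0 0" >> /etc/fstab
mount -a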

XI. View the Prometheus monitoring stack
Grafana: https://192.168.3.51:3000/
Dashboard: https://192.168.3.51:8443/

Appendix: Common commands
1. Show the current orchestrator backend and high-level status

[root@ceph01 ~]# ceph orch status
Inferring fsid 89f1e29e-e46c-11ec-a0d7-525400062554
Using recent ceph image quay.io/ceph/ceph@sha256:629fda3151cdca8163e2b8b47af23c69b41b829689d9e085bbd2539c046b0eac
Backend: cephadm
Available: Yes
Paused: No

2. List the hosts in the cluster

[root@ceph01 ~]# ceph orch host ls
Inferring fsid 89f1e29e-e46c-11ec-a0d7-525400062554
Using recent ceph image quay.io/ceph/ceph@sha256:629fda3151cdca8163e2b8b47af23c69b41b829689d9e085bbd2539c046b0eac
HOST    ADDR          LABELS  STATUS  
ceph01  192.168.3.51  _admin          
ceph02  192.168.3.52                  
ceph03  192.168.3.53                  
ceph04  192.168.3.54                  
ceph05  192.168.3.55                  
ceph06  192.168.3.56                  
6 hosts in cluster

3. Add/remove hosts

ceph orch host add <hostname> [<addr>] [<labels>...]
ceph orch host rm <hostname>

Hosts can also be managed with a YAML file applied via ceph orch apply -i, for example:

---
service_type: host
addr: node-00
hostname: node-00
labels:
- example1
- example2
---
service_type: host
addr: node-01
hostname: node-01
labels:
- grafana
---
service_type: host
addr: node-02
hostname: node-02
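
Assuming the YAML above is saved to a file (the file name is only an illustration), it is applied with:

ceph orch apply -i hosts.yaml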

4. List the discovered devices

[root@ceph01 ~]# ceph orch device ls
Inferring fsid 89f1e29e-e46c-11ec-a0d7-525400062554
Using recent ceph image quay.io/ceph/ceph@sha256:629fda3151cdca8163e2b8b47af23c69b41b829689d9e085bbd2539c046b0eac
HOST    PATH      TYPE  DEVICE ID   SIZE  AVAILABLE  REJECT REASONS                                                 
ceph04  /dev/vdb  hdd              10.7G             Insufficient space (<10 extents) on vgs, LVM detected, locked  
ceph04  /dev/vdc  hdd              10.7G             Insufficient space (<10 extents) on vgs, LVM detected, locked  
ceph05  /dev/vdb  hdd              10.7G             Insufficient space (<10 extents) on vgs, LVM detected, locked  
ceph05  /dev/vdc  hdd              10.7G             Insufficient space (<10 extents) on vgs, LVM detected, locked  
ceph06  /dev/vdb  hdd              10.7G             Insufficient space (<10 extents) on vgs, LVM detected, locked  
ceph06  /dev/vdc  hdd              10.7G             Insufficient space (<10 extents) on vgs, LVM detected, locked  

5. Create OSDs

ceph orch daemon add osd <host>:device1,device2
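
Alternatively, cephadm can create OSDs on every unused, available device automatically:

ceph orch apply osd --all-available-devices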

6. Remove OSDs

ceph orch osd rm <svc_id>... [--replace] [--force]
For example:
ceph orch osd rm 4
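
The draining/removal progress can be monitored with:

ceph orch osd rm status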

7. Show the service status

[root@ceph01 ~]# ceph orch ls
Inferring fsid 89f1e29e-e46c-11ec-a0d7-525400062554
Using recent ceph image quay.io/ceph/ceph@sha256:629fda3151cdca8163e2b8b47af23c69b41b829689d9e085bbd2539c046b0eac
NAME               PORTS                   RUNNING  REFRESHED  AGE  PLACEMENT                     
alertmanager       ?:9093,9094                 1/1  7m ago     2h   count:1                       
crash                                          6/6  10m ago    2h   *                             
grafana            ?:3000                      1/1  7m ago     2h   count:1                       
ingress.nfs.mynfs  192.168.3.50:2049,9049      4/4  7m ago     25m  count:2                       
mds.myfs                                       3/3  7m ago     18m  count:3                       
mgr                                            3/3  7m ago     2h   ceph01;ceph02;ceph03          
mon                                            3/3  7m ago     2h   ceph01;ceph02;ceph03          
nfs.mynfs          ?:12049                     3/3  7m ago     25m  count:3                       
node-exporter      ?:9100                      6/6  10m ago    2h   *                             
osd                                              6  10m ago    -    <unmanaged>                   
prometheus         ?:9095                      1/1  7m ago     2h   count:1                       
rgw.myorg          ?:80                        3/3  7m ago     57m  ceph01;ceph02;ceph03;count:3  

8. Show the daemon status

[root@ceph01 ~]# ceph orch ps
Inferring fsid 89f1e29e-e46c-11ec-a0d7-525400062554
Using recent ceph image quay.io/ceph/ceph@sha256:629fda3151cdca8163e2b8b47af23c69b41b829689d9e085bbd2539c046b0eac
NAME                                HOST    PORTS        STATUS          REFRESHED   AGE  MEM USE  MEM LIM  VERSION         IMAGE ID      CONTAINER ID  
alertmanager.ceph01                 ceph01  *:9093,9094  running (117m)     8m ago    2h    28.2M        -                  ba2b418f427c  1bc8e76f28f5  
crash.ceph01                        ceph01               running (2h)       8m ago    2h    6664k        -  17.2.0          132aa60d26c9  e2e285e57c43  
crash.ceph02                        ceph02               running (2h)       8m ago    2h    8995k        -  17.2.0          132aa60d26c9  9dd853dbcd3e  
crash.ceph03                        ceph03               running (2h)       8m ago    2h    7411k        -  17.2.0          132aa60d26c9  c812c23ffa32  
crash.ceph04                        ceph04               running (2h)      11s ago    2h    6800k        -  17.2.0          132aa60d26c9  ab78e5868202  
crash.ceph05                        ceph05               running (2h)      11s ago    2h    6811k        -  17.2.0          132aa60d26c9  66c976b03bd0  
crash.ceph06                        ceph06               running (117m)    11s ago  117m    6804k        -  17.2.0          132aa60d26c9  5756e187bac2  
grafana.ceph01                      ceph01  *:3000       running (2h)       8m ago    2h    53.9M        -  8.3.5           dad864ee21e9  9097527a2e1b  
haproxy.nfs.mynfs.ceph01.umjhnp     ceph01  *:2049,9049  running (24m)      8m ago   25m    3619k        -  2.3.20-2c8082e  0ea9253dad7c  bebe6fd8c68b  
haproxy.nfs.mynfs.ceph02.qspwsy     ceph02  *:2049,9049  running (24m)      8m ago   25m    3607k        -  2.3.20-2c8082e  0ea9253dad7c  21f41b23d513  
keepalived.nfs.mynfs.ceph01.biwcel  ceph01               running (24m)      8m ago   24m     924k        -  2.0.5           073e0c3cd1b9  0e7d15c74ae2  
keepalived.nfs.mynfs.ceph02.xyezin  ceph02               running (24m)      8m ago   24m     916k        -  2.0.5           073e0c3cd1b9  811e37cdf33a  
mds.myfs.ceph01.sbhrlq              ceph01               running (18m)      8m ago   18m    23.3M        -  17.2.0          132aa60d26c9  559bedfe9c2c  
mds.myfs.ceph02.djpown              ceph02               running (18m)      8m ago   18m    14.3M        -  17.2.0          132aa60d26c9  6248522be1d8  
mds.myfs.ceph03.wperpr              ceph03               running (18m)      8m ago   18m    14.5M        -  17.2.0          132aa60d26c9  3fb421029e24  
mgr.ceph01.qeypru                   ceph01  *:9283       running (2h)       8m ago    2h     501M        -  17.2.0          132aa60d26c9  fa2b1539cbef  
mgr.ceph02.qfjxfj                   ceph02  *:8443,9283  running (2h)       8m ago    2h     408M        -  17.2.0          132aa60d26c9  3d999edd88f4  
mgr.ceph03.twzhrp                   ceph03  *:8443,9283  running (117m)     8m ago  117m     410M        -  17.2.0          132aa60d26c9  007a1a8a57c7  
mon.ceph01                          ceph01               running (2h)       8m ago    2h     108M    2048M  17.2.0          132aa60d26c9  17549b0c2b1e  
mon.ceph02                          ceph02               running (2h)       8m ago    2h    95.0M    2048M  17.2.0          132aa60d26c9  42c175c16b8a  
mon.ceph03                          ceph03               running (2h)       8m ago    2h    90.7M    2048M  17.2.0          132aa60d26c9  c5aca83b6af0  
nfs.mynfs.0.0.ceph01.biphal         ceph01  *:12049      running (25m)      8m ago   25m    82.7M        -  4.0             132aa60d26c9  ba1e0d341b02  
nfs.mynfs.1.0.ceph02.kkohpy         ceph02  *:12049      running (25m)      8m ago   25m    82.0M        -  4.0             132aa60d26c9  d58c3adf4d00  
nfs.mynfs.2.0.ceph03.jamdsw         ceph03  *:12049      running (25m)      8m ago   25m    78.3M        -  4.0             132aa60d26c9  534f5c92bcd0  
node-exporter.ceph01                ceph01  *:9100       running (2h)       8m ago    2h    23.0M        -                  1dbe0e931976  cc75349bd73a  
node-exporter.ceph02                ceph02  *:9100       running (2h)       8m ago    2h    21.4M        -                  1dbe0e931976  6790b35f1b9b  
node-exporter.ceph03                ceph03  *:9100       running (2h)       8m ago    2h    21.6M        -                  1dbe0e931976  f41f6dad69ee  
node-exporter.ceph04                ceph04  *:9100       running (2h)      11s ago    2h    23.6M        -                  1dbe0e931976  d99ad83e5a18  
node-exporter.ceph05                ceph05  *:9100       running (2h)      11s ago    2h    21.6M        -                  1dbe0e931976  fa1529534d2f  
node-exporter.ceph06                ceph06  *:9100       running (117m)    11s ago  117m    21.4M        -                  1dbe0e931976  cc7c744e3599  
osd.0                               ceph04               running (119m)    11s ago  119m    97.5M    4096M  17.2.0          132aa60d26c9  140b91aa5e9d  
osd.1                               ceph04               running (118m)    11s ago  118m     102M    4096M  17.2.0          132aa60d26c9  06d00ba57291  
osd.2                               ceph05               running (118m)    11s ago  118m     101M    4096M  17.2.0          132aa60d26c9  30bb663791d4  
osd.3                               ceph05               running (117m)    11s ago  117m    96.2M    4096M  17.2.0          132aa60d26c9  cc943773fa16  
osd.4                               ceph06               running (117m)    11s ago  117m     102M    4096M  17.2.0          132aa60d26c9  1964a02fbcd0  
osd.5                               ceph06               running (116m)    11s ago  116m    98.5M    4096M  17.2.0          132aa60d26c9  581906aa853b  
prometheus.ceph01                   ceph01  *:9095       running (24m)      8m ago    2h    99.9M        -                  514e6a882f6e  37b281620531  
rgw.myorg.ceph01.sshton             ceph01  *:80         running (58m)      8m ago   58m     124M        -  17.2.0          132aa60d26c9  815f60287dbf  
rgw.myorg.ceph02.xyivtz             ceph02  *:80         running (58m)      8m ago   58m     102M        -  17.2.0          132aa60d26c9  e792b4c6b480  
rgw.myorg.ceph03.zoxgbu             ceph03  *:80         running (58m)      8m ago   58m     101M        -  17.2.0          132aa60d26c9  4b538b7286da  

9. Show the status of a specific daemon

[root@ceph01 ~]# ceph orch ps --daemon_type osd --daemon_id 0
Inferring fsid 89f1e29e-e46c-11ec-a0d7-525400062554
Using recent ceph image quay.io/ceph/ceph@sha256:629fda3151cdca8163e2b8b47af23c69b41b829689d9e085bbd2539c046b0eac
NAME   HOST    PORTS  STATUS        REFRESHED  AGE  MEM USE  MEM LIM  VERSION  IMAGE ID      CONTAINER ID  
osd.0  ceph04         running (2h)    48s ago   2h    97.5M    4096M  17.2.0   132aa60d26c9  140b91aa5e9d  

10. Start/stop/restart a service or daemon

ceph orch {start,stop,restart} <service_name>
ceph orch daemon {start,stop,restart} <daemon_name>

11. Access the Dashboard
  When the bootstrap finishes, it prints a Dashboard URL of the form https://<host-ip>:8443 together with a username and password. If the password is lost, it can be changed by resetting the admin password.

# Enter the cephadm shell
cephadm shell

# Create the dashboard_password.yml file
touch dashboard_password.yml

# Edit the file and put the new dashboard password in it
vi dashboard_password.yml

# Run the password reset command
[root@ceph-admin ~]# ceph dashboard ac-user-set-password admin -i dashboard_password.yml

12. If a deployment runs into problems, the following command shows the details.

ceph log last cephadm

13. Service-level and daemon-level details can also be inspected directly

ceph orch ls --service_name=alertmanager --format yaml
ceph orch ps --service-name <service-name> --daemon-id <daemon-id> --format yaml
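
On the host that runs a daemon, that daemon's container logs can also be read directly with cephadm (osd.0 is just an example daemon name):

cephadm logs --name osd.0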