K8S Chapter 2 — Deploying Ceph on a k8s cluster with Rook


1. k8s deployment

Reference: Introduction to Kubernetes and building a single-master cluster

After deployment the cluster looks like this:

hostname     IPADDR
k8s-master   192.168.1.11
k8s-node01   192.168.1.12
k8s-node02   192.168.1.13
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   9d    v1.15.0
k8s-node01   Ready    <none>   9d    v1.15.0
k8s-node02   Ready    <none>   9d    v1.15.0

2. Rook environment and tool preparation (all nodes)

2.1. Make sure the clocks on all nodes are synchronized
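Any NTP tooling will do; below is a minimal sketch using chrony with the default CentOS pool servers (chrony is my assumption here, the original does not name a tool):

yum -y install chrony
systemctl enable --now chronyd
chronyc sources    # verify that the node is actually syncing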

2.2. Install git

yum install git

2.3. Install lvm2

yum -y install lvm2

2.4. Enable the rbd kernel module

# Load the rbd module
modprobe rbd
# Create a script that runs the module scripts in /etc/sysconfig/modules at boot
cat > /etc/rc.sysinit << EOF
#!/bin/bash
for file in /etc/sysconfig/modules/*.modules
do
  [ -x \$file ] && \$file
done
EOF
# Create the rbd.modules file
cat > /etc/sysconfig/modules/rbd.modules << EOF
modprobe rbd
EOF

chmod 755 /etc/sysconfig/modules/rbd.modules
lsmod |grep rbd
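On a systemd-based system such as CentOS 7, an alternative to the rc.sysinit script above (not what the original uses, just an added sketch) is to let systemd-modules-load handle it:

echo rbd > /etc/modules-load.d/rbd.conf   # loaded automatically by systemd at every boot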

2.5. Check the kernel version; if it is too old, upgrade it and reboot the system afterwards

Reference: The Community Enterprise Linux Repository
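As a rough sketch of the upgrade itself, assuming CentOS 7 and the ELRepo repository from the link above (package names, URLs, and the grub entry index may need adjusting for your release):

uname -r   # check the running kernel first
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
yum -y install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm
yum -y --enablerepo=elrepo-kernel install kernel-lt   # or kernel-ml for the mainline kernel
grub2-set-default 0   # boot the newly installed kernel by default
reboot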

2.6. Ceph needs a second disk on every node, so attach an extra disk (sdb) to each node (I am using VMware, and because of my machine's limited resources I can only run three VMs)

# In VMware: select the VM -> right-click -> Settings -> Add -> Hard Disk
# At this point lsblk does not show the new sdb disk yet
# Scan the SCSI bus to pick up the newly added SCSI device
for host in $(ls /sys/class/scsi_host) ; do echo "- - -" > /sys/class/scsi_host/$host/scan; done
# Rescan the existing SCSI devices
for scsi_device in $(ls /sys/class/scsi_device/); do echo 1 > /sys/class/scsi_device/$scsi_device/device/rescan; done
# Run lsblk again; sdb now shows up
[root@k8s-master ~]# lsblk
NAME                                                                                                 MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sdb                                                                                                    8:16   0   50G  0 disk 
sr0                                                                                                   11:0    1  4.5G  0 rom  
sda                                                                                                    8:0    0   30G  0 disk 
├─sda2                                                                                                 8:2    0   29G  0 part 
│ ├─centos-swap                                                                                      253:1    0    2G  0 lvm  
│ └─centos-root                                                                                      253:0    0   27G  0 lvm  /
└─sda1                                                                                                 8:1    0    1G  0 part /boot

The environment preparation is now complete; I recommend taking a snapshot of each VM at this point. Cleaning up the environment after hitting a problem later is a hassle, and reinstalling on top of leftovers from a previous attempt causes all kinds of bugs.
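If you do have to retry without a snapshot, the leftover state lives mainly in the Rook data directory and on the OSD disk of every node. A hedged cleanup sketch (it assumes the default dataDirHostPath of /var/lib/rook and that sdb holds nothing you still need):

rm -rf /var/lib/rook
dd if=/dev/zero of=/dev/sdb bs=1M count=100   # wipe the old Ceph headers so the disk can be reused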

3. Deploying the Ceph cluster with Rook

3.1. Pitfalls first: the usual procedure

As a beginner, I read a lot of tutorials before installing. Because of the network situation between China and the outside world, the install is full of pitfalls. The typical procedure found online is:

# Install Rook
git clone https://github.com/rook/rook.git
cd rook/cluster/examples/kubernetes/ceph/
# Prepare the operator environment
kubectl apply -f common.yaml    ## many tutorials omit this step; skipping it fails immediately because the rook-ceph namespace and resources do not exist
kubectl apply -f operator.yaml
# Check creation status
kubectl get pod -n rook-ceph -o wide
# Apply cluster.yaml; the cluster.yaml content needs to be modified first
kubectl apply -f cluster.yaml

3.2. The right way to bring up Ceph

3.2.1. Pull the required Ceph images first and re-tag them

# Pull the images on every node
docker pull registry.cn-hangzhou.aliyuncs.com/vinc-auto/ceph:v1.2.6
docker pull registry.cn-hangzhou.aliyuncs.com/vinc-auto/ceph:v14.2.8
docker pull registry.cn-hangzhou.aliyuncs.com/vinc-auto/csi-node-driver-registrar:v1.2.0
docker pull registry.cn-hangzhou.aliyuncs.com/vinc-auto/csi-provisioner:v1.4.0
docker pull registry.cn-hangzhou.aliyuncs.com/vinc-auto/csi-attacher:v1.2.0
docker pull registry.cn-hangzhou.aliyuncs.com/vinc-auto/csi-snapshotter:v1.2.2
docker pull registry.cn-hangzhou.aliyuncs.com/vinc-auto/cephcsi:v1.2.2
# Manually re-tag the images to the names referenced in the Rook manifests
docker tag registry.cn-hangzhou.aliyuncs.com/vinc-auto/csi-node-driver-registrar:v1.2.0 quay.io/k8scsi/csi-node-driver-registrar:v1.2.0
docker tag registry.cn-hangzhou.aliyuncs.com/vinc-auto/csi-provisioner:v1.4.0 quay.io/k8scsi/csi-provisioner:v1.4.0
docker tag registry.cn-hangzhou.aliyuncs.com/vinc-auto/csi-attacher:v1.2.0 quay.io/k8scsi/csi-attacher:v1.2.0
docker tag registry.cn-hangzhou.aliyuncs.com/vinc-auto/csi-snapshotter:v1.2.2 quay.io/k8scsi/csi-snapshotter:v1.2.2
docker tag registry.cn-hangzhou.aliyuncs.com/vinc-auto/cephcsi:v1.2.2 quay.io/cephcsi/cephcsi:v1.2.2
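The same pulls and re-tags can also be scripted so you do not retype them on every node; a small sketch assuming the same mirror registry and image versions as above (the MIRROR variable is my own shorthand):

MIRROR=registry.cn-hangzhou.aliyuncs.com/vinc-auto
# CSI sidecar images expected under quay.io/k8scsi
for img in csi-node-driver-registrar:v1.2.0 csi-provisioner:v1.4.0 csi-attacher:v1.2.0 csi-snapshotter:v1.2.2; do
  docker pull ${MIRROR}/${img}
  docker tag  ${MIRROR}/${img} quay.io/k8scsi/${img}
done
# cephcsi lives under a different quay.io namespace
docker pull ${MIRROR}/cephcsi:v1.2.2
docker tag  ${MIRROR}/cephcsi:v1.2.2 quay.io/cephcsi/cephcsi:v1.2.2
# The rook/ceph operator image and ceph image are pulled as-is
docker pull ${MIRROR}/ceph:v1.2.6
docker pull ${MIRROR}/ceph:v14.2.8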

3.2.2. Make the master node schedulable. When Ceph is to use all three nodes, the k8s master's taint must be dealt with, because the master is tainted by default and the mon, osd, and other pods will not be scheduled onto it.

# Remove the master taint so pods can be scheduled on it
kubectl get no -o yaml | grep taint -A 5
kubectl taint nodes --all node-role.kubernetes.io/master-
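# A quick check (added sketch, not in the original): the master should now report no taints
kubectl describe node k8s-master | grep -i taint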

3.2.3. A few Rook/Ceph concepts and parameters


Rook: a self-managing distributed storage orchestration system. It is not a storage system itself; it builds a bridge between the storage system and k8s, which makes deploying and maintaining the storage much simpler. Rook supports CSI, which handles operations such as PVC snapshots and PVC expansion.
Operator: mainly used for stateful services, or for managing fairly complex applications.
Helm: mainly used for stateless services, keeping configuration separate from the application.

Rook:

Agent: runs on every storage node and configures a FlexVolume plugin that integrates with k8s storage volumes; it mounts network storage, attaches volumes, and formats filesystems.
Discover: detects the storage devices attached to the storage nodes.

Ceph:

OSD: directly consumes a physical disk or a directory on each cluster node, and provides the cluster's replication, high availability, and fault tolerance.
MON: the cluster monitors; every node in the cluster reports to the mons, which record the cluster topology and where data is stored.
MDS: the metadata server; tracks the file hierarchy and stores Ceph filesystem metadata.
RGW: the RESTful API (object storage) gateway.
MGR: provides additional monitoring and the management/dashboard interfaces.

Official Rook documentation: https://rook.io/docs/rook

OSD configuration: introduction to the OSD configuration options

3.2.4. Install Rook (on the master)

Install a specific version of Rook; this article uses the Rook 1.2 release.

# Clone the chosen release of Rook
git clone --single-branch --branch release-1.2 https://github.com/rook/rook.git
cd rook/
git status  # check the checked-out Rook version/branch

3.2.5. Install the Ceph cluster (on the master)

common.yaml and operator.yaml do not need any changes; apply them directly.

# Enter the Ceph manifests directory
cd rook/cluster/examples/kubernetes/ceph/
# Apply common.yaml
kubectl apply -f common.yaml   # or: kubectl create -f common.yaml
# Apply operator.yaml
kubectl apply -f operator.yaml  # or: kubectl create -f operator.yaml
# Check creation status: each node should have one Running rook-discover pod, plus a single Running rook-ceph-operator pod
[root@k8s-master ceph]# kubectl -n rook-ceph get pod -o wide
NAME                                                   READY   STATUS      RESTARTS   AGE     IP             NODE         NOMINATED NODE   READINESS GATES
rook-ceph-operator-dcd49fbfd-w8wzw                     1/1     Running     25         21h     10.244.2.198   k8s-node02   <none>           <none>
rook-discover-htzcf                                    1/1     Running     3          21h     10.244.2.195   k8s-node02   <none>           <none>
rook-discover-j9gcd                                    1/1     Running     3          21h     10.244.0.48    k8s-master   <none>           <none>
rook-discover-vlcrs                                    1/1     Running     3          21h     10.244.1.52    k8s-node01   <none>           <none>

Next, modify cluster.yaml. The storage section below needs to be edited; pay attention to the YAML indentation.

# First run the following command to replace the image
sed -i 's|ceph/ceph:v14.2.8|registry.cn-hangzhou.aliyuncs.com/vinc-auto/ceph:v14.2.8|g' cluster.yaml
# Then edit, with vi, the part between the ##### markers below; the ##### lines are annotations only and must not appear in the actual file

#####################################################################
  storage: # cluster level storage configuration and selection
    useAllNodes: false
    useAllDevices: false
    #deviceFilter:
    config:
      metadataDevice:
      databaseSizeMB: "1024"
      journalSizeMB:  "1024"
######################################################################
      # The default and recommended storeType is dynamically set to bluestore for devices and filestore for directories.
      # Set the storeType explicitly only if it is required not to use the default.
      # storeType: bluestore
      # metadataDevice: "md0" # specify a non-rotational storage so ceph-volume will use it as block db device of bluestore.
      # databaseSizeMB: "1024" # uncomment if the disks are smaller than 100 GB
      # journalSizeMB: "1024"  # uncomment if the disks are 20 GB or smaller
      # osdsPerDevice: "1" # this value can be overridden at the node or device level
      # encryptedDevice: "true" # the default value for this option is "false"
# Cluster level list of directories to use for filestore-based OSD storage. If uncomment, this example would create an OSD under the dataDirHostPath.
    #directories:
    #- path: /var/lib/rook
# Individual nodes and their config can be specified as well, but 'useAllNodes' above must be set to false. Then, only the named
# nodes below will be used as storage resources.  Each node's 'name' field should match their 'kubernetes.io/hostname' label.
#######################################################################
    nodes:
    - name: "k8s-master" 
      devices:
      - name: "sdb"
      config:
        storeType: bluestore
    - name: "k8s-node01"
      devices:
      - name: "sdb"
      config:
        storeType: bluestore
    - name: "k8s-node02"
      devices:
      - name: "sdb"
      config:
        storeType: bluestore
######################################################################
#    nodes:
#    - name: "172.17.4.101"
#      directories: # specific directories to use for storage can be specified for each node
#      - path: "/rook/storage-dir"
#      resources:
#        limits:
#          cpu: "500m"
#          memory: "1024Mi"
#        requests:
#          cpu: "500m"
#          memory: "1024Mi"
#    - name: "172.17.4.201"
#      devices: # specific devices to use for storage can be specified for each node
#      - name: "sdb"
#      - name: "nvme01" # multiple osds can be created on high performance devices
#        config:
#          osdsPerDevice: "5"
#      config: # configuration can be specified at the node level which overrides the cluster level config
#        storeType: filestore
#    - name: "172.17.4.301"
#      deviceFilter: "^sd."
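If you prefer, the two toggles can also be flipped with sed before editing the nodes list (a convenience sketch equivalent to part of the vi edit above; in the stock cluster.yaml both values default to true):

sed -i 's|useAllNodes: true|useAllNodes: false|g' cluster.yaml
sed -i 's|useAllDevices: true|useAllDevices: false|g' cluster.yaml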

Apply cluster.yaml

kubectl apply -f cluster.yaml  # or: kubectl create -f cluster.yaml
# Check creation status: a series of csi pods appear and the mons go Running first; once rook-ceph-osd pods are Running on master, node01, and node02, the install has succeeded. This step takes quite a while.
[root@k8s-master ceph]# kubectl -n rook-ceph get pod -o wide
NAME                                                   READY   STATUS      RESTARTS   AGE   IP             NODE         NOMINATED NODE   READINESS GATES
csi-cephfsplugin-kblzc                                 3/3     Running     10         20h   192.168.1.11   k8s-master   <none>           <none>
csi-cephfsplugin-l8r2l                                 3/3     Running     12         20h   192.168.1.12   k8s-node01   <none>           <none>
csi-cephfsplugin-provisioner-84fcf498dd-9p4gp          4/4     Running     24         20h   10.244.2.192   k8s-node02   <none>           <none>
csi-cephfsplugin-provisioner-84fcf498dd-l8fk2          4/4     Running     56         20h   10.244.0.47    k8s-master   <none>           <none>
csi-cephfsplugin-wzwsn                                 3/3     Running     10         20h   192.168.1.13   k8s-node02   <none>           <none>
csi-rbdplugin-bd96h                                    3/3     Running     10         20h   192.168.1.13   k8s-node02   <none>           <none>
csi-rbdplugin-p7lrx                                    3/3     Running     11         20h   192.168.1.11   k8s-master   <none>           <none>
csi-rbdplugin-provisioner-7997bbf8b5-dk9l7             5/5     Running     35         20h   10.244.2.191   k8s-node02   <none>           <none>
csi-rbdplugin-provisioner-7997bbf8b5-svdsz             5/5     Running     84         20h   10.244.0.49    k8s-master   <none>           <none>
csi-rbdplugin-xpgfd                                    3/3     Running     12         20h   192.168.1.12   k8s-node01   <none>           <none>
rook-ceph-crashcollector-k8s-master-6cbf8d8db7-h29sv   1/1     Running     3          20h   10.244.0.50    k8s-master   <none>           <none>
rook-ceph-crashcollector-k8s-node01-847db48ccd-bc6s8   1/1     Running     3          20h   10.244.1.55    k8s-node01   <none>           <none>
rook-ceph-crashcollector-k8s-node02-5bf86dfddc-j4fzq   1/1     Running     3          20h   10.244.2.193   k8s-node02   <none>           <none>
rook-ceph-mgr-a-894d9d88d-n8gfv                        1/1     Running     17         19h   10.244.2.194   k8s-node02   <none>           <none>
rook-ceph-mon-a-8bfd68c9d-gq4rt                        1/1     Running     3          20h   10.244.1.56    k8s-node01   <none>           <none>
rook-ceph-mon-b-7ff677b976-6cvd7                       1/1     Running     3          20h   10.244.2.197   k8s-node02   <none>           <none>
rook-ceph-mon-c-786f6bf9df-9xr62                       1/1     Running     3          20h   10.244.0.52    k8s-master   <none>           <none>
rook-ceph-operator-dcd49fbfd-w8wzw                     1/1     Running     25         21h   10.244.2.198   k8s-node02   <none>           <none>
rook-ceph-osd-0-75bf644b84-9wgn7                       1/1     Running     3          20h   10.244.1.54    k8s-node01   <none>           <none>
rook-ceph-osd-1-6c5745cfd-h9v45                        1/1     Running     3          20h   10.244.0.51    k8s-master   <none>           <none>
rook-ceph-osd-2-7b8557677f-p7tng                       1/1     Running     3          20h   10.244.2.190   k8s-node02   <none>           <none>
rook-ceph-osd-prepare-k8s-master-gfwjl                 0/1     Completed   0          45m   10.244.0.53    k8s-master   <none>           <none>
rook-ceph-osd-prepare-k8s-node01-lzs75                 0/1     Completed   0          45m   10.244.1.58    k8s-node01   <none>           <none>
rook-ceph-osd-prepare-k8s-node02-9cp82                 0/1     Completed   0          45m   10.244.2.199   k8s-node02   <none>           <none>
rook-discover-htzcf                                    1/1     Running     3          21h   10.244.2.195   k8s-node02   <none>           <none>
rook-discover-j9gcd                                    1/1     Running     3          21h   10.244.0.48    k8s-master   <none>           <none>
rook-discover-vlcrs                                    1/1     Running     3          21h   10.244.1.52    k8s-node01   <none>           <none>
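While waiting, the rollout can also be followed live (an added convenience, not part of the original steps):

kubectl -n rook-ceph get pod -w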

Install the Ceph toolbox

# The rook-ceph toolbox is a management container for manually operating and maintaining the Ceph cluster. It provides tools for working with monitors, OSDs, placement groups, and MDS, and for maintaining and managing the cluster as a whole.
 kubectl apply -f toolbox.yaml
# Verify the toolbox installed: when rook-ceph-tools-7d7476bcc7-g78cb is in the Running state, the install succeeded.
kubectl -n rook-ceph get pod -l "app=rook-ceph-tools"
# Use the toolbox
[root@k8s-master ceph]# kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') sh
sh-4.2# 
sh-4.2# ceph status
  cluster:
    id:     6e47c296-5d48-4bd7-821f-93e9854c8f95
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum a,b,c (age 63m)
    mgr: a(active, since 62m)
    osd: 3 osds: 3 up (since 63m), 3 in (since 20h)
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   3.0 GiB used, 147 GiB / 150 GiB avail
    pgs:     
 
sh-4.2# ceph osd status
+----+------------+-------+-------+--------+---------+--------+---------+-----------+
| id |    host    |  used | avail | wr ops | wr data | rd ops | rd data |   state   |
+----+------------+-------+-------+--------+---------+--------+---------+-----------+
| 0  | k8s-node01 | 1028M | 48.9G |    0   |     0   |    0   |     0   | exists,up |
| 1  | k8s-master | 1028M | 48.9G |    0   |     0   |    0   |     0   | exists,up |
| 2  | k8s-node02 | 1028M | 48.9G |    0   |     0   |    0   |     0   | exists,up |
+----+------------+-------+-------+--------+---------+--------+---------+-----------+
sh-4.2# ceph mon status   
no valid command found; 10 closest matches:
mon versions
mon count-metadata <property>
mon metadata {<id>}
mon sync force {--yes-i-really-mean-it} {--i-know-what-i-am-doing}
mon scrub
mon compact
mon ok-to-rm <id>
mon ok-to-stop <ids> [<ids>...]
mon ok-to-add-offline
mon dump {<int[0-]>}
Error EINVAL: invalid command
sh-4.2# exit
exit
command terminated with exit code 22
[root@k8s-master ceph]# 
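The "ceph mon status" failure above is expected: that subcommand does not exist in this Ceph release. The monitor state can be read instead with, for example:

ceph mon stat
ceph mon dump
ceph quorum_status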

For more rook-ceph-tool commands, see: additional ceph-tool commands
Configure and log in to the Ceph Dashboard

# The dashboard is already enabled in the Ceph cluster config, but it must be exposed before you can log in. First, look at the dashboard service.
[root@k8s-master ~]#  kubectl -n rook-ceph get service
NAME                                     TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)             AGE
csi-cephfsplugin-metrics                 ClusterIP   10.1.150.121   <none>        8080/TCP,8081/TCP   21h
csi-rbdplugin-metrics                    ClusterIP   10.1.35.83     <none>        8080/TCP,8081/TCP   21h
rook-ceph-mgr                            ClusterIP   10.1.4.230     <none>        9283/TCP            20h
rook-ceph-mgr-dashboard                  ClusterIP   10.1.167.197   <none>        8443/TCP            20h
rook-ceph-mon-a                          ClusterIP   10.1.178.166   <none>        6789/TCP,3300/TCP   20h
rook-ceph-mon-b                          ClusterIP   10.1.242.52    <none>        6789/TCP,3300/TCP   20h
rook-ceph-mon-c                          ClusterIP   10.1.251.216   <none>        6789/TCP,3300/TCP   20
# Change the service type from ClusterIP to NodePort
kubectl edit service rook-ceph-mgr-dashboard -n rook-ceph
# Editing works like vim, but in my case saving the change failed.
spec:
  NodePort: 10.1.167.197
  ports:
  - name: https-dashboard
    port: 8443
    protocol: TCP
    targetPort: 8443
  selector:
    app: rook-ceph-mgr
    rook_cluster: rook-ceph
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
~
"/tmp/kubectl-edit-406pn.yaml" 39L, 1175C written
A copy of your changes has been stored to "/tmp/kubectl-edit-406pn.yaml"
error: Edit cancelled, no valid changes were saved.
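# (Added note: the edit above appears to have been rejected because it adds a NodePort: field that is not part of
#  the Service spec while type: is left as ClusterIP; changing the line "type: ClusterIP" to "type: NodePort"
#  would be the valid edit.)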
# Take a different approach: install the separate https dashboard service instead (no modification needed, apply it directly)
kubectl create -f dashboard-external-https.yaml
# Check the services again; the exposed NodePort is 30662
[root@k8s-master ~]# kubectl -n rook-ceph get service
NAME                                     TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)             AGE
csi-cephfsplugin-metrics                 ClusterIP   10.1.150.121   <none>        8080/TCP,8081/TCP   21h
csi-rbdplugin-metrics                    ClusterIP   10.1.35.83     <none>        8080/TCP,8081/TCP   21h
rook-ceph-mgr                            ClusterIP   10.1.4.230     <none>        9283/TCP            20h
rook-ceph-mgr-dashboard                  ClusterIP   10.1.167.197   <none>        8443/TCP            20h
rook-ceph-mgr-dashboard-external-https   NodePort    10.1.98.152    <none>        8443:30662/TCP      17h
rook-ceph-mon-a                          ClusterIP   10.1.178.166   <none>        6789/TCP,3300/TCP   21h
rook-ceph-mon-b                          ClusterIP   10.1.242.52    <none>        6789/TCP,3300/TCP   21h
rook-ceph-mon-c                          ClusterIP   10.1.251.216   <none>        6789/TCP,3300/TCP   21h
# Retrieve the login password. Access the corresponding port on any cluster node over https; the login username is admin
Ciphertext=$(kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o jsonpath="{['data']['password']}")
Pass=$(echo ${Ciphertext}|base64 --decode)
echo ${Pass}

Open in a browser: https://192.168.1.12:30662

At this point, installing Ceph on k8s via Rook is complete.

Troubleshooting commands

# If something goes wrong during installation, use the following commands to find the cause
kubectl describe pod <pod-name> -n rook-ceph   ## e.g. rook-ceph-operator-dcd49fbfd-w8wzw
kubectl logs <pod-name> -n rook-ceph           ## e.g. csi-rbdplugin-provisioner-7997bbf8b5-svdsz
kubectl get crd | grep ceph                    ## list the Ceph custom resource definitions
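For OSD provisioning problems specifically, the osd-prepare job logs are usually the most informative; a sketch using one of the pod names from the output earlier:

kubectl -n rook-ceph logs rook-ceph-osd-prepare-k8s-master-gfwjl --all-containers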

Reference article: Deploying a Ceph storage cluster with Rook

Reposted from https://blog.csdn.net/qq_41798254/article/details/108976415
