Ceph on Kubernetes (k8s)

Ceph installation
Reference: https://rook.io/docs/rook/v1.8/quickstart.html

Rich feature set
1. Supports three storage interfaces: block storage, file storage, and object storage.
2. Provides additional custom interfaces, with drivers for multiple languages.

Basic concepts
Ceph OSD
The Object Storage Device (OSD) is the core component of Ceph. It stores data and handles replication, recovery, backfill and rebalancing, and it reports monitoring information to the Ceph Monitors by checking the heartbeats of other OSD daemons. Each disk on each machine in the cluster runs one OSD process.

Every OSD has its own journal. In production the OSD journal is usually placed on an SSD while the data objects sit on spinning disks, which improves performance. OSDs use a journal for two reasons: speed and consistency.
1. Speed: the journal lets the OSD commit small writes quickly. Ceph writes small, random I/O to the journal sequentially, which gives the backing filesystem a chance to coalesce the writes and ultimately improves throughput under load. A journaled OSD therefore shows excellent burst write performance even though the data has not yet been written to its final location on the OSD, because the filesystem has only captured it in the journal.
2. Consistency: the Ceph OSD daemon needs a filesystem interface that guarantees atomic operations. The OSD writes a description of each operation to the journal and then applies the operation to the filesystem, which requires atomically updating an object (for example placement group metadata). At some interval between filestore min sync interval and filestore max sync interval, the OSD stops writes and syncs the journal to the filesystem, which lets it trim completed operations from the journal and reuse the space. On failure, the OSD replays the journal from the last sync point.
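
A minimal way to look at these settings, assuming a filestore-backed OSD and access to the ceph CLI (for example the Rook toolbox installed later); the cluster built below uses bluestore, where these journal options do not apply:

ceph config get osd filestore_min_sync_interval
ceph config get osd filestore_max_sync_interval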

Monitor
A Ceph Monitor maintains the maps that describe the cluster state: the monitor map, OSD map, placement group (PG) map and CRUSH map. Ceph keeps a history (called epochs) of every state change on the Monitors, OSDs and PGs. Monitors support highly available deployments: several Monitors can run together as a cluster, so the loss of a single monitor does not take the cluster down. Storage cluster clients ask a Ceph Monitor for the latest copy of the cluster map, and the monitors agree on the current cluster state through the Paxos algorithm.

MDS
The Ceph Metadata Server stores metadata for the Ceph file system (Ceph block devices and Ceph object storage do not use an MDS). The Metadata Server lets POSIX file system users run basic commands such as ls and find without putting load on the Ceph storage cluster.

CRUSH
CRUSH is the data distribution algorithm Ceph uses. Similar to consistent hashing, it places data where it is expected to be. Both Ceph clients and OSD daemons compute object locations with CRUSH instead of relying on a centralized lookup table. Compared with older approaches, CRUSH manages data placement better: it simply spreads the work across all clients and OSDs in the cluster, which gives it enormous scalability. CRUSH uses intelligent replication to provide resilience and is well suited to hyperscale storage.

Object
The lowest-level storage unit in Ceph is the Object; each Object contains metadata and the raw data.

PG
PG stands for Placement Group. It is a logical concept: one PG contains many Objects. The PG layer exists to make data placement and data lookup more manageable.
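
A commonly cited rule of thumb is roughly (number of OSDs x 100) / replica count placement groups per pool, rounded to a power of two; with the pg_autoscaler mgr module enabled in the cluster below, Ceph adjusts pg_num automatically. Inspecting or changing it by hand from the toolbox looks like this (replicapool is the pool created later):

ceph osd pool get replicapool pg_num       # current PG count of the pool
ceph osd pool set replicapool pg_num 64    # manual override (power of two)
ceph osd pool autoscale-status             # what the autoscaler recommends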

Storage pools
Ceph supports the concept of a pool, a logical partition for storing objects; once the cluster is deployed there is a default rbd pool. A pool can define its PG count, CRUSH ruleset, access control, and so on.
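
For illustration only, a pool can also be created and inspected by hand (the pools used later in this walkthrough are created by Rook CRDs; test-pool is a throwaway name):

ceph osd pool create test-pool 32 32            # 32 PGs / 32 PGPs
ceph osd pool set test-pool size 3              # 3 replicas
ceph osd pool application enable test-pool rbd  # tag the pool for rbd use
ceph osd pool ls detail                         # shows PG count, CRUSH rule, flags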

RADOS
RADOS (Reliable Autonomic Distributed Object Store) is the core of a Ceph cluster; it is used to implement data placement, failover and other cluster operations.

Librados
Librados is the interface library for RADOS. Because RADOS is a protocol that is hard to talk to directly, the higher layers RBD, RGW and CephFS all go through librados. Ceph clients access the cluster through the librados API, which covers both cluster-wide operations and per-object access. The API has bindings for common languages such as C, C++ and Python and talks to the Ceph cluster over the network. At the application level you can call this API to integrate Ceph storage into your own programs, or to watch cluster state from a monitoring program.
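
The rados command-line tool (available in the toolbox) exercises the same object interface directly; a small sketch, assuming a pool named test-pool exists:

echo hello > /tmp/hello.txt
rados -p test-pool put obj1 /tmp/hello.txt   # store an object
rados -p test-pool ls                        # list objects in the pool
rados -p test-pool get obj1 /tmp/out.txt     # read it back
rados -p test-pool rm obj1                   # remove it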

RBD
RBD (RADOS block device) is the block storage service Ceph exposes. Ceph block devices are thin-provisioned, resizable, and stripe their data across multiple OSDs in the cluster. They build on RADOS capabilities such as snapshots, replication and consistency. Ceph's RADOS block devices talk to the OSDs either through the kernel module or through the librbd library.
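
For illustration, an RBD image can also be created and inspected by hand from the toolbox (in this walkthrough images are normally created automatically by the CSI driver; replicapool is the pool created later):

rbd create replicapool/test-img --size 1024   # 1 GiB image
rbd info replicapool/test-img
rbd ls replicapool
rbd rm replicapool/test-img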

RGW
RGW (RADOS gateway) is the object storage service Ceph exposes; its interface is compatible with S3 and Swift.

CephFS
CephFS (Ceph File System) is the file system service Ceph exposes.


Step 1: Add an unformatted disk to every k8s node. For this test, each of the three nodes gets a new 20 GB disk.
# Rescan the SCSI bus to detect the newly added disk
for host in $(ls /sys/class/scsi_host) ; do echo "- - -">/sys/class/scsi_host/$host/scan; done 

# Verify the new disk sdb is visible
[root@master01 ceph]# lsblk 
NAME                              MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sdb                                8:16   0   20G  0   disk  
sr0                                11:0   1  4.5G  0   rom  
sda                                8:0    0  100G  0   disk 
├─sda2                             8:2    0   99G  0   part 
│ ├─centos-swap                    253:1  0    2G  0   lvm  
│ ├─centos-home                    253:2  0   47G  0   lvm   /home
│ └─centos-root                    253:0  0   50G  0   lvm   /
└─sda1                             8:1    0    1G  0   part  /boot
 

1. Zero out the new disk so leftover data does not cause problems when the volumes are provisioned later
[root@master01 kubernetes]# dd if=/dev/zero of=/dev/sdb bs=1M status=progress

2. Download Rook
https://github.com/rook/rook/tree/v1.5.5

3. Unzip and change into the Ceph examples directory
[root@master01 ~]# cd rook/
[root@master01 rook]# unzip rook-1.5.5.zip
[root@master01 rook]# cd rook-1.5.5/cluster/examples/kubernetes/ceph/


# Load the rbd kernel module on all nodes
[root@master01 ~]# modprobe rbd
[root@master01 ~]# cat > /etc/sysconfig/modules/rbd.modules << EOF
modprobe rbd
EOF

# Make the module load persistent and verify it is loaded
[root@master01 ~]# chmod 755 /etc/sysconfig/modules/rbd.modules
[root@master01 ~]# source /etc/sysconfig/modules/rbd.modules
[root@master01 ~]# lsmod |grep rbd

# Label the nodes that will run the osd, mon and mgr components
[root@master01 ceph]# kubectl label nodes {master01,node01,node02} ceph-osd=enabled
[root@master01 ceph]# kubectl label nodes {master01,node01,node02} ceph-mon=enabled
[root@master01 ceph]# kubectl label nodes {node01,node02} ceph-mgr=enabled   # only one mgr is used, but labeling two nodes means the mgr pod can still be scheduled if node01 goes down
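
# Double-check the labels before continuing
[root@master01 ceph]# kubectl get nodes -l ceph-osd=enabled
[root@master01 ceph]# kubectl get nodes -l ceph-mon=enabled
[root@master01 ceph]# kubectl get nodes -l ceph-mgr=enabled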

# Create the RBAC objects: serviceaccounts, clusterroles, clusterrolebindings, etc.
[root@master01 ceph]# kubectl apply -f common.yaml 

# Check what was created
[root@master01 ceph]# kubectl get sa -n rook-ceph

[root@master01 ceph]# kubectl get clusterrole -n rook-ceph

[root@master01 ceph]# kubectl get clusterrolebinding -n rook-ceph

[root@master01 ceph]# kubectl get role -n rook-ceph

# Create the custom resource definitions
[root@master01 ceph]# kubectl apply -f crds.yaml 

# Image download script: retags and pushes the images to the private registry harbor.od.com
[root@master01 ceph]# vi ceph-images-download.sh
#!/bin/bash

# The image list below must match what operator.yaml expects
image_list=(
    ceph:v15.2.8
    rook-ceph:v1.5.5
    cephcsi:v3.2.0
    csi-node-driver-registrar:v2.0.1
    csi-resizer:v1.0.0
    csi-provisioner:v2.0.0
    csi-snapshotter:v3.0.0
    csi-attacher:v3.0.0
)

aliyuncs="registry.aliyuncs.com/it00021hot"
google_gcr="k8s.gcr.io/sig-storage"
harbor="harbor.od.com/ceph"
for image in ${image_list[*]}
do
   docker image pull ${aliyuncs}/${image}

   # If you retag to the original k8s.gcr.io names instead (next line), operator.yaml does not need to be modified
   # docker image tag ${aliyuncs}/${image} ${google_gcr}/${image}

   docker image tag ${aliyuncs}/${image} ${harbor}/${image}
   docker image rm ${aliyuncs}/${image}
   docker push ${harbor}/${image}
   echo "${aliyuncs}/${image} ${google_gcr}/${image} downloaded"
done

# Run the download script
[root@yunwei ~]# sh ceph-images-download.sh 
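
# A quick sanity check that the retagged images are present locally (they should also now appear in the harbor.od.com/ceph project)
[root@yunwei ~]# docker images | grep harbor.od.com/ceph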

# Edit operator.yaml; the two changes below are the important ones
[root@master01 ceph]# vi operator.yaml 
 spec:
      serviceAccountName: rook-ceph-system
      containers:
      - name: rook-ceph-operator
        image: harbor.od.com/ceph/rook-ceph:v1.5.5  # change this to the private registry image as well
        args: ["ceph", "operator"]
        volumeMounts:
        - mountPath: /var/lib/rook
          name: rook-config
        - mountPath: /etc/ceph
          name: default-config-dir
.............................................................................
...............................................................................          
  # ROOK_CSI_PROVISIONER_IMAGE: "k8s.gcr.io/sig-storage/csi-provisioner:v2.0.0"
  # ROOK_CSI_SNAPSHOTTER_IMAGE: "k8s.gcr.io/sig-storage/csi-snapshotter:v3.0.0"
  # ROOK_CSI_ATTACHER_IMAGE: "k8s.gcr.io/sig-storage/csi-attacher:v3.0.0"

# Add the following entries (watch the indentation)
  ROOK_CSI_CEPH_IMAGE: "harbor.od.com/ceph/cephcsi:v3.2.0"
  ROOK_CSI_REGISTRAR_IMAGE: "harbor.od.com/ceph/csi-node-driver-registrar:v2.0.1"
  ROOK_CSI_RESIZER_IMAGE: "harbor.od.com/ceph/csi-resizer:v1.0.0"
  ROOK_CSI_PROVISIONER_IMAGE: "harbor.od.com/ceph/csi-provisioner:v2.0.0"
  ROOK_CSI_SNAPSHOTTER_IMAGE: "harbor.od.com/ceph/csi-snapshotter:v3.0.0"
  ROOK_CSI_ATTACHER_IMAGE: "harbor.od.com/ceph/csi-attacher:v3.0.0"


# Apply operator.yaml
[root@master01 ceph]# kubectl apply -f operator.yaml 
configmap/rook-ceph-operator-config created
deployment.apps/rook-ceph-operator created

[root@master01 ceph]# kubectl get pods -n rook-ceph
NAME                                  READY   STATUS    RESTARTS   AGE
rook-ceph-operator-64cbcf46cb-5hd9f   1/1     Running   0          12s

[root@master01 ceph]# kubectl get deployment -n rook-ceph
NAME                 READY   UP-TO-DATE   AVAILABLE   AGE
rook-ceph-operator   1/1     1            1           27s

# Check the operator pod
[root@master01 ceph]# kubectl get pods -n rook-ceph
NAME                                  READY   STATUS            RESTARTS   AGE
rook-ceph-operator-64cbcf46cb-5hd9f   1/1     Running           0          13m

# Tail the operator log to watch the Ceph initialization
[root@master01 ceph]# kubectl logs -f  rook-ceph-operator-64cbcf46cb-5hd9f -n rook-ceph

# With only three nodes, label them all; cluster.yaml below uses node affinity to schedule the pods onto them
[root@master01 ceph]# kubectl label nodes {master01,node02,node01} ceph-mon=enabled

[root@master01 ceph]# kubectl label nodes {master01,node02,node01} ceph-osd=enabled

[root@master01 ceph]# kubectl label nodes master01 ceph-mgr=enabled


# Create the Ceph cluster; an annotated example of the key settings follows
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
# cluster name; only one cluster is supported per namespace
  name: rook-ceph
  namespace: rook-ceph
spec:
# Ceph release
# v13 is mimic, v14 is nautilus, and v15 is octopus.
  cephVersion:
# override the Ceph image (a local mirror speeds up deployment)
    image: harbor.foxchan.com/google_containers/ceph/ceph:v15.2.5
# whether to allow unsupported Ceph versions
    allowUnsupported: false
# host path where Rook keeps its data on each node
  dataDirHostPath: /data/rook
# whether to continue an upgrade if the checks fail
  skipUpgradeChecks: false
# since Rook 1.5 the number of mons must be odd
  mon:
    count: 3
# whether multiple mon pods may run on a single node
    allowMultiplePerNode: false
  mgr:
    modules:
    - name: pg_autoscaler
      enabled: true
# enable the dashboard on port 7000 with SSL disabled; the default HTTPS setup also works, this just keeps the ingress configuration simple
  dashboard:
    enabled: true
    port: 7000
    ssl: false
# enable PrometheusRule
  monitoring:
    enabled: true
# namespace where the PrometheusRule is deployed; defaults to the namespace of this CR
    rulesNamespace: rook-ceph
# use host networking to work around a bug where CephFS PVCs cannot be used
  network:
    provider: host
# enable the crash collector; a crash collector pod is created on every node that runs a Ceph daemon
  crashCollector:
    disable: false
# node affinity: install each component on the nodes labeled for it
  placement:
    mon:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: ceph-mon
              operator: In
              values:
              - enabled
        
    osd:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: ceph-osd
              operator: In
              values:
              - enabled

    mgr:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: ceph-mgr
              operator: In
              values:
              - enabled 
# Storage settings. The defaults are true, which would wipe and initialize every device on every node in the cluster.
  storage: # cluster level storage configuration and selection
    useAllNodes: false     # do not use every node
    useAllDevices: false   # do not use every device
    nodes:
    - name: "192.168.1.162"  #指定存储节点主机
      devices:
      - name: "nvme0n1p1"    #指定磁盘为nvme0n1p1
    - name: "192.168.1.163"
      devices:
      - name: "nvme0n1p1"
    - name: "192.168.1.164"
      devices:
      - name: "nvme0n1p1"
    - name: "192.168.1.213"
      devices:
      - name: "nvme0n1p1"


# Edit the actual cluster.yaml (watch the indentation)
[root@master01 ceph]# cat cluster.yaml|grep -v '#'
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
spec:
  cephVersion:
    image: ceph/ceph:v15.2.8      # Ceph image
    allowUnsupported: false
  dataDirHostPath: /var/lib/rook   # host path where the Rook cluster configuration is stored
  skipUpgradeChecks: false
  continueUpgradeAfterChecksEvenIfNotHealthy: false
  mon:
    count: 3
    allowMultiplePerNode: false
  mgr:
    modules:
    - name: pg_autoscaler
      enabled: true
  dashboard:
    enabled: true
    ssl: true
  monitoring:
    enabled: false
    rulesNamespace: rook-ceph
  network:
  crashCollector:
    disable: false
  cleanupPolicy:
    confirmation: ""
    sanitizeDisks:
      method: quick
      dataSource: zero
      iteration: 1
    allowUninstallWithVolumes: false
  placement:
    mon:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: ceph-mon
              operator: In
              values:
              - enabled
      tolerations:              # tolerate the master taint so pods can be scheduled on the master
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      - key: ceph-mon     # run on nodes labeled ceph-mon
        operator: Exists
        
    osd:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: ceph-osd
              operator: In
              values:
              - enabled
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      - key: ceph-osd
        operator: Exists
        
    mgr:
      nodeAffinity:            # node affinity
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: ceph-mgr
              operator: In
              values:
              - enabled
      tolerations:              # tolerate the master taint so pods can be scheduled on the master
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      - key: ceph-mgr
        operator: Exists
        
  annotations:
  labels:
  resources:
  removeOSDsIfOutAndSafeToRemove: false
  storage:
    useAllNodes: false        # do not use every node
    useAllDevices: false      # do not use every device
    config:
      osdsPerDevice: "1"
    nodes:
    - name: "master01"    #指定节点主机,这个名称是根据节点标签labels的 kubernetes.io/hostname=master01
      config:
        storeType: bluestore                               # use the bluestore backend (raw device)
      devices:                                             # use disk sdb
      - name: "sdb"    
    - name: "node01"           
      config:
        storeType: bluestore   # use the bluestore backend (raw device)
      devices: 
      - name: "sdb"            #指定磁盘名称
    - name: "node02"
      config:
        storeType: bluestore
      devices:
      - name: "sdb"

  disruptionManagement:
    managePodBudgets: false
    osdMaintenanceTimeout: 30
    pgHealthCheckTimeout: 0
    manageMachineDisruptionBudgets: false
    machineDisruptionBudgetNamespace: openshift-machine-api

  healthCheck:
    daemonHealth:
      mon:
        disabled: false
        interval: 45s
      osd:
        disabled: false
        interval: 60s
      status:
        disabled: false
        interval: 60s
    livenessProbe:
      mon:
        disabled: false
      mgr:
        disabled: false
      osd:
        disabled: false
              
              
# Create the Ceph cluster
[root@master01 ceph]# kubectl apply -f cluster.yaml 

# csi-rbdplugin and csi-cephfsplugin are DaemonSets that should run on every node, but each has only two pods because they cannot be scheduled onto the master; the changes further below allow them to run on the master as well.
[root@master01 ceph]# kubectl get pods -n rook-ceph -o wide
NAME                                                 READY   STATUS      RESTARTS   AGE     IP            NODE       NOMINATED NODE   READINESS GATES
csi-cephfsplugin-8v7hs                               3/3     Running     0          40m     10.4.7.51     node02     <none>           <none>
csi-cephfsplugin-cfq78                               3/3     Running     0          40m     10.4.7.50     node01     <none>           <none>
csi-cephfsplugin-provisioner-745fcb7868-mqrhd        6/6     Running     0          40m     10.244.2.14   node02     <none>           <none>
csi-cephfsplugin-provisioner-745fcb7868-x8sww        6/6     Running     0          40m     10.244.1.11   node01     <none>           <none>
csi-rbdplugin-bwjgr                                  3/3     Running     0          41m     10.4.7.51     node02     <none>           <none>
csi-rbdplugin-gg28f                                  3/3     Running     0          41m     10.4.7.50     node01     <none>           <none>
csi-rbdplugin-provisioner-7fdb4675dc-7m7gf           6/6     Running     0          41m     10.244.2.13   node02     <none>           <none>
csi-rbdplugin-provisioner-7fdb4675dc-krt8d           6/6     Running     0          41m     10.244.1.10   node01     <none>           <none>
rook-ceph-crashcollector-master01-5f7dbf46fc-lfzkf   1/1     Running     0          9m37s   10.244.0.18   master01   <none>           <none>
rook-ceph-crashcollector-node01-7ffdfd64c8-2cksj     1/1     Running     0          11m     10.244.1.27   node01     <none>           <none>
rook-ceph-crashcollector-node02-8889897f4-x6lcp      1/1     Running     0          11m     10.244.2.26   node02     <none>           <none>
rook-ceph-mgr-a-5dcb79d55d-zzpms                     1/1     Running     0          12m     10.244.1.23   node01     <none>           <none>
rook-ceph-mon-a-68fb45ddb-9ltqt                      1/1     Running     0          13m     10.244.2.22   node02     <none>           <none>
rook-ceph-mon-b-749978f45f-nhxqv                     1/1     Running     0          13m     10.244.0.15   master01   <none>           <none>
rook-ceph-mon-c-59fff7cfc5-tn6gt                     1/1     Running     0          13m     10.244.1.21   node01     <none>           <none>
rook-ceph-operator-d459696cf-x8zq9                   1/1     Running     0          64m     10.244.1.9    node01     <none>           <none>
rook-ceph-osd-0-68585fc65-2xvj2                      1/1     Running     0          11m     10.244.2.25   node02     <none>           <none>
rook-ceph-osd-1-6b65c4f6c4-s2rrq                     1/1     Running     0          11m     10.244.1.26   node01     <none>           <none>
rook-ceph-osd-2-78788d78b6-vjsgb                     1/1     Running     0          9m37s   10.244.0.17   master01   <none>           <none>
rook-ceph-osd-prepare-master01-7wm87                 0/1     Completed   0          12m     10.244.0.16   master01   <none>           <none>
rook-ceph-osd-prepare-node01-mbjbj                   0/1     Completed   0          12m     10.244.1.25   node01     <none>           <none>
rook-ceph-osd-prepare-node02-gfkpq                   0/1     Completed   0          12m     10.244.2.24   node02     <none>           <none>
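
# The overall provisioning progress can also be followed through the CephCluster resource itself (output omitted)
[root@master01 ceph]# kubectl -n rook-ceph get cephcluster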


2. Check which controllers manage csi-rbdplugin, csi-cephfsplugin, csi-rbdplugin-provisioner and csi-cephfsplugin-provisioner

[root@master01 ~]# kubectl get deployment,daemonset,statefulset -n rook-ceph  | grep csi-rbdplugin
deployment.apps/csi-rbdplugin-provisioner           2/2     2            2           23h
daemonset.apps/csi-rbdplugin      3         3         2       1            2           <none>          23h

[root@master01 ~]# kubectl get deployment,daemonset,statefulset -n rook-ceph  | grep csi-cephfsplugin
deployment.apps/csi-cephfsplugin-provisioner        2/2     2            2           23h
daemonset.apps/csi-cephfsplugin   3         3         2       3            2           <none>          23h

Note:
   The check shows that csi-rbdplugin-provisioner and csi-cephfsplugin-provisioner are Deployments,
while csi-rbdplugin and csi-cephfsplugin are DaemonSets. A DaemonSet guarantees one replica on every
schedulable node, which is a better fit than a Deployment for a per-node CSI driver, so csi-rbdplugin
is most likely the per-node CSI driver pod. Let's schedule it onto the master node first and see.

1. Edit the csi-rbdplugin DaemonSet so it can be scheduled onto the master node.
[root@master01 ~]# kubectl edit daemonset csi-rbdplugin -n rook-ceph
...
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: csi-rbdplugin
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: csi-rbdplugin
        contains: csi-rbdplugin-metrics
    spec:
      affinity:
        nodeAffinity: {}
      tolerations:     # add these three lines so the pod can also run on the master
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - args:
        - --v=0
        - --csi-address=/csi/csi.sock

2. Edit csi-cephfsplugin the same way
[root@master01 ~]# kubectl edit daemonset csi-cephfsplugin -n rook-ceph
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: csi-cephfsplugin
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: csi-cephfsplugin
        contains: csi-cephfsplugin-metrics
    spec:
      affinity:
        nodeAffinity: {}
      tolerations:     # add these three lines so the pod can also run on the master
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - args:
        - --v=0
        - --csi-address=/csi/csi.sock
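
# After saving each edit, the rollout can be watched until the third pod is up on the master (output omitted)
[root@master01 ~]# kubectl -n rook-ceph rollout status daemonset csi-rbdplugin
[root@master01 ~]# kubectl -n rook-ceph rollout status daemonset csi-cephfsplugin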

# After the change, csi-rbdplugin and csi-cephfsplugin each have three pods.
# csi-cephfsplugin-provisioner and csi-rbdplugin-provisioner are Deployments defined with 2 replicas, so 2 pods is correct for them.

[root@master01 template]# kubectl get pods -n rook-ceph -o wide
NAME                                                 READY   STATUS      RESTARTS   AGE    IP            NODE       
csi-cephfsplugin-5mll4                               3/3     Running     0          25m    10.4.7.50     node01    
csi-cephfsplugin-bz6v6                               3/3     Running     0          25m    10.4.7.49     master01  
csi-cephfsplugin-hgx4v                               3/3     Running     0          25m    10.4.7.51     node02     
csi-cephfsplugin-provisioner-745fcb7868-mqrhd        6/6     Running     37         24h    10.244.2.38   node02    
csi-cephfsplugin-provisioner-745fcb7868-x8sww        6/6     Running     49         24h    10.244.1.38   node01    
csi-rbdplugin-4mm7q                                  3/3     Running     0          26m    10.4.7.51     node02    
csi-rbdplugin-9bv9k                                  3/3     Running     0          56m    10.4.7.49     master01   
csi-rbdplugin-f5wrx                                  3/3     Running     0          26m    10.4.7.50     node01    
csi-rbdplugin-provisioner-7fdb4675dc-7m7gf           6/6     Running     42         24h    10.244.2.40   node02   
csi-rbdplugin-provisioner-7fdb4675dc-krt8d           6/6     Running     43         24h    10.244.1.34   node01    
rook-ceph-crashcollector-master01-5f7dbf46fc-lfzkf   1/1     Running     2          23h    10.244.0.29   master01 
rook-ceph-crashcollector-node01-7ffdfd64c8-2cksj     1/1     Running     1          23h    10.244.1.39   node01    
rook-ceph-crashcollector-node02-8889897f4-x6lcp      1/1     Running     1          23h    10.244.2.43   node02     
rook-ceph-mgr-a-5dcb79d55d-zzpms                     1/1     Running     1          23h    10.244.1.36   node01    
rook-ceph-mon-a-68fb45ddb-9ltqt                      1/1     Running     1          23h    10.244.2.39   node02     
rook-ceph-mon-b-749978f45f-nhxqv                     1/1     Running     2          23h    10.244.0.30   master01   
rook-ceph-mon-c-59fff7cfc5-tn6gt                     1/1     Running     1          23h    10.244.1.35   node01    
rook-ceph-operator-d459696cf-x8zq9                   1/1     Running     1          24h    10.244.1.40   node01    
rook-ceph-osd-0-68585fc65-2xvj2                      1/1     Running     1          23h    10.244.2.42   node02     
rook-ceph-osd-1-6b65c4f6c4-s2rrq                     1/1     Running     1          23h    10.244.1.37   node01    
rook-ceph-osd-2-78788d78b6-vjsgb                     1/1     Running     3          23h    10.244.0.34   master01   
rook-ceph-osd-prepare-master01-kl2gp                 0/1     Completed   0          101m   10.244.0.36   master01   
rook-ceph-osd-prepare-node01-mxnn6                   0/1     Completed   0          101m   10.244.1.41   node01    
rook-ceph-osd-prepare-node02-4sm6n                   0/1     Completed   0          101m   10.244.2.44   node02    

# Check whether the disk on each host has been claimed; if not, re-check the cluster.yaml configuration
[root@master01 static]# lsblk -f
NAME                                                                                    FSTYPE      LABEL           UUID                                   MOUNTPOINT
sdb                                                                                     LVM2_member                 3PQAnX-rehr-fxu0-bIMJ-fYWS-Zcaz-ublUvk 
└─ceph--4ce439e6--f251--4fc3--9e75--375ed61d6d8b-osd--block--4e99fb5a--9c32--482d--897f--6b689f60c946
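
# The same thing is visible through LVM, since the OSD wraps the raw disk in a ceph- volume group (run on the node; output omitted)
[root@master01 static]# pvs
[root@master01 static]# lvs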
                                                               
# Check the rook-ceph-operator pod log
[root@master01 ceph]# kubectl logs -f rook-ceph-operator-d459696cf-x8zq9 -n rook-ceph

# Check the OSD prepare job log
[root@master01 ceph]# kubectl logs -f rook-ceph-osd-prepare-master01-7wm87 -n rook-ceph provision

# List the cluster services; rook-ceph-mgr-dashboard is the one exposed later through an ingress so it can be reached by domain name
[root@master01 ceph]# kubectl get svc -n rook-ceph
NAME                       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
csi-cephfsplugin-metrics   ClusterIP   10.103.24.98    <none>        8080/TCP,8081/TCP   93m
csi-rbdplugin-metrics      ClusterIP   10.108.163.76   <none>        8080/TCP,8081/TCP   93m
rook-ceph-mgr              ClusterIP   10.97.146.102   <none>        9283/TCP            90m
rook-ceph-mgr-dashboard    ClusterIP   10.106.30.224   <none>        8443/TCP            90m  
rook-ceph-mon-a            ClusterIP   10.107.214.89   <none>        6789/TCP,3300/TCP   93m
rook-ceph-mon-b            ClusterIP   10.98.9.202     <none>        6789/TCP,3300/TCP   92m
rook-ceph-mon-c            ClusterIP   10.97.87.235    <none>        6789/TCP,3300/TCP   91m


11. Install the toolbox
The toolbox is a Rook utility container; the commands inside it are used to debug and test Rook, and ad-hoc Ceph operations are generally run from this container.
[root@master01 ceph]# kubectl apply -f toolbox.yaml 

# Verify the toolbox pod is running
[root@master01 ceph]#  kubectl -n rook-ceph get pod -l "app=rook-ceph-tools"
NAME                               READY   STATUS    RESTARTS   AGE
rook-ceph-tools-58d7dbc69f-d5kz4   1/1     Running   0          6m41s

# Exec into the rook-ceph-tools container
[root@master01 ceph]#  kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.
sh-4.4# 
sh-4.4# ceph status  # check the cluster status
  cluster:
    id:     4dc66671-6fe6-4dee-9a09-cdb24bc481e9
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum a,b,c (age 72m)
    mgr: a(active, since 71m)
    osd: 3 osds: 3 up (since 69m), 3 in (since 73m)
 
  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   3.0 GiB used, 57 GiB / 60 GiB avail
    pgs:     1 active+clean
 
sh-4.4# 
sh-4.4# ceph osd status  # check OSD status
ID  HOST       USED  AVAIL  WR OPS  WR DATA  RD OPS  RD DATA  STATE      
 0  node02    1030M  18.9G      0        0       0        0   exists,up  
 1  node01    1030M  18.9G      0        0       0        0   exists,up  
 2  master01  1030M  18.9G      0        0       0        0   exists,up  
sh-4.4# 
sh-4.4# rados df   # check cluster storage usage
POOL_NAME              USED  OBJECTS  CLONES  COPIES  MISSING_ON_PRIMARY  UNFOUND  DEGRADED  RD_OPS   RD  WR_OPS   WR  USED COMPR  UNDER COMPR
device_health_metrics   0 B        0       0       0                   0        0         0       0  0 B       0  0 B         0 B          0 B

total_objects    0
total_used       3.0 GiB
total_avail      57 GiB
total_space      60 GiB

sh-4.4# ceph auth ls   # authentication entries for the cluster's daemons and clients
installed auth entries:

osd.0
        key: AQDML0FikQLhCRAAUw9YbHN0X22HktxtC4k0lA==
        caps: [mgr] allow profile osd
        caps: [mon] allow profile osd
        caps: [osd] allow *
osd.1
        key: AQDTL0FiQ1WQNRAAdo77BUT8aV8KjIHz3SCPhA==
        caps: [mgr] allow profile osd
        caps: [mon] allow profile osd
        caps: [osd] allow *
osd.2
        key: AQC7MkFihCAfBxAAlD/ik5g9Xm1dZNgvpDCaIQ==
        caps: [mgr] allow profile osd
        caps: [mon] allow profile osd
        caps: [osd] allow *
client.admin
        key: AQALL0Fi8fnyIhAAIrUfMZiAcJ3Y2QNhozEAbA==
        caps: [mds] allow *
        caps: [mgr] allow *
        caps: [mon] allow *
        caps: [osd] allow *
client.bootstrap-mds
        key: AQAuL0FiXA+SBxAA2O4l5/BkZyXsFSRou4iF5A==
        caps: [mon] allow profile bootstrap-mds
client.bootstrap-mgr
        key: AQAuL0FitCqSBxAAUlOJxEngZisvyN0pL3dFiQ==
        caps: [mon] allow profile bootstrap-mgr
client.bootstrap-osd
        key: AQAuL0FiCUCSBxAAsOu8GxJH6/8Ea+7EAzMRDA==
        caps: [mon] allow profile bootstrap-osd
client.bootstrap-rbd
        key: AQAuL0FiN1eSBxAAcPauOXaUXogYXx3dgKwNKw==
        caps: [mon] allow profile bootstrap-rbd
client.bootstrap-rbd-mirror
        key: AQAuL0FioGySBxAAaApeNBU/JgyLnHyYjtPGhQ==
        caps: [mon] allow profile bootstrap-rbd-mirror
client.bootstrap-rgw
        key: AQAuL0FixIGSBxAAWhnt7ozxhd/DyDsLU+4Alw==
        caps: [mon] allow profile bootstrap-rgw
client.crash
        key: AQCnL0Fi+43UDxAAKkcYOsbycCtXAvxqmGPIqA==
        caps: [mgr] allow profile crash
        caps: [mon] allow profile crash
client.csi-cephfs-node
        key: AQCmL0Fiw5W5LhAAMINFmCHXKmofF7Cf50SpdQ==
        caps: [mds] allow rw
        caps: [mgr] allow rw
        caps: [mon] allow r
        caps: [osd] allow rw tag cephfs *=*
client.csi-cephfs-provisioner
        key: AQCmL0FiJCvXEhAA9a3JKSNZRsTamAq37CnDBw==
        caps: [mgr] allow rw
        caps: [mon] allow r
        caps: [osd] allow rw tag cephfs metadata=*
client.csi-rbd-node
        key: AQClL0Fic1evMBAAksit07nE0srJCj4jTgR+bg==
        caps: [mgr] allow rw
        caps: [mon] profile rbd
        caps: [osd] profile rbd
client.csi-rbd-provisioner
        key: AQClL0FiUKncGRAA4C4iw9FoNgZPVHq0ygrltw==
        caps: [mgr] allow rw
        caps: [mon] profile rbd
        caps: [osd] profile rbd
mgr.a
        key: AQCoL0FiRTfqDhAA5YtsWGFA/AJHuV4Cdw857w==
        caps: [mds] allow *
        caps: [mon] allow profile mgr
        caps: [osd] allow *
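
# A few more toolbox commands that are useful at this point (output omitted): overall capacity, the OSD/CRUSH tree, and a detailed health report
sh-4.4# ceph df
sh-4.4# ceph osd tree
sh-4.4# ceph health detail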


12. Expose the Ceph dashboard

# Create a self-signed TLS key pair
[root@master01 ceph]# mkdir tls
[root@master01 ceph]# cd tls/
[root@master01 tls]# openssl genrsa -out tls.key 2048
[root@master01 tls]# openssl req -new -x509 -days 3650 -key tls.key -out tls.crt -subj /C=CN/ST=Beijing/O=DevOps/CN=*.od.com
[root@master01 tls]# ll
total 8
-rw-r--r-- 1 root root 1220 Mar 28 14:37 tls.crt
-rw-r--r-- 1 root root 1679 Mar 28 14:37 tls.key
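
# Optionally confirm the certificate subject and validity period before creating the secret
[root@master01 tls]# openssl x509 -in tls.crt -noout -subject -dates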

# Create the TLS secret for the dashboard ingress; because Ceph lives in the rook-ceph namespace, the secret must be created there too
[root@master01 tls]# kubectl create secret tls ceph-ingress-secret --cert=tls.crt --key=tls.key -n rook-ceph
secret/ceph-ingress-secret created

# Write the dashboard ingress manifest
[root@master01 tls]# cd ..
[root@master01 ceph]# cat dashboard-ingress-https.yaml
#
# This example is for Kubernetes running an ngnix-ingress
# and an ACME (e.g. Let's Encrypt) certificate service
#
# The nginx-ingress annotations support the dashboard
# running using HTTPS with a self-signed certificate
#
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: rook-ceph-mgr-ingress  # name of the dashboard ingress
  namespace: rook-ceph        # namespace:cluster
  annotations:                 # these annotations are required; without them the dashboard will not work behind the ingress
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/server-snippet: |
      proxy_ssl_verify off;
spec:
  tls:
   - hosts:
     - rook-ceph.od.com       # CN of the self-signed certificate; keep it consistent with the host under rules below
     secretName: ceph-ingress-secret   # the secret created above from the *.od.com wildcard certificate
  rules:
  - host: rook-ceph.od.com   # domain name used to access the dashboard
    http:
      paths:
      - path: /
        backend:
          serviceName: rook-ceph-mgr-dashboard   # the mgr dashboard service that was created together with the cluster
          servicePort: 8443    # rook-ceph-mgr-dashboard service port

# Create the dashboard ingress
[root@master01 ceph]# kubectl apply -f dashboard-ingress-https.yaml
    
[root@master01 ceph]# kubectl get ingress -n rook-ceph
NAME                    CLASS    HOSTS              ADDRESS   PORTS     AGE
rook-ceph-mgr-ingress   <none>   rook-ceph.od.com             80, 443   29m

[root@master01 ceph]# kubectl describe ingress rook-ceph-mgr-ingress -n rook-ceph
Name:             rook-ceph-mgr-ingress
Namespace:        rook-ceph
Address:          
Default backend:  default-http-backend:80 (<none>)
TLS:
  ceph-ingress-secret terminates rook-ceph.od.com
Rules:
  Host              Path  Backends
  ----              ----  --------
  rook-ceph.od.com  
                    /   rook-ceph-mgr-dashboard:8443 (10.244.1.23:8443)
Annotations:        kubernetes.io/ingress.class: nginx
                    kubernetes.io/tls-acme: true
                    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
                    nginx.ingress.kubernetes.io/server-snippet: proxy_ssl_verify off;
Events:
  Type    Reason  Age   From                      Message
  ----    ------  ----  ----                      -------
  Normal  CREATE  29m   nginx-ingress-controller  Ingress rook-ceph/rook-ceph-mgr-ingress
  
# Configure DNS. The ingress controller uses hostNetwork and is pinned to master01 by label;
# with three masters you could point the record at a VIP for high availability.

[root@master01 tls]# vi /var/named/od.com.zone 
$ORIGIN od.com.
$TTL 600        ; 10 minutes
@               IN SOA  dns.od.com. dnsadmin.od.com. (
                                2020011201 ; serial
                                10800      ; refresh (3 hours)
                                900        ; retry (15 minutes)
                                604800     ; expire (1 week)
                                86400      ; minimum (1 day)
                                )
                                NS   dns.od.com.
$TTL 60 ; 1 minute
dns                A    10.4.7.49
harbor             A    10.4.7.53
demo               A    10.4.7.49
rook-ceph          A    10.4.7.49      

# Restart the named service to pick up the zone change
[root@master01 tls]# service named restart
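
# A quick resolution check against the DNS server (it listens on 10.4.7.49 per the zone file above)
[root@master01 tls]# dig +short rook-ceph.od.com @10.4.7.49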

# Alternatively, simply edit the rook-ceph-mgr-dashboard service directly
[root@master01 ceph]# kubectl get svc -n rook-ceph
NAME                       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
csi-cephfsplugin-metrics   ClusterIP   10.100.248.219   <none>        8080/TCP,8081/TCP   58m
csi-rbdplugin-metrics      ClusterIP   10.109.65.11     <none>        8080/TCP,8081/TCP   58m
rook-ceph-mgr              ClusterIP   10.100.72.151    <none>        9283/TCP            53m
rook-ceph-mgr-dashboard    ClusterIP    10.111.104.197   <none>        8443/TCP      53m
rook-ceph-mon-a            ClusterIP   10.101.3.133     <none>        6789/TCP,3300/TCP   58m
rook-ceph-mon-b            ClusterIP   10.111.38.134    <none>        6789/TCP,3300/TCP   55m
rook-ceph-mon-c            ClusterIP   10.104.208.191   <none>        6789/TCP,3300/TCP   54m

# Change the service type from the default ClusterIP to NodePort
[root@master01 tls]# kubectl edit svc  rook-ceph-mgr-dashboard -n rook-ceph

# Confirm the rook-ceph-mgr-dashboard service type is now NodePort
[root@master01 tls]# kubectl get svc -n rook-ceph
NAME                       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
csi-cephfsplugin-metrics   ClusterIP   10.100.248.219   <none>        8080/TCP,8081/TCP   64m
csi-rbdplugin-metrics      ClusterIP   10.109.65.11     <none>        8080/TCP,8081/TCP   64m
rook-ceph-mgr              ClusterIP   10.100.72.151    <none>        9283/TCP            59m
rook-ceph-mgr-dashboard    NodePort    10.111.104.197   <none>        8443:31004/TCP      59m
rook-ceph-mon-a            ClusterIP   10.101.3.133     <none>        6789/TCP,3300/TCP   63m
rook-ceph-mon-b            ClusterIP   10.111.38.134    <none>        6789/TCP,3300/TCP   60m
rook-ceph-mon-c            ClusterIP   10.104.208.191   <none>        6789/TCP,3300/TCP   59m

# The dashboard can now be reached in a browser at https://<node-ip>:31004

15. Retrieve the dashboard login password
[root@master01 ceph]# kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o jsonpath='{.data.password}'  |  base64 --decode && echo
8ynOK:Wb${/)/;0SY[9(

# The password was changed to admin123, but it reverted after a restart
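
# The operator keeps the dashboard admin password in the rook-ceph-dashboard-password secret read above, which is presumably why a password changed in the UI gets reset. A sketch of making a new password stick by updating that secret and restarting the mgr (assumption: the operator re-applies the password from this secret):
[root@master01 ceph]# kubectl -n rook-ceph patch secret rook-ceph-dashboard-password \
    -p '{"data":{"password":"'$(echo -n admin123 | base64)'"}}'
[root@master01 ceph]# kubectl -n rook-ceph delete pod -l app=rook-ceph-mgr   # mgr pod label is an assumption; adjust if yours differs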

16. Log in from a browser at https://rook-ceph.od.com
User: admin
Password: 8ynOK:Wb${/)/;0SY[9(


# Tear down the Ceph cluster
1. Delete the previously applied manifests
[root@master01 ceph]# kubectl delete -f crds.yaml -f common.yaml -f operator.yaml  -f cluster.yaml

2. Remove the Rook data directory
[root@master01 ceph]# rm /var/lib/rook/ -rf

3. Check the remaining resources
[root@master01 tmp]# kubectl -n rook-ceph get cephcluster

4. Delete the rook-ceph namespace
[root@master01 ceph]# kubectl get ns rook-ceph -o json >/tmp/tmp.json
[root@master01 ceph]# kubectl delete namespace rook-ceph

5. If the rook-ceph namespace is stuck in Terminating, start a temporary kubectl proxy
[root@master01 ~]# kubectl proxy --port=9098
Starting to serve on 127.0.0.1:9098

6. In another terminal, remove the spec section (the finalizers) from tmp.json, then PUT it back through the proxy
[root@master01 ceph]# cd /tmp/
[root@master01 tmp]# vi tmp.json 
[root@master01 tmp]# curl -k -H "Content-Type: application/json" -X PUT --data-binary @tmp.json http://127.0.0.1:9098/api/v1/namespaces/rook-ceph/finalize

7. Check the cluster resources again
[root@master01 tmp]# kubectl -n rook-ceph get cephcluster

8. Zero the disks that were used
[root@master01 ceph]# dd if=/dev/zero of=/dev/sdb bs=1M status=progress
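
# Instead of (or after) dd, the partition table, filesystem signatures and the LVM devices Ceph created can be cleared explicitly; a sketch, assuming gdisk is installed and the ceph-* volume groups seen in lsblk earlier
[root@master01 ceph]# sgdisk --zap-all /dev/sdb
[root@master01 ceph]# wipefs -a /dev/sdb
[root@master01 ceph]# ls /dev/mapper/ceph-* | xargs -I% -- dmsetup remove %
[root@master01 ceph]# rm -rf /dev/ceph-*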

17. Configure RBD; block storage is typically used for stateful workloads (StatefulSets)
[root@master01 rbd]# cd /root/rook/rook-1.5.5/cluster/examples/kubernetes/ceph/csi/rbd
[root@master01 rbd]# cat storageclass.yaml 
apiVersion: ceph.rook.io/v1
kind: CephBlockPool   # block storage pool
metadata:
  name: replicapool   # name of the block pool
  namespace: rook-ceph  # create the pool in the rook-ceph namespace
spec:
  failureDomain: osd   # failure domain: host or osd
  replicated:
    size: 3  # number of data replicas; if one node fails no data is lost because two more copies exist
    # Disallow setting pool with replica 1, this could lead to data loss without recovery.
    # Make sure you're *ABSOLUTELY CERTAIN* that is what you want
    requireSafeReplicaSize: true
    # gives a hint (%) to Ceph in terms of expected consumption of the total cluster capacity of a given pool
    # for more info: https://docs.ceph.com/docs/master/rados/operations/placement-groups/#specifying-expected-pool-size
    #targetSizeRatio: .5
---
apiVersion: storage.k8s.io/v1
kind: StorageClass    # storage class
metadata:
   name: rook-ceph-block  # name of the block StorageClass; StorageClasses are cluster-scoped, not namespaced
# Change "rook-ceph" provisioner prefix to match the operator namespace if needed
provisioner: rook-ceph.rbd.csi.ceph.com   # the CSI provisioner that serves RBD block storage
parameters:
    # clusterID is the namespace where the rook cluster is running
    # If you change this namespace, also change the namespace below where the secret namespaces are defined
    clusterID: rook-ceph # namespace:cluster

    # If you want to use erasure coded pool with RBD, you need to create
    # two pools. one erasure coded and one replicated.
    # You need to specify the replicated pool here in the `pool` parameter, it is
    # used for the metadata of the images.
    # The erasure coded pool must be set as the `dataPool` parameter below.
    #dataPool: ec-data-pool
    pool: replicapool   # the block pool created above

    # (optional) mapOptions is a comma-separated list of map options.
    # For krbd options refer
    # https://docs.ceph.com/docs/master/man/8/rbd/#kernel-rbd-krbd-options
    # For nbd options refer
    # https://docs.ceph.com/docs/master/man/8/rbd-nbd/#options
    # mapOptions: lock_on_read,queue_depth=1024

    # (optional) unmapOptions is a comma-separated list of unmap options.
    # For krbd options refer
    # https://docs.ceph.com/docs/master/man/8/rbd/#kernel-rbd-krbd-options
    # For nbd options refer
    # https://docs.ceph.com/docs/master/man/8/rbd-nbd/#options
    # unmapOptions: force

    # RBD image format. Defaults to "2".
    imageFormat: "2"

    # RBD image features. Available for imageFormat: "2". CSI RBD currently supports only `layering` feature.
    imageFeatures: layering

    # The secrets contain Ceph admin credentials. These are generated automatically by the operator
    # in the same namespace as the cluster.
    csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
    csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph # namespace:cluster
    csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
    csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph # namespace:cluster
    csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
    csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph # namespace:cluster
    # Specify the filesystem type of the volume. If not specified, csi-provisioner
    # will set default as `ext4`. Note that `xfs` is not recommended due to potential deadlock
    # in hyperconverged settings where the volume is mounted on the same node as the osds.
    csi.storage.k8s.io/fstype: ext4
# uncomment the following to use rbd-nbd as mounter on supported nodes
# **IMPORTANT**: If you are using rbd-nbd as the mounter, during upgrade you will be hit a ceph-csi
# issue that causes the mount to be disconnected. You will need to follow special upgrade steps
# to restart your application pods. Therefore, this option is not recommended.
#mounter: rbd-nbd
allowVolumeExpansion: true
reclaimPolicy: Delete

# The default parameters are fine; apply it
[root@master01 rbd]# kubectl apply -f storageclass.yaml 
cephblockpool.ceph.rook.io/replicapool created
storageclass.storage.k8s.io/rook-ceph-block created

# Check the block pool
[root@master01 rbd]# kubectl get cephblockpool -A
NAMESPACE   NAME          AGE
rook-ceph   replicapool   92m

# Check the StorageClass (storage driver) that was just created
[root@master01 rbd]# kubectl get sc 
NAME              PROVISIONER                  RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
rook-ceph-block   rook-ceph.rbd.csi.ceph.com   Delete          Immediate           true                   27m
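
# The backing Ceph pool can also be checked from the toolbox (output omitted)
sh-4.4# ceph osd pool ls detail | grep replicapool
sh-4.4# ceph osd pool get replicapool size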

# Next, create a rook-cephfs StorageClass for shared file storage
[root@master01 ~]# cd /root/rook/rook-1.5.5/cluster/examples/kubernetes/ceph/

# Change the failure domain to osd
[root@master01 ceph]# sed -i 's/failureDomain: host/failureDomain: osd/g' filesystem.yaml

# Create the filesystem
[root@master01 ceph]# cat filesystem.yaml
#############################################################################
# Create a filesystem with settings with replication enabled for a production environment.
# A minimum of 3 OSDs on different nodes are required in this example.
#  kubectl create -f filesystem.yaml
#############################################################################

apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs  # name of the filesystem
  namespace: rook-ceph # namespace:cluster
spec:
  # The metadata pool spec. Must use replication.
  metadataPool:
    replicated:
      size: 3
      requireSafeReplicaSize: true
    parameters:
      # Inline compression mode for the data pool
      # Further reference: https://docs.ceph.com/docs/nautilus/rados/configuration/bluestore-config-ref/#inline-compression
      compression_mode: none
        # gives a hint (%) to Ceph in terms of expected consumption of the total cluster capacity of a given pool
      # for more info: https://docs.ceph.com/docs/master/rados/operations/placement-groups/#specifying-expected-pool-size
      #target_size_ratio: ".5"
  # The list of data pool specs. Can use replication or erasure coding.
  dataPools:
    - failureDomain: osd
      replicated:
        size: 3
        # Disallow setting pool with replica 1, this could lead to data loss without recovery.
        # Make sure you're *ABSOLUTELY CERTAIN* that is what you want
        requireSafeReplicaSize: true
      parameters:
        # Inline compression mode for the data pool
        # Further reference: https://docs.ceph.com/docs/nautilus/rados/configuration/bluestore-config-ref/#inline-compression
        compression_mode: none
          # gives a hint (%) to Ceph in terms of expected consumption of the total cluster capacity of a given pool
        # for more info: https://docs.ceph.com/docs/master/rados/operations/placement-groups/#specifying-expected-pool-size
        #target_size_ratio: ".5"
  # Whether to preserve filesystem after CephFilesystem CRD deletion
  preserveFilesystemOnDelete: true
  # The metadata service (mds) configuration
  metadataServer:
    # The number of active MDS instances
    activeCount: 1
    # Whether each active MDS instance will have an active standby with a warm metadata cache for faster failover.
    # If false, standbys will be available, but will not have a warm cache.
    activeStandby: true
    # The affinity rules to apply to the mds deployment
    placement:
    #  nodeAffinity:
    #    requiredDuringSchedulingIgnoredDuringExecution:
    #      nodeSelectorTerms:
    #      - matchExpressions:
    #        - key: role
    #          operator: In
    #          values:
    #          - mds-node
    #  topologySpreadConstraints:
    #  tolerations:
    #  - key: mds-node
    #    operator: Exists
    #  podAffinity:
       podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - rook-ceph-mds
            # topologyKey: kubernetes.io/hostname will place MDS across different hosts
            topologyKey: kubernetes.io/hostname
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - rook-ceph-mds
              # topologyKey: */zone can be used to spread MDS across different AZ
              # Use <topologyKey: failure-domain.beta.kubernetes.io/zone> in k8s cluster if your cluster is v1.16 or lower
              # Use <topologyKey: topology.kubernetes.io/zone>  in k8s cluster is v1.17 or upper
              topologyKey: topology.kubernetes.io/zone
    # A key/value list of annotations
    annotations:
    #  key: value
    # A key/value list of labels
    labels:
    #  key: value
    resources:
    # The requests and limits set here, allow the filesystem MDS Pod(s) to use half of one CPU core and 1 gigabyte of memory
    #  limits:
    #    cpu: "500m"
    #    memory: "1024Mi"
    #  requests:
    #    cpu: "500m"
    #    memory: "1024Mi"
    # priorityClassName: my-priority-class
    
# Apply the CephFilesystem
[root@master01 ceph]# kubectl apply -f filesystem.yaml

[root@master01 cephfs]# kubectl get CephFilesystem -n rook-ceph
NAME   ACTIVEMDS   AGE
myfs   1           72s    

# Two MDS pods are now running
[root@master01 ceph]# kubectl -n rook-ceph get pod -l app=rook-ceph-mds
NAME                                    READY   STATUS    RESTARTS   AGE
rook-ceph-mds-myfs-a-6d69895657-bl5cs   1/1     Running   0          4m28s
rook-ceph-mds-myfs-b-b6d546f87-pjm7p    1/1     Running   0          4m25s
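
# The filesystem and its MDS daemons can be confirmed from the toolbox (output omitted)
sh-4.4# ceph fs ls
sh-4.4# ceph fs status myfs
sh-4.4# ceph mds stat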

# Create the CephFS StorageClass
[root@master01 cephfs]# cd /root/rook/rook-1.5.5/cluster/examples/kubernetes/ceph/csi/cephfs
[root@master01 cephfs]# vi storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-cephfs
provisioner: rook-ceph.cephfs.csi.ceph.com # driver:namespace:operator
parameters:
  # clusterID is the namespace where operator is deployed.
  clusterID: rook-ceph # namespace:cluster

  # CephFS filesystem name into which the volume shall be created
  fsName: myfs   # must reference the CephFilesystem created above

  # Ceph pool into which the volume shall be created
  # Required for provisionVolume: "true"
  pool: myfs-data0

  # Root path of an existing CephFS volume
  # Required for provisionVolume: "false"
  # rootPath: /absolute/path

  # The secrets contain Ceph admin credentials. These are generated automatically by the operator
  # in the same namespace as the cluster.
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph # namespace:cluster
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph # namespace:cluster
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph # namespace:cluster

  # (optional) The driver can use either ceph-fuse (fuse) or ceph kernel client (kernel)
  # If omitted, default volume mounter will be used - this is determined by probing for ceph-fuse
  # or by setting the default mounter explicitly via --volumemounter command-line argument.
  mounter: kernel
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
  # uncomment the following line for debugging
  #- debug

# Apply the rook-cephfs StorageClass
[root@master01 cephfs]# kubectl apply -f storageclass.yaml
storageclass.storage.k8s.io/rook-cephfs created

# Two storage drivers now exist; workloads that need storage reference them and PVs are provisioned automatically
[root@master01 cephfs]# kubectl get sc
NAME              PROVISIONER                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
rook-ceph-block   rook-ceph.rbd.csi.ceph.com      Delete          Immediate           true                   19h
rook-cephfs       rook-ceph.cephfs.csi.ceph.com   Delete          Immediate           true                   5s
 

# Test: RBD only supports stateful (single-writer) workloads
1. Create a stateless nginx Deployment that claims an RBD-backed PVC
[root@master01 ceph]# cat /root/nginx-dp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-nginx
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1           # tells deployment to run 1 pod matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.16
        ports:
        - containerPort: 80
        volumeMounts:
        - name: localtime
          mountPath: /etc/localtime
        - name: nginx-html-storage
          mountPath: /usr/share/nginx/html
      volumes:
       - name: localtime
         hostPath:
           path: /usr/share/zoneinfo/Asia/Shanghai
       - name: nginx-html-storage
         persistentVolumeClaim:
           claimName: nginx-pv-claim   # reference the PVC named nginx-pv-claim

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-pv-claim    # create a PVC named nginx-pv-claim
  labels:
    app: nginx
spec:
  storageClassName: rook-ceph-block  # use the rook-ceph-block StorageClass created above
  accessModes:
    - ReadWriteMany    # access mode: read-write from multiple nodes (RWX)
  resources:
    requests:
      storage: 1Gi   # 1Gi of capacity

# Create the nginx application
[root@master01 ~]# kubectl apply -f nginx-dp.yaml
deployment.apps/test-nginx created
persistentvolumeclaim/nginx-pv-claim created

# Check the PVC: it is stuck in Pending, which is clearly wrong
[root@master01 ~]# kubectl get pvc
NAME             STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS      AGE
nginx-pv-claim   Pending                                      rook-ceph-block   10s

2. Check the PVC again a few minutes later; it is still Pending.
[root@master01 ceph]# kubectl get pvc
NAME             STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS      AGE
nginx-pv-claim   Pending                                      rook-ceph-block   4m48s

3. Describe the PVC to see the error
[root@master01 ~]# kubectl describe pvc nginx-pv-claim

# failed to provision volume with StorageClass "rook-ceph-block": rpc 
error: code = InvalidArgument desc = multi node access modes are 
only supported on rbd `block` type volumes
This means RWX access is only supported for RBD volumes in raw block mode; with a filesystem on top of an RBD image, multi-node access could corrupt data unless the application has its own locking, so the PVC should use ReadWriteOnce here (or CephFS if several nodes must share the volume).


# Change the PVC access mode to ReadWriteOnce
1. Update the manifest
[root@master01 ceph]# cat /root/nginx-dp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-nginx
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1           # tells deployment to run 1 pod matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.16
        ports:
        - containerPort: 80
        volumeMounts:
        - name: localtime
          mountPath: /etc/localtime
        - name: nginx-html-storage
          mountPath: /usr/share/nginx/html
      volumes:
       - name: localtime
         hostPath:
           path: /usr/share/zoneinfo/Asia/Shanghai
       - name: nginx-html-storage
         persistentVolumeClaim:
           claimName: nginx-pv-claim   # reference the PVC named nginx-pv-claim (or any existing PVC)
           readOnly: false
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-pv-claim    # create a PVC named nginx-pv-claim
  labels:
    app: nginx
spec:
  storageClassName: rook-ceph-block  # use the rook-ceph-block StorageClass created above
  accessModes:
    - ReadWriteOnce    # access mode: single-node read-write (RWO)
  resources:
    requests:
      storage: 1Gi   # 1Gi of capacity
      
2. Create the PVC and the nginx application, then check the PVC and the pod
[root@master01 ~]# kubectl apply -f nginx-dp.yaml
deployment.apps/test-nginx created
persistentvolumeclaim/nginx-pv-claim created

[root@master01 ~]# kubectl get pvc
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
nginx-pv-claim   Bound    pvc-6d2d0264-48fc-4edf-9433-875b0512ec5f   1Gi      RWO            rook-ceph-block   6s  

3.查看pod
[root@master01 ~]# kubectl get pods
NAME                         READY   STATUS    RESTARTS   AGE
test-nginx-6f6584b4b-zgksm   1/1     Running   0          61s
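
# The bound PV is backed by an RBD image in replicapool; this can be confirmed with (output omitted)
[root@master01 ~]# kubectl get pv
sh-4.4# rbd ls -p replicapool    # run inside the toolbox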
 


# Shared file storage test using the CephFS StorageClass
Case 1: several containers share one data directory; deploy multiple registry pods that use the same data directory
[root@master01 cephfs]# cd /root/rook/rook-1.5.5/cluster/examples/kubernetes/ceph/csi/cephfs 
[root@master01 cephfs]# ls
kube-registry.yaml  pod.yaml  pvc-clone.yaml  pvc-restore.yaml  pvc.yaml  rook-cephfs.yaml  snapshotclass.yaml  snapshot.yaml  storageclass.yaml

# Create the test namespace
[root@master01 cephfs]# kubectl create ns test

# Create 2 pods in the test namespace
[root@master01 cephfs]# cat kube-registry.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc     # PVC provisioned through the rook-cephfs StorageClass referenced below
  namespace: test      # PVCs are namespaced
spec:
  accessModes:
  - ReadWriteMany        # access mode: read-write from multiple nodes (RWX)
  resources:
    requests:
      storage: 1Gi
  storageClassName: rook-cephfs    # use the rook-cephfs StorageClass, i.e. the CephFS driver
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-registry
  namespace: test           # the pods must be created in the test namespace to mount the PVC created there
  labels:
    k8s-app: kube-registry
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 2   # two replicas
  selector:
    matchLabels:
      k8s-app: kube-registry
  template:
    metadata:
      labels:
        k8s-app: kube-registry
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - name: registry
        image: registry:2
        imagePullPolicy: Always
        resources:
          limits:
            cpu: 100m
            memory: 100Mi
        env:
        # Configuration reference: https://docs.docker.com/registry/configuration/
        - name: REGISTRY_HTTP_ADDR
          value: :5000
        - name: REGISTRY_HTTP_SECRET
          value: "Ple4seCh4ngeThisN0tAVerySecretV4lue"
        - name: REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY
          value: /var/lib/registry
        volumeMounts:
        - name: image-store    # mount the volume named image-store (the cephfs-pvc PVC defined below)
          mountPath: /var/lib/registry   # at this path inside the container
        ports:
        - containerPort: 5000
          name: registry
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /
            port: registry
        readinessProbe:
          httpGet:
            path: /
            port: registry
      volumes:
      - name: image-store    # volume name referenced by volumeMounts above
        persistentVolumeClaim:
          claimName: cephfs-pvc  # the PVC created above
          readOnly: false
          
          
# Create the pods that mount the shared CephFS volume
[root@master01 cephfs]# kubectl apply -f kube-registry.yaml 
  
[root@master01 cephfs]# kubectl get pvc -n test
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
cephfs-pvc   Bound    pvc-933ab84b-f298-4c42-946d-5560a367caf1   1Gi        RWX            rook-cephfs    19s

[root@master01 cephfs]# kubectl get pods -n test
NAME                             READY   STATUS    RESTARTS   AGE
kube-registry-58659ff99b-bvd52   1/1     Running   0          49s
kube-registry-58659ff99b-gvcbz   1/1     Running   0          49s

# Exec into one of the pods and write a test file
[root@master01 cephfs]# kubectl exec -it kube-registry-58659ff99b-bvd52 -n test sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.
/ # 
/ # cd /var/lib/registry/
/var/lib/registry # echo "abc" > test.txt 
/var/lib/registry # cat test.txt 
abc

# Exec into the other pod and confirm the file is visible there as well
[root@master01 cephfs]# kubectl exec -it kube-registry-58659ff99b-gvcbz -n test sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.
/ # 
/ # cat /var/lib/registry/test.txt 
abc
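
# Because the PVC is RWX on CephFS, the Deployment can be scaled out and any new pod on any node sees the same data; a quick check (output omitted)
[root@master01 cephfs]# kubectl -n test scale deployment kube-registry --replicas=3
[root@master01 cephfs]# kubectl -n test get pods -o wide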
