【Kubernetes】Deploying a rook-ceph storage cluster on a k8s cluster and mounting it

1) Rook environment deployment


Ceph is a distributed file system designed for high performance, reliability, and scalability. It was originally developed by Sage Weil at UCSC as a storage systems research project. Ceph is designed to scale easily to multiple petabytes of capacity, to deliver high performance across a variety of workloads (both IOPS and bandwidth), and to be highly reliable. It does all of this while remaining POSIX-compatible, so it can be deployed transparently under applications that rely on POSIX semantics.

The Ceph ecosystem consists of four main parts:

  1. Clients: the clients (data users).
  2. cmds: the metadata server cluster, which caches and synchronizes the distributed metadata.
  3. cosd: the object storage cluster, which stores data and metadata as objects and performs other key functions.
  4. cmon: the cluster monitors, which carry out monitoring functions.

At the core of Ceph is RADOS, a distributed storage system on which all of Ceph's storage features are built. RADOS is written in C++ and exposes a native librados API with C and C++ bindings. Ceph's upper layers call the local librados API, which then talks to the other nodes of the RADOS cluster over sockets to carry out the various operations.
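As a quick illustration of the RADOS layer, once the toolbox pod from section 4 below is running you can read and write objects directly with the rados CLI (the pool name replicapool assumes the block-storage pool created in section 7; substitute any pool that exists in your cluster):

# inside the rook-ceph-tools pod: write an object into a pool, list the pool, read the object back
echo "hello rados" > /tmp/hello.txt
rados -p replicapool put hello-object /tmp/hello.txt
rados -p replicapool ls
rados -p replicapool get hello-object /tmp/hello-copy.txt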


1. Prerequisites

# Create the namespace
kubectl create namespace rook-ceph

# Step 1: clone the official Rook manifests
git clone --single-branch --branch v1.6.7 https://github.com/rook/rook.git
cd rook/cluster/examples/kubernetes/ceph

# Create the RBAC resources and the Rook CRDs, which are mainly used to manage and control the Ceph cluster
kubectl create -f crds.yaml -f common.yaml 

# Step 2: prepare the operator manifest
# Change the Rook CSI image addresses. The defaults point to gcr, which may be unreachable from mainland China, so sync the gcr images to an Alibaba Cloud registry or another local registry first
vim operator.yaml
......
ROOK_CSI_CEPH_IMAGE: "registry.cn-qingdao.aliyuncs.com/zz_google_containers/cephcsi:v3.3.1"
ROOK_CSI_REGISTRAR_IMAGE: "registry.cn-qingdao.aliyuncs.com/zz_google_containers/csi-node-driver-registrar:v2.2.0"
ROOK_CSI_RESIZER_IMAGE: "registry.cn-qingdao.aliyuncs.com/zz_google_containers/csi-resizer:v1.2.0"
ROOK_CSI_PROVISIONER_IMAGE: "registry.cn-qingdao.aliyuncs.com/zz_google_containers/csi-provisioner:v2.2.2"
ROOK_CSI_SNAPSHOTTER_IMAGE: "registry.cn-qingdao.aliyuncs.com/zz_google_containers/csi-snapshotter:v4.1.1"
ROOK_CSI_ATTACHER_IMAGE: "registry.cn-qingdao.aliyuncs.com/zz_google_containers/csi-attacher:v3.2.1"
......
image: registry.cn-qingdao.aliyuncs.com/zz_google_containers/ceph:v1.6.7 # this also needs to be changed
......
ROOK_ENABLE_DISCOVERY_DAEMON: "true"
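If you cannot pull the upstream CSI images directly, a minimal sketch of mirroring them yourself from a machine that can reach the upstream registries (the target registry path is a placeholder; substitute your own):

docker pull quay.io/cephcsi/cephcsi:v3.3.1
docker tag quay.io/cephcsi/cephcsi:v3.3.1 registry.example.com/mirror/cephcsi:v3.3.1
docker push registry.example.com/mirror/cephcsi:v3.3.1
# repeat for the csi-node-driver-registrar, csi-resizer, csi-provisioner, csi-snapshotter and csi-attacher images,
# then point the ROOK_CSI_*_IMAGE values in operator.yaml at your own registry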

# Create the operator
kubectl create -f operator.yaml
[root@k8s-master ceph]# kubectl get po -n rook-ceph 
NAME                                  READY   STATUS    RESTARTS   AGE
rook-ceph-operator-7678595675-pkzt8   1/1     Running   0          3m14s
rook-discover-48wml                   1/1     Running   0          53s
rook-discover-tb2wl                   1/1     Running   0          2m26s
rook-discover-vnq62                   1/1     Running   0          2m26s

2. Create the Ceph cluster

# Modify the configuration as follows
vim cluster.yaml
# Individual nodes and their config can be specified as well, but 'useAllNodes' above must be set to false. Then, only the named
# nodes below will be used as storage resources.  Each node's 'name' field should match their 'kubernetes.io/hostname' label.
    nodes:
    - name: "k8s-master"
      devices: # specific devices to use for storage can be specified for each node
      - name: "vdb"
    - name: "k8s-node1"
      devices: # specific devices to use for storage can be specified for each node
      - name: "vdb"
    - name: "k8s-node2"
      devices: # specific devices to use for storage can be specified for each node
      - name: "vdb"
......
dashboard:
    enabled: true
    # serve the dashboard under a subpath (useful when you are accessing the dashboard via a reverse proxy)
    # urlPrefix: /ceph-dashboard
    # serve the dashboard at the given port.
    # port: 8443
    # serve the dashboard using SSL
    ssl: false
......
image: registry.cn-qingdao.aliyuncs.com/zz_google_containers/ceph:v15.2.13

# Create the cluster
kubectl create -f cluster.yaml
[root@k8s-master ceph]# kubectl -n rook-ceph get pod 
NAME                                                   READY   STATUS      RESTARTS   AGE
csi-cephfsplugin-cm8v2                                 3/3     Running     0          3m14s
csi-cephfsplugin-j2s99                                 3/3     Running     0          3m14s
csi-cephfsplugin-provisioner-84d6c75cd8-9j96k          6/6     Running     0          3m14s
csi-cephfsplugin-provisioner-84d6c75cd8-9ndmk          6/6     Running     0          3m14s
csi-cephfsplugin-zz2c6                                 3/3     Running     0          3m14s
csi-rbdplugin-dxmqk                                    3/3     Running     0          3m16s
csi-rbdplugin-gsr9n                                    3/3     Running     0          3m16s
csi-rbdplugin-kjmzq                                    3/3     Running     0          3m16s
csi-rbdplugin-provisioner-57659bb697-kg7ls             6/6     Running     0          3m15s
csi-rbdplugin-provisioner-57659bb697-ttspc             6/6     Running     0          3m15s
rook-ceph-crashcollector-k8s-master-58d7f97766-5gbc7   1/1     Running     0          69s
rook-ceph-crashcollector-k8s-node1-5d5759d69c-477fj    1/1     Running     0          76s
rook-ceph-crashcollector-k8s-node2-7bb54d8684-cvcsg    1/1     Running     0          79s
rook-ceph-mgr-a-688cb5d68b-jbfrb                       1/1     Running     0          82s
rook-ceph-mon-a-5789f595c5-66sxs                       1/1     Running     0          3m22s
rook-ceph-mon-b-67666946cb-zn5h4                       1/1     Running     0          2m37s
rook-ceph-mon-c-775f444845-6lvpx                       1/1     Running     0          2m18s
rook-ceph-operator-7678595675-pkzt8                    1/1     Running     0          10m
rook-ceph-osd-0-67dfd4d474-84t8r                       1/1     Running     0          70s
rook-ceph-osd-1-d5c9488c7-fsvfg                        1/1     Running     0          70s
rook-ceph-osd-2-6b475dfc97-vhwfw                       1/1     Running     0          69s
rook-ceph-osd-prepare-k8s-master-97s4n                 0/1     Completed   0          50s
rook-ceph-osd-prepare-k8s-node1-7xmmr                  0/1     Completed   0          48s
rook-ceph-osd-prepare-k8s-node2-8q8w4                  0/1     Completed   0          46s
rook-discover-48wml                                    1/1     Running     0          7m51s
rook-discover-tb2wl                                    1/1     Running     0          9m24s
rook-discover-vnq62                                    1/1     Running     0          9m24s
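Besides the pods, you can also watch the CephCluster resource itself; a minimal check (the PHASE and HEALTH columns should eventually report something like Ready and HEALTH_OK):

kubectl -n rook-ceph get cephcluster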

3. Install the Ceph snapshot controller

On k8s 1.19 and above, the snapshot controller must be installed separately for PVC snapshots to work, so we install it here in advance; versions below 1.19 do not need a separate installation. The k8s environment used in this guide is 1.20.15.

# The manifests used are listed below; you can download them and adjust the parameters yourself. I changed the replica count to 1 here and left everything else unchanged
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/client/config/crd/snapshot.storage.k8s.io_volumesnapshots.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/client/config/crd/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/client/config/crd/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/deploy/kubernetes/snapshot-controller/rbac-snapshot-controller.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/deploy/kubernetes/snapshot-controller/setup-snapshot-controller.yaml
# You will need some way to pull the image it uses:
registry.k8s.io/sig-storage/snapshot-controller:v6.2.1

# Check the snapshot controller deployment
[root@master-docker-247.19 /home/snapshot]# kubectl  get po -n kube-system -l app=snapshot-controller
NAME                                   READY   STATUS    RESTARTS   AGE
snapshot-controller-7966c69dcf-cxzlh   1/1     Running   0          10m

# Test that snapshots are usable
# List the existing PVCs
[root@master-docker-247.19 /home/snapshot]# kubectl get pvc 
NAME                               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS    AGE
redis-data                         Bound    pvc-3d28f2d1-9a40-4ed1-969d-dd766d9bd3dd   15Gi       RWX            nfs-vela-test   19d
# Create a VolumeSnapshot (test-snapshot.yaml)
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-test
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass
  source:
    persistentVolumeClaimName: redis-data
[root@master-docker-247.19 /home/snapshot]# kubectl apply -f test-snapshot.yaml 
volumesnapshot.snapshot.storage.k8s.io/new-snapshot-test created
[root@master-docker-247.19 /home/snapshot]# kubectl get VolumeSnapshot
NAME                READYTOUSE   SOURCEPVC    SOURCESNAPSHOTCONTENT   RESTORESIZE   SNAPSHOTCLASS            SNAPSHOTCONTENT   CREATIONTIME   AGE
new-snapshot-test                redis-data                                         csi-hostpath-snapclass                                    39s
# If the VolumeSnapshot is created successfully, the snapshot feature works

4. Install the Ceph client tools

cd rook/cluster/examples/kubernetes/ceph
kubectl  create -f toolbox.yaml -n rook-ceph
[root@k8s-master ~]# kubectl  get po -n rook-ceph -l app=rook-ceph-tools
NAME                               READY   STATUS    RESTARTS   AGE
rook-ceph-tools-656b876c47-84b7d   1/1     Running   0          48m

# Exec into the toolbox container to check the cluster status
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash

[root@k8s-master ceph]# kubectl get po -n rook-ceph 
NAME                                                   READY   STATUS      RESTARTS   AGE
csi-cephfsplugin-cm8v2                                 3/3     Running     0          18m
csi-cephfsplugin-j2s99                                 3/3     Running     0          18m
csi-cephfsplugin-provisioner-84d6c75cd8-9j96k          6/6     Running     0          18m
csi-cephfsplugin-provisioner-84d6c75cd8-9ndmk          6/6     Running     0          18m
csi-cephfsplugin-zz2c6                                 3/3     Running     0          18m
csi-rbdplugin-dxmqk                                    3/3     Running     0          18m
csi-rbdplugin-gsr9n                                    3/3     Running     0          18m
csi-rbdplugin-kjmzq                                    3/3     Running     0          18m
csi-rbdplugin-provisioner-57659bb697-kg7ls             6/6     Running     0          18m
csi-rbdplugin-provisioner-57659bb697-ttspc             6/6     Running     0          18m
rook-ceph-crashcollector-k8s-master-58d7f97766-5gbc7   1/1     Running     0          16m
rook-ceph-crashcollector-k8s-node1-5d5759d69c-477fj    1/1     Running     0          16m
rook-ceph-crashcollector-k8s-node2-7bb54d8684-cvcsg    1/1     Running     0          16m
rook-ceph-mgr-a-688cb5d68b-jbfrb                       1/1     Running     0          16m
rook-ceph-mon-a-5789f595c5-66sxs                       1/1     Running     0          18m
rook-ceph-mon-b-67666946cb-zn5h4                       1/1     Running     0          17m
rook-ceph-mon-c-775f444845-6lvpx                       1/1     Running     0          17m
rook-ceph-operator-7678595675-pkzt8                    1/1     Running     0          25m
rook-ceph-osd-0-67dfd4d474-84t8r                       1/1     Running     0          16m
rook-ceph-osd-1-d5c9488c7-fsvfg                        1/1     Running     0          16m
rook-ceph-osd-2-6b475dfc97-vhwfw                       1/1     Running     0          16m
rook-ceph-osd-prepare-k8s-master-97s4n                 0/1     Completed   0          15m
rook-ceph-osd-prepare-k8s-node1-7xmmr                  0/1     Completed   0          15m
rook-ceph-osd-prepare-k8s-node2-8q8w4                  0/1     Completed   0          15m
rook-ceph-tools-656b876c47-84b7d                       1/1     Running     0          26s
rook-discover-48wml                                    1/1     Running     0          22m
rook-discover-tb2wl                                    1/1     Running     0          24m
rook-discover-vnq62                                    1/1     Running     0          24m
[root@k8s-master ceph]# kubectl  get po -n rook-ceph -l app=rook-ceph-tools
NAME                               READY   STATUS    RESTARTS   AGE
rook-ceph-tools-656b876c47-84b7d   1/1     Running   0          37s
[root@k8s-master ceph]# kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash
[root@rook-ceph-tools-656b876c47-84b7d /]# ceph -s
  cluster:
    id:     c3d3ea46-ec82-4ba2-9ac2-7aebe7af6e5c
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum a,b,c (age 16m)
    mgr: a(active, since 16m)
    osd: 3 osds: 3 up (since 16m), 3 in (since 16m)
 
  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   3.0 GiB used, 57 GiB / 60 GiB avail
    pgs:     1 active+clean
 
[root@rook-ceph-tools-656b876c47-84b7d /]# ceph osd status
ID  HOST         USED  AVAIL  WR OPS  WR DATA  RD OPS  RD DATA  STATE      
 0  k8s-node1   1026M  18.9G      0        0       0        0   exists,up  
 1  k8s-node2   1026M  18.9G      0        0       0        0   exists,up  
 2  k8s-master  1026M  18.9G      0        0       0        0   exists,up  

5. Dashboard

dashboard-ingress.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ceph-dash-board-ingress
  namespace: rook-ceph
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: ceph.yaya.com
    http:
      paths:
      - backend:
          service:
            name: rook-ceph-mgr-dashboard
            port:
              number: 7000
        path: /
        pathType: Prefix
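Apply the Ingress and point the hostname at the node where your ingress controller is exposed; a minimal sketch (the IP below is a placeholder):

kubectl apply -f dashboard-ingress.yaml
# resolve the hostname locally (placeholder IP), then open http://ceph.yaya.com/ in a browser
echo "192.168.1.100 ceph.yaya.com" >> /etc/hosts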

6. Retrieve the dashboard login password

kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o jsonpath="{['data']['password']}" | base64 -d
8+SFPxfJ"a2(>CvEC<R,
# The username is admin

7. Block storage

Block storage is typically mounted by a single pod, like attaching a new disk to a server and dedicating it to a single application.

cd rook/cluster/examples/kubernetes/ceph/csi/rbd/
kubectl create -f storageclass.yaml

[root@k8s-master rbd]# kubectl  get cephblockpool -n rook-ceph
NAME          AGE
replicapool   2m32s
[root@k8s-master rbd]# kubectl  get storageclasses.storage.k8s.io 
NAME              PROVISIONER                  RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
rook-ceph-block   rook-ceph.rbd.csi.ceph.com   Delete          Immediate           true                   2m41s
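For reference, the storageclass.yaml applied above defines a CephBlockPool plus the StorageClass, roughly as follows (a trimmed sketch of the Rook v1.6 example; check the file in your checkout for the authoritative version):

apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: host
  replicated:
    size: 3
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
  clusterID: rook-ceph
  pool: replicapool
  imageFormat: "2"
  imageFeatures: layering
  csi.storage.k8s.io/fstype: ext4
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
allowVolumeExpansion: true
reclaimPolicy: Delete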

# Now exec into the toolbox pod and check the cluster status: the pool count should be 2.
# (Normally you would verify this under Pools in the dashboard, but the dashboard was not reachable here for some reason, so we check this way instead.)
[root@rook-ceph-tools-656b876c47-84b7d /]# ceph -s
  cluster:
    id:     c3d3ea46-ec82-4ba2-9ac2-7aebe7af6e5c
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum a,b,c (age 72m)
    mgr: a(active, since 71m)
    osd: 3 osds: 3 up (since 71m), 3 in (since 71m)
 
  data:
    pools:   2 pools, 33 pgs
    objects: 0 objects, 0 B
    usage:   3.0 GiB used, 57 GiB / 60 GiB avail
    pgs:     33 active+clean


Create the StorageClass and storage pool

kubectl create -f storageclass.yaml -n rook-ceph

# Check whether they were created successfully
kubectl get cephblockpool -n rook-ceph
kubectl get sc


At this point the pool should be visible in the Ceph dashboard; if it does not show up, the creation did not succeed.


Mount test

We use the WordPress example recommended by the official documentation.

mysql.yaml

apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  ports:
    - port: 3306
  selector:
    app: wordpress
    tier: mysql
  clusterIP: None
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: wordpress
spec:
  storageClassName: rook-ceph-block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
    tier: mysql
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: changeme
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage # mounted at /var/lib/mysql inside the pod
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage  # MySQL storage configuration
          persistentVolumeClaim:
            claimName: mysql-pv-claim

wordpress.yaml

apiVersion: v1
kind: Service
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  ports:
    - port: 80
  selector:
    app: wordpress
    tier: frontend
  #type: LoadBalancer
  type: NodePort  # changed to NodePort here to make testing easier
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wp-pv-claim  # the PVC for WordPress
  labels:
    app: wordpress
spec:
  storageClassName: rook-ceph-block  # use the block StorageClass we created earlier
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
    tier: frontend
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
        - image: wordpress:4.6.1-apache
          name: wordpress
          env:
            - name: WORDPRESS_DB_HOST
              value: wordpress-mysql
            - name: WORDPRESS_DB_PASSWORD
              value: changeme
          ports:
            - containerPort: 80
              name: wordpress
          volumeMounts:
            - name: wordpress-persistent-storage
              mountPath: /var/www/html
      volumes:
        - name: wordpress-persistent-storage
          persistentVolumeClaim:
            claimName: wp-pv-claim

Then create the resources and open the WordPress UI.
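A minimal sketch of creating both manifests and finding the access port:

kubectl apply -f mysql.yaml -f wordpress.yaml
# note the NodePort mapped to port 80, then browse to http://<any-node-ip>:<nodeport>
kubectl get svc wordpress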


Check the PV and PVC: once the StorageClass is specified, a PV is created automatically and bound to our PVC.
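A quick way to confirm the binding:

kubectl get pvc mysql-pv-claim wp-pv-claim
kubectl get pv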


View the storage in the dashboard


When each pod of a StatefulSet needs its own independent storage, we can use the volumeClaimTemplates field, as in the example below.

nginx-standalone-volume.yaml

apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
  serviceName: "nginx"
  replicas: 3 # by default is 1
  template:
    metadata:
      labels:
        app: nginx # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "rook-ceph-block"
      resources:
        requests:
          storage: 1Gi

In the volumeClaimTemplates above we directly specified the StorageClass rook-ceph-block.


Note: volumeClaimTemplates is a configuration unique to StatefulSets.
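Because of the claim template, each replica gets its own PVC named <template-name>-<pod-name>; a quick check (the names assume the StatefulSet above):

kubectl get pvc | grep ^www
# expect one Bound PVC per pod: www-web-0, www-web-1, www-web-2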

8. Shared filesystem

Use case: a shared filesystem is typically used when multiple pods need to share the same storage, for example the files of an nginx site, which must be identical across pods, so several pods share one copy of the site files.

Official documentation: https://rook.io/docs/rook/v1.6/ceph-filesystem.html

# Create the shared filesystem
cd /root/rook/cluster/examples/kubernetes/ceph
kubectl apply -f filesystem.yaml

# After creation the mds containers start; wait until they are running before creating PVs
[root@k8s-master-4 ~/rook/cluster/examples/kubernetes/ceph]# kubectl -n rook-ceph get po -l app=rook-ceph-mds
NAME                                    READY   STATUS    RESTARTS   AGE
rook-ceph-mds-myfs-a-699dd97c4b-zw9j5   1/1     Running   0          24m
rook-ceph-mds-myfs-b-596fcf788d-6t8sz   1/1     Running   0          24m

# Create the StorageClass for the shared filesystem
# (run from /root/rook/cluster/examples/kubernetes/ceph/csi/cephfs)
kubectl create -f storageclass.yaml

# After that, set a PVC's storageClassName to rook-cephfs to get shared-filesystem storage, which, like NFS, can be shared by multiple pods.
# Note: the corresponding csidriver must exist; the underlying interface is implemented by the CSI driver.
kubectl get csidriver
kubectl get sc
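For reference, the CephFS storageclass.yaml looks roughly like this (a trimmed sketch of the Rook v1.6 example; fsName and pool must match the filesystem created by filesystem.yaml):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-cephfs
provisioner: rook-ceph.cephfs.csi.ceph.com
parameters:
  clusterID: rook-ceph
  fsName: myfs
  pool: myfs-data0
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
reclaimPolicy: Delete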


nginx mount test

nginx-test.yaml

apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  selector:
    app: nginx
  type: ClusterIP
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nginx-share-pvc
spec:
  storageClassName: rook-cephfs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
  replicas: 3 # by default is 1
  template:
    metadata:
      labels:
        app: nginx # has to match .spec.selector.matchLabels
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
      volumes:
        - name: www
          persistentVolumeClaim:
            claimName: nginx-share-pvc

Check the mount result

[root@k8s-master-4 ~]# kubectl exec -it web-7bf54cbc8d-c7z29 -- bash
root@web-7bf54cbc8d-c7z29:/# cd /usr/share/nginx/html/
root@web-7bf54cbc8d-c7z29:/usr/share/nginx/html# ls
root@web-7bf54cbc8d-c7z29:/usr/share/nginx/html# echo "hello rook">index.html
root@web-7bf54cbc8d-c7z29:/usr/share/nginx/html# exit
exit
[root@k8s-master-4 ~]# kubectl exec -it web-
web-0                 web-2                 web-7bf54cbc8d-gjr5r  
web-1                 web-7bf54cbc8d-c7z29  web-7bf54cbc8d-jbp8z  
[root@k8s-master-4 ~]# kubectl exec -it web-7bf54cbc8d-c7z29 -- cat /usr/share/nginx/html/index.html
hello rook
[root@k8s-master-4 ~]# kubectl exec -it web-7bf54cbc8d-gjr5r -- cat /usr/share/nginx/html/index.html
hello rook
[root@k8s-master-4 ~]# kubectl exec -it web-7bf54cbc8d-jbp8z -- cat /usr/share/nginx/html/index.html
hello rook

9. Dynamic PVC expansion

  • Expanding shared-filesystem PVCs requires k8s 1.15+
  • Expanding block-storage PVCs requires k8s 1.16+

PVC expansion requires the ExpandCSIVolumes feature gate. Newer k8s versions enable it by default; check whether your version already has it enabled:

If its default is true, nothing needs to be turned on; if the default is false, you need to enable the feature gate.


  • The StorageClass must also support volume expansion (allowVolumeExpansion: true).


Expanding a PVC

Before expansion this PVC was 1Gi (no screenshot was taken beforehand); after expansion it is 5Gi.
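Expanding is just a matter of raising the PVC's requested size; a minimal sketch (the PVC name follows the StatefulSet example above):

kubectl patch pvc www-web-0 -p '{"spec":{"resources":{"requests":{"storage":"5Gi"}}}}'
# once the CSI driver finishes the resize, CAPACITY should show 5Gi
kubectl get pvc www-web-0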


10. PVC snapshots (block storage snapshots)

As of k8s 1.20, the snapshot API has reached GA.

A PVC snapshot is similar to a VM snapshot: it makes a copy of the current data for backup purposes.

snapshotclass.yaml

# 1.17 <= K8s <= v1.19
# apiVersion: snapshot.storage.k8s.io/v1beta1
# K8s >= v1.20
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-rbdplugin-snapclass
driver: rook-ceph.rbd.csi.ceph.com # driver:namespace:operator
parameters:
  # Specify a string that identifies your cluster. Ceph CSI supports any
  # unique string. When Ceph CSI is deployed by Rook use the Rook namespace,
  # for example "rook-ceph".
  clusterID: rook-ceph # namespace:cluster
  csi.storage.k8s.io/snapshotter-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/snapshotter-secret-namespace: rook-ceph # namespace:cluster
deletionPolicy: Delete

snapshot.yaml

# 1.17 <= K8s <= v1.19
# apiVersion: snapshot.storage.k8s.io/v1beta1
# K8s >= v1.20
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: rbd-pvc-snapshot
spec:
  volumeSnapshotClassName: csi-rbdplugin-snapclass
  source:
    persistentVolumeClaimName: www-web-0

The snapshot is defined with kind VolumeSnapshot, and its source is an existing PVC. Note that PVCs are namespaced, so the VolumeSnapshot must be created in the same namespace as the PVC, otherwise it cannot be Bound.

Create the snapshot

kubectl apply -f snapshotclass.yaml
kubectl apply -f snapshot.yaml

kubectl get volumesnapshot
[root@k8s-master-4 ~/rook/cluster/examples/kubernetes/ceph/csi/rbd]# kubectl get volumesnapshot
NAME               READYTOUSE   SOURCEPVC   SOURCESNAPSHOTCONTENT   RESTORESIZE   SNAPSHOTCLASS             SNAPSHOTCONTENT                                    CREATIONTIME   AGE
rbd-pvc-snapshot   true         www-web-0                           1Gi           csi-rbdplugin-snapclass   snapcontent-a288e630-226e-47bb-835a-a923625bb843   25s            26s

Create a PVC from the snapshot

pvc-restore.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc-restore
spec:
  storageClassName: rook-ceph-block
  dataSource:
    name: rbd-pvc-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io/v1
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

One small change was made here: "snapshot.storage.k8s.io" was changed to "snapshot.storage.k8s.io/v1", otherwise an error was reported; in any case, adjust it according to your API version.
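Apply it and confirm the restored PVC binds (a minimal check):

kubectl apply -f pvc-restore.yaml
kubectl get pvc rbd-pvc-restore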
