On "Delete" and "Retain" in Kubernetes Storage Classes

About Kubernetes storage classes

Using an NFS-type storage class

Preparing the base environment

Have an existing Kubernetes cluster ready.

Creating the NFS storage class

## Create a StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "true"  ## whether to archive the PV's contents when the PV is deleted

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/nfs-subdir-external-provisioner:v4.0.2
          # resources:
          #    limits:
          #      cpu: 10m
          #    requests:
          #      cpu: 10m
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 20.88.10.31 ## your NFS server address
            - name: NFS_PATH
              value: /nfs/data  ## directory exported by the NFS server
      volumes:
        - name: nfs-client-root
          nfs:
            server: 20.88.10.31 ## your NFS server address
            path: /nfs/data ## directory exported by the NFS server
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
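Saving all of the above into a single manifest (the file name sc-nfs.yaml is only an assumed example) lets us apply and verify it with standard kubectl commands:

kubectl apply -f sc-nfs.yaml
# the class should exist and be marked (default)
kubectl get storageclass nfs-storage
# the provisioner Pod should be Running
kubectl get pods -l app=nfs-client-provisioner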

Creating the service

# Create a Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: default
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
        - name: html
          persistentVolumeClaim:
            claimName: html
      containers:
        - name: nginx
          image: 'nginx:1.14.2'
          ports:
            - containerPort: 80
              protocol: TCP
          volumeMounts:
            - name: html
              mountPath: /usr/share/nginx/html
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
      restartPolicy: Always
---
# Create a PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: html
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs-storage
  volumeMode: Filesystem
---
# Create a Service
kind: Service
apiVersion: v1
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
    - name: web
      protocol: TCP
      port: 80
      targetPort: web
  selector:
    app: nginx
  # use the LoadBalancer type
  type: LoadBalancer
  sessionAffinity: ClientIP
  externalTrafficPolicy: Cluster
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
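Applying this manifest (the file name app.yaml is assumed here) should leave the PVC bound and a PV dynamically provisioned by the storage class:

kubectl apply -f app.yaml
kubectl get pvc html   # STATUS should be Bound
kubectl get pv         # a pvc-... volume created by nfs-storage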

Prepare an index.html file

<html>
  <body>
    <h1>hello,world!</h1>
  </body>
</html>

Make sure the page is reachable.
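One way to do this (a sketch: the provisioned subdirectory name on the NFS server is illustrative, since the provisioner derives it from the namespace, PVC name, and PV name) is to drop the file into the PV's backing directory and curl the Service:

# on the NFS server: copy the page into the provisioned subdirectory
cp index.html /nfs/data/default-html-pvc-<id>/
# from a client: look up the Service address and fetch the page
kubectl get svc nginx
curl http://<service-address>/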

Hands-on (NFS storage class)

persistentVolumeReclaimPolicy: Delete

The PV reclaim policy is left at its default, persistentVolumeReclaimPolicy: Delete.

First scale the Pods down to 0, delete the corresponding PVC, and then create a new PVC from the manifest above.
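A minimal command sequence for this test, reusing the names from the manifests above (app.yaml is the assumed file name):

kubectl scale deployment nginx-deployment --replicas=0
kubectl delete pvc html
kubectl apply -f app.yaml   # recreates the PVC, which provisions a fresh PV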

The original data can no longer be found in the container, but it is still stored on the NFS server under a directory named archived-<namespace>-<volume-name>, which holds the backed-up data. A new PV was also created.
Testing shows that even when an empty volume is deleted, a backup directory still appears under the export.

The directory backing the original PV on the host was deleted as well; only the backup remains, because archiving was enabled in the configuration:

# the parameter that enables archiving on delete
parameters:
  archiveOnDelete: "true"
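On the NFS server, the effect looks roughly like this (directory names are illustrative; the provisioner builds them from the namespace, PVC name, and PV name):

ls /nfs/data
# archived-default-html-pvc-<old-id>   <- backup of the deleted volume
# default-html-pvc-<new-id>            <- freshly provisioned volume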

persistentVolumeReclaimPolicy: Retain

Change the StorageClass manifest to:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "true"  ## whether to archive the PV's contents when the PV is deleted
reclaimPolicy: Retain ## default reclaim policy for PVs provisioned by this class

Delete the existing StorageClass, then recreate it (sketched below).

Deleting the StorageClass likewise does not delete the files on the host.
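Since reclaimPolicy (like provisioner and parameters) cannot be updated on an existing StorageClass object, the recreate is a delete-and-apply:

kubectl delete storageclass nfs-storage
kubectl apply -f sc-nfs.yaml   # the file updated with reclaimPolicy: Retain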

After switching the storage class to persistentVolumeReclaimPolicy: Retain, recreate the PVC and Pod and write some data into the directory.

Then delete the Pod and the PVC. Looking at the PV now, it was not deleted as before; instead it moved to the Released state!
Meanwhile, the corresponding PVC directory still exists on the host.
Now recreate a PVC using the same manifest as before.
The original PV still exists in the Released state, and the PVC automatically provisioned a new PV; needless to say, the new one is empty.

Next, delete the PVC and recreate it, this time specifying the PV that was created the first time.

# use a specific PV
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: html
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs-storage
  ## specify the PV name
  volumeName: pvc-ad855ee0-3013-47e6-bd95-bd95394db606
  volumeMode: Filesystem

However, the PVC created this way stays in the Pending state, because the PV is in the Released state; manual intervention is needed to bring it back to Available.

kubectl edit pv/pvc-ad855ee0-3013-47e6-bd95-bd95394db606
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 1Gi
  # delete everything under this field
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: html
    namespace: default
    resourceVersion: "494860"
    uid: ad855ee0-3013-47e6-bd95-bd95394db606
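Instead of an interactive edit, the same cleanup can be done with a one-line JSON patch that removes the claimRef (same PV name as above):

kubectl patch pv pvc-ad855ee0-3013-47e6-bd95-bd95394db606 \
  --type=json -p '[{"op": "remove", "path": "/spec/claimRef"}]'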

Check again:

[root@h1 ~]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM          STORAGECLASS   REASON   AGE
pvc-97b15824-54d0-4266-8bbf-094fb6a133ba   1Gi        RWX            Retain           Released    default/html   nfs-storage             38m
pvc-ad855ee0-3013-47e6-bd95-bd95394db606   1Gi        RWX            Retain           Available                  nfs-storage             44m
[root@h1 ~]# 

The PVC is bound as well.
Exec into the Pod and check: the data is still there (see the sketch below).
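For example (deployment name and mount path taken from the manifest above):

kubectl exec deploy/nginx-deployment -- cat /usr/share/nginx/html/index.html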

StorageClass summary

Storage classes support only two reclaim policies, Delete and Retain; anything else (such as Recycle) is rejected:

The StorageClass "nfs-storage" is invalid: reclaimPolicy: Unsupported value: "Recycle": supported values: "Delete", "Retain"

If the storage class is created with archiveOnDelete: "true", then deleting the PVC also deletes the PV, but the data stored on the host is not deleted; the directory is instead renamed with an archived- prefix.
If that parameter is not set, deleting the PVC really does delete all the data.
If persistentVolumeReclaimPolicy is set to Retain, the PV is not deleted when the PVC is deleted; it sits in the Released state, and an administrator must manually edit it before it returns to Available and can be bound again. A PV in the Released state is unusable.

When a PV is in the Released state, manual intervention is required before it can return to Available or be deleted.
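Relatedly, the reclaim policy of an individual, already-provisioned PV can be changed in place with a standard patch (the PV name here is taken from the listing above):

kubectl patch pv pvc-97b15824-54d0-4266-8bbf-094fb6a133ba \
  -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'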

