Table of Contents
About Kubernetes storage classes
Using an NFS-backed storage class
Preparing the base environment
Having an existing Kubernetes cluster ready
Creating the NFS storage class
## Create a StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "true" ## whether to archive a PV's contents when it is deleted
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/nfs-subdir-external-provisioner:v4.0.2
          # resources:
          #   limits:
          #     cpu: 10m
          #   requests:
          #     cpu: 10m
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 20.88.10.31 ## your own NFS server address
            - name: NFS_PATH
              value: /nfs/data ## the directory exported by the NFS server
      volumes:
        - name: nfs-client-root
          nfs:
            server: 20.88.10.31 ## your own NFS server address
            path: /nfs/data ## the directory exported by the NFS server
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
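With all of the objects above saved to a single file, the provisioner can be deployed and checked with a few kubectl commands. This is a sketch: `sc.yaml` is an assumed file name, not one from the original text.

```shell
# Apply the StorageClass, Deployment, ServiceAccount and RBAC objects
# (assumes the manifests above were saved as sc.yaml)
kubectl apply -f sc.yaml

# The class should appear and be marked (default)
kubectl get storageclass

# The provisioner pod should reach the Running state
kubectl get pods -l app=nfs-client-provisioner
```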
Create the service
# Create the Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: default
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
        - name: html
          persistentVolumeClaim:
            claimName: html
      containers:
        - name: nginx
          image: 'nginx:1.14.2'
          ports:
            - name: web ## named so the Service's targetPort can reference it
              containerPort: 80
              protocol: TCP
          volumeMounts:
            - name: html
              mountPath: /usr/share/nginx/html
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
      restartPolicy: Always
---
# Create the PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: html
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs-storage
  volumeMode: Filesystem
---
# Create the Service
kind: Service
apiVersion: v1
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
    - name: web
      protocol: TCP
      port: 80
      targetPort: web
  selector:
    app: nginx
  # Use a LoadBalancer-type Service
  type: LoadBalancer
  sessionAffinity: ClientIP
  externalTrafficPolicy: Cluster
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
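Assuming the three manifests above are saved as `app.yaml` (a hypothetical file name), applying them and checking that dynamic provisioning worked looks roughly like this:

```shell
kubectl apply -f app.yaml

# The PVC should become Bound, and a PV named pvc-<uid> should have been
# provisioned automatically by the nfs-storage class
kubectl get pvc html
kubectl get pv

# The Service should get an external IP from the load balancer
kubectl get svc nginx
```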
Prepare an index.html file
<html>
  <body>
    <h1>hello,world!</h1>
  </body>
</html>
Make sure the page is reachable.
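One way to get the test page into the volume and verify it (a sketch, assuming the nginx Pod from the Deployment above is running) is to copy it into the mounted directory and fetch it through the Service:

```shell
# Copy index.html into the volume mounted at /usr/share/nginx/html
POD=$(kubectl get pod -l app=nginx -o jsonpath='{.items[0].metadata.name}')
kubectl cp index.html "default/${POD}:/usr/share/nginx/html/index.html"

# Fetch the page through the Service from inside the cluster
kubectl run curl-test --rm -it --image=busybox --restart=Never -- \
  wget -qO- http://nginx.default.svc
```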
Hands-on (NFS storage class)
persistentVolumeReclaimPolicy: Delete
The PV reclaim policy uses the default: persistentVolumeReclaimPolicy: Delete.
First scale the Pod replicas to 0 and delete the corresponding PVC, then create a new PVC from the manifest above.
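The steps just described map to roughly these commands (names taken from the manifests above; `app.yaml` is an assumed file name):

```shell
# Scale the workload down so nothing holds the volume open
kubectl scale deployment nginx-deployment --replicas=0

# Delete the PVC; with reclaim policy Delete, its PV is removed too
kubectl delete pvc html

# Re-apply the same manifest; a fresh, empty PV is provisioned
kubectl apply -f app.yaml
```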
The original data can no longer be found in the container, but it is still stored on the NFS server in a directory named archived-<namespace>-<volume name>, which holds the archived data. A new PV was also created.
Testing shows that even when an empty volume is deleted, an archive still appears in that directory.
The original PV's directory on the host was deleted as well, leaving only the archive, because archiving was enabled in the manifest.
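The archived directory name is simply the provisioned directory name with an archived- prefix. A minimal sketch of that naming, using the PV name that appears later in this article (the exact per-volume directory layout is an assumption based on the behavior described above):

```shell
# The provisioner creates per-volume directories named <namespace>-<pvcName>-<pvName>;
# with archiveOnDelete enabled, deletion renames them with an "archived-" prefix.
ns="default"; pvc="html"; pv="pvc-ad855ee0-3013-47e6-bd95-bd95394db606"
dir="${ns}-${pvc}-${pv}"
echo "archived-${dir}"
```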
# The parameter that enables archiving
parameters:
  archiveOnDelete: "true"
persistentVolumeReclaimPolicy: Retain
Change the StorageClass manifest to:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "true" ## whether to archive a PV's contents when it is deleted
reclaimPolicy: Retain ## default reclaim policy for PVs created from this class
Delete the original StorageClass, then recreate it.
Deleting the StorageClass likewise does not delete the files on the host.
With the StorageClass changed to persistentVolumeReclaimPolicy: Retain, recreate the PVC and Pod and write some data into the directory.
Then delete the Pod and the PVC. Looking at the PV now, it was not deleted as it was last time; instead it moved to the Released state!
Meanwhile, the corresponding PVC directory still exists on the host.
Now recreate a PVC using the same manifest as before.
The original PV remains in the Released state, and the PVC automatically provisioned a new PV; needless to say, the new one is empty.
So we delete that PVC and instead explicitly specify the PV that was created the first time:
# Bind to a specific PV
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: html
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs-storage
  ## specify the PV by name
  volumeName: pvc-ad855ee0-3013-47e6-bd95-bd95394db606
  volumeMode: Filesystem
However, the PVC created this way stays in the Pending state, because the PV is in the Released state. It takes manual intervention to bring the PV back to Available:
kubectl edit pv/pvc-ad855ee0-3013-47e6-bd95-bd95394db606
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 1Gi
  # delete everything under claimRef
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: html
    namespace: default
    resourceVersion: "494860"
    uid: ad855ee0-3013-47e6-bd95-bd95394db606
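Instead of interactively deleting claimRef with kubectl edit, the same change can be scripted with kubectl patch (a sketch using a JSON patch; the PV name is the one from this article):

```shell
# Remove the stale claimRef so the Released PV becomes Available again
kubectl patch pv pvc-ad855ee0-3013-47e6-bd95-bd95394db606 \
  --type=json -p '[{"op": "remove", "path": "/spec/claimRef"}]'
```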
Check again:
[root@h1 ~]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM          STORAGECLASS   REASON   AGE
pvc-97b15824-54d0-4266-8bbf-094fb6a133ba   1Gi        RWX            Retain           Released    default/html   nfs-storage             38m
pvc-ad855ee0-3013-47e6-bd95-bd95394db606   1Gi        RWX            Retain           Available                  nfs-storage             44m
[root@h1 ~]#
The PVC is bound as well.
Exec into the Pod and check the data: it is still there.
Storage class summary
A StorageClass supports only two reclaim policies, Delete and Retain:
The StorageClass "nfs-storage" is invalid: reclaimPolicy: Unsupported value: "Recycle": supported values: "Delete", "Retain"
If the StorageClass is created with the archiveOnDelete: "true" parameter, then when a PVC is deleted its PV is deleted too, but the data stored on the host is not; the directory is renamed with an archived- prefix.
Without that setting, deleting the PVC really does delete all the data.
If persistentVolumeReclaimPolicy is set to Retain, deleting the PVC does not delete the PV. The PV moves to the Released state, and an administrator must manually edit its configuration before it returns to Available and can be bound again; while a PV is Released, it is unusable.
A PV in the Released state requires manual intervention before it can return to Available or be deleted.