Persistent Volume
1. Concepts
PersistentVolume (PV)
A PersistentVolume is a piece of storage in the cluster that has been provisioned by an administrator. Just as a node is a cluster resource, a PV is a cluster resource. PVs are volume plugins like Volume, but they have a lifecycle independent of any individual Pod that uses them. This API object captures the details of the storage implementation, be that NFS, iSCSI, or a cloud-provider-specific storage system.
PersistentVolumeClaim (PVC)
A PersistentVolumeClaim is a request for storage by a user. It is similar to a Pod: Pods consume node resources, and PVCs consume PV resources. Pods can request specific levels of resources (CPU and memory); claims can request a specific size and access modes (for example, a volume can be mounted read/write once or read-only many times).
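As a minimal sketch of what such a claim looks like (the name example-claim is illustrative, not part of the demo below), a PVC requesting 1Gi with single-node read/write access:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim        # illustrative name
spec:
  accessModes:
    - ReadWriteOnce          # mounted read/write by a single node
  resources:
    requests:
      storage: 1Gi           # requested size; the bound PV may be larger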
Static PVs
A cluster administrator creates a number of PVs. They carry the details of the real storage that is available to cluster users. They exist in the Kubernetes API and are available for consumption.
Dynamic
When none of the administrator's static PVs match a user's PersistentVolumeClaim, the cluster may try to provision a volume dynamically for the PVC. This provisioning is based on StorageClasses: the PVC must request a storage class, and the administrator must have created and configured that class for dynamic provisioning to occur. Claims that request the class "" effectively disable dynamic provisioning for themselves. To enable class-based dynamic provisioning, the cluster administrator needs to enable the DefaultStorageClass admission controller on the API server, for example by making sure DefaultStorageClass appears in the comma-separated, ordered list of values for the API server's --admission-control flag (renamed --enable-admission-plugins in newer releases).
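For reference, a StorageClass is itself just an API object created by the administrator; a minimal sketch (the provisioner example.com/nfs is hypothetical, since NFS has no built-in dynamic provisioner, and this article uses static PVs only):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-dynamic            # class name that PVCs reference via storageClassName
provisioner: example.com/nfs   # hypothetical external provisioner
reclaimPolicy: Retain          # PVs created from this class are retained on release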
Binding
A control loop in the master watches for new PVCs, finds a matching PV where possible, and binds them together. If a PV was dynamically provisioned for a new PVC, the loop always binds that PV to that PVC. Otherwise, the user always gets at least the storage they requested, though the volume's capacity may exceed the request. Once a PV and a PVC are bound, the PersistentVolumeClaim binding is exclusive, regardless of how it was made: PVC-to-PV binding is a one-to-one mapping.
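To bypass the matching loop and bind a claim to one specific PV, a PVC can also name the volume directly; a sketch (assuming the PV nfspv1 defined later in this article):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: direct-claim           # illustrative name
spec:
  storageClassName: nfs
  accessModes:
    - ReadWriteOnce
  volumeName: nfspv1           # bind to this PV explicitly, skipping matching
  resources:
    requests:
      storage: 1Gi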
2. Persistence Demo: NFS
Ⅰ. Install an NFS server on the Harbor node
We use the Harbor node as the NFS storage server.
[root@hub harbor]# yum install -y nfs-utils rpcbind
# Create the storage directories /nfsdata1 /nfsdata2 /nfsdata3 /nfsdata4
[root@hub harbor]# mkdir /nfsdata{1..4}
# Check the created directories
[root@hub ~]# cd /
[root@hub /]# ls
bin boot data dev etc home lib lib64 media mnt nfsdata1 nfsdata2 nfsdata3 nfsdata4 opt proc root run sbin srv sys tmp usr var
# Grant permissions
[root@hub /]# chmod 777 nfsdata1 nfsdata2 nfsdata3 nfsdata4
# Change the owner
[root@hub /]# chown nfsnobody nfsdata1 nfsdata2 nfsdata3 nfsdata4
# Configure the exported directories (rw: read-write; no_root_squash: do not remap remote root; sync: commit writes to disk synchronously)
[root@hub ~]# vim /etc/exports
[root@hub /]# cat /etc/exports
/nfsdata1 *(rw,no_root_squash,no_all_squash,sync)
/nfsdata2 *(rw,no_root_squash,no_all_squash,sync)
/nfsdata3 *(rw,no_root_squash,no_all_squash,sync)
/nfsdata4 *(rw,no_root_squash,no_all_squash,sync)
[root@hub /]# systemctl start rpcbind
[root@hub /]# systemctl start nfs
# After modifying /etc/exports, restart the service so the changes take effect
[root@hub /]# systemctl restart nfs
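Before moving on, it is worth confirming the exports from the server side; a quick check (shown without output, which varies by setup):
# Re-export everything in /etc/exports without a full restart, verbosely
[root@hub /]# exportfs -rv
# List the directories this server currently exports
[root@hub /]# showmount -e localhost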
Install the NFS client on master1, node1 and node2:
# Install on all nodes
[root@k8s-master01 pv]# yum install -y nfs-utils rpcbind
Mount test
# Create a mount point
[root@k8s-master01 pv]# mkdir /nfstest1
# List the server's shared directories
[root@k8s-master01 ~]# showmount -e 192.168.1.100
Export list for 192.168.1.100:
/nfsdata1 *
# Mount the shared directory
[root@k8s-master01 ~]# mount -t nfs 192.168.1.100:/nfsdata1 /nfstest1
[root@k8s-master01 ~]# cd /nfstest1/
[root@k8s-master01 nfstest1]# vim 1.html
# Check on the harbor node to confirm the mount works
[root@hub ~]# ls /nfsdata1/
1.html
# Leave the /nfstest1 directory first, then unmount
[root@k8s-master01 nfstest1]# cd
[root@k8s-master01 ~]# umount /nfstest1/
[root@k8s-master01 ~]# rm -rf /nfstest1/
Ⅱ. Deploy the PVs
Create PVs named nfspv1, nfspv2, nfspv3 and nfspv4, bound to the NFS shared directories /nfsdata1, /nfsdata2, /nfsdata3 and /nfsdata4 on server 192.168.1.100.
pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfspv1
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /nfsdata1
    server: 192.168.1.100
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfspv2
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /nfsdata2
    server: 192.168.1.100
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfspv3
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: slow
  nfs:
    path: /nfsdata3
    server: 192.168.1.100
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfspv4
spec:
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /nfsdata4
    server: 192.168.1.100
Create the PVs:
[root@k8s-master01 pv]# vim pv.yaml
[root@k8s-master01 pv]# kubectl apply -f pv.yaml
persistentvolume/nfspv1 created
persistentvolume/nfspv2 created
persistentvolume/nfspv3 created
persistentvolume/nfspv4 created
[root@k8s-master01 pv]# kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
nfspv1   1Gi        RWO            Retain           Available           nfs                     6s
nfspv2   2Gi        RWO            Retain           Available           nfs                     6s
nfspv3   1Gi        RWO            Retain           Available           slow                    6s
nfspv4   3Gi        RWO            Retain           Available           nfs                     6s
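With several PVs it can be handy to list them ordered by declared size when predicting which volume a claim will match; a small sketch using kubectl's --sort-by flag:
# List PVs ordered by their declared capacity
[root@k8s-master01 pv]# kubectl get pv --sort-by=.spec.capacity.storage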
Ⅲ. Create the Service and use PVCs
A StatefulSet must be paired with a headless Service, so we first create the headless Service and then the StatefulSet. Save both manifests as pod.yaml:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
    - port: 80
      name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx"
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: wangyanglinux/myapp:v1
          ports:
            - containerPort: 80
              name: web
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
    - metadata:
        name: www
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: "nfs"
        resources:
          requests:
            storage: 1Gi
The PVs are matched against the PVCs and bound; here nfspv1 is bound to www-web-0.
[root@k8s-master01 pv]# kubectl apply -f pod.yaml
[root@k8s-master01 pv]# kubectl get pod
NAME    READY   STATUS    RESTARTS   AGE
web-0   1/1     Running   0          3m39s
web-1   1/1     Running   0          3m19s
web-2   1/1     Running   0          3m17s
[root@k8s-master01 pv]# kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM               STORAGECLASS   REASON   AGE
nfspv1   1Gi        RWO            Retain           Bound       default/www-web-0   nfs                     97s
nfspv2   2Gi        RWO            Retain           Bound       default/www-web-1   nfs                     3m51s
nfspv3   1Gi        RWO            Retain           Available                       slow                    3m51s
nfspv4   3Gi        RWO            Retain           Bound       default/www-web-2   nfs                     3m51s
[root@k8s-master01 pv]# kubectl get pvc
NAME        STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
www-web-0   Bound    nfspv1   1Gi        RWO            nfs            21m
www-web-1   Bound    nfspv2   2Gi        RWO            nfs            11m
www-web-2   Bound    nfspv4   3Gi        RWO            nfs            11m
Note that nfspv3 stays Available: its storageClassName is slow, which does not match the class nfs requested by the claims. Also, www-web-2 asked for only 1Gi but was bound to the 3Gi nfspv4, since a matched PV's capacity may exceed the request. Check that nfspv1 is backed by the directory /nfsdata1:
[root@k8s-master01 pv]# kubectl describe pv nfspv1
Name: nfspv1
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"PersistentVolume","metadata":{"annotations":{},"name":"nfspv1"},"spec":{"accessModes":["ReadWriteOnce"],"capaci...
pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pv-protection]
StorageClass: nfs
Status: Bound
Claim: default/www-web-0
Reclaim Policy: Retain
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 1Gi
Node Affinity: <none>
Message:
Source:
Type: NFS (an NFS mount that lasts the lifetime of a pod)
Server: 192.168.1.100
Path: /nfsdata1
ReadOnly: false
Events: <none>
On the harbor node, create index.html in the /nfsdata1 directory with the content "nfsdata1".
[root@hub /]# echo nfsdata1 > /nfsdata1/index.html
# View the index.html mounted in the pod
[root@k8s-master01 pv]# kubectl get pod -o wide
NAME    READY   STATUS    RESTARTS   AGE   IP             NODE         NOMINATED NODE   READINESS GATES
web-0   1/1     Running   0          22m   10.244.2.139   k8s-node02   <none>           <none>
web-1   1/1     Running   0          22m   10.244.1.139   k8s-node01   <none>           <none>
web-2   1/1     Running   0          22m   10.244.2.140   k8s-node02   <none>           <none>
[root@k8s-master01 pv]# curl 10.244.2.139
nfsdata1
# Test: after the pod is deleted, the file is not lost
[root@k8s-master01 pv]# kubectl delete pod web-0
pod "web-0" deleted
[root@k8s-master01 pv]# kubectl get pod -o wide
NAME    READY   STATUS    RESTARTS   AGE   IP             NODE         NOMINATED NODE   READINESS GATES
web-0   1/1     Running   0          7s    10.244.2.141   k8s-node02   <none>           <none>
web-1   1/1     Running   0          33m   10.244.1.139   k8s-node01   <none>           <none>
web-2   1/1     Running   0          33m   10.244.2.140   k8s-node02   <none>           <none>
[root@k8s-master01 pv]# curl 10.244.2.141
nfsdata1
3. About StatefulSet
- The pattern for the Pod name (its network identity) is $(statefulset name)-$(ordinal); in the example above: web-0, web-1, web-2.
- The StatefulSet creates a DNS domain name for each Pod replica, in the form $(podname).$(headless service name). This means services talk to each other through Pod domain names rather than Pod IPs: when a Pod's node fails, the Pod drifts to another node and its IP changes, but its domain name stays the same.
- The StatefulSet uses the headless Service to control the Pods' domain names; the FQDN of this domain is $(service name).$(namespace).svc.cluster.local, where "cluster.local" is the cluster domain.
- Based on volumeClaimTemplates, a PVC is created for each Pod, named after the pattern $(volumeClaimTemplates.name)-$(pod name). In the example above, volumeClaimTemplates.name=www and the Pod names are web-[0-2], so the PVCs created are www-web-0, www-web-1 and www-web-2.
- Deleting a Pod does not delete its PVC; manually deleting the PVC automatically releases the PV.
Verify DNS resolution of the Pod domain names
[root@k8s-master01 pv]# kubectl exec -it web-1 -- /bin/sh
/ # ping web-0.nginx
PING web-0.nginx (10.244.2.141): 56 data bytes
64 bytes from 10.244.2.141: seq=0 ttl=62 time=1.243 ms
64 bytes from 10.244.2.141: seq=1 ttl=62 time=0.939 ms
64 bytes from 10.244.2.141: seq=2 ttl=62 time=2.056 ms
[root@k8s-master01 pv]# dig -t A nginx.default.svc.cluster.local. @10.244.0.68
; <<>> DiG 9.11.4-P2-RedHat-9.11.4-26.P2.el7_9.5 <<>> -t A nginx.default.svc.cluster.local. @10.244.0.68
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 62409
;; flags: qr aa rd; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;nginx.default.svc.cluster.local. IN A
;; ANSWER SECTION:
nginx.default.svc.cluster.local. 30 IN A 10.244.2.141
nginx.default.svc.cluster.local. 30 IN A 10.244.1.139
nginx.default.svc.cluster.local. 30 IN A 10.244.2.140
;; Query time: 1 msec
;; SERVER: 10.244.0.68#53(10.244.0.68)
;; WHEN: Sat Nov 27 22:40:22 CST 2021
;; MSG SIZE rcvd: 201
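The same approach resolves a single Pod's stable name, $(podname).$(service name).$(namespace).svc.cluster.local; a sketch assuming the same CoreDNS pod IP as above:
# Resolve web-0's stable per-pod DNS record
[root@k8s-master01 pv]# dig -t A web-0.nginx.default.svc.cluster.local. @10.244.0.68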
StatefulSet start and stop ordering:
- Ordered deployment: when a StatefulSet with multiple Pod replicas is deployed, the Pods are created sequentially (from 0 to N-1), and every earlier Pod must be Running and Ready before the next one starts.
- Ordered deletion: when the Pods are deleted, they are terminated in reverse order, from N-1 down to 0.
- Ordered scaling: when scaling the Pods out, as with deployment, every Pod ahead of a new one must be Running and Ready (see the scaling sketch after this list).
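These guarantees can be observed by scaling the example above; a sketch (note that in this particular demo the new replica's claim, www-web-3, would stay Pending, because no remaining PV carries storageClassName nfs):
# Scale up: web-3 is created only after web-0..web-2 are Running and Ready
[root@k8s-master01 pv]# kubectl scale statefulset web --replicas=4
# Scale back down: Pods are terminated in reverse ordinal order (web-3 first)
[root@k8s-master01 pv]# kubectl scale statefulset web --replicas=3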
StatefulSet use cases:
- Stable persistent storage: a Pod can still reach the same persisted data after being rescheduled, implemented with PVCs.
- Stable network identity: a Pod's PodName and HostName do not change after rescheduling.
- Ordered deployment and ordered scaling, implemented with init containers.
- Ordered scale-down.
4. Delete the resources
[root@k8s-master01 pv]# kubectl delete -f pod.yaml
service "nginx" deleted
statefulset.apps "web" deleted
[root@k8s-master01 pv]# kubectl get pvc
NAME        STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
www-web-0   Bound    nfspv1   1Gi        RWO            nfs            130m
www-web-1   Bound    nfspv2   2Gi        RWO            nfs            120m
www-web-2   Bound    nfspv4   3Gi        RWO            nfs            120m
[root@k8s-master01 pv]# kubectl delete pvc --all
persistentvolumeclaim "www-web-0" deleted
persistentvolumeclaim "www-web-1" deleted
persistentvolumeclaim "www-web-2" deleted
# Check the PV status: the bindings have been released, but the resources have not been reclaimed yet
[root@k8s-master01 pv]# kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM               STORAGECLASS   REASON   AGE
nfspv1   1Gi        RWO            Retain           Released    default/www-web-0   nfs                     121m
nfspv2   2Gi        RWO            Retain           Released    default/www-web-1   nfs                     123m
nfspv3   1Gi        RWO            Retain           Available                       slow                    123m
nfspv4   3Gi        RWO            Retain           Released    default/www-web-2   nfs                     123m
# The claimRef in nfspv1's description still carries the former consumer's binding information
[root@k8s-master01 pv]# kubectl get pv nfspv1 -o yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"PersistentVolume","metadata":{"annotations":{},"name":"nfspv1"},"spec":{"accessModes":["ReadWriteOnce"],"capacity":{"storage":"1Gi"},"nfs":{"path":"/nfsdata1","server":"192.168.1.100"},"persistentVolumeReclaimPolicy":"Retain","storageClassName":"nfs"}}
    pv.kubernetes.io/bound-by-controller: "yes"
  creationTimestamp: "2021-11-27T12:57:15Z"
  finalizers:
  - kubernetes.io/pv-protection
  name: nfspv1
  resourceVersion: "649701"
  selfLink: /api/v1/persistentvolumes/nfspv1
  uid: ebe16422-ba21-4ac0-a8bc-b690f5f251a6
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: www-web-0
    namespace: default
    resourceVersion: "637996"
    uid: 07ea5dad-3b06-4b99-a92a-c09f69a4465f
  nfs:
    path: /nfsdata1
    server: 192.168.1.100
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  volumeMode: Filesystem
status:
  phase: Released
# Edit nfspv1 and delete the claimRef section
[root@k8s-master01 pv]# kubectl edit pv nfspv1
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"PersistentVolume","metadata":{"annotations":{},"name":"nfspv1"},"spec":{"accessModes":["ReadWriteOnce"],"capacity":{"storage":"1Gi"},"nfs":{"path":"/nfsdata1","server":"192.168.1.100"},"persistentVolumeReclaimPolicy":"Retain","storageClassName":"nfs"}}
    pv.kubernetes.io/bound-by-controller: "yes"
  creationTimestamp: "2021-11-27T12:57:15Z"
  finalizers:
  - kubernetes.io/pv-protection
  name: nfspv1
  resourceVersion: "649701"
  selfLink: /api/v1/persistentvolumes/nfspv1
  uid: ebe16422-ba21-4ac0-a8bc-b690f5f251a6
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi
  nfs:
    path: /nfsdata1
    server: 192.168.1.100
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  volumeMode: Filesystem
status:
  phase: Released
# Check nfspv1 again; the consumer information has been removed
[root@k8s-master01 pv]# kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM               STORAGECLASS   REASON   AGE
nfspv1   1Gi        RWO            Retain           Available                       nfs                     130m
nfspv2   2Gi        RWO            Retain           Released    default/www-web-1   nfs                     132m
nfspv3   1Gi        RWO            Retain           Available                       slow                    132m
nfspv4   3Gi        RWO            Retain           Released    default/www-web-2   nfs                     132m
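Editing with kubectl edit works, but the same claimRef removal can be done non-interactively; a sketch using a JSON patch (shown here for nfspv2):
# Drop the stale claimRef so the Released PV becomes Available again
[root@k8s-master01 pv]# kubectl patch pv nfspv2 --type=json -p '[{"op":"remove","path":"/spec/claimRef"}]'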
A fix for PVs that cannot be deleted in Kubernetes
If kubectl delete pv leaves a PV stuck in the Terminating state, clearing its finalizers lets the deletion complete:
$ kubectl patch pv nfspv1 -p '{"metadata":{"finalizers":null}}'
persistentvolume/nfspv1 patched
$ kubectl get pv
No resources found.