Statically provisioning an NFS storage volume in Kubernetes
For dynamically provisioned storage volumes, see:
https://blog.csdn.net/networken/article/details/86697018
Setting up the NFS server
Pick a node to run the NFS server; here it is deployed temporarily on the master01 node.
#install nfs-utils
yum -y install nfs-utils
#create the shared directory and open up its permissions
mkdir -p /nfsshare
chmod -R 777 /nfsshare
#append the export entry, reload the exports, and verify
cat >> /etc/exports << EOF
/nfsshare *(rw,no_root_squash,sync)
EOF
exportfs -arv
exportfs -s
#enable and start the rpcbind and nfs-server services
systemctl enable --now rpcbind
systemctl enable --now nfs-server
#install the NFS client tools on every node
yum -y install nfs-utils
From a client, check the file systems exported by the server:
#test with showmount
showmount -e 192.168.92.56
As preparation, we have already set up an NFS server on the k8s-master node exporting the directory /nfs/data.
Add one directory per PV to serve as a mount point; since two PVs are created here, two directories are added.
#create the directories backing the PVs
mkdir -p /nfs/data/pv001
mkdir -p /nfs/data/pv002
#edit /etc/exports
vim /etc/exports
/nfs/data *(rw,no_root_squash,sync)
/nfs/data/pv001 *(rw,no_root_squash,sync)
/nfs/data/pv002 *(rw,no_root_squash,sync)
#apply the configuration
exportfs -avr
#restart the rpcbind and nfs-server services
systemctl restart rpcbind && systemctl restart nfs-server
Creating the PVs
The following creates two PVs named nfs-pv001 and nfs-pv002. The configuration file nfs-pv001.yaml is as follows:
[centos@k8s-master ~]$ vim nfs-pv001.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv001
  labels:
    pv: nfs-pv001
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs
  nfs:
    path: /nfs/data/pv001
    server: 192.168.92.56
nfs-pv002.yaml is as follows:
[centos@k8s-master ~]$ vim nfs-pv002.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv002
  labels:
    pv: nfs-pv002
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs
  nfs:
    path: /nfs/data/pv002
    server: 192.168.92.56
Configuration notes:
① capacity sets the PV's capacity to 1Gi.
② accessModes sets the access mode to ReadWriteOnce. The supported access modes are:
- ReadWriteOnce – the PV can be mounted read-write by a single node.
- ReadOnlyMany – the PV can be mounted read-only by many nodes.
- ReadWriteMany – the PV can be mounted read-write by many nodes.
③ persistentVolumeReclaimPolicy sets the reclaim policy to Recycle. The supported policies are:
- Retain – the volume must be reclaimed manually by an administrator.
- Recycle – scrubs the data in the PV, equivalent to running rm -rf /thevolume/*.
- Delete – deletes the corresponding asset on the storage provider, e.g. an AWS EBS, GCE PD, Azure Disk, or OpenStack Cinder volume.
④ storageClassName sets the PV's class to nfs. This effectively categorizes the PV, and a PVC can request a PV of a particular class by naming it.
⑤ nfs specifies the PV's backing directory and server on the NFS side.
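To illustrate the other options listed above, a PV combining the Retain reclaim policy with the ReadWriteMany access mode might look like the following sketch (the name nfs-pv-retain and the path /nfs/data/retain are hypothetical and not part of this walkthrough):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-retain        # hypothetical name, for illustration only
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany          # many nodes may mount it read-write (NFS supports this)
  persistentVolumeReclaimPolicy: Retain   # data is kept; an admin reclaims it manually
  storageClassName: nfs
  nfs:
    path: /nfs/data/retain   # hypothetical export path
    server: 192.168.92.56
```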
Create the PVs:
[centos@k8s-master ~]$ kubectl apply -f nfs-pv001.yaml
persistentvolume/nfs-pv001 created
[centos@k8s-master ~]$ kubectl apply -f nfs-pv002.yaml
persistentvolume/nfs-pv002 created
[centos@k8s-master ~]$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
nfs-pv001 1Gi RWO Recycle Available nfs 4s
nfs-pv002 1Gi RWO Recycle Available nfs 2s
[centos@k8s-master ~]$
STATUS is Available, meaning both PVs are ready and can be claimed by a PVC.
Creating the PVCs
Next, create two PVCs named nfs-pvc001 and nfs-pvc002. The configuration file nfs-pvc001.yaml is as follows:
[centos@k8s-master ~]$ vim nfs-pvc001.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc001
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs
  selector:
    matchLabels:
      pv: nfs-pv001
The nfs-pvc002.yaml configuration file:
[centos@k8s-master ~]$ vim nfs-pvc002.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc002
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs
  selector:
    matchLabels:
      pv: nfs-pv002
Apply the YAML files to create the PVCs:
[centos@k8s-master ~]$ kubectl apply -f nfs-pvc001.yaml
persistentvolumeclaim/nfs-pvc001 created
[centos@k8s-master ~]$ kubectl apply -f nfs-pvc002.yaml
persistentvolumeclaim/nfs-pvc002 created
[centos@k8s-master ~]$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
nfs-pvc001 Bound nfs-pv001 1Gi RWO nfs 6s
nfs-pvc002 Bound nfs-pv002 1Gi RWO nfs 3s
[centos@k8s-master ~]$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
nfs-pv001 1Gi RWO Recycle Bound default/nfs-pvc001 nfs 9m12s
nfs-pv002 1Gi RWO Recycle Bound default/nfs-pvc002 nfs 9m10s
[centos@k8s-master ~]$
The output of kubectl get pvc and kubectl get pv shows that nfs-pvc001 and nfs-pvc002 are bound to nfs-pv001 and nfs-pv002 respectively, so the claims succeeded. Note that each PVC is steered to its PV here through the labels selector; the selector can also be omitted, in which case the PVC binds to an arbitrary available PV that satisfies its request.
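For example, a minimal sketch of such a selector-less PVC under the same assumptions (the name nfs-pvc-any is hypothetical) would bind to whichever Available PV of class nfs offers at least 1Gi:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc-any          # hypothetical name, for illustration only
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs      # no selector: any Available nfs-class PV of >= 1Gi can match
```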
The storage can now be used in a Pod. The Pod configuration file nfs-pod001.yaml is as follows:
[centos@k8s-master ~]$ vim nfs-pod001.yaml
kind: Pod
apiVersion: v1
metadata:
  name: nfs-pod001
spec:
  containers:
    - name: myfrontend
      image: nginx
      volumeMounts:
        - mountPath: "/var/www/html"
          name: nfs-pv001
  volumes:
    - name: nfs-pv001
      persistentVolumeClaim:
        claimName: nfs-pvc001
nfs-pod002.yaml is as follows:
[centos@k8s-master ~]$ vim nfs-pod002.yaml
kind: Pod
apiVersion: v1
metadata:
  name: nfs-pod002
spec:
  containers:
    - name: myfrontend
      image: nginx
      volumeMounts:
        - mountPath: "/var/www/html"
          name: nfs-pv002
  volumes:
    - name: nfs-pv002
      persistentVolumeClaim:
        claimName: nfs-pvc002
The format is the same as for an ordinary Volume: the volumes section uses persistentVolumeClaim to refer to the Volumes claimed through nfs-pvc001 and nfs-pvc002.
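The same mechanism carries over unchanged to higher-level workloads; as a sketch, a Deployment mounting one of the claims above (the name nfs-deploy is hypothetical) could look like:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-deploy           # hypothetical name, for illustration only
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-deploy
  template:
    metadata:
      labels:
        app: nfs-deploy
    spec:
      containers:
        - name: myfrontend
          image: nginx
          volumeMounts:
            - mountPath: "/var/www/html"
              name: www
      volumes:
        - name: www
          persistentVolumeClaim:
            claimName: nfs-pvc001   # same claim as in the Pod example
```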
Apply the YAML files to create nfs-pod001 and nfs-pod002:
[centos@k8s-master ~]$ kubectl apply -f nfs-pod001.yaml
pod/nfs-pod001 created
[centos@k8s-master ~]$ kubectl apply -f nfs-pod002.yaml
pod/nfs-pod002 created
[centos@k8s-master ~]$ kubectl get pod
NAME READY STATUS RESTARTS AGE
nfs-client-provisioner-75bf876d88-sqqpv 1/1 Running 0 25m
nfs-pod001 1/1 Running 0 12s
nfs-pod002 1/1 Running 0 9s
[centos@k8s-master ~]$
Verify that the PVs are usable:
[centos@k8s-master ~]$ kubectl exec nfs-pod001 -- touch /var/www/html/index001.html
[centos@k8s-master ~]$ kubectl exec nfs-pod002 -- touch /var/www/html/index002.html
[centos@k8s-master ~]$ ls /nfs/data/pv001/
index001.html
[centos@k8s-master ~]$ ls /nfs/data/pv002/
index002.html
[centos@k8s-master ~]$
Enter a pod and inspect the mount:
[centos@k8s-master ~]$ kubectl exec -it nfs-pod001 -- /bin/bash
root@nfs-pod001:/# df -h
......
192.168.92.56:/nfs/data/pv001 47G 5.2G 42G 11% /var/www/html
......
root@nfs-pod001:/#
Deleting the PVs
Deleting a pod removes neither the PV nor the PVC, and the data on the NFS share is untouched.
[centos@k8s-master ~]$ kubectl delete -f nfs-pod001.yaml
pod "nfs-pod001" deleted
[centos@k8s-master ~]$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
nfs-pv001 1Gi RWO Recycle Bound default/nfs-pvc001 nfs 34m
nfs-pv002 1Gi RWO Recycle Bound default/nfs-pvc002 nfs 34m
[centos@k8s-master ~]$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
nfs-pvc001 Bound nfs-pv001 1Gi RWO nfs 25m
nfs-pvc002 Bound nfs-pv002 1Gi RWO nfs 25m
[centos@k8s-master ~]$ ls /nfs/data/pv001/
index001.html
[centos@k8s-master ~]$
Next delete the PVC. The PV is released back to the Available state, and because its reclaim policy is Recycle, the data on the NFS share is scrubbed.
[centos@k8s-master ~]$ kubectl delete -f nfs-pvc001.yaml
persistentvolumeclaim "nfs-pvc001" deleted
[centos@k8s-master ~]$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
nfs-pv001 1Gi RWO Recycle Available nfs 35m
nfs-pv002 1Gi RWO Recycle Bound default/nfs-pvc002 nfs 35m
[centos@k8s-master ~]$ ls /nfs/data/pv001/
[centos@k8s-master ~]$
Finally, delete the PV itself:
[centos@k8s-master ~]$ kubectl delete -f nfs-pv001.yaml
persistentvolume "nfs-pv001" deleted