Deploying NFS Persistent Storage on Kubernetes (Static and Dynamic)
NFS in brief
NFS stands for Network File System. An NFS server exports directories over the network; a client mounts those exports into its local file system, where the remote directory behaves just like a local disk partition.
Kubernetes can use NFS shared storage in two ways:
1. Static: manually create the required PVs and PVCs.
2. Dynamic: creating a PVC automatically provisions a matching PV; no manual PV creation is needed.
Deploying NFS
The cluster master node (192.168.5.11) acts as the NFS server here. For a quick test setup, you can run the NFS server in Docker:
docker run -d --name nfs-server \
--privileged \
--restart always \
-p 2049:2049 \
-v /nfs-share:/nfs-share \
-e SHARED_DIRECTORY=/nfs-share \
itsthenetwork/nfs-server-alpine:latest
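As a quick sanity check from another host (assuming nfs-utils is installed there): this image serves NFSv4 only, so mount the export root directly rather than relying on showmount. The mount point /mnt/nfs-test is just an example:
#mount the containerized server's NFSv4 export root (port 2049)
mkdir -p /mnt/nfs-test
mount -t nfs4 192.168.5.11:/ /mnt/nfs-test
touch /mnt/nfs-test/hello && ls /mnt/nfs-test
umount /mnt/nfs-test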
Deploying the NFS server manually
#install nfs on the master node
yum -y install nfs-utils
#create the nfs directory
mkdir -p /nfs/data/
#relax permissions
chmod -R 777 /nfs/data
#edit the exports file, NFS's default configuration file
vim /etc/exports
/nfs/data *(rw,no_root_squash,sync)
#apply the configuration
exportfs -r
#verify it took effect
exportfs
#start rpcbind and nfs and enable them at boot
systemctl restart rpcbind && systemctl enable rpcbind
systemctl restart nfs && systemctl enable nfs
#check the RPC service registration
rpcinfo -p localhost
#test with showmount
showmount -e 192.168.5.11
Install the NFS client on every node and enable it at boot
yum -y install nfs-utils
systemctl start nfs && systemctl enable nfs
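Optionally, verify from a node that the export is reachable before involving Kubernetes; a quick manual check (the mount point /mnt/test is just an example):
#manual mount test from a worker node
mkdir -p /mnt/test
mount -t nfs 192.168.5.11:/nfs/data /mnt/test
df -h /mnt/test
umount /mnt/test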
With that preparation done, we have an NFS server on master-1 (192.168.5.11) exporting the directory /nfs/data.
Static PVs
Create a directory for each PV to serve as its mount point. Here we create two: pv001 backs the static PV below, and pv002 will back the dynamic provisioner in the next section.
Create the NFS mount points
#create the per-PV directories
mkdir -p /nfs/data/pv001
mkdir -p /nfs/data/pv002
#update /etc/exports (this step can probably be skipped, since the parent directory /nfs/data is already exported)
vim /etc/exports
/nfs/data/pv001 *(rw,no_root_squash,sync)
/nfs/data/pv002 *(rw,no_root_squash,sync)
#apply the configuration
exportfs -r
#restart rpcbind and nfs
systemctl restart rpcbind && systemctl restart nfs
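To confirm that both exports are now active, list them verbosely:
exportfs -v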
Create the PV – nfs-pv001.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv001
  labels:
    pv: nfs-pv001
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs
  nfs:
    path: /nfs/data/pv001
    server: 192.168.5.11
Configuration notes:
① capacity sets the PV's size to 1Gi.
② accessModes sets the access mode to ReadWriteOnce. Supported modes are:
  - ReadWriteOnce – the PV can be mounted read-write by a single node.
  - ReadOnlyMany – the PV can be mounted read-only by many nodes.
  - ReadWriteMany – the PV can be mounted read-write by many nodes.
③ persistentVolumeReclaimPolicy sets the reclaim policy to Recycle. Supported policies are:
  - Retain – the administrator must reclaim the volume manually.
  - Recycle – scrubs the data in the PV, equivalent to running rm -rf /thevolume/*. (Recycle is deprecated in current Kubernetes releases in favor of dynamic provisioning.)
  - Delete – deletes the corresponding resource on the storage provider, e.g. an AWS EBS, GCE PD, Azure Disk, or OpenStack Cinder volume.
④ storageClassName sets the PV's class to nfs. This effectively categorizes the PV; a PVC can request a specific class and will only bind to PVs of that class.
⑤ The nfs block points the PV at its backing directory on the NFS server.
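If you later need to change the reclaim policy of an existing PV, for example switching from Recycle to Retain so the data survives PVC deletion, you can patch it in place (a standard kubectl pattern):
kubectl patch pv nfs-pv001 -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'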
Create the PV
[root@master-1 pv]# kubectl apply -f nfs-pv001.yml
persistentvolume/nfs-pv001 created
[root@master-1 pv]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
nfs-pv001 1Gi RWO Recycle Available nfs 7s
Create the PVC – nfs-pvc001.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc001
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs
  selector:
    matchLabels:
      pv: nfs-pv001
Create the PVC
[root@master-1 pv]# kubectl apply -f nfs-pvc001.yml
persistentvolumeclaim/nfs-pvc001 created
[root@master-1 pv]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
nfs-pvc001 Bound nfs-pv001 1Gi RWO nfs 7s
[root@master-1 pv]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
nfs-pv001 1Gi RWO Recycle Bound ingress-nginx/nfs-pvc001 nfs 26m
The kubectl get pvc and kubectl get pv output shows nfs-pvc001 bound to nfs-pv001: the claim succeeded. Note that the PVC was matched to this particular PV through the label selector; if you omit the selector, the claim binds to any available PV that satisfies its storage class, access mode, and capacity request.
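To check programmatically which PV a claim ended up bound to:
kubectl get pvc nfs-pvc001 -o jsonpath='{.spec.volumeName}'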
Using the storage in a Pod – nfs-pod001.yml
apiVersion: v1
kind: Pod
metadata:
  name: nfs-pod001
spec:
  containers:
    - name: frontend
      image: nginx
      volumeMounts:
        - mountPath: "/var/www/html"
          name: nfs-pv001
  volumes:
    - name: nfs-pv001
      persistentVolumeClaim:
        claimName: nfs-pvc001
Create the nginx Pod that uses the storage
[root@master-1 pv]# kubectl apply -f nfs-pod001.yml
pod/nfs-pod001 created
[root@master-1 pv]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nfs-pod001 1/1 Running 0 53s
Check how the PV is mounted inside the Pod
[root@master-1 pv]# kubectl exec -it nfs-pod001 /bin/bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@nfs-pod001:/# df -h /var/www/html
Filesystem Size Used Avail Use% Mounted on
192.168.5.11:/nfs/data/pv001 50G 4.0G 47G 8% /var/www/html
Create a file under the PV:
root@nfs-pod001:/var/www/html# echo "hello world!" >/var/www/html/index.html
Check the file on the master:
[root@master-1 pv]# ls /nfs/data/pv001/
index.html
Delete the Pod; the PV and PVC remain, and the data on the NFS share is preserved:
[root@master-1 pv]# kubectl delete -f nfs-pod001.yml
pod "nfs-pod001" deleted
[root@master-1 pv]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
nfs-pv001 1Gi RWO Recycle Bound ingress-nginx/nfs-pvc001 nfs 117m
[root@master-1 pv]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
nfs-pvc001 Bound nfs-pv001 1Gi RWO nfs 93m
[root@master-1 pv]# ls /nfs/data/pv001/
index.html
Next, delete the PVC. The PV is released back to the Available state and, because the reclaim policy is Recycle, the data on the NFS share is scrubbed:
[root@master-1 pv]# kubectl delete -f nfs-pvc001.yml
persistentvolumeclaim "nfs-pvc001" deleted
[root@master-1 pv]# ls /nfs/data/pv001/
[root@master-1 pv]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
nfs-pv001 1Gi RWO Recycle Available nfs 128m
Finally, delete the PV:
[root@master-1 pv]# kubectl delete -f nfs-pv001.yml
persistentvolume "nfs-pv001" deleted
Dynamic PVs
How the external NFS provisioners work
Kubernetes's external NFS provisioners fall into two classes, depending on whether they act as an NFS client or an NFS server:
1. nfs-client:
This is the one demonstrated below. It uses Kubernetes's built-in NFS volume driver to mount a remote NFS server into a local directory, then registers itself as a storage provider associated with a StorageClass. When a user creates a PVC requesting a PV, the provisioner compares the claim against its own properties and, on a match, creates a subdirectory for the PV inside the locally mounted NFS directory, providing dynamic storage to Pods.
2. nfs:
Unlike nfs-client, this driver does not mount a remote NFS server via Kubernetes's NFS driver and then subdivide it. Instead, it maps a local file system into the container and runs ganesha.nfsd inside the container to serve NFS itself; every time a PV is created, it creates the corresponding folder under its local NFS root and exports that subdirectory.
Providing dynamic Kubernetes storage backed by NFS
This section uses the nfs-client-provisioner application to turn an NFS server into a dynamically provisioned storage backend for Kubernetes. The prerequisites are an already running NFS server and network connectivity between that server and all Kubernetes worker nodes. The nfs-client driver is deployed into the cluster as a Deployment and then serves storage requests.
nfs-client-provisioner is a simple external provisioner for NFS. It does not provide NFS itself; an existing NFS server must supply the storage.
Deploy nfs-client-provisioner
(Run the following on the master, 192.168.5.11.)
First, clone the repository to get the YAML files, and stay in the directory containing deploy/ so the relative paths below resolve:
git clone https://github.com/kubernetes-incubator/external-storage.git
cp -R external-storage/nfs-client/deploy/ $HOME
cd $HOME
Create the RBAC objects – rbac.yaml
# Set the subject of the RBAC objects to the current namespace where the provisioner is being deployed
$ NS=$(kubectl config get-contexts|grep -e "^\*" |awk '{print $5}')
$ NAMESPACE=${NS:-default}
$ sed -i'' "s/namespace:.*/namespace: $NAMESPACE/g" ./deploy/rbac.yaml ./deploy/deployment.yaml
$ kubectl create -f deploy/rbac.yaml
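An optional sanity check that the objects from rbac.yaml were created (the resource names come from the upstream manifest):
kubectl get sa nfs-client-provisioner
kubectl get clusterrole,clusterrolebinding | grep nfs-client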
Configure the nfs-client provisioner
Configure deploy/class.yaml:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs # or choose another name; must match the deployment's PROVISIONER_NAME env var
parameters:
  archiveOnDelete: "false" # when set to "false", your PVs will not be archived
                           # by the provisioner upon deletion of the PVC
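Don't forget to create the StorageClass itself; optionally, you can also mark it as the cluster default so PVCs that omit a storage class use it (the patch below sets the standard annotation documented by Kubernetes):
kubectl create -f deploy/class.yaml
#optional: make managed-nfs-storage the default StorageClass
kubectl patch storageclass managed-nfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'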
Modify deployment.yaml
Two values need editing: the NFS server's IP address (192.168.5.11) and the exported path the provisioner carves PVs out of (/nfs/data/pv002 in this walkthrough). Change both to match your actual NFS server and share.
[root@master-1 deploy]# cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.5.11
            - name: NFS_PATH
              value: /nfs/data/pv002
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.5.11
            path: /nfs/data/pv002
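Apply the deployment and confirm the provisioner Pod comes up before testing (the label selector below matches the labels in the manifest above):
kubectl create -f deploy/deployment.yaml
kubectl get pods -l app=nfs-client-provisioner
kubectl logs deployment/nfs-client-provisioner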
Test the setup
Deploy the test claim and test Pod:
kubectl create -f deploy/test-claim.yaml -f deploy/test-pod.yaml
Check the NFS server: if a directory like the one below has appeared, provisioning works. (A directory name prefixed with archived- means its claim was deleted while archiveOnDelete was true; directories for live claims carry no prefix.)
[root@master-1 nfs-client]# cd /nfs/data/pv002
[root@master-1 pv002]# ls
archived-ingress-nginx-test-claim-pvc-efd702ba-102d-4d61-9a5f-6fc4805f12c0
Check the PVC and PV status:
[root@master-1 pv002]# kubectl get pvc,pv
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/test-claim Bound ingress-nginx-test-claim-pvc-efd702ba-102d-4d61-9a5f-6fc4805f12c0 1Mi RWX managed-nfs-storage 25m
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/ingress-nginx-test-claim-pvc-efd702ba-102d-4d61-9a5f-6fc4805f12c0 1Mi RWX Delete Bound ingress-nginx/test-claim managed-nfs-storage 19m
Clean up the test resources:
kubectl delete -f deploy/test-pod.yaml -f deploy/test-claim.yaml
Deploy your own PVC and Pod
Make sure the storage class name matches the one defined in deploy/class.yaml. The claim (saved here as, say, pvc.yaml):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sc-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-sc-pod
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: sc-claim
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage
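Apply both manifests and confirm the claim binds and the Pod can write to the share; a short session sketch (pvc.yaml is the hypothetical file name used for the claim above):
kubectl apply -f pvc.yaml -f pod.yaml
kubectl get pvc sc-claim
kubectl exec nginx-sc-pod -- sh -c 'echo ok > /usr/share/nginx/html/index.html'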
Troubleshooting: the PVC stays Pending after creation
- The PVC reports:
# kubectl describe pvc test-claim
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ExternalProvisioning 15s (x25 over 80s) persistentvolume-controller waiting for a volume to be created, either by external provisioner "fuseim.pri/ifs" or manually created by system administrator
- nfs-client-provisioner reports:
kubectl logs nfs-client-provisioner-5ff56b5cfc-fqnzv
E0205 02:12:39.764761 1 controller.go:756] Unexpected error getting claim reference to claim "ingress-nginx/test-claim": selfLink was empty, can't make reference
- The second error points at the cause: Kubernetes 1.20 stops serving selfLink by default. From the 1.20 changelog:
- Stop propagating SelfLink (deprecated in 1.16) in kube-apiserver (#94397, @wojtek-t) [SIG API Machinery and Testing]
Workaround:
Add "- --feature-gates=RemoveSelfLink=false" to /etc/kubernetes/manifests/kube-apiserver.yaml:
...
  - command:
    - kube-apiserver
    - --advertise-address=192.168.5.11
    - --allow-privileged=true
    - --feature-gates=RemoveSelfLink=false
...
Save and exit; the kubelet notices the change to the static Pod manifest and automatically recreates the apiserver.
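Note that the RemoveSelfLink feature gate was locked on in Kubernetes 1.24, so this workaround only works on 1.20–1.23. On newer clusters, a cleaner fix is to switch to the maintained successor project, nfs-subdir-external-provisioner, whose images no longer depend on selfLink; a sketch, assuming the deployment from above (verify the image tag against the project's releases before use):
kubectl set image deployment/nfs-client-provisioner \
  nfs-client-provisioner=registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2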