Dynamically Provisioning Storage with StorageClass
The PV/PVC pattern described above requires creating PVs in advance and binding each PVC to a PV one-to-one. If there are thousands of PVC requests, thousands of PVs must be created by hand, which is a heavy maintenance burden for operators. Kubernetes therefore provides a mechanism for creating PVs automatically, called StorageClass, which acts as a template for PVs: a cluster administrator creates a StorageClass, and PVs are then generated dynamically to satisfy PVCs.
StorageClass Overview
Every StorageClass contains the fields provisioner, parameters, and reclaimPolicy.
Concretely, a StorageClass defines two things:
1. The attributes of the PV, such as storage size and type;
2. The storage plugin used to create such PVs, such as Ceph or NFS.
With these two pieces of information, Kubernetes can match a user-submitted PVC to the corresponding StorageClass, invoke the storage plugin that the StorageClass declares, and create the required PV.
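To make those fields concrete, here is a minimal sketch of a StorageClass manifest that ties the three fields together. The class name and the `archiveOnDelete` parameter are illustrative placeholders, not values used later in this walkthrough:

```shell
# Write a sketch StorageClass manifest; the class name and parameter
# shown here are placeholders for illustration only.
cat > /tmp/sc-sketch.yaml << 'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sketch-nfs
provisioner: example.com/nfs   # which storage plugin creates the PVs
parameters:
  archiveOnDelete: "false"     # plugin-specific PV attributes
reclaimPolicy: Delete          # what happens to a PV when its PVC is deleted
EOF
cat /tmp/sc-sketch.yaml
```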
StorageClass Example
Resource List
OS | Hostname | Spec | IP |
CentOS7.9.2009 | master | 2C4G | 192.168.207.131 |
CentOS7.9.2009 | node1 | 2C4G | 192.168.207.165 |
CentOS7.9.2009 | node2 | 2C4G | 192.168.207.166 |
CentOS7.9.2009 | nfs | 2C4G | 192.168.207.167 |
Prepare NFS
# Stop the firewall
systemctl stop firewalld
systemctl disable firewalld
# Disable SELinux
setenforce 0
sed -i "s/.*SELINUX=.*/SELINUX=disabled/g" /etc/selinux/config
# Install NFS
yum -y install nfs-utils rpcbind
mkdir -p /data/nfs_pro
cat > /etc/exports << EOF
/data/nfs_pro 192.168.207.0/24(rw,no_root_squash)
EOF
systemctl enable nfs --now
# Every node in the Kubernetes cluster needs the following packages to mount NFS
yum -y install nfs-utils rpcbind
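Before moving on, it is worth confirming on the NFS host that the export is actually visible. These are standard nfs-utils commands, run on the NFS server itself:

```shell
# Re-export everything in /etc/exports and list the result
exportfs -rv
# Query the export list as an NFS client would see it
showmount -e localhost
```

Both commands should list the exported directory with the 192.168.207.0/24 client range.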
Deploy the Provisioner
The YAML file below specifies the NFS server's address and which exported directory to use. Replace both with your own values.
[root@master ~]# cat nfs-provisioner.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: nfs-provisioner-clusterrolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: nfs-provisioner
  namespace: default
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-provisioner
spec:
  selector:
    matchLabels:
      app: nfs-provisioner
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-provisioner
    spec:
      serviceAccountName: nfs-provisioner
      containers:
      - name: nfs-provisioner
        image: registry.cn-beijing.aliyuncs.com/mydlq/nfs-subdir-external-provisioner:v4.0.0
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: example.com/nfs # the StorageClass refers to the provisioner by this name
        - name: NFS_SERVER
          value: 192.168.207.167
        - name: NFS_PATH
          value: /data/nfs_pro
      volumes:
      - name: nfs-client-root
        nfs:
          server: 192.168.207.167
          path: /data/nfs_pro
[root@master ~]# kubectl apply -f nfs-provisioner.yaml
serviceaccount/nfs-provisioner created
clusterrolebinding.rbac.authorization.k8s.io/nfs-provisioner-clusterrolebinding created
deployment.apps/nfs-provisioner created
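Before creating the StorageClass, it can help to verify that the provisioner pod is healthy (assuming it was deployed to the default namespace, as above):

```shell
# Check that the provisioner pod came up
kubectl get pods -l app=nfs-provisioner
# Its logs should show it starting and waiting to provision for example.com/nfs
kubectl logs -l app=nfs-provisioner --tail=20
```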
Create the StorageClass
[root@master ~]# cat StorageClass.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: nfs
# Note: the provisioner value example.com/nfs must match the value of the
# PROVISIONER_NAME env variable set when the nfs provisioner was installed
provisioner: example.com/nfs
allowVolumeExpansion: true # allow volumes of this class to be expanded
[root@master ~]# kubectl apply -f StorageClass.yaml
storageclass.storage.k8s.io/nfs created
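Optionally, you can mark this class as the cluster default so that PVCs which omit storageClassName also use it. This step is an extra convenience, not required by the rest of the walkthrough:

```shell
# Mark the nfs class as the cluster-wide default (optional)
kubectl patch storageclass nfs -p \
  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
# The default class is flagged as "(default)" in:
kubectl get storageclass
```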
Create a PVC
[root@master ~]# cat pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim1
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs
[root@master ~]# kubectl apply -f pvc.yaml
persistentvolumeclaim/test-claim1 created
[root@master ~]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
test-claim1 Bound pvc-8309b2ce-7475-429b-bf4c-557fa10b2a68 1Gi RWX nfs 7s
[root@master ~]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-8309b2ce-7475-429b-bf4c-557fa10b2a68 1Gi RWX Delete Bound default/test-claim1 nfs 8s
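On the NFS server, nfs-subdir-external-provisioner backs each dynamically created PV with a subdirectory under the exported path, named ${namespace}-${pvcName}-${pvName}. For the PVC above, the expected directory can be computed from the names in the output (a sketch; the PV name comes from the `kubectl get pv` output):

```shell
# Build the backing-directory path the provisioner is expected to create
ns=default
pvc=test-claim1
pv=pvc-8309b2ce-7475-429b-bf4c-557fa10b2a68
echo "/data/nfs_pro/${ns}-${pvc}-${pv}"
```

Running `ls` on that path on the NFS server should show the (initially empty) volume directory.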
Use the PVC in a Pod
[root@master ~]# cat test-pvc-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pvc-pod
  labels:
    app: busybox
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    imagePullPolicy: IfNotPresent
    command: ["sh","-c","sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /opt
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: test-claim1
[root@master ~]# kubectl apply -f test-pvc-pod.yaml
pod/test-pvc-pod created
[root@master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nfs-provisioner-5cb9c7bf59-zp2tx 1/1 Running 0 34m
test-pvc-pod 1/1 Running 0 34s
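To confirm that the mount really reaches the NFS server, you can write a file from inside the pod and look for it under the exported directory on the server (the marker filename here is arbitrary):

```shell
# Write a marker file through the PVC-backed mount
kubectl exec test-pvc-pod -- sh -c 'echo hello-from-pod > /opt/marker.txt'
# Then, on the NFS server, the file should appear in the PV's backing directory:
# ls /data/nfs_pro/default-test-claim1-*/marker.txt
```

Deleting the PVC will (with the Delete reclaim policy shown above) also remove the dynamically created PV.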