StorageClass
Part 1: Using a StorageClass to create PVs automatically
Steps for having a StorageClass create PVs automatically
Provisioner: the storage provisioner, i.e. the component that supplies the backing storage
1. Enable the NFS service on the master and node machines
[root@master ~]# mkdir /nfsdata
[root@master ~]# yum -y install nfs-utils
[root@master ~]# cat /etc/exports
/nfsdata *(rw,sync,no_root_squash)
[root@master ~]# systemctl start rpcbind
[root@master ~]# systemctl start nfs-server
[root@master ~]# systemctl enable rpcbind
[root@master ~]# systemctl enable nfs-server
[root@master ~]# showmount -e
Export list for master:
/nfsdata *
2. Grant RBAC permissions
RBAC: Role-Based Access Control
[root@master sc]# vim rbac.yaml
kind: Namespace
apiVersion: v1
metadata:
  name: bdqn
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
  namespace: bdqn
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services", "endpoints"]
    verbs: ["get", "create", "list", "watch", "update"]
  - apiGroups: ["extensions"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["nfs-provisioner"]
    verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: bdqn
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
3. Create nfs-deployment.yaml
[root@master sc]# vim nfs-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  namespace: bdqn
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: bdqn-test
            - name: NFS_SERVER
              value: 192.168.229.187
            - name: NFS_PATH
              value: /nfsdata
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.229.187
            path: /nfsdata
Note the role of the nfs-client-provisioner image: using the NFS driver built into the Kubernetes cluster, it mounts the remote NFS server into a local directory, then registers itself as the storage provisioner that the StorageClass refers to.
4. Create the StorageClass resource
[root@master sc]# vim storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: storageclass
provisioner: bdqn-test
reclaimPolicy: Retain
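As an optional extra (not part of the original steps), a StorageClass can be marked as the cluster default, so PVCs that omit storageClassName still bind to it; a minimal sketch using the standard default-class annotation:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: storageclass
  annotations:
    # Standard annotation: makes this the default class for PVCs
    # that do not set storageClassName.
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: bdqn-test
reclaimPolicy: Retain
```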
5. Create a PVC to verify
[root@master sc]# vim test-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-pvc
  namespace: bdqn
spec:
  storageClassName: storageclass
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 200Mi
[root@master sc]# kubectl get pvc -n bdqn
[root@master sc]# ls /nfsdata/
bdqn-test-test-pvc-pvc-f37d2720-4f95-480d-b5d8-4baaa7490d1c
[root@master sc]# kubectl exec -n bdqn nfs-client-provisioner-9f5f7c889-9mpxg ls /persistentvolumes
bdqn-test-test-pvc-pvc-f37d2720-4f95-480d-b5d8-4baaa7490d1c
6. Create a Pod to test
[root@master sc]# vim pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
  namespace: bdqn
spec:
  containers:
    - name: test-pod
      image: busybox
      args:
        - /bin/sh
        - -c
        - sleep 3000
      volumeMounts:
        - name: nfs-pv
          mountPath: /test
  volumes:
    - name: nfs-pv
      persistentVolumeClaim:
        claimName: test-pvc
[root@master sc]# kubectl get pod -n bdqn
[root@master sc]# ls /nfsdata/
bdqn-test-test-pvc-pvc-f37d2720-4f95-480d-b5d8-4baaa7490d1c
[root@master sc]# echo 123456 > /nfsdata/bdqn-test-test-pvc-pvc-f37d2720-4f95-480d-b5d8-4baaa7490d1c/test.txt
[root@master sc]# kubectl exec -n bdqn test-pod cat /test/test.txt
123456
Part 2: A StatefulSet creating PVCs automatically
RC, Deployment, and DaemonSet are all aimed at stateless services: the IPs, names, and start/stop order of the Pods they manage are effectively random. A StatefulSet, by contrast, manages stateful services such as MySQL or MongoDB clusters.
A StatefulSet is essentially a variant of Deployment (GA since v1.9) designed for stateful services: the Pods it manages keep fixed names and a fixed start/stop order. In a StatefulSet the Pod name serves as the network identity (hostname), and shared storage is also required.
A Deployment is paired with a regular Service, whereas a StatefulSet is paired with a headless Service. Unlike a normal Service, a headless Service has no ClusterIP; resolving its name returns the endpoints of all the Pods behind it.
On top of the headless Service, the StatefulSet also creates a DNS name for each Pod it controls, in the format:
$(podname).$(headless service name)
FQDN: $(podname).$(headless service name).$(namespace).svc.cluster.local
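For the example below (a headless Service named headless-svc and a StatefulSet named statefulset, deployed in the default namespace), the per-Pod DNS names work out as this shell sketch shows:

```shell
# FQDN pattern: $(podname).$(headless service name).$(namespace).svc.cluster.local
svc="headless-svc"
ns="default"
for i in 0 1 2; do
  echo "statefulset-${i}.${svc}.${ns}.svc.cluster.local"
done
```

This prints statefulset-0.headless-svc.default.svc.cluster.local and so on, one stable name per replica.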
Example: run a stateful service
[root@master yaml]# vim statefulSet.yaml
apiVersion: v1
kind: Service
metadata:
  name: headless-svc
  labels:
    app: headless-svc
spec:
  ports:
    - name: myweb
      port: 80
  selector:
    app: headless-pod
  clusterIP: None
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: statefulset
spec:
  serviceName: headless-svc
  replicas: 3
  selector:
    matchLabels:
      app: headless-pod
  template:
    metadata:
      labels:
        app: headless-pod
    spec:
      containers:
        - name: myweb
          image: nginx
What is a headless Service?
With a Deployment, Pod names carry a random suffix, so they are unordered. A StatefulSet requires order: no Pod can be arbitrarily replaced, and a rebuilt Pod keeps its old name. Because Pod IPs change, Pods are identified by name instead; the Pod name is the Pod's unique identifier and must be persistent and stable. That is where the headless Service comes in: it gives each Pod a unique, resolvable name.
What is a volumeClaimTemplate?
Stateful replica sets need persistent storage, and the defining trait of a distributed system is that each node holds different data, so the replicas cannot share a single volume: each node needs its own dedicated storage. A volume defined in a Deployment's Pod template is shared by all replicas (they are stamped from the same template, so the data is identical), which means a StatefulSet cannot create its volumes through the Pod template. Instead, a StatefulSet uses a volumeClaimTemplate, a claim template that generates a separate PVC for every Pod and binds it to a PV, giving each Pod its own dedicated storage.
Add a volumeClaimTemplates field to the YAML file above.
[root@master yaml]# vim statefulSet.yaml
apiVersion: v1
kind: Service
metadata:
  name: headless-svc
  labels:
    app: headless-svc
spec:
  ports:
    - name: myweb
      port: 80
  selector:
    app: headless-pod
  clusterIP: None
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: statefulset
spec:
  serviceName: headless-svc
  replicas: 3
  selector:
    matchLabels:
      app: headless-pod
  template:
    metadata:
      labels:
        app: headless-pod
    spec:
      containers:
        - name: myweb
          image: nginx
          volumeMounts:
            - name: test-storage
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
    - metadata:
        name: test-storage
        annotations:
          volume.beta.kubernetes.io/storage-class: storageclass
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 100Mi
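The StatefulSet controller names each generated PVC &lt;volumeClaimTemplate name&gt;-&lt;StatefulSet name&gt;-&lt;ordinal&gt; (standard behavior, not shown in the original output); for the manifest above, that works out as:

```shell
# PVC name pattern: <claim template name>-<statefulset name>-<ordinal>
tmpl="test-storage"
sts="statefulset"
for i in 0 1 2; do
  echo "${tmpl}-${sts}-${i}"
done
# -> test-storage-statefulset-0 ... test-storage-statefulset-2
```

These are the names you should see in `kubectl get pvc` once the Pods come up.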
1. Enable the NFS service on the master and node machines
[root@master ~]# mkdir /nfsdata
[root@master ~]# yum -y install nfs-utils
[root@master ~]# vim /etc/exports
/nfsdata *(rw,sync,no_root_squash)
[root@master ~]# systemctl start rpcbind
[root@master ~]# systemctl start nfs-server
[root@master ~]# systemctl enable rpcbind
[root@master ~]# systemctl enable nfs-server
[root@master ~]# showmount -e
Export list for master:
/nfsdata *
2. Grant RBAC permissions
[root@master yaml]# vim rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services", "endpoints"]
    verbs: ["get", "create", "list", "watch", "update"]
  - apiGroups: ["extensions"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["nfs-provisioner"]
    verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
[root@master yaml]# kubectl apply -f rbac.yaml
3. Create nfs-deploy.yaml
[root@master yaml]# vim nfs-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: bdqn-test
            - name: NFS_SERVER
              value: 192.168.229.187
            - name: NFS_PATH
              value: /nfsdata
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.229.187
            path: /nfsdata
[root@master yaml]# kubectl apply -f nfs-deploy.yaml
4. Create the StorageClass resource
[root@master yaml]# vim storage.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: storageclass
provisioner: bdqn-test
reclaimPolicy: Retain
[root@master yaml]# kubectl apply -f storage.yaml
5. Apply the statefulset.yaml file
[root@master yaml]# kubectl apply -f statefulset.yaml
6. No PV or PVC was created beforehand; checking now shows that both resources were generated automatically
[root@master yaml]# kubectl get pv
[root@master yaml]# kubectl get pvc
We know the StorageClass creates the PVs for us and the volumeClaimTemplate creates the PVCs; the remaining question is whether this delivers what was promised: each Pod having its own dedicated persistence directory, i.e. the data inside each Pod being different.
7. Write different test data into each Pod's corresponding PV
[root@master yaml]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nfs-client-provisioner-86b7588dff-7s7vb 1/1 Running 0 7m42s
statefulset-0 1/1 Running 0 72s
statefulset-1 1/1 Running 0 53s
statefulset-2 1/1 Running 0 34s
[root@master yaml]# kubectl exec -it statefulset-0 bash
root@statefulset-0:~# echo 000 > /usr/share/nginx/html/index.html
root@statefulset-0:~# exit
[root@master yaml]# kubectl exec -it statefulset-1 bash
root@statefulset-1:/# echo 111 > /usr/share/nginx/html/index.html
root@statefulset-1:/# exit
[root@master yaml]# kubectl exec -it statefulset-2 bash
root@statefulset-2:/# echo 222 > /usr/share/nginx/html/index.html
root@statefulset-2:/# exit
8. Check each Pod's persistence directory and confirm the contents differ
[root@master yaml]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nfs-client-provisioner-86b7588dff-7s7vb 1/1 Running 0 12m 10.244.1.2 node1 <none> <none>
statefulset-0 1/1 Running 0 6m21s 10.244.1.3 node1 <none> <none>
statefulset-1 1/1 Running 0 6m2s 10.244.2.2 node2 <none> <none>
statefulset-2 1/1 Running 0 5m43s 10.244.2.3 node2 <none> <none>
[root@master yaml]# curl 10.244.1.3
000
[root@master yaml]# curl 10.244.2.2
111
[root@master yaml]# curl 10.244.2.3
222
Even if a Pod is deleted, the StatefulSet controller spawns a replacement. Ignore the Pod IP here: the new Pod's name is guaranteed to match the old one, and most importantly the persisted data is still intact.
[root@master yaml]# kubectl delete pod statefulset-0
[root@master yaml]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nfs-client-provisioner-86b7588dff-7s7vb 1/1 Running 0 15m 10.244.1.2 node1 <none> <none>
statefulset-0 1/1 Running 0 17s 10.244.1.4 node1 <none> <none>
statefulset-1 1/1 Running 0 9m8s 10.244.2.2 node2 <none> <none>
statefulset-2 1/1 Running 0 8m49s 10.244.2.3 node2 <none> <none>
[root@master yaml]# curl 10.244.1.4
000