Introduction to PV and PVC
A PersistentVolume (PV) is a piece of storage in the cluster, provisioned in advance by an administrator or dynamically provisioned through a StorageClass.
A PersistentVolumeClaim (PVC) is a request for storage by a user. Conceptually it is similar to a Pod: Pods consume node resources, while PVCs consume PV resources.
PV access modes:
ReadWriteOnce
The volume can be mounted as read-write by a single node. ReadWriteOnce still allows multiple Pods running on that same node to access the volume.
ReadOnlyMany
The volume can be mounted read-only by many nodes.
ReadWriteMany
The volume can be mounted as read-write by many nodes.
ReadWriteOncePod
The volume can be mounted as read-write by a single Pod.
In kubectl output these modes are abbreviated RWO, ROX, RWX and RWOP respectively, which is how they appear in the ACCESS MODES column below.
The 3 PV reclaim policies:
Retain: after the PVC is deleted, the PV still keeps the data the PVC wrote; it has to be cleaned up manually.
Recycle: deprecated; this policy is no longer used.
Delete: the delete action removes the PersistentVolume object from Kubernetes, and for volume plugins that support it, the associated storage asset in the external infrastructure as well.
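The reclaim policy can be declared in the PV spec or changed on an existing PV. A minimal sketch, assuming a PV named task-pv-volume like the one created below:

# declare it in the PV manifest
spec:
  persistentVolumeReclaimPolicy: Retain

# or patch a PV that already exists
kubectl patch pv task-pv-volume -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'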
Configure a Pod to use a PersistentVolume for storage
1. Create the directory /mnt/data on the master and on all node servers, and create an index.html home page inside it. A hostPath volume is local to whichever node the Pod lands on, so the same directory and page must exist on every node that might run the Pod.
[root@k8smaster ~]# mkdir /mnt/data # ---> can be skipped on the master, since business Pods are not scheduled there
[root@k8snode1 ~]# mkdir /mnt/data
[root@k8snode2 ~]# mkdir /mnt/data
[root@k8smaster data]# sh -c "echo 'Hello from Kubernetes storage' > /mnt/data/index.html"
[root@k8smaster data]# ls
index.html
[root@k8smaster data]# echo "sanchuang feng gao zhang liu shen" >>index.html
[root@k8smaster data]# cat index.html
Hello from Kubernetes storage
sanchuang feng gao zhang liu shen
[root@k8snode1 data]# sh -c "echo 'Hello from Kubernetes storage' > /mnt/data/index.html"
[root@k8snode2 data]# sh -c "echo 'Hello from Kubernetes storage' > /mnt/data/index.html"
Check on the other node servers:
[root@k8snode1 ~]# cd /mnt/data/
[root@k8snode1 data]# ls
index.html
[root@k8snode2 ~]# cd /mnt/data/
[root@k8snode2 data]# ls
index.html
2. Create the PersistentVolume: write the pv-volume.yaml file
[root@k8smaster pv]# cat pv-volume.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual     # the PVC binds to this PV through this storage class name
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
Instead of editing the file yourself, you can download it with the following command:
kubectl apply -f https://k8s.io/examples/pods/storage/pv-volume.yaml
Create the PV:
[root@k8smaster pv]# kubectl apply -f pv-volume.yaml
persistentvolume/task-pv-volume created
[root@k8smaster pv]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
task-pv-volume 10Gi RWO Retain Available manual 4s
3. Create the PersistentVolumeClaim (PVC)
Note: the access modes requested by the PVC must be satisfied by the PV; otherwise the PVC cannot bind and stays in the Pending state.
wget https://k8s.io/examples/pods/storage/pv-claim.yaml --no-check-certificate
[root@k8smaster pv]# cat pv-claim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
[root@k8smaster pv]# kubectl apply -f pv-claim.yaml
persistentvolumeclaim/task-pv-claim created
[root@k8smaster pv]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
task-pv-claim Bound task-pv-volume 10Gi RWO manual 4s
www-web-0 Pending my-storage-class 4d12h
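The claim only requested 3Gi but binds to the entire 10Gi PV, which is why the CAPACITY column shows 10Gi: a PV is bound as a whole, not carved up. The unrelated www-web-0 claim has been Pending for days, presumably because nothing satisfies its my-storage-class storage class. When a PVC stays Pending, its events usually say why; a quick check using the claims from the output above:

kubectl describe pvc task-pv-claim   # binding events for the bound claim
kubectl describe pvc www-web-0       # shows why no PV matches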
4. Create a Pod that uses the PVC
wget https://k8s.io/examples/pods/storage/pv-pod.yaml --no-check-certificate
[root@k8smaster pv]# cat pv-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: task-pv-claim
  containers:
    - name: task-pv-container
      image: nginx
      imagePullPolicy: IfNotPresent
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage
[root@k8smaster pv]# kubectl apply -f pv-pod.yaml
pod/task-pv-pod created
[root@k8smaster pv]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
mypod 1/1 Running 0 82m 10.244.249.26 k8snode1 <none> <none>
task-pv-pod 1/1 Running 0 2m49s 10.244.185.248 k8snode2 <none> <none>
5. Verify that the Pod serves the data from the PV
[root@k8smaster pv]# curl 10.244.185.248
Hello from Kubernetes storage
sanchuang feng gao zhang liu shen
[root@k8smaster pv]# kubectl exec -it task-pv-pod -- bash
root@task-pv-pod:/# cd /usr/share/nginx/html
root@task-pv-pod:/usr/share/nginx/html# ls
index.html
root@task-pv-pod:/usr/share/nginx/html# cat index.html
Hello from Kubernetes storage
sanchuang feng gao zhang liu shen
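Optionally, the objects from this experiment can be removed in claim-then-volume order; the data under /mnt/data stays on the nodes because the reclaim policy is Retain. (The walkthrough below keeps them, as the later kubectl get pv output shows.)

kubectl delete pod task-pv-pod
kubectl delete pvc task-pv-claim
kubectl delete pv task-pv-volume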
PV + PVC + NFS experiment
1. Set up the NFS server
It is recommended to install the nfs-utils package on every node in the k8s cluster, because the node servers need NFS support to mount NFS-backed volumes (installing on the nodes is sketched after the output below).
[root@k8smaster pv]# yum install nfs-utils -y
[root@k8smaster pv]# service nfs restart
Redirecting to /bin/systemctl restart nfs.service
[root@k8smaster pv]# ps aux |grep nfs
root 106106 0.3 1.3 355420 25912 pts/0 T 00:20 0:00 /usr/bin/python /usr/bin/yum install nfs-utils -y
root 107384 0.0 0.0 0 0 ? S< 00:21 0:00 [nfsd4_callbacks]
root 107390 0.0 0.0 0 0 ? S 00:21 0:00 [nfsd]
root 107391 0.0 0.0 0 0 ? S 00:21 0:00 [nfsd]
root 107392 0.0 0.0 0 0 ? S 00:21 0:00 [nfsd]
root 107393 0.0 0.0 0 0 ? S 00:21 0:00 [nfsd]
root 107394 0.0 0.0 0 0 ? S 00:21 0:00 [nfsd]
root 107395 0.0 0.0 0 0 ? S 00:21 0:00 [nfsd]
root 107396 0.0 0.0 0 0 ? S 00:21 0:00 [nfsd]
root 107397 0.0 0.0 0 0 ? S 00:21 0:00 [nfsd]
root 107447 0.0 0.0 112824 988 pts/0 S+ 00:21 0:00 grep --color=auto nfs
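The same package is needed on the node servers (a sketch, repeating the install on the node hostnames used earlier); the nfs service itself only needs to run on the machine that exports the directory:

# run on k8snode1 and k8snode2 as well
yum install nfs-utils -y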
2. Configure the shared directory
[root@k8smaster pv]# vim /etc/exports
[root@k8smaster pv]# cat /etc/exports
/sc/web 192.168.102.0/24(rw,no_root_squash,sync)
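For reference, the export options used above mean the following:

# rw              allow read-write access from 192.168.102.0/24
# no_root_squash  do not map client root to nobody, so containers running as root can write
# sync            commit writes to disk before replying to the client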
3. Create the shared directory and the index.html page
[root@k8smaster ~]# mkdir /sc/web -p
[root@k8smaster ~]# cd /sc/web
[root@k8smaster web]# echo "welcome to sanchuang" >>index.html
[root@k8smaster web]# ls
index.html
[root@k8smaster web]# cat index.html
welcome to sanchuang
4. Refresh NFS, or re-export the shared directories
[root@k8smaster web]# exportfs -a    # export all shared directories
[root@k8smaster web]# exportfs -v    # show the currently exported directories
/sc/web 192.168.102.0/24(sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,no_root_squash,no_all_squash)
[root@k8smaster web]# exportfs -r    # re-export all shared directories
[root@k8smaster web]# exportfs -v
/sc/web 192.168.102.0/24(sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,no_root_squash,no_all_squash)
[root@k8smaster web]# service nfs restart    # restart the nfs service
Redirecting to /bin/systemctl restart nfs.service
[root@k8smaster web]# ll -d /sc/web
drwxr-xr-x 2 root root 24 4月 4 00:24 /sc/web
Stop the firewall service on the NFS server and disable it at boot:
[root@k8smaster web]# service firewalld stop
Redirecting to /bin/systemctl stop firewalld.service
[root@k8smaster web]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@k8smaster web]# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
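Before trying the mount, the exports can be verified from any node with showmount, which is part of nfs-utils (the server IP is the one used throughout this walkthrough):

showmount -e 192.168.102.136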
Test on any node server in the k8s cluster whether the directory shared by the NFS server can be mounted:
[root@k8snode1 ~]# mount 192.168.102.136:/sc/web /pv_pvc_nfs
mount.nfs: mount point /pv_pvc_nfs does not exist
[root@k8snode1 ~]# mkdir /pv_pvc_nfs
[root@k8snode1 ~]# mount 192.168.102.136:/sc/web /pv_pvc_nfs
[root@k8snode1 ~]# df -Th|grep nfs
192.168.102.136:/sc/web nfs4 50G 5.8G 45G 12% /pv_pvc_nfs
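The manual mount above is only a connectivity test; it can be unmounted again, since the Pods below reach the export through the PV rather than through /pv_pvc_nfs:

umount /pv_pvc_nfs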
5. Create a PV that uses the shared directory on the NFS server
[root@k8smaster pv]# vim nfs-pv.yaml
[root@k8smaster pv]# cat nfs-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: sc-nginx-pv
  labels:
    type: sc-nginx-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  storageClassName: nfs        # storage class name the PVC uses to select this PV
  nfs:
    path: "/sc/web"            # directory exported by the NFS server
    server: 192.168.102.136    # IP address of the NFS server
    readOnly: false            # mount the NFS export read-write
[root@k8smaster pv]# kubectl apply -f nfs-pv.yaml
persistentvolume/sc-nginx-pv created
[root@k8smaster pv]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
sc-nginx-pv 10Gi RWX Retain Available nfs 3s
task-pv-volume 10Gi RWO Retain Bound default/task-pv-claim manual 47m
6. Create a PVC that uses the PV
[root@k8smaster pv]# vim pvc-nfs.yaml
[root@k8smaster pv]# cat pvc-nfs.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sc-nginx-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs        # select the PV with the nfs storage class
[root@k8smaster pv]# kubectl apply -f pvc-nfs.yaml
persistentvolumeclaim/sc-nginx-pvc created
[root@k8smaster pv]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
sc-nginx-pvc Bound sc-nginx-pv 10Gi RWX nfs 71s
task-pv-claim Bound task-pv-volume 10Gi RWO manual 45m
www-web-0 Pending my-storage-class 4d12h
7. Create Pods (a Deployment) that use the PVC
[root@k8smaster pv]# vim nginx-deployment.yaml
[root@k8smaster pv]# cat nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
        - name: sc-pv-storage-nfs
          persistentVolumeClaim:
            claimName: sc-nginx-pvc
      containers:
        - name: sc-pv-container-nfs
          image: nginx
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
              name: "http-server"
          volumeMounts:
            - mountPath: "/usr/share/nginx/html"
              name: sc-pv-storage-nfs
[root@k8smaster pv]# kubectl apply -f nginx-deployment.yaml
deployment.apps/nginx-deployment created
[root@k8smaster pv]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-5fc8c46f96-6rtpt 1/1 Running 0 50s 10.244.249.27 k8snode1 <none> <none>
nginx-deployment-5fc8c46f96-cf6wx 1/1 Running 0 50s 10.244.185.251 k8snode2 <none> <none>
nginx-deployment-5fc8c46f96-prj76 1/1 Running 0 50s 10.244.185.250 k8snode2 <none> <none>
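To confirm from inside a replica that the html directory really is the NFS export, you can exec into one of the Pods listed above and inspect the mount (a sketch using the first Pod name from the output):

kubectl exec -it nginx-deployment-5fc8c46f96-6rtpt -- df -h /usr/share/nginx/html
# the Filesystem column should show 192.168.102.136:/sc/web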
8. Test access
[root@k8snode1 ~]# curl 10.244.249.27
welcome to sanchuang
[root@k8snode1 ~]# curl 10.244.185.251
welcome to sanchuang
[root@k8snode1 ~]# curl 10.244.185.250
welcome to sanchuang
Modify the content on the NFS server:
[root@k8smaster web]# echo "teacher feng nfs pv pvc" >> index.html
[root@k8smaster web]# cat index.html
welcome to sanchuang
teacher feng nfs pv pvc
Access again:
[root@k8snode1 ~]# curl 10.244.185.250
welcome to sanchuang
teacher feng nfs pv pvc
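Because all three replicas mount the same NFS export with ReadWriteMany, the change made on the NFS server is visible to every Pod at once, as the curls above show. Optional cleanup for this experiment; the data in /sc/web stays on the server because the PV's reclaim policy is Retain:

kubectl delete -f nginx-deployment.yaml
kubectl delete -f pvc-nfs.yaml
kubectl delete -f nfs-pv.yaml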