《Kubernetes进阶实战》 Chapter 7: Storage Volumes and Data Persistence

Container storage volumes

A Pod has its own lifecycle: when the Pod goes away, the data inside its containers goes with it, so data that must survive has to live outside the container.
Docker-style volumes give only limited persistence on Kubernetes, because Pods are scheduled: a Pod that dies and is recreated may land on a different node, where the old on-node data is not available. Only storage that is independent of any single node can provide real persistence.
Also note that on Kubernetes, deleting a Pod deletes its pod-scoped volumes along with it; this is a difference from Docker.
emptyDir: an empty, pod-scoped directory
hostPath: a directory on the host

Distributed storage:
glusterfs, rbd, cephfs, cloud storage (EBS, etc.)
To list the volume types Kubernetes supports:
kubectl explain pods.spec.volumes

[root@master ~]# kubectl explain pods.spec.volumes
KIND:     Pod
VERSION:  v1

RESOURCE: volumes <[]Object>

DESCRIPTION:
     List of volumes that can be mounted by containers belonging to the pod.
     More info: https://kubernetes.io/docs/concepts/storage/volumes

     Volume represents a named volume in a pod that may be accessed by any
     container in the pod.

FIELDS:
   awsElasticBlockStore	<Object>
     AWSElasticBlockStore represents an AWS Disk resource that is attached to a
     kubelet's host machine and then exposed to the pod. More info:
     https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore

   azureDisk	<Object>
     AzureDisk represents an Azure Data Disk mount on the host and bind mount to
     the pod.

   azureFile	<Object>
     AzureFile represents an Azure File Service mount on the host and bind mount
     to the pod.

   cephfs	<Object>
     CephFS represents a Ceph FS mount on the host that shares a pod's lifetime

   cinder	<Object>
     Cinder represents a cinder volume attached and mounted on kubelets host
     machine More info:
     https://releases.k8s.io/HEAD/examples/mysql-cinder-pd/README.md

   configMap	<Object>
     ConfigMap represents a configMap that should populate this volume

   downwardAPI	<Object>
     DownwardAPI represents downward API about the pod that should populate this
     volume

   emptyDir	<Object>
     EmptyDir represents a temporary directory that shares a pod's lifetime.
     More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir

   fc	<Object>
     FC represents a Fibre Channel resource that is attached to a kubelet's host
     machine and then exposed to the pod.

   flexVolume	<Object>
     FlexVolume represents a generic volume resource that is
     provisioned/attached using an exec based plugin.

   flocker	<Object>
     Flocker represents a Flocker volume attached to a kubelet's host machine.
     This depends on the Flocker control service being running

   gcePersistentDisk	<Object>
     GCEPersistentDisk represents a GCE Disk resource that is attached to a
     kubelet's host machine and then exposed to the pod. More info:
     https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk

   gitRepo	<Object>
     GitRepo represents a git repository at a particular revision. DEPRECATED:
     GitRepo is deprecated. To provision a container with a git repo, mount an
     EmptyDir into an InitContainer that clones the repo using git, then mount
     the EmptyDir into the Pod's container.

   glusterfs	<Object>
     Glusterfs represents a Glusterfs mount on the host that shares a pod's
     lifetime. More info:
     https://releases.k8s.io/HEAD/examples/volumes/glusterfs/README.md

   hostPath	<Object>
     HostPath represents a pre-existing file or directory on the host machine
     that is directly exposed to the container. This is generally used for
     system agents or other privileged things that are allowed to see the host
     machine. Most containers will NOT need this. More info:
     https://kubernetes.io/docs/concepts/storage/volumes#hostpath

   iscsi	<Object>
     ISCSI represents an ISCSI Disk resource that is attached to a kubelet's
     host machine and then exposed to the pod. More info:
     https://releases.k8s.io/HEAD/examples/volumes/iscsi/README.md

   name	<string> -required-
     Volume's name. Must be a DNS_LABEL and unique within the pod. More info:
     https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names

   nfs	<Object>
     NFS represents an NFS mount on the host that shares a pod's lifetime More
     info: https://kubernetes.io/docs/concepts/storage/volumes#nfs

   persistentVolumeClaim	<Object>
     PersistentVolumeClaimVolumeSource represents a reference to a
     PersistentVolumeClaim in the same namespace. More info:
     https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims

   photonPersistentDisk	<Object>
     PhotonPersistentDisk represents a PhotonController persistent disk attached
     and mounted on kubelets host machine

   portworxVolume	<Object>
     PortworxVolume represents a portworx volume attached and mounted on
     kubelets host machine

   projected	<Object>
     Items for all in one resources secrets, configmaps, and downward API

   quobyte	<Object>
     Quobyte represents a Quobyte mount on the host that shares a pod's lifetime

   rbd	<Object>
     RBD represents a Rados Block Device mount on the host that shares a pod's
     lifetime. More info:
     https://releases.k8s.io/HEAD/examples/volumes/rbd/README.md

   scaleIO	<Object>
     ScaleIO represents a ScaleIO persistent volume attached and mounted on
     Kubernetes nodes.

   secret	<Object>
     Secret represents a secret that should populate this volume. More info:
     https://kubernetes.io/docs/concepts/storage/volumes#secret

   storageos	<Object>
     StorageOS represents a StorageOS volume attached and mounted on Kubernetes
     nodes.

   vsphereVolume	<Object>
     VsphereVolume represents a vSphere volume attached and mounted on kubelets
     host machine
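Whichever of the sources above is chosen, the wiring is always the same two-part pattern: declare the volume once at Pod level in spec.volumes, then mount it by name in any container that needs it. A minimal sketch with illustrative names:

```yaml
# Minimal wiring pattern: pod-level volumes plus per-container volumeMounts.
# The volume source (emptyDir here) is interchangeable with any type above.
spec:
  containers:
  - name: app
    image: ikubernetes/myapp:v1
    volumeMounts:
    - name: data          # must match a volumes[].name below
      mountPath: /data
  volumes:
  - name: data
    emptyDir: {}
```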

emptyDir volume demo

[root@master ~]# mkdir volumes
[root@master ~]# cd volumes
[root@master volumes]# vim pod-volumes-demo.yaml
[root@master volumes]# cat pod-volumes-demo.yaml
#**************************************************************
#Author:                     linkun
#QQ:                         2*********0
#Date:                       2019-02-23
#FileName:                   pod-volumes-demo.yaml
#URL:                        https://blog.csdn.net/zisefeizhu
#Description:                 The test script
#Copyright (C):             2019 All rights reserved
#************************************************************
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  namespace: default
  labels:
    app: myapp
    tier: frontend
  annotations:
    node01/create-by: "cluster admin"
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    ports:
    - name: http
      containerPort: 80
    - name: https
      containerPort: 443
    volumeMounts:
    - name: html
      mountPath: /data/web/html/
  - name: busybox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: html
      mountPath: /data/
    command:
    - "/bin/sh"
    - "-c"
    - "sleep 7200"
  volumes:
  - name: html
    emptyDir: {}
[root@master volumes]# kubectl create -f pod-volumes-demo.yaml 
pod/pod-demo created

Verify that the Pod was created successfully
[root@master volumes]# kubectl get pods 
NAME       READY   STATUS    RESTARTS   AGE
pod-demo   2/2     Running   0          3m43s

Enter the busybox container
[root@master volumes]# kubectl exec -it pod-demo -c busybox -- /bin/sh
/ # ls
bin   data  dev   etc   home  proc  root  sys   tmp   usr   var

Inspect the mounts
/ # mount
rootfs on / type rootfs (rw)
overlay on / type overlay (rw,relatime,lowerdir=/var/lib/docker/overlay2/l/EXBBVVKULATEPMYBGKNYK6RHN5:/var/lib/docker/overlay2/l/BLY3ANOGKZDCW3UFMRTHTTJ6KT,upperdir=/var/lib/docker/overlay2/0f7a6c0650bfccedd3c78ea03ec7922127bce079411943a4caf3ef3ed70f38e7/diff,workdir=/var/lib/docker/overlay2/0f7a6c0650bfccedd3c78ea03ec7922127bce079411943a4caf3ef3ed70f38e7/work)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev type tmpfs (rw,nosuid,size=65536k,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=666)
sysfs on /sys type sysfs (ro,nosuid,nodev,noexec,relatime)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,relatime,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (ro,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
cgroup on /sys/fs/cgroup/perf_event type cgroup (ro,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/pids type cgroup (ro,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (ro,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (ro,nosuid,nodev,noexec,relatime,cpuacct,cpu)
cgroup on /sys/fs/cgroup/blkio type cgroup (ro,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (ro,nosuid,nodev,noexec,relatime,net_prio,net_cls)
cgroup on /sys/fs/cgroup/devices type cgroup (ro,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/cpuset type cgroup (ro,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/freezer type cgroup (ro,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/memory type cgroup (ro,nosuid,nodev,noexec,relatime,memory)
mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime)
/dev/sda3 on /data type xfs (rw,relatime,attr2,inode64,noquota)
/dev/sda3 on /dev/termination-log type xfs (rw,relatime,attr2,inode64,noquota)
/dev/sda3 on /etc/resolv.conf type xfs (rw,relatime,attr2,inode64,noquota)
/dev/sda3 on /etc/hostname type xfs (rw,relatime,attr2,inode64,noquota)
/dev/sda3 on /etc/hosts type xfs (rw,relatime,attr2,inode64,noquota)
shm on /dev/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=65536k)
tmpfs on /var/run/secrets/kubernetes.io/serviceaccount type tmpfs (ro,relatime)
proc on /proc/asound type proc (ro,relatime)
proc on /proc/bus type proc (ro,relatime)
proc on /proc/fs type proc (ro,relatime)
proc on /proc/irq type proc (ro,relatime)
proc on /proc/sys type proc (ro,relatime)
proc on /proc/sysrq-trigger type proc (ro,relatime)
tmpfs on /proc/acpi type tmpfs (ro,relatime)
tmpfs on /proc/kcore type tmpfs (rw,nosuid,size=65536k,mode=755)
tmpfs on /proc/keys type tmpfs (rw,nosuid,size=65536k,mode=755)
tmpfs on /proc/timer_list type tmpfs (rw,nosuid,size=65536k,mode=755)
tmpfs on /proc/timer_stats type tmpfs (rw,nosuid,size=65536k,mode=755)
tmpfs on /proc/sched_debug type tmpfs (rw,nosuid,size=65536k,mode=755)
tmpfs on /proc/scsi type tmpfs (ro,relatime)
tmpfs on /sys/firmware type tmpfs (ro,relatime)

Write into the shared directory (from busybox)

/ # echo $(date) >> /data/index.html
/ # echo $(date) >> /data/index.html
/ # cat /data/index.html 
Sat Feb 23 08:38:36 UTC 2019
Sat Feb 23 08:38:37 UTC 2019

Verify that the data is shared

Enter the myapp container
[root@master volumes]# kubectl exec -it pod-demo -c myapp -- /bin/sh
/ # cat /data/web/html/index.html 
Sat Feb 23 08:38:36 UTC 2019
Sat Feb 23 08:38:37 UTC 2019
# the data is shared between the two containers

Note: this shows that busybox's /data and myapp's /data/web/html/ are the same directory.

Re-edit the manifest and try again:
the first container is the main container, serving web traffic; the second container feeds it content.

Delete the old Pod
[root@master volumes]# kubectl get pods
NAME       READY   STATUS    RESTARTS   AGE
pod-demo   2/2     Running   0          24m
[root@master volumes]# kubectl delete pods pod-demo
pod "pod-demo" deleted

Modify the previous manifest
[root@master volumes]# vim pod-volumes-demo.yaml 
[root@master volumes]# cat pod-volumes-demo.yaml 
#**************************************************************
#Author:                     linkun
#QQ:                         2********0
#Date:                       2019-02-23
#FileName:                   pod-volumes-demo.yaml
#URL:                        https://blog.csdn.net/zisefeizhu
#Description:                 The test script
#Copyright (C):             2019 All rights reserved
#************************************************************
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  namespace: default
  labels:
    app: myapp
    tier: frontend
  annotations:
    node01/create-by: "cluster admin"
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    ports:
    - name: http
      containerPort: 80
    - name: https
      containerPort: 443
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html/    # change 1
  - name: busybox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: html
      mountPath: /data/
    command: ["/bin/sh"]            # change 2
    args: ["-c","while true;do echo $(date) >> /data/index.html;sleep 2;done"]
  volumes:
  - name: html
    emptyDir: {}

Recreate the Pod
[root@master volumes]# kubectl apply -f pod-volumes-demo.yaml 
pod/pod-demo created
[root@master volumes]# kubectl get pods
NAME       READY   STATUS    RESTARTS   AGE
pod-demo   2/2     Running   0          96s
[root@master volumes]# kubectl get pods -o wide
NAME       READY   STATUS    RESTARTS   AGE    IP           NODE     NOMINATED NODE   READINESS GATES
pod-demo   2/2     Running   0          2m4s   10.244.2.2   node02   <none>           <none>

Verify
[root@master volumes]# curl 10.244.2.2
Sat Feb 23 08:53:46 UTC 2019
Sat Feb 23 08:53:48 UTC 2019
Sat Feb 23 08:53:50 UTC 2019
Sat Feb 23 08:53:52 UTC 2019
Sat Feb 23 08:53:54 UTC 2019
Sat Feb 23 08:53:56 UTC 2019
Sat Feb 23 08:53:58 UTC 2019
Sat Feb 23 08:54:00 UTC 2019
Sat Feb 23 08:54:02 UTC 2019
Sat Feb 23 08:54:04 UTC 2019
Sat Feb 23 08:54:06 UTC 2019
Sat Feb 23 08:54:08 UTC 2019

Note: a new line appears every two seconds, which confirms that one volume can be shared by different containers in the same Pod, and that its lifetime ends when the Pod disappears.

Friendly reminder: delete this Pod when the experiment is done, because it never stops appending data.
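Beyond the bare emptyDir: {} used above, the emptyDir source accepts two optional fields, medium and sizeLimit. A sketch of just the volumes stanza (values are illustrative; medium: Memory backs the volume with tmpfs, so its contents count against container memory and are even more ephemeral):

```yaml
# Fragment of a Pod spec: an emptyDir with both optional knobs set.
  volumes:
  - name: html
    emptyDir:
      medium: Memory     # tmpfs instead of node disk
      sizeLimit: 64Mi    # evict the Pod if usage exceeds this
```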

hostPath volume demo

hostPath means a path on the host: a directory in the node's filesystem, outside the namespaces of the Pod's containers, is associated with the Pod. When the Pod is deleted, the data on the host is not lost.

[root@master volumes]# kubectl explain pods.spec.volumes.hostPath
KIND:     Pod
VERSION:  v1

RESOURCE: hostPath <Object>

DESCRIPTION:
     HostPath represents a pre-existing file or directory on the host machine
     that is directly exposed to the container. This is generally used for
     system agents or other privileged things that are allowed to see the host
     machine. Most containers will NOT need this. More info:
     https://kubernetes.io/docs/concepts/storage/volumes#hostpath

     Represents a host path mapped into a pod. Host path volumes do not support
     ownership management or SELinux relabeling.

FIELDS:
   path	<string> -required-
     Path of the directory on the host. If the path is a symlink, it will follow
     the link to the real path. More info:
     https://kubernetes.io/docs/concepts/storage/volumes#hostpath

   type	<string>
     Type for HostPath Volume Defaults to "" More info:
     https://kubernetes.io/docs/concepts/storage/volumes#hostpath

Note the possible values of type:

DirectoryOrCreate (create the directory on the host if it does not exist)
Directory (the directory must already exist)
FileOrCreate (create the file on the host if it does not exist)
File (the file must already exist)

[root@master volumes]# vim pod-hostpath-vol.yaml
[root@master volumes]# cat pod-hostpath-vol.yaml 
#**************************************************************
#Author:                     linkun
#QQ:                         2********0
#Date:                       2019-02-23
#FileName:                   pod-hostpath-vol.yaml
#URL:                        https://blog.csdn.net/zisefeizhu
#Description:                 The test script
#Copyright (C):             2019 All rights reserved
#************************************************************
apiVersion: v1
kind: Pod
metadata:
  name: pod-vol-hostpath
  namespace: default
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html/
  volumes:
  - name: html
    hostPath:
      path: /data/pod/volume1    
      type: DirectoryOrCreate   # create the path automatically if it does not exist
[root@node01 ~]# mkdir -p /data/pod/volume1
[root@node01 ~]# vim /data/pod/volume1/index.html
[root@node01 ~]# cat /data/pod/volume1/index.html
zisefeizhu
[root@master volumes]# kubectl get pods -o wide
NAME               READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
pod-vol-hostpath   1/1     Running   0          72s   10.244.1.3   node01   <none>           <none>
[root@master volumes]# curl 10.244.1.3
zisefeizhu

Note: if node01 itself dies, the data is still lost!
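One hedged mitigation for the rescheduling half of that caveat: pin the Pod to the node holding the data with a nodeSelector, using the standard kubernetes.io/hostname label. This keeps a recreated Pod on node01; it does nothing against the node itself failing. The Pod name here is hypothetical:

```yaml
# Hypothetical variant of pod-vol-hostpath, pinned to node01 so a
# recreated Pod always finds its hostPath data again.
apiVersion: v1
kind: Pod
metadata:
  name: pod-vol-hostpath-pinned
  namespace: default
spec:
  nodeSelector:
    kubernetes.io/hostname: node01
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html/
  volumes:
  - name: html
    hostPath:
      path: /data/pod/volume1
      type: DirectoryOrCreate
```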

NFS-backed storage

Note: NFS supports read/write access from multiple clients.
Prerequisites:

1. Set up a new host
node03: 10.0.0.103

2. Install the NFS utilities
yum -y install nfs-utils

3. Also install nfs-utils on node01 and node02
yum install nfs-utils -y
Make sure every node has the NFS client installed.

4. Create the shared directory:
mkdir /data/volumes -pv

5. Configure the export:
vim /etc/exports
/data/volumes 10.0.0.0/24(rw,no_root_squash)
directory, then the network it is exported to, then the options
Note that 10.0.0.0/24 is the nodes' network.

6. Start NFS
systemctl start nfs

7. Check that port 2049 is listening
ss -tnl
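The export line used in step 5 follows the exports(5) shape: directory, client network, then options in parentheses. A sketch that writes the same line to a scratch file in the current directory so the format can be checked without root (on the real server the target is /etc/exports, re-exported with exportfs -arv):

```shell
# Shape of an /etc/exports entry: <directory> <network>(<options>).
# Written to a local scratch file here instead of /etc/exports.
echo '/data/volumes 10.0.0.0/24(rw,no_root_squash)' > exports.demo
cat exports.demo
```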

Try an NFS mount

[root@node03 ~]# yum install -y nfs-utils
[root@node03 ~]# vim /etc/exports
[root@node03 ~]# cat /etc/exports
/data/volumes 10.0.0.0/24(rw,no_root_squash)
[root@node03 ~]# systemctl start nfs
[root@node03 ~]# showmount -e
Export list for node03:
/data/volumes 10.0.0.0/24

Test the mount
[root@node01 ~]# mount -t nfs node03:/data/volumes /mnt     # use the IP if node03 has no hosts entry; hosts entries are recommended
[root@node01 ~]# mount | grep node03
node03:/data/volumes on /mnt type nfs4 (rw,relatime,vers=4.1,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=10.0.0.101,local_lock=none,addr=10.0.0.103)
[root@node01 ~]# umount /mnt/

Manifest
[root@master volumes]# vim pod-vol-nfs.yaml

#**************************************************************
#Author:                     linkun
#QQ:                         2********0
#Date:                       2019-02-23
#FileName:                   pod-vol-nfs.yaml
#URL:                        https://blog.csdn.net/zisefeizhu
#Description:                 The test script
#Copyright (C):             2019 All rights reserved
#************************************************************
apiVersion: v1
kind: Pod
metadata:
  name: pod-vol-nfs
  namespace: default
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
  volumes:
  - name: html
    nfs:
      path: /data/volumes
      server: node03
[root@master volumes]# kubectl apply -f pod-vol-nfs.yaml 
pod/pod-vol-nfs created
[root@master volumes]# kubectl get pods -o wide -w
NAME               READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
pod-vol-hostpath   1/1     Running   0          30m   10.244.1.3   node01   <none>           <none>
pod-vol-nfs        1/1     Running   0          12s   10.244.2.3   node02   <none>  

Test
[root@node03 ~]# cd /data/volumes/
[root@node03 volumes]# vim index.html
[root@node03 volumes]# cat index.html
zisefeizhu
[root@master volumes]# curl 10.244.2.3
zisefeizhu

Note: this mounts NFS storage directly into Kubernetes; the data survives Pod restarts, so it is suitable for persistence.
The prerequisite, of course, is that the NFS client is installed on every node!

PV and PVC

pv
[root@master volumes]# kubectl explain pv
KIND:     PersistentVolume
VERSION:  v1

DESCRIPTION:
     PersistentVolume (PV) is a storage resource provisioned by an
     administrator. It is analogous to a node. More info:
     https://kubernetes.io/docs/concepts/storage/persistent-volumes

FIELDS:
   apiVersion	<string>
     APIVersion defines the versioned schema of this representation of an
     object. Servers should convert recognized schemas to the latest internal
     value, and may reject unrecognized values. More info:
     https://git.k8s.io/community/contributors/devel/api-conventions.md#resources

   kind	<string>
     Kind is a string value representing the REST resource this object
     represents. Servers may infer this from the endpoint the client submits
     requests to. Cannot be updated. In CamelCase. More info:
     https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds

   metadata	<Object>
     Standard object's metadata. More info:
     https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata

   spec	<Object>
     Spec defines a specification of a persistent volume owned by the cluster.
     Provisioned by an administrator. More info:
     https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistent-volumes

   status	<Object>
     Status represents the current information/status for the persistent volume.
     Populated by the system. Read-only. More info:
     https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistent-volumes

pvc

[root@master volumes]# kubectl explain pvc
KIND:     PersistentVolumeClaim
VERSION:  v1

DESCRIPTION:
     PersistentVolumeClaim is a user's request for and claim to a persistent
     volume

FIELDS:
   apiVersion	<string>
     APIVersion defines the versioned schema of this representation of an
     object. Servers should convert recognized schemas to the latest internal
     value, and may reject unrecognized values. More info:
     https://git.k8s.io/community/contributors/devel/api-conventions.md#resources

   kind	<string>
     Kind is a string value representing the REST resource this object
     represents. Servers may infer this from the endpoint the client submits
     requests to. Cannot be updated. In CamelCase. More info:
     https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds

   metadata	<Object>
     Standard object's metadata. More info:
     https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata

   spec	<Object>
     Spec defines the desired characteristics of a volume requested by a pod
     author. More info:
     https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims

   status	<Object>
     Status represents the current information/status of a persistent volume
     claim. Read-only. More info:
     https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims

Configure the NFS storage

[root@node03 volumes]# mkdir v{1..5}
[root@node03 volumes]# ll
total 4
-rw-r--r-- 1 root root 11 Feb 23 17:37 index.html
drwxr-xr-x 2 root root  6 Feb 23 17:47 v1
drwxr-xr-x 2 root root  6 Feb 23 17:47 v2
drwxr-xr-x 2 root root  6 Feb 23 17:47 v3
drwxr-xr-x 2 root root  6 Feb 23 17:47 v4
drwxr-xr-x 2 root root  6 Feb 23 17:47 v5
[root@node03 volumes]# vim /etc/exports
[root@node03 volumes]# cat /etc/exports
/data/volumes/v1 10.0.0.0/24(rw,no_root_squash)
/data/volumes/v2 10.0.0.0/24(rw,no_root_squash)
/data/volumes/v3 10.0.0.0/24(rw,no_root_squash)
/data/volumes/v4 10.0.0.0/24(rw,no_root_squash)
/data/volumes/v5 10.0.0.0/24(rw,no_root_squash)
[root@node03 volumes]# exportfs -arv
exporting 10.0.0.0/24:/data/volumes/v5
exporting 10.0.0.0/24:/data/volumes/v4
exporting 10.0.0.0/24:/data/volumes/v3
exporting 10.0.0.0/24:/data/volumes/v2
exporting 10.0.0.0/24:/data/volumes/v1
[root@node03 volumes]# showmount -e
Export list for node03:
/data/volumes/v5 10.0.0.0/24
/data/volumes/v4 10.0.0.0/24
/data/volumes/v3 10.0.0.0/24
/data/volumes/v2 10.0.0.0/24
/data/volumes/v1 10.0.0.0/24

Create the PVs

[root@master volumes]# vim pvs-demo.yaml

Since many of the blocks repeat, vim shortcuts such as 14yy followed by p save a lot of typing.
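As an alternative to vim block-copying, the five nearly identical PV manifests can be generated with a small loop; only the name, NFS path, and size vary. A sketch that writes a local pvs-demo.yaml, reusing node03 and the sizes from the text:

```shell
# Generate five PV manifests that differ only in name, path, and size.
OUT=pvs-demo.yaml
: > "$OUT"
i=1
for size in 0.5Gi 1Gi 1.5Gi 2Gi 2.5Gi; do
cat >> "$OUT" <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv00$i
  labels:
    name: pv00$i
spec:
  nfs:
    path: /data/volumes/v$i
    server: node03
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: $size
---
EOF
i=$((i+1))
done
```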
[root@master volumes]# cat pvs-demo.yaml 
#**************************************************************
#Author:                     linkun
#QQ:                         2********0
#Date:                       2019-02-23
#FileName:                   pvs-demo.yaml
#URL:                        https://blog.csdn.net/zisefeizhu
#Description:                 The test script
#Copyright (C):             2019 All rights reserved
#************************************************************
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
  labels:
    name: pv001
spec:
  nfs:
    path: /data/volumes/v1
    server: node03
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 0.5Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv002
  labels:
    name: pv002
spec:
  nfs:
    path: /data/volumes/v2
    server: node03
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 1Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv003
  labels:
    name: pv003
spec:
  nfs:
    path: /data/volumes/v3
    server: node03
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 1.5Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv004
  labels:
    name: pv004
spec:
  nfs:
    path: /data/volumes/v4
    server: node03
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 2Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv005
  labels:
    name: pv005
spec:
  nfs:
    path: /data/volumes/v5
    server: node03
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 2.5Gi
---

Check whether any PVs already exist
[root@master volumes]# kubectl get pv
No resources found.

Create the PVs
[root@master volumes]# kubectl apply -f pvs-demo.yaml 
persistentvolume/pv001 created
persistentvolume/pv002 created
persistentvolume/pv003 created
persistentvolume/pv004 created
persistentvolume/pv005 created

List the PVs
[root@master volumes]# kubectl get pv
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv001   512Mi      RWO,RWX        Retain           Available                                   4s
pv002   1Gi        RWO,RWX        Retain           Available                                   4s
pv003   1536Mi     RWO,RWX        Retain           Available                                   4s
pv004   2Gi        RWO,RWX        Retain           Available                                   4s
pv005   2560Mi     RWO,RWX        Retain           Available                                   4s
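The CAPACITY column converts the fractional requests because 1Gi = 1024Mi, so 0.5Gi, 1.5Gi, and 2.5Gi render as 512Mi, 1536Mi, and 2560Mi. The arithmetic, as a quick check:

```shell
# 1Gi = 1024Mi; each request above is an odd number of half-Gi units.
for halves in 1 3 5; do
  echo "$(( halves * 1024 / 2 ))Mi"
done
```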

Now define a PVC

[root@master volumes]# vim pod-vol-pvc.yaml
[root@master volumes]# cat pod-vol-pvc.yaml 
#**************************************************************
#Author:                     linkun
#QQ:                         2********0
#Date:                       2019-02-23
#FileName:                   pod-vol-pvc.yaml
#URL:                        https://blog.csdn.net/zisefeizhu
#Description:                 The test script
#Copyright (C):             2019 All rights reserved
#************************************************************
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
  namespace: default
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-vol-pvc
  namespace: default
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html/
  volumes:
  - name: html
    persistentVolumeClaim:
      claimName: mypvc
[root@master volumes]# kubectl apply -f pod-vol-pvc.yaml
persistentvolumeclaim/mypvc created
pod/pod-vol-pvc created
[root@master volumes]# kubectl get pv
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM           STORAGECLASS   REASON   AGE
pv001   512Mi      RWO,RWX        Retain           Available                                           16m
pv002   1Gi        RWO,RWX        Retain           Bound       default/mypvc                           16m
pv003   1536Mi     RWO,RWX        Retain           Available                                           16m
pv004   2Gi        RWO,RWX        Retain           Available                                           16m
pv005   2560Mi     RWO,RWX        Retain           Available                                           16m
[root@master volumes]# kubectl get pvc
NAME    STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mypvc   Bound    pv002    1Gi        RWO,RWX                       3m51s
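Each PV above carries a name label that the demo never uses. One use for it: a PVC can narrow its candidate PVs with a label selector, e.g. to insist on pv002 rather than letting the controller pick any volume that fits. A sketch with a hypothetical claim name:

```yaml
# Hypothetical variant of mypvc: bind only to the PV labeled name=pv002.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc-selected
  namespace: default
spec:
  accessModes: ["ReadWriteMany"]
  selector:
    matchLabels:
      name: pv002
  resources:
    requests:
      storage: 1Gi
```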

Test

Check which volume the PVC is bound to
[root@master volumes]# kubectl get pvc
NAME    STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mypvc   Bound    pv002    1Gi        RWO,RWX                       6m10s

On the NFS server:
[root@node03 volumes]# cd v2
[root@node03 v2]# echo "welcome to use pv2" > index.html

Test
[root@master volumes]# kubectl get pods -o wide
NAME               READY   STATUS    RESTARTS   AGE     IP           NODE     NOMINATED NODE   READINESS GATES
pod-vol-hostpath   1/1     Running   0          69m     10.244.1.3   node01   <none>           <none>
pod-vol-nfs        1/1     Running   0          39m     10.244.2.3   node02   <none>           <none>
pod-vol-pvc        1/1     Running   0          3m29s   10.244.1.4   node01   <none>           <none>
[root@master volumes]# curl 10.244.1.4
welcome to use pv2

Note: the PVC here requests the multi-node read/write access mode.
The output above confirms that mypvc is bound to pv002.
Note: if the reclaim policy is Retain,
the data stays in the backing directory even after the PV and PVC are deleted.
In general we delete only the PV, not the PVC.
Before v1.10, a PVC could be deleted at any time;
since v1.10, in-use protection applies:
a bound PVC that is still used by a Pod cannot be removed until the Pod is gone.
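The reclaim behavior can also be spelled out explicitly on a PV via persistentVolumeReclaimPolicy; Retain is the default for manually created PVs such as these. A hedged sketch (hypothetical name, paths reused from the text):

```yaml
# Hypothetical PV with the reclaim policy made explicit. With Retain, the
# volume goes to Released after its PVC is deleted and the data survives
# on the NFS export until an administrator cleans it up.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-retain-demo
spec:
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /data/volumes/v1
    server: node03
  accessModes: ["ReadWriteMany"]
  capacity:
    storage: 1Gi
```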
