Pod Affinity, Pod Anti-Affinity, Taints and Tolerations, the PV and PVC API Resource Objects, and Local Storage

I. Pod Affinity

  Pod affinity works on Pods: its goal is to schedule a new Pod together with a target Pod, i.e. onto the same Node.

Example:

1. Deploy an nginx Pod

[root@aminglinux01 ~]# cat testpod01.yaml
apiVersion: v1
kind: Pod
metadata:
  name: testpod01
  labels:
    app: myapp01
    env: test1
spec:
  containers:
  - name: testpod01
    image: nginx:latest
[root@aminglinux01 ~]# 

Check it:

[root@aminglinux01 ~]# kubectl get pod -owide
NAME                         READY   STATUS             RESTARTS          AGE     IP              NODE           NOMINATED NODE   READINESS GATES
testpod                      1/1     Running            0                 5d21h   10.18.206.236   aminglinux02   <none>           <none>
testpod01                    1/1     Running            0                 23s     10.18.68.166    aminglinux03   <none>           <none>
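
Before adding the affinity rule in the next step, you can confirm the label it will match on (app=myapp01 comes from the testpod01 manifest above):

kubectl get pod testpod01 --show-labels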

2. Deploy a redis Pod whose pod affinity requires a Pod matching app=myapp01

[root@aminglinux01 ~]# cat testpod02.yaml
apiVersion: v1
kind: Pod
metadata:
  name: testpod02
  labels:
    app: myapp02
    env: test2
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution: ## the matching rules below must be satisfied
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - myapp01 ## app=myapp01; the Pod created above matches this
        topologyKey: "kubernetes.io/hostname"
  containers:
  - name: testpod02
    image: redis:7.2.5
[root@aminglinux01 ~]# 
[root@aminglinux01 ~]# kubectl apply -f testpod02.yaml 
pod/testpod02 created
[root@aminglinux01 ~]# kubectl get pod -owide
NAME                         READY   STATUS      RESTARTS        AGE     IP              NODE           NOMINATED NODE   READINESS GATES
testpod01                    1/1     Running     0               95m     10.18.68.166    aminglinux03   <none>           <none>
testpod02                    1/1     Running     0               23s     10.18.68.162    aminglinux03   <none>           <none>
[root@aminglinux01 ~]# kubectl get pod --show-labels
NAME                         READY   STATUS      RESTARTS        AGE     LABELS
testpod01                    1/1     Running     0               96m     app=myapp01,env=test1
testpod02                    1/1     Running     0               82s     app=myapp02,env=test2
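
Besides the hard requirement used above, pod affinity also has a soft form, preferredDuringSchedulingIgnoredDuringExecution. A minimal sketch (the Pod name testpod02-soft and the weight are illustrative, not from the original environment):

apiVersion: v1
kind: Pod
metadata:
  name: testpod02-soft
spec:
  affinity:
    podAffinity:
      preferredDuringSchedulingIgnoredDuringExecution: ## a preference, not a hard rule
      - weight: 100                                    ## 1-100; higher means a stronger preference
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: app
              operator: In
              values:
              - myapp01
          topologyKey: "kubernetes.io/hostname"
  containers:
  - name: testpod02-soft
    image: redis:7.2.5

If no node satisfies the preference, the Pod is still scheduled somewhere, unlike with the required rule.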

II. Pod Anti-Affinity

  The goal is to keep a new Pod and a target Pod from being scheduled together, i.e. not on the same Node.

Example:

[root@aminglinux01 ~]# cat testpod03.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: testpod03
  labels:                        ##### Labels are user-defined key/value pairs; you can set whatever you like.
    app: myapp03
    env: test3
spec:
  containers:
  - name: testpod03
    image: nginx:latest
[root@aminglinux01 ~]# kubectl apply -f testpod03.yaml 
pod/testpod03 created
[root@aminglinux01 ~]# kubectl get pod testpod03 -owide
NAME        READY   STATUS    RESTARTS   AGE   IP              NODE           NOMINATED NODE   READINESS GATES
testpod03   1/1     Running   0          55s   10.18.206.235   aminglinux02   <none>           <none>
[root@aminglinux01 ~]# 

Create a new Pod with the anti-affinity condition app=myapp03

[root@aminglinux01 ~]# cat testpod04.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: testpod04
  labels:
    app: myapp04
    env: test4
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution: ## the matching rules below must be satisfied
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - myapp03 ## app=myapp03; the Pod created above matches this
        topologyKey: "kubernetes.io/hostname"
  containers:
  - name: testpod04
    image: registry.cn-hangzhou.aliyuncs.com/daliyused/redis:7.2.5
[root@aminglinux01 ~]# kubectl get pod testpod04 -owide
NAME        READY   STATUS    RESTARTS   AGE   IP             NODE           NOMINATED NODE   READINESS GATES
testpod04   1/1     Running   0          99s   10.18.68.164   aminglinux03   <none>           <none>
[root@aminglinux01 ~]# kubectl get pod testpod03 -owide
NAME        READY   STATUS    RESTARTS   AGE   IP              NODE           NOMINATED NODE   READINESS GATES
testpod03   1/1     Running   0          10m   10.18.206.235   aminglinux02   <none>           <none>
[root@aminglinux01 ~]# 
[root@aminglinux01 ~]# kubectl get pod testpod03 --show-labels
NAME        READY   STATUS    RESTARTS   AGE   LABELS
testpod03   1/1     Running   0          12m   app=myapp03,env=test3
[root@aminglinux01 ~]# kubectl get pod testpod04 --show-labels
NAME        READY   STATUS    RESTARTS   AGE    LABELS
testpod04   1/1     Running   0          4m1s   app=myapp04,env=test4
[root@aminglinux01 ~]# 

III. Taints and Tolerations

In plain terms:

       node (a piece of clothing): damn it, I have a stain (taint) on me.

       pod (no toleration configured): ugh, I'm a clean freak, I can't wear that node (cannot be scheduled).

       pod (with a toleration): fine, I can live with it, dirty or not I'll wear it (can be scheduled)!

       pod (toleration doesn't match the taint): no way, a stain in that spot I cannot stand (cannot be scheduled)!

  A taint, from the node's point of view, is the opposite of node affinity: node affinity attracts Pods to a certain class of nodes, while a taint lets a node repel a certain class of Pods.

  A toleration is applied to a Pod and allows the scheduler to schedule it onto nodes carrying the matching taint. A toleration allows scheduling but does not guarantee it: as part of its job, the scheduler also evaluates other parameters.

  Taints and tolerations work together to keep Pods off unsuitable nodes. One or more taints can be applied to each node, which means that Pods that do not tolerate those taints will not be accepted by that node.

Command format for setting a taint:

kubectl taint node [node] key=value:[effect]            ### [effect] can be one of: NoSchedule | PreferNoSchedule | NoExecute             ### the key=value in the taint corresponds to the key=value in the toleration.

NoSchedule: must not be scheduled here; Pods already running on the node are not affected.
PreferNoSchedule: try not to schedule here; only use this node when nothing else is schedulable.
NoExecute: not only are new Pods not scheduled, existing Pods on the node are evicted as well.

Command format for removing a taint:

kubectl taint node [node] key:[effect]-

Example:

kubectl taint node aminglinux02 name=aming:NoSchedule

[root@aminglinux01 ~]# kubectl taint node aminglinux02 name=aming:NoSchedule
node/aminglinux02 tainted
[root@aminglinux01 ~]# kubectl describe node aminglinux02  | grep Taint
Taints:             name=aming:NoSchedule
[root@aminglinux01 ~]# 

View the taint:

kubectl describe node aminglinux02 | grep Taints -A 10

[root@aminglinux01 ~]# kubectl describe node aminglinux02 | grep Taints -A 10
Taints:             name=aming:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  aminglinux02
  AcquireTime:     <unset>
  RenewTime:       Tue, 16 Jul 2024 01:07:26 +0800
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Mon, 15 Jul 2024 02:01:48 +0800   Mon, 15 Jul 2024 02:01:48 +0800   CalicoIsUp                   Calico is running on this node
  MemoryPressure       False   Tue, 16 Jul 2024 01:07:10 +0800   Fri, 05 Jul 2024 03:00:29 +0800   KubeletHasSufficientMemory   kubelet has sufficient memory available
[root@aminglinux01 ~]# 

Several ways to configure a toleration:
1) Exact match

tolerations:
- key: "taintKey" # must match the taint's key
  operator: "Equal" # match type; Equal requires the value to match the taint's value as well
  value: "taintValue" # must match the taint's value
  effect: "NoSchedule" # the taint's effect

Notes:
The key and effect declared in the Pod's toleration must match the taint's settings.
If operator is set to Equal, the key and value must match the taint's settings as well.

2) Partial match

tolerations:
- key: "taintKey" # must match the taint's key
  operator: "Exists" # match type; only the taint's key has to match
  effect: "NoSchedule" # the taint's effect

Notes:
If operator is set to Exists, no value is needed; only the key name is checked.

3) Broad match

tolerations:
- key: "taintKey" #和污点的key名字保持一致
  operator: "Exists"

Notes:
If effect is omitted, only the key name has to match; the toleration matches no matter which effect the taint uses.

4) Match everything

tolerations:
- operator: "Exists"       ####operator的值为 Exists,这是无需指定value,operator的值为Equal并且value相等,如果不指定operator,则默认为Equal。

Notes:
If key and effect are both omitted, every taint is matched. DaemonSet Pods automatically receive tolerations for several built-in node taints (such as not-ready and unreachable), and system DaemonSets are often configured to tolerate everything; the check below shows how to inspect this.
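
To see what tolerations a DaemonSet Pod actually carries, you can inspect one in your own cluster. A quick check, assuming kube-proxy runs as a DaemonSet in kube-system (typical for kubeadm clusters):

kubectl -n kube-system get daemonset kube-proxy -o jsonpath='{.spec.template.spec.tolerations}'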

Setting a delay before eviction

tolerations:
- key: "key1"
  operator: "Equal"
  value: "value1"
  effect: "NoExecute"
  tolerationSeconds: 3600

Notes:
If this Pod is already running, it will keep running on the node for another 3600 seconds and then be evicted. If the taint is removed before then, the Pod will not be evicted. (A way to observe this is sketched below.)
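
One way to observe the NoExecute behaviour is to taint a node with NoExecute and watch Pods without a matching toleration get evicted. A hedged sketch reusing the node names from this lab (try it on a test cluster only; key1=value1 is an arbitrary example taint):

kubectl taint node aminglinux03 key1=value1:NoExecute
kubectl get pod -o wide -w                        ## Pods on aminglinux03 without a matching toleration are evicted
kubectl taint node aminglinux03 key1:NoExecute-   ## remove the taint again afterwards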

A complete Pod YAML example:

[root@aminglinux01 ~]# cat toleration.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: ng
  labels:
    env: dev
spec:
  containers:
  - name: ng
    image: nginx:latest
  tolerations:
  - key: name
    operator: Exists            # match type; only the taint's key has to match
    effect: NoSchedule           # the taint's effect: must not be scheduled
[root@aminglinux01 ~]# kubectl apply -f toleration.yaml 
pod/ng created
[root@aminglinux01 ~]# kubectl get pod -o wide
NAME                         READY   STATUS      RESTARTS        AGE     IP              NODE           NOMINATED NODE   READINESS GATES
ng                           1/1     Running     0               31s     10.18.68.168    aminglinux03   <none>           <none>

Case 1:

Taint both aminglinux02 and aminglinux03 and give the Pod no toleration; as you can see, the ng Pod stays Pending and cannot be scheduled onto any node.

[root@aminglinux01 ~]# kubectl describe node aminglinux02  | grep Taint
Taints:             name=aming:NoSchedule
[root@aminglinux01 ~]# kubectl describe node aminglinux03  | grep Taint
Taints:             name=aming:NoSchedule
[root@aminglinux01 ~]# kubectl get pod ng -owide
NAME   READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
ng     0/1     Pending   0          19s   <none>   <none>   <none>           <none>
[root@aminglinux01 ~]# cat toleration.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: ng
  labels:
    env: dev
spec:
  containers:
  - name: ng
    image: nginx:latest
#  tolerations:
#  - key: name
#    operator: Exists            # match type; only the taint's key has to match
#    effect: NoSchedule           # the taint's effect: must not be scheduled
[root@aminglinux01 ~]# 

Case 2:

Taint aminglinux02, leave aminglinux03 untainted, and give the Pod no toleration; as you can see, the ng Pod is scheduled onto the aminglinux03 node.

[root@aminglinux01 ~]# kubectl taint node aminglinux03 name:NoSchedule-
node/aminglinux03 untainted
[root@aminglinux01 ~]# kubectl describe node aminglinux02  | grep Taint
Taints:             name=aming:NoSchedule
[root@aminglinux01 ~]# kubectl describe node aminglinux03  | grep Taint
Taints:             <none>
[root@aminglinux01 ~]# 
[root@aminglinux01 ~]# kubectl apply -f toleration.yaml 
pod/ng created
[root@aminglinux01 ~]# kubectl get pod ng -owide
NAME   READY   STATUS    RESTARTS   AGE   IP             NODE           NOMINATED NODE   READINESS GATES
ng     1/1     Running   0          4s    10.18.68.167   aminglinux03   <none>           <none>
[root@aminglinux01 ~]# 

Case 3:

Leave aminglinux02 untainted, taint aminglinux03, and give the Pod no toleration; as you can see, the ng Pod is scheduled onto the aminglinux02 node.

[root@aminglinux01 ~]# kubectl delete -f toleration.yaml 
pod "ng" deleted
[root@aminglinux01 ~]# kubectl taint node aminglinux02 name:NoSchedule-
node/aminglinux02 untainted
[root@aminglinux01 ~]# kubectl taint node aminglinux03 name=aming:NoSchedule
node/aminglinux03 tainted
[root@aminglinux01 ~]# kubectl describe node aminglinux02  | grep Taint
Taints:             <none>
[root@aminglinux01 ~]# kubectl describe node aminglinux03  | grep Taint
Taints:             name=aming:NoSchedule
[root@aminglinux01 ~]# kubectl apply -f toleration.yaml 
pod/ng created
[root@aminglinux01 ~]# kubectl get pod ng -owide
NAME   READY   STATUS    RESTARTS   AGE   IP              NODE           NOMINATED NODE   READINESS GATES
ng     1/1     Running   0          6s    10.18.206.240   aminglinux02   <none>           <none>
[root@aminglinux01 ~]# 

Case 4:

Leave aminglinux02 untainted, taint aminglinux03, and give the Pod a toleration; as you can see, the ng Pod is scheduled onto the aminglinux03 node.

[root@aminglinux01 ~]# cat toleration.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: ng
  labels:
    env: dev
spec:
  containers:
  - name: ng
    image: nginx:latest
  tolerations:
  - key: name
    operator: Exists            # match type; only the taint's key has to match
    effect: NoSchedule           # the taint's effect: must not be scheduled
[root@aminglinux01 ~]# kubectl describe node aminglinux02  | grep Taint
Taints:             <none>
[root@aminglinux01 ~]# kubectl describe node aminglinux03  | grep Taint
Taints:             name=aming:NoSchedule
[root@aminglinux01 ~]# kubectl get pod ng -owide
Error from server (NotFound): pods "ng" not found
[root@aminglinux01 ~]# kubectl apply -f toleration.yaml 
pod/ng created
[root@aminglinux01 ~]# kubectl get pod ng -owide
NAME   READY   STATUS              RESTARTS   AGE   IP       NODE           NOMINATED NODE   READINESS GATES
ng     0/1     ContainerCreating   0          2s    <none>   aminglinux03   <none>           <none>
[root@aminglinux01 ~]# 

Case 5:

Leave aminglinux02 untainted, put a new (different) taint on aminglinux03, and keep the Pod's toleration; as you can see, the ng Pod is scheduled onto the aminglinux02 node.

[root@aminglinux01 ~]# kubectl taint node aminglinux03 name:NoSchedule-
node/aminglinux03 untainted
[root@aminglinux01 ~]# kubectl delete -f toleration.yaml 
pod "ng" deleted
[root@aminglinux01 ~]# kubectl taint node aminglinux03 yeyunyi=jiayou:NoSchedule
node/aminglinux03 tainted
[root@aminglinux01 ~]# kubectl describe node aminglinux02  | grep Taint
Taints:             <none>
[root@aminglinux01 ~]# kubectl describe node aminglinux03  | grep Taint
Taints:             yeyunyi=jiayou:NoSchedule
[root@aminglinux01 ~]# kubectl apply -f toleration.yaml 
pod/ng created
[root@aminglinux01 ~]# kubectl get pod ng -owide
NAME   READY   STATUS    RESTARTS   AGE   IP              NODE           NOMINATED NODE   READINESS GATES
ng     1/1     Running   0          5s    10.18.206.238   aminglinux02   <none>           <none>
[root@aminglinux01 ~]# 

Cases 4 and 5 show that a Pod with a toleration can land both on a node carrying the matching taint and on a node with no taint at all. You can also inspect a Pod's tolerations directly, as shown below.
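
To read the tolerations straight off the running Pod (note that Kubernetes also injects its own not-ready/unreachable tolerations):

kubectl get pod ng -o jsonpath='{.spec.tolerations}'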

IV. API Resource Objects: PV and PVC

Typical static PV scenario: an administrator creates the PV and connects it to the storage; a user creates a PVC, which binds to the PV; the Pod then references the PVC in .spec.volumes[] with type persistentVolumeClaim, and the PVC is mounted into the Pod.

Typical dynamic PV scenario: create a StorageClass (SC) bound to the storage first; the PVC then only needs to reference the SC by name (see the sketch below).
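
A minimal sketch of the dynamic scenario, assuming an NFS provisioner is already deployed in the cluster. The SC name mirrors the nfs-client class that shows up in the outputs later in this section, but the provisioner string and the PVC name dyn-pvc are placeholders you would adjust to your own setup:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner   ## must match the provisioner you actually deployed
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dyn-pvc
spec:
  storageClassName: nfs-client      ## reference the SC by name; the PV is created automatically
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 500Mi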

Three concepts related to storage persistence:

  • PersistentVolume (PV)

A description of a concrete storage resource, such as NFS, Ceph, or GlusterFS; the PV is what gives access to the actual storage.

  • PersistentVolumeClaim (PVC)

A Pod that wants to use a concrete storage resource goes through a PVC. The PVC defines the storage properties the Pod wants and is used to request a suitable storage resource (PV); once a suitable PV is found, the PVC and the PV are bound together, one to one.

  • StorageClass (SC)

PVs can be created manually or automatically. When the demand for PVs is large, creating them by hand becomes tedious; an SC can create PVs automatically and bind the PVC to the PV.

An SC defines two things:
the properties of the PV, such as storage type and size;
the storage plugin (provisioner) used to create the PV; the provisioner is the key to creating PVs automatically.

1) PV YAML example:

vi testpv.yaml

[root@aminglinux01 ~]# cat testpv.yaml 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: testpv
spec:
  storageClassName: test-storage    # storage class name; both the PV and the PVC carry this field so the two can be matched and bound together
  accessModes:                      # access modes for this PV
  - ReadWriteOnce
  capacity:                         # size of this storage
    storage: 500Mi ## provides 500Mi of space
  hostPath:                         # access path of this storage, i.e. a local disk path
    path: /tmp/testpv/
[root@aminglinux01 ~]# 

Notes:
storageClassName: the storage class name; both the PV and the PVC carry this field so the two can be matched and bound together.
accessModes defines the PV's access modes; there are three:

  • ReadWriteOnce: the volume is readable and writable but can only be mounted by Pods on a single node; abbreviated RWO
  • ReadOnlyMany: the volume is read-only and can be mounted many times by Pods on any node; abbreviated ROX
  • ReadWriteMany: the volume is readable and writable and can be mounted many times by Pods on any node; abbreviated RWX

capacity defines the size of the storage.
hostPath defines the access path of the storage, here a path on the local disk.

2) PVC YAML example:

vi testpvc.yaml

[root@aminglinux01 ~]# cat testpvc.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: testpvc
spec:
  storageClassName: test-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi ## requests 100Mi of space
[root@aminglinux01 ~]# 

Apply the PV and PVC YAML:

kubectl apply -f testpv.yaml -f testpvc.yaml

[root@aminglinux01 ~]# kubectl apply -f testpv.yaml -f testpvc.yaml
persistentvolume/testpv created
persistentvolumeclaim/testpvc unchanged
[root@aminglinux01 ~]# 

Check the status:

kubectl get pv,pvc

[root@aminglinux01 ~]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                           STORAGECLASS   REASON   AGE
pvc-402daec2-9527-4a53-a6cb-e1d18c98f3d4   500Mi      RWX            Delete           Bound    default/redis-pvc-redis-sts-0   nfs-client              6d23h
pvc-bb317d2c-ef72-47a0-a8e2-f7704f60096d   500Mi      RWX            Delete           Bound    default/redis-pvc-redis-sts-1   nfs-client              6d23h
testpv                                     500Mi      RWO            Retain           Bound    default/testpvc                 test-storage            2m52s
[root@aminglinux01 ~]# kubectl get pvc
NAME                    STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
redis-pvc-redis-sts-0   Bound    pvc-402daec2-9527-4a53-a6cb-e1d18c98f3d4   500Mi      RWX            nfs-client     6d23h
redis-pvc-redis-sts-1   Bound    pvc-bb317d2c-ef72-47a0-a8e2-f7704f60096d   500Mi      RWX            nfs-client     6d23h
testpvc                 Bound    testpv                                     500Mi      RWO            test-storage   4m40s
[root@aminglinux01 ~]# 
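
To complete the static flow described at the start of this section, a Pod would consume testpvc through .spec.volumes[]. A minimal sketch (the Pod name testpv-pod and the mount path are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: testpv-pod
spec:
  containers:
  - name: web
    image: nginx:latest
    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html   ## path inside the container
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: testpvc                 ## the PVC created above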

Experiment:
Change the requested size in testpvc from 100Mi to 1000Mi and look at the PV's STATUS. Because the PV offers only 500Mi of space while the PVC asks for 1000Mi, the PV and the PVC do not get bound.

[root@aminglinux01 ~]# cat testpvc.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: testpvc
spec:
  storageClassName: test-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1000Mi ## requests 1000Mi of space
[root@aminglinux01 ~]# 
[root@aminglinux01 ~]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM                           STORAGECLASS   REASON   AGE
pvc-402daec2-9527-4a53-a6cb-e1d18c98f3d4   500Mi      RWX            Delete           Bound      default/redis-pvc-redis-sts-0   nfs-client              6d23h
pvc-bb317d2c-ef72-47a0-a8e2-f7704f60096d   500Mi      RWX            Delete           Bound      default/redis-pvc-redis-sts-1   nfs-client              6d23h
testpv                                     500Mi      RWO            Retain           Released   default/testpvc                 test-storage            22m
[root@aminglinux01 ~]# kubectl get pvc
NAME                    STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
redis-pvc-redis-sts-0   Bound     pvc-402daec2-9527-4a53-a6cb-e1d18c98f3d4   500Mi      RWX            nfs-client     7d
redis-pvc-redis-sts-1   Bound     pvc-bb317d2c-ef72-47a0-a8e2-f7704f60096d   500Mi      RWX            nfs-client     6d23h
testpvc                 Pending                                                                        test-storage   26s
[root@aminglinux01 ~]# 
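
Note that testpv shows STATUS Released above: it was bound to the earlier testpvc and, with the Retain policy, it does not automatically become Available again once that claim goes away. A common way to make such a PV reusable is to clear its claimRef (a hedged sketch; double-check before doing this where the data matters):

kubectl patch pv testpv -p '{"spec":{"claimRef":null}}'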


3) How PVs and PVCs are matched

Once a PV is created, it waits for a PVC to bind to it; as soon as a PVC finds a suitable PV, they bind. When there are multiple PVs, how does a PVC pick one? The rules are as follows (see the sketch after this list):
Access mode and storage class: Kubernetes filters for PVs whose access modes (accessModes) and storage class (storageClassName) match the PVC. If no PV matches, the PVC stays unbound.
Capacity: among the PVs that match on access mode and storage class, Kubernetes considers only PVs whose capacity is greater than or equal to the PVC's request.
Best fit: among the PVs that satisfy access mode, storage class, and capacity, Kubernetes picks the one whose capacity is closest to the PVC's request. If several PVs have the same capacity, one of them is chosen.
No double binding: a PV can be bound to only one PVC at a time. Once a PV is bound to a PVC, it is no longer available to other PVCs.
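
A small illustration of the best-fit rule (the names pv-small, pv-large, demo, and demo-pvc are hypothetical): a 100Mi claim binds to the 200Mi PV rather than the 1Gi one, because both qualify but 200Mi is the closer fit.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-small                      ## hypothetical
spec:
  storageClassName: demo
  accessModes: ["ReadWriteOnce"]
  capacity: {storage: 200Mi}
  hostPath: {path: /tmp/pv-small}
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-large                      ## hypothetical
spec:
  storageClassName: demo
  accessModes: ["ReadWriteOnce"]
  capacity: {storage: 1Gi}
  hostPath: {path: /tmp/pv-large}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  storageClassName: demo
  accessModes: ["ReadWriteOnce"]
  resources: {requests: {storage: 100Mi}}
## expected result: demo-pvc binds to pv-small (the smallest capacity that still satisfies the request)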

V. Local Storage

The PV YAML example in the previous section is in fact local storage. A local-storage PV is a somewhat special kind of persistent storage in Kubernetes: it lets a local disk or directory on a node be used as a PV. Unlike other PV types (such as NFS, Ceph, or cloud storage), a local-storage PV uses the node's own storage resources directly, which gives lower latency and higher performance. When using local-storage PVs, keep the following key points in mind.

  • Node binding: a local-storage PV is tied to a specific node, because it uses that node's storage resources directly.

nodeAffinity: ## defines the node affinity
  required:
    nodeSelectorTerms:
    - matchExpressions:
      - key: kubernetes.io/hostname
        operator: In
        values:
        - node-name         

Data durability: because a local-storage PV is tied to a specific node, the data stored in it may become inaccessible if that node fails. When using local-storage PVs, make sure you have a proper backup strategy in place so that a node failure does not lead to data loss.

Scheduling constraints: when a Pod uses a PersistentVolumeClaim (PVC) backed by local storage, Kubernetes tries to schedule the Pod onto the node the PV is bound to. If that node lacks the resources to run the Pod, the Pod will not start. So make sure the node associated with the PV has enough resources to run the Pod.

Reclaim policy: when the PVC is deleted, the PV's reclaim policy decides what happens to the associated local storage. For local-storage PVs, use the Retain or Delete reclaim policy: Retain keeps the storage and its data for manual cleanup and management, while Delete removes them. Note that the Recycle policy does not apply to local-storage PVs.

persistentVolumeReclaimPolicy: Retain

Complete example:

First, make sure a local directory exists on every node that will provide local storage. For example, create the /mnt/local-storage directory on the node:

mkdir -p /mnt/local-storage

[root@aminglinux01 ~]# cat local-pv.yaml 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
  labels:
    type: local
spec:
  storageClassName: local-storage
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce                        ### can be mounted by Pods on a single node only
  persistentVolumeReclaimPolicy: Retain    ### keep the PV (and its data) when the PVC is deleted
  local:
    path: /mnt/local-storage
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname # built-in node label holding the node's hostname
          operator: In
          values:
          - aminglinux02 # only the aminglinux02 node satisfies this requirement
[root@aminglinux01 ~]# 

Apply the PV manifest:
kubectl apply -f local-pv.yaml

Then create a PVC manifest, for example local-pvc.yaml:

[root@aminglinux01 ~]# cat local-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-pvc
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
[root@aminglinux01 ~]# 

Apply the PVC manifest:
kubectl apply -f local-pvc.yaml

Finally, create a Pod manifest, for example local-pod.yaml:

[root@aminglinux01 ~]# cat local-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: local-pod
spec:
  containers:
  - name: local-container
    image: nginx:latest
    volumeMounts:
    - name: local-storage
      mountPath: /data       ### path inside the Pod's container
  volumes:
  - name: local-storage
    persistentVolumeClaim:
      claimName: local-pvc
[root@aminglinux01 ~]# 

Apply the Pod manifest:

kubectl apply -f local-pod.yaml

Now the local-container in local-pod has the local storage mounted. All data written to the /data directory will be persisted in the local storage.

[root@aminglinux01 ~]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM                           STORAGECLASS    REASON   AGE
local-pv                                   5Gi        RWO            Retain           Bound      default/local-pvc               local-storage            7m26s
pvc-402daec2-9527-4a53-a6cb-e1d18c98f3d4   500Mi      RWX            Delete           Bound      default/redis-pvc-redis-sts-0   nfs-client               7d1h
pvc-bb317d2c-ef72-47a0-a8e2-f7704f60096d   500Mi      RWX            Delete           Bound      default/redis-pvc-redis-sts-1   nfs-client               7d1h
testpv                                     500Mi      RWO            Retain           Released   default/testpvc                 test-storage             108m
[root@aminglinux01 ~]# kubectl get pvc
NAME                    STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS    AGE
local-pvc               Bound     local-pv                                   5Gi        RWO            local-storage   5m44s
redis-pvc-redis-sts-0   Bound     pvc-402daec2-9527-4a53-a6cb-e1d18c98f3d4   500Mi      RWX            nfs-client      7d1h
redis-pvc-redis-sts-1   Bound     pvc-bb317d2c-ef72-47a0-a8e2-f7704f60096d   500Mi      RWX            nfs-client      7d1h
testpvc                 Pending                                                                        test-storage    86m
[root@aminglinux01 ~]# 

[root@aminglinux01 ~]# kubectl get pvc
NAME                    STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS    AGE
local-pvc               Bound     local-pv                                   5Gi        RWO            local-storage   5m44s
redis-pvc-redis-sts-0   Bound     pvc-402daec2-9527-4a53-a6cb-e1d18c98f3d4   500Mi      RWX            nfs-client      7d1h
redis-pvc-redis-sts-1   Bound     pvc-bb317d2c-ef72-47a0-a8e2-f7704f60096d   500Mi      RWX            nfs-client      7d1h
testpvc                 Pending                                                                        test-storage    86m
[root@aminglinux01 ~]# kubectl describe pod local-pod 
Name:             local-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             aminglinux02/192.168.100.152
Start Time:       Tue, 16 Jul 2024 05:37:53 +0800
Labels:           <none>
Annotations:      cni.projectcalico.org/containerID: 8cefb4100134b8e4e2d401cbb0814f54ed737ebf99673756b4246425e5b34c07
                  cni.projectcalico.org/podIP: 10.18.206.241/32
                  cni.projectcalico.org/podIPs: 10.18.206.241/32
Status:           Running
IP:               10.18.206.241
IPs:
  IP:  10.18.206.241
Containers:
  local-container:
    Container ID:   containerd://5ead7482e4a9496fec3a082d69d9230790c51d94bc8cfd41871905d6b7ac0ec6
    Image:          nginx:latest
    Image ID:       docker.io/library/nginx@sha256:67682bda769fae1ccf5183192b8daf37b64cae99c6c3302650f6f8bf5f0f95df
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Tue, 16 Jul 2024 05:37:56 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /data from local-storage (rw)

      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-khcjt (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  local-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  local-pvc
    ReadOnly:   false
  kube-api-access-khcjt:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  3m44s  default-scheduler  Successfully assigned default/local-pod to aminglinux02
  Normal  Pulling    3m42s  kubelet            Pulling image "nginx:latest"
  Normal  Pulled     3m40s  kubelet            Successfully pulled image "nginx:latest" in 2.132145172s (2.132150411s including waiting)
  Normal  Created    3m40s  kubelet            Created container local-container
  Normal  Started    3m40s  kubelet            Started container local-container
[root@aminglinux01 ~]# 
 

Test the local storage:

[root@aminglinux02 ~]# mkdir -p /mnt/local-storage
[root@aminglinux02 ~]# echo 1111 > /mnt/local-storage/1.txt
[root@aminglinux02 ~]# cat /mnt/local-storage/1.txt 
1111
[root@aminglinux02 ~]# 
[root@aminglinux01 ~]# kubectl exec -it  local-pod -- cat /data/1.txt
1111
[root@aminglinux01 ~]# 

You must make sure the Pod always stays on the same node, so that after a restart it can still mount its local storage there; this is why the nodeAffinity is essential. (A sketch of pinning the Pod itself follows.)
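
If you also want to pin the Pod itself to that node (on top of the PV's nodeAffinity), a simple nodeSelector in the Pod spec works. A minimal snippet to add under the Pod's spec, not part of the original local-pod.yaml:

spec:
  nodeSelector:
    kubernetes.io/hostname: aminglinux02   ## built-in label; pins the Pod to the node that owns /mnt/local-storage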
