K8S Pod Management

1. Pod Configuration Management: ConfigMap

A ConfigMap is typically used to provide a Pod with the following (a minimal sketch follows the list):

        1. Environment variables inside the container

        2. Command-line arguments for the container's startup command

        3. Files or directories mounted inside the container as a Volume
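
A minimal sketch of all three usages in one manifest (the ConfigMap name app-config and its keys are hypothetical, not from the original text):

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config             # hypothetical ConfigMap
data:
  LOG_LEVEL: "info"
  app.properties: |
    listen=0.0.0.0:8080
---
apiVersion: v1
kind: Pod
metadata:
  name: cm-demo
spec:
  containers:
  - name: demo
    image: nginx
    env:                       # 1. environment variable taken from the ConfigMap
    - name: LOG_LEVEL
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: LOG_LEVEL
    # 2. the environment variable reused as a startup argument
    command: [ "/bin/sh", "-c", "echo log level is $(LOG_LEVEL); nginx -g 'daemon off;'" ]
    volumeMounts:              # 3. the whole ConfigMap mounted as files under /etc/app
    - name: config-volume
      mountPath: /etc/app
  volumes:
  - name: config-volume
    configMap:
      name: app-config
  restartPolicy: Always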

2. Pod Lifecycle and Restart Policy

Restart policies (a short example follows the list):

        Always: the default; automatically restart the container whenever it exits or fails

        OnFailure: restart only when the container terminates with a non-zero exit code

        Never: never restart the container
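
A minimal sketch of setting the policy on a Pod (the busybox one-shot command is only for illustration); restartPolicy applies to every container in the Pod:

apiVersion: v1
kind: Pod
metadata:
  name: restart-demo           # hypothetical Pod
spec:
  restartPolicy: OnFailure     # restart only on a non-zero exit code
  containers:
  - name: task
    image: busybox
    command: [ "/bin/sh", "-c", "echo done" ]   # exits 0, so the Pod completes without a restart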

3. Pod Health Checks and Service Availability Checks

1. LivenessProbe (liveness probe): determines whether the container is Running. If the container is found to be unhealthy, the kubelet acts on it according to the restart policy. If a container does not define a LivenessProbe, the probe result is always treated as Success.

2. ReadinessProbe (readiness probe): determines whether the container is Ready, i.e. able to serve requests. For example, while nginx is performing a reload, the readiness check reports not-ready and the Pod's IP is removed from the Service endpoints until it recovers.

3. StartupProbe (startup probe): determines whether the container has finished starting up.

Configuration methods:

1. ExecAction: run a command inside the container. If the command exits with status code 0, the container is considered healthy.

ExecAction for LivenessProbe

#1. Create the YAML file
[root@k8s-master yaml]# cat nginx-health.yml 
apiVersion: v1
kind: Pod
metadata:
  name: ngx-health
spec:
  containers:
  - name: ngx-liveness
    image: nginx:latest
    command:
    - /bin/sh
    - -c
    - /usr/sbin/nginx; sleep 60; rm -rf /run/nginx.pid
    livenessProbe:
      exec:
        command: [ "/bin/sh", "-c", "test", "-e", "/run/nginx.pid" ]
  restartPolicy: Always
#2. Create the Pod
[root@k8s-master yaml]# kubectl create -f nginx-health.yml 
pod/ngx-health created
#3. Check the Pod
[root@k8s-master yaml]# kubectl get po 
NAME                     READY   STATUS    RESTARTS   AGE
ngx-health               1/1     Running   1          103s
[root@k8s-master yaml]# kubectl describe po ngx-health 
Name:         ngx-health
Namespace:    default
Priority:     0
Node:         k8s-node2/192.168.200.143
Start Time:   Fri, 13 Oct 2023 10:26:10 +0800
Labels:       <none>
Annotations:  <none>
Status:       Running
IP:           10.244.2.4
Containers:
  ngx-liveness:
    Container ID:  docker://89be0a3d279356181c09b17c4fdb60ea5b15686e8d3009fcbec8a08a7ce4b239
    Image:         nginx:latest
    Image ID:      docker-pullable://nginx@sha256:b4af4f8b6470febf45dc10f564551af682a802eda1743055a7dfc8332dffa595
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
      -c
      /usr/sbin/nginx; sleep 60; rm -rf /run/nginx.pid
    State:          Running
      Started:      Fri, 13 Oct 2023 10:28:13 +0800
    Last State:     Terminated
      Reason:       Error
      Exit Code:    137
      Started:      Fri, 13 Oct 2023 10:27:13 +0800
      Finished:     Fri, 13 Oct 2023 10:28:10 +0800
    Ready:          True
    Restart Count:  2
    Liveness:       exec [/bin/sh -c test -e /run/nginx.pid] delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-wz6sm (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-wz6sm:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-wz6sm
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                 From                Message
  ----     ------     ----                ----                -------
  Normal   Scheduled  2m9s                default-scheduler   Successfully assigned default/ngx-health to k8s-node2
  Warning  Unhealthy  39s (x6 over 119s)  kubelet, k8s-node2  Liveness probe failed:
  Normal   Killing    39s (x2 over 99s)   kubelet, k8s-node2  Container ngx-liveness failed liveness probe, will be restarted
  Normal   Pulling    9s (x3 over 2m5s)   kubelet, k8s-node2  Pulling image "nginx:latest"
  Normal   Pulled     6s (x3 over 2m1s)   kubelet, k8s-node2  Successfully pulled image "nginx:latest"
  Normal   Created    6s (x3 over 2m1s)   kubelet, k8s-node2  Created container ngx-liveness
  Normal   Started    6s (x3 over 2m1s)   kubelet, k8s-node2  Started container ngx-liveness

The health check verifies that /run/nginx.pid exists. Since the container's startup command deletes nginx.pid after 60 seconds, the liveness probe starts failing and the container is restarted over and over.

2. HTTPGetAction: issue an HTTP GET request against the container's IP address, port, and path. If the response status code is >= 200 and < 400, the container is considered healthy.

HTTPGetAction for LivenessProbe

[root@k8s-master yaml]# cat nginx-health.yml 
apiVersion: v1
kind: Pod
metadata:
  name: ngx-health-readiness
spec:
  containers:
  - name: ngx-liveness
    image: nginx:latest
    command:
    - /bin/sh
    - -c
    - /usr/sbin/nginx; sleep 60; rm -rf /run/nginx.pid
    livenessProbe:
      httpGet:
        path: /index.html
        port: 80
        scheme: HTTP
  restartPolicy: Always

[root@k8s-master yaml]# kubectl create -f nginx-health.yml 
pod/ngx-health-readiness created

[root@k8s-master yaml]# kubectl get po 
NAME                     READY   STATUS    RESTARTS   AGE
ngx-health-readiness     1/1     Running   1          92s

[root@k8s-master yaml]# kubectl describe po ngx-health-readiness
Name:         ngx-health-readiness
Namespace:    default
Priority:     0
Node:         k8s-node2/192.168.200.143
Start Time:   Fri, 13 Oct 2023 10:52:48 +0800
Labels:       <none>
Annotations:  <none>
Status:       Running
IP:           10.244.2.5
Containers:
  ngx-liveness:
    Container ID:  docker://29e667402692c8ab8738d3e6a5b21afb0b54473698ec56d2bf00143b856199ff
    Image:         nginx:latest
    Image ID:      docker-pullable://nginx@sha256:b4af4f8b6470febf45dc10f564551af682a802eda1743055a7dfc8332dffa595
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
      -c
      /usr/sbin/nginx; sleep 60; rm -rf /run/nginx.pid
    State:          Running
      Started:      Fri, 13 Oct 2023 10:52:53 +0800
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:80/index.html delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-wz6sm (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-wz6sm:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-wz6sm
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                From                Message
  ----     ------     ----               ----                -------
  Normal   Scheduled  65s                default-scheduler   Successfully assigned default/ngx-health-readiness to k8s-node2
  Normal   Pulled     61s                kubelet, k8s-node2  Successfully pulled image "nginx:latest"
  Normal   Created    61s                kubelet, k8s-node2  Created container ngx-liveness
  Normal   Started    60s                kubelet, k8s-node2  Started container ngx-liveness
  Warning  Unhealthy  33s (x3 over 53s)  kubelet, k8s-node2  Liveness probe failed: Get http://10.244.2.5:80/index.html: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  Normal   Killing    33s                kubelet, k8s-node2  Container ngx-liveness failed liveness probe, will be restarted
  Normal   Pulling    3s (x2 over 64s)   kubelet, k8s-node2  Pulling image "nginx:latest"
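
The same httpGet action also works as a readinessProbe: instead of restarting the container, a failed readiness check only removes the Pod IP from the Service endpoints until the check passes again. A minimal sketch (hypothetical manifest, not part of the session above):

apiVersion: v1
kind: Pod
metadata:
  name: ngx-readiness          # hypothetical Pod name
spec:
  containers:
  - name: ngx
    image: nginx:latest
    readinessProbe:
      httpGet:
        path: /index.html
        port: 80
      initialDelaySeconds: 5   # wait 5s before the first check
      periodSeconds: 10        # then check every 10s
  restartPolicy: Always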

3. TCPSocketAction: attempt a TCP connection to the container's IP address and port. If the connection can be established, the container is considered healthy.

TCPSocketAction for LivenessProbe

[root@k8s-master yaml]# cat nginx-health-tcpsocket.yml 
apiVersion: v1
kind: Pod
metadata:
  name: ngx-health
spec:
  containers:
  - name: ngx-liveness
    image: nginx:latest
    command:
    - /bin/sh
    - -c
    - /usr/sbin/nginx; sleep 60; rm -rf /run/nginx.pid
    livenessProbe:
      tcpSocket:
        port: 80
  restartPolicy: Always

[root@k8s-master yaml]# kubectl delete po ngx-health-readiness 
pod "ngx-health-readiness" deleted
[root@k8s-master yaml]# kubectl create -f nginx-health-tcpsocket.yml 
pod/ngx-health created
[root@k8s-master yaml]# kubectl get po 
NAME                     READY   STATUS    RESTARTS   AGE
myapp-675bffcbcd-8bqd9   1/1     Running   0          19m
myapp-675bffcbcd-nnt8x   1/1     Running   0          26h
ngx-health               1/1     Running   0          51s
[root@k8s-master yaml]# kubectl describe po ngx-health 
Name:         ngx-health
Namespace:    default
Priority:     0
Node:         k8s-node2/192.168.200.143
Start Time:   Fri, 13 Oct 2023 11:00:26 +0800
Labels:       <none>
Annotations:  <none>
Status:       Running
IP:           10.244.2.6
Containers:
  ngx-liveness:
    Container ID:  docker://49c30734f99b30458456c6fe90312266fc69c200edfaee1ab15016f96869bc6a
    Image:         nginx:latest
    Image ID:      docker-pullable://nginx@sha256:b4af4f8b6470febf45dc10f564551af682a802eda1743055a7dfc8332dffa595
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
      -c
      /usr/sbin/nginx; sleep 60; rm -rf /run/nginx.pid
    State:          Running
      Started:      Fri, 13 Oct 2023 11:00:30 +0800
    Ready:          True
    Restart Count:  0
    Liveness:       tcp-socket :80 delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-wz6sm (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-wz6sm:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-wz6sm
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                From                Message
  ----     ------     ----               ----                -------
  Normal   Scheduled  62s                default-scheduler   Successfully assigned default/ngx-health to k8s-node2
  Normal   Pulled     58s                kubelet, k8s-node2  Successfully pulled image "nginx:latest"
  Normal   Created    58s                kubelet, k8s-node2  Created container ngx-liveness
  Normal   Started    58s                kubelet, k8s-node2  Started container ngx-liveness
  Warning  Unhealthy  30s (x3 over 50s)  kubelet, k8s-node2  Liveness probe failed: dial tcp 10.244.2.6:80: i/o timeout
  Normal   Killing    30s                kubelet, k8s-node2  Container ngx-liveness failed liveness probe, will be restarted
  Normal   Pulling    0s (x2 over 61s)   kubelet, k8s-node2  Pulling image "nginx:latest"

Note: the biggest difference between startupProbe and livenessProbe is that a startupProbe stops probing once it has succeeded, whereas a livenessProbe keeps probing for the whole lifetime of the Pod.
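
A minimal sketch of the two probes working together (a hypothetical manifest; note that startupProbe requires Kubernetes 1.16+, so it is not available on the 1.15 cluster used above): the startupProbe gives a slow-starting container up to 30 × 10 s to come up, and only after it succeeds does the livenessProbe start running.

apiVersion: v1
kind: Pod
metadata:
  name: ngx-startup            # hypothetical Pod name
spec:
  containers:
  - name: ngx
    image: nginx:latest
    startupProbe:
      httpGet:
        path: /index.html
        port: 80
      failureThreshold: 30     # allow up to 30 * 10s = 300s for startup
      periodSeconds: 10
    livenessProbe:             # takes over once the startupProbe has succeeded
      httpGet:
        path: /index.html
        port: 80
      periodSeconds: 10
  restartPolicy: Always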

4. Pod Scheduling

4.1 Deployment or RC: Fully Automatic Scheduling

One of the main functions of a Deployment or RC is to automatically deploy multiple replicas of a containerized application and to continuously monitor the replica count, so that the cluster always runs the number of replicas the user specified.

[root@k8s-master yaml]# cat myapp-nginx.yml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: nginx 
        ports:
        - containerPort: 80
          name: http 
[root@k8s-master yaml]# kubectl get po 
NAME                     READY   STATUS             RESTARTS   AGE
myapp-675bffcbcd-8bqd9   1/1     Running            0          31m
myapp-675bffcbcd-nnt8x   1/1     Running            0          26h
[root@k8s-master yaml]# kubectl get rs
NAME               DESIRED   CURRENT   READY   AGE
myapp-675bffcbcd   2         2         2       42h
[root@k8s-master yaml]# kubectl get deployments
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
myapp   2/2     2            2           42h

4.2 NodeSelector: Targeted Scheduling

The Scheduler (kube-scheduler) on the K8S master is responsible for placing Pods onto nodes. With nodeSelector, a Pod is scheduled only onto nodes whose labels contain all of the requested key/value pairs.

1. First, label node1:

 kubectl label nodes k8s-node1 disk=ssd

[root@k8s-master ~]# kubectl label nodes k8s-node1 disk=ssd
node/k8s-node1 labeled
[root@k8s-master ~]# kubectl get nodes --show-labels 
NAME         STATUS     ROLES    AGE     VERSION   LABELS
k8s-master   Ready      master   5d22h   v1.15.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master,kubernetes.io/os=linux,node-role.kubernetes.io/master=
k8s-node1    NotReady   <none>   5d20h   v1.15.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disk=ssd,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node1,kubernetes.io/os=linux
k8s-node2    NotReady   <none>   23h     v1.15.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node2,kubernetes.io/os=linux

You can see that the node's labels now include disk=ssd.

Next, add a nodeSelector to the Deployment:

[root@k8s-master yaml]# cat nginx-nodeselector.yml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-nginx
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: nginx 
        ports:
        - containerPort: 80
          name: http 
      nodeSelector:
        disk: ssd
[root@k8s-master yaml]# kubectl apply -f nginx-nodeselector.yml 
deployment.apps/myapp-nginx created
[root@k8s-master yaml]# kubectl get po  -o wide
NAME                           READY   STATUS    RESTARTS   AGE     IP            NODE        NOMINATED NODE   READINESS GATES
myapp-nginx-7c4f6ddcb4-6s5nr   1/1     Running   0          5m52s   10.244.1.13   k8s-node1   <none>           <none>

The Pod was scheduled onto node1, as expected.

4.3 NodeAffinity: Node Affinity Scheduling

requiredDuringSchedulingIgnoredDuringExecution: the specified rules must be satisfied for the Pod to be scheduled onto a node (a hard requirement).

preferredDuringSchedulingIgnoredDuringExecution: the scheduler tries to place the Pod on a node that satisfies the rules but does not insist on it (a soft requirement). Multiple preference rules can be given weight values to define their relative priority.

IgnoredDuringExecution means that if the labels of the node a Pod is running on change so that the node no longer satisfies the Pod's node affinity rules, the change is ignored and the Pod keeps running on that node.

[root@k8s-master yaml]# cat nginx-affinity.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-nginx
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: nginx 
        ports:
        - containerPort: 80
          name: http 
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:  # hard requirement: only run on amd64 nodes
            nodeSelectorTerms:
            - matchExpressions:
              - key: beta.kubernetes.io/arch
                operator: In
                values:
                - amd64
          preferredDuringSchedulingIgnoredDuringExecution:  # soft preference: prefer nodes labeled disk=ssd
          - weight: 1
            preference:
              matchExpressions:
              - key: disk
                operator: In
                values:
                - ssd

[root@k8s-master yaml]# kubectl create -f nginx-affinity.yaml 
deployment.apps/myapp-nginx created

[root@k8s-master yaml]# kubectl get po -o wide
NAME                           READY   STATUS    RESTARTS   AGE     IP            NODE        NOMINATED NODE   READINESS GATES
myapp-675bffcbcd-8bqd9         1/1     Running   1          3h10m   10.244.1.10   k8s-node1   <none>           <none>
myapp-675bffcbcd-nnt8x         1/1     Running   1          29h     10.244.1.11   k8s-node1   <none>           <none>
myapp-nginx-6997f77b88-dw9gg   1/1     Running   0          7s      10.244.1.15   k8s-node1   <none>           <none>

Notes on NodeAffinity rules:

1. If both nodeSelector and nodeAffinity are defined, both must be satisfied for the Pod to be scheduled onto a node.

2. If nodeAffinity specifies multiple nodeSelectorTerms, it is enough for the node to match any one of them.

3. If a nodeSelectorTerms entry contains multiple matchExpressions, the node must satisfy all of the matchExpressions for the Pod to run on it.

4.4 PodAffinity: Pod Affinity and Anti-Affinity

If you want two Pods to run on the same node, use Pod affinity.

If you want two CPU-intensive Pods to avoid sharing a node, use Pod anti-affinity.

Pod affinity determines which existing Pods a new Pod may be co-located with inside the same topology domain.

First, create a reference Pod (file pod-flag.yml; the Pod is named pod-flad) carrying the two labels disk=ssd and app=nginx:

[root@k8s-master ~]# kubectl get node --show-labels 
NAME         STATUS   ROLES    AGE     VERSION   LABELS
k8s-master   Ready    master   8d      v1.15.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master,kubernetes.io/os=linux,node-role.kubernetes.io/master=
k8s-node1    Ready    <none>   8d      v1.15.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disk=ssd,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node1,kubernetes.io/os=linux
k8s-node2    Ready    <none>   3d19h   v1.15.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node2,kubernetes.io/os=linux
[root@k8s-master ~]# cat pod-flag.yml 
apiVersion: v1
kind: Pod
metadata:
  name: pod-flad
  labels:
    disk: "ssd"
    app: "nginx"
spec:
  containers:
  - name: nginx
    image: nginx
[root@k8s-master ~]# kubectl apply -f pod-flag.yml 
pod/pod-flad created
[root@k8s-master ~]# kubectl get po --show-labels 
NAME       READY   STATUS    RESTARTS   AGE     LABELS
pod-flad   1/1     Running   0          4m47s   app=nginx,disk=ssd

Next, create a Pod named pod-affinity that must be co-located with Pods labeled app=nginx:

[root@k8s-master ~]# cat pod-affinity.yml 
apiVersion: v1
kind: Pod
metadata:
  name: pod-affinity
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - nginx
        topologyKey: kubernetes.io/hostname
  containers:
  - name: with-pod-affinity
    image: registry.aliyuncs.com/google_containers/pause:3.1
[root@k8s-master ~]# kubectl apply -f pod-affinity.yml 
pod/pod-affinity created
[root@k8s-master ~]# kubectl get po -o wide
NAME           READY   STATUS    RESTARTS   AGE     IP           NODE        NOMINATED NODE   READINESS GATES
pod-affinity   1/1     Running   0          5s      10.244.2.8   k8s-node2   <none>           <none>
pod-flad       1/1     Running   0          8m48s   10.244.2.7   k8s-node2   <none>           <none>

Because it requires a Pod with the app=nginx label, and pod-flad (which carries that label) is running on node2, pod-affinity is also scheduled onto node2.

Now create an anti-affinity Pod: it requires co-location (within the zone topology domain) with Pods labeled disk=ssd, and it must not share a node with Pods labeled app=nginx.

[root@k8s-master ~]# cat pod-anti-affinity.yml 
apiVersion: v1
kind: Pod
metadata:
  name: anti-affinity
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: disk
            operator: In
            values:
            - ssd
        topologyKey: failure-domain.beta.kubernetes.io/zone
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
            - key: app
              operator: In
              values:
              - nginx
        topologyKey: kubernetes.io/hostname
  containers:
  - name: anti-affinity
    image: registry.aliyuncs.com/google_containers/pause:3.1
[root@k8s-master ~]# kubectl get po -o wide
NAME                     READY   STATUS    RESTARTS   AGE     IP            NODE        NOMINATED NODE   READINESS GATES
anti-affinity            0/1     Pending   0          33s     <none>        <none>      <none>           <none>
myapp-675bffcbcd-7gv8k   1/1     Running   0          3m15s   10.244.2.9    k8s-node2   <none>           <none>
myapp-675bffcbcd-kx8pf   1/1     Running   0          3m15s   10.244.1.20   k8s-node1   <none>           <none>
pod-affinity             1/1     Running   0          14m     10.244.2.8    k8s-node2   <none>           <none>
pod-flad                 1/1     Running   0          22m     10.244.2.7    k8s-node2   <none>           <none>

The Pod stays Pending: its affinity rule needs a running Pod labeled disk=ssd to co-locate with, while its anti-affinity rule excludes node2, where the app=nginx Pod (pod-flad) runs. Label the myapp Pod on node1 with disk=ssd and check again:

[root@k8s-master ~]# kubectl label po myapp-675bffcbcd-kx8pf disk=ssd
pod/myapp-675bffcbcd-7gv8k labeled
[root@k8s-master ~]# kubectl get po -o wide --show-labels 
NAME                     READY   STATUS    RESTARTS   AGE     IP            NODE        NOMINATED NODE   READINESS GATES   LABELS
anti-affinity            0/1     Pending   0          98s     <none>        <none>      <none>           <none>            <none>
myapp-675bffcbcd-7gv8k   1/1     Running   0          4m20s   10.244.2.9    k8s-node2   <none>           <none>            app=myapp,disk=ssd,pod-template-hash=675bffcbcd
myapp-675bffcbcd-kx8pf   1/1     Running   0          4m20s   10.244.1.20   k8s-node1   <none>           <none>            app=myapp,pod-template-hash=675bffcbcd
pod-affinity             1/1     Running   0          15m     10.244.2.8    k8s-node2   <none>           <none>            <none>
pod-flad                 1/1     Running   0          24m     10.244.2.7    k8s-node2   <none>           <none>            app=nginx,disk=ssd
[root@k8s-master ~]# kubectl get po  -o wide --show-labels 
NAME                     READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES   LABELS
anti-affinity            1/1     Running   0          27s   10.244.1.22   k8s-node1   <none>           <none>            <none>
myapp-675bffcbcd-7gv8k   1/1     Running   0          12m   10.244.2.9    k8s-node2   <none>           <none>            app=myapp,disk=ssd,pod-template-hash=675bffcbcd
myapp-675bffcbcd-kx8pf   1/1     Running   0          12m   10.244.1.20   k8s-node1   <none>           <none>            app=myapp,disk=ssd,pod-template-hash=675bffcbcd
pod-affinity             1/1     Running   0          23m   10.244.2.8    k8s-node2   <none>           <none>            <none>
pod-flad                 1/1     Running   0          32m   10.244.2.7    k8s-node2   <none>           <none>            app=nginx,disk=ssd

The anti-affinity Pod is now scheduled onto node1.

Summary: Pod affinity and anti-affinity are evaluated against the labels of Pods already running within the topology domain defined by topologyKey. An affinity match places the new Pod alongside the matching Pods, an anti-affinity match pushes it onto other nodes, and if a required rule cannot be matched at all, the Pod stays Pending.

4.5 Taints and Tolerations

NoSchedule: new Pods that do not tolerate the taint will not be scheduled onto the node; Pods already running on it are unaffected.
NoExecute: new Pods that do not tolerate the taint will not be scheduled onto the node, and existing Pods without a matching toleration are evicted.
PreferNoSchedule: a soft version of NoSchedule; the scheduler tries to avoid placing non-tolerating Pods onto the node, but it is not guaranteed.

[root@k8s-master ~]# kubectl taint node k8s-node1 test=taints:NoSchedule
node/k8s-node1 tainted
[root@k8s-master ~]# kubectl describe node k8s-node1 |grep Taints
Taints:             test=taints:NoSchedule
#Start 2 Pods by scaling the Deployment up
[root@k8s-master ~]# kubectl get deployments
NAME          READY   UP-TO-DATE   AVAILABLE   AGE
myapp         0/0     0            0           4d17h
myapp-nginx   0/0     0            0           2d21h
[root@k8s-master ~]# kubectl scale deployment myapp --replicas=2
deployment.extensions/myapp scaled
[root@k8s-master ~]# kubectl get po -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
myapp-675bffcbcd-2x8r2   1/1     Running   0          10s   10.244.2.11   k8s-node2   <none>           <none>
myapp-675bffcbcd-w9g5d   1/1     Running   0          10s   10.244.2.10   k8s-node2   <none>           <none>

Both replicas were scheduled onto node2, because node1 now carries the taint. Next, add a toleration to the Deployment, as sketched below:
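
A minimal sketch of where the toleration goes: under the Deployment's Pod template spec (added here with kubectl edit deployment myapp, for example), matching the test taint created above. The grep that follows confirms it:

spec:
  template:
    spec:
      tolerations:
      - key: "test"
        operator: "Exists"     # tolerate the "test" taint regardless of its value
        effect: "NoSchedule"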

[root@k8s-master ~]# kubectl get deployments myapp -o yaml|grep -C 3 tolerations
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      tolerations:
      - effect: NoSchedule
        key: test
        operator: Exists
[root@k8s-master ~]# kubectl get po -o wide
NAME                     READY   STATUS    RESTARTS   AGE     IP            NODE        NOMINATED NODE   READINESS GATES
myapp-6b799c88f5-hsmhh   1/1     Running   0          3m40s   10.244.2.12   k8s-node2   <none>           <none>
myapp-6b799c88f5-hvglp   1/1     Running   0          3m45s   10.244.1.23   k8s-node1   <none>           <none>

After the Deployment rolls out with the toleration, one of the Pods is scheduled onto node1: the toleration allows it onto the tainted node.

Remove the taint:

[root@k8s-master ~]# kubectl taint node k8s-node1 test-
node/k8s-node1 untainted
[root@k8s-master ~]# kubectl describe nodes k8s-node1 |grep  Taint
Taints:             <none>

4.6 Pod Priority Preemption: Priority-Based Scheduling

When node resources are insufficient, lower-priority Pods on a node can be preempted (evicted) so that higher-priority Pods can be started first.

[root@k8s-master yaml]# cat priorityclass.yml 
apiVersion: scheduling.k8s.io/v1beta1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000000
globalDefault: false
description: "This priority class should be user for XYZ servuce pods only."
[root@k8s-master yaml]# kubectl apply -f priorityclass.yml 
priorityclass.scheduling.k8s.io/high-priority created
[root@k8s-master yaml]# kubectl get priorityclasses
NAME                      VALUE        GLOBAL-DEFAULT   AGE
high-priority             1000000      false            38s
system-cluster-critical   2000000000   false            8d
system-node-critical      2000001000   false            8d

This creates a high-priority class. The larger the value, the higher the priority. Values above 1 billion (1000000000) are reserved by the system for critical system components, such as the system-cluster-critical and system-node-critical classes shown above.

Reference it from a Pod's YAML like this:

[root@k8s-master yaml]# cat myapp-nginx.yml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: nginx 
        ports:
        - containerPort: 80
          name: http 
      priorityClassName: high-priority
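
To confirm the class took effect, the resulting Pod's priority can be checked, for example (a hypothetical verification step, not from the original session):

[root@k8s-master yaml]# kubectl get pod -l app=myapp -o custom-columns=NAME:.metadata.name,PRIORITY:.spec.priority,CLASS:.spec.priorityClassName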


 
