04 - Pod Scheduling and Pod Controllers

1. Pod Scheduling

By default, which node a Pod runs on is computed by the Scheduler component using its scheduling algorithms, and this process is not under manual control. In practice this is often not enough: in many cases we want certain Pods to land on certain nodes. How do we do that? We need to understand how Kubernetes schedules Pods. Kubernetes offers four broad categories of scheduling:

  • Automatic scheduling: which node a Pod runs on is computed entirely by the Scheduler's algorithms
  • Directed scheduling: NodeName, NodeSelector
  • Affinity scheduling: NodeAffinity, PodAffinity, PodAntiAffinity
  • Taint (toleration) scheduling: Taints, Tolerations

1.1 Directed Scheduling

Directed scheduling means declaring nodeName or nodeSelector on the pod so that it is scheduled onto the desired node. Note that this scheduling is forced: even if the target node does not exist, the Pod is still assigned to it; it simply fails to run.

NodeName

NodeName forces the Pod to be scheduled onto the node with the specified name. This bypasses the Scheduler's logic entirely and binds the Pod directly to the named node.
Let's try it out: create a file named pod-nodename.yaml

apiVersion: v1
kind: Pod
metadata:
  name: pod-nodename
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  nodeName: k8snode1 # schedule onto node k8snode1

Verify:

# create the Pod
[root@k8smaster k8s]# kubectl create -f pod-nodename.yaml 
pod/pod-nodename created

# check the NODE column: the Pod was indeed scheduled onto k8snode1
[root@k8smaster k8s]# kubectl get pods pod-nodename -n dev -o wide
NAME           READY   STATUS    RESTARTS   AGE   IP            NODE       NOMINATED NODE   READINESS GATES
pod-nodename   1/1     Running   0          50s   10.244.2.25   k8snode1   <none>           <none> 

# next, delete the pod and change nodeName to k8snode3 (a node that does not exist)
[root@k8smaster k8s]# kubectl delete -f pod-nodename.yaml
pod "pod-nodename" deleted
[root@k8smaster k8s]# vim pod-nodename.yaml 
[root@k8smaster k8s]# kubectl create -f pod-nodename.yaml 
pod/pod-nodename created

# check again: the pod is assigned to k8snode3, but since that node does not exist, the pod cannot run
[root@k8smaster k8s]# kubectl get pods pod-nodename -n dev -o wide
NAME           READY   STATUS    RESTARTS   AGE   IP       NODE        NOMINATED NODE   READINESS GATES
pod-nodename   0/1     Pending   0          36s   <none>   k8snode3   <none>           <none>

NodeSelector

NodeSelector schedules a pod onto nodes that carry the specified labels. It is built on Kubernetes' label-selector mechanism: before the pod is created, the scheduler uses the MatchNodeSelector policy to match labels and find the target node, and the pod is then bound to it. This match is a hard constraint.
Let's try it out:
1. First, add labels to the nodes

[root@k8smaster k8s]# kubectl label nodes k8snode1 nodeenv=pro
node/k8snode1 labeled
[root@k8smaster k8s]# kubectl label nodes k8snode2 nodeenv=test
node/k8snode2 labeled

2. Create a file named pod-nodeselector.yaml and use it to create a Pod

apiVersion: v1
kind: Pod
metadata:
  name: pod-nodeselector
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  nodeSelector: 
    nodeenv: pro # schedule onto a node that carries the label nodeenv=pro

Verify

# create the Pod
[root@k8smaster k8s]# kubectl create -f pod-nodeselector.yaml 
pod/pod-nodeselector created

# check the NODE column: the Pod was indeed scheduled onto k8snode1
[root@k8smaster k8s]# kubectl get pods pod-nodeselector -n dev -o wide
NAME               READY   STATUS    RESTARTS   AGE   IP            NODE       NOMINATED NODE   READINESS GATES
pod-nodeselector   1/1     Running   0          48s   10.244.2.26   k8snode1   <none>           <none>

# next, delete the pod and change the nodeSelector to nodeenv: xxxx (no node carries this label)
[root@k8smaster k8s]# kubectl delete -f pod-nodeselector.yaml 
pod "pod-nodeselector" deleted
[root@k8smaster k8s]# vim pod-nodeselector.yaml 
[root@k8smaster k8s]# kubectl create -f pod-nodeselector.yaml 
pod/pod-nodeselector created

# check again: the pod cannot run and the NODE column is <none>
[root@k8smaster k8s]# kubectl get pods pod-nodeselector -n dev -o wide
NAME               READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
pod-nodeselector   0/1     Pending   0          41s   <none>   <none>   <none>           <none>

# describe the pod: the events show that the node selector failed to match
[root@k8smaster k8s]# kubectl describe pods pod-nodeselector -n dev
.......
Events:
  Type     Reason            Age        From               Message
  ----     ------            ----       ----               -------
  Warning  FailedScheduling  <unknown>  default-scheduler  0/3 nodes are available: 3 node(s) didn't match node selector.
  Warning  FailedScheduling  <unknown>  default-scheduler  0/3 nodes are available: 3 node(s) didn't match node selector.

1.2 Affinity Scheduling

The previous section covered two forms of directed scheduling. They are very convenient, but they have a drawback: if no node satisfies the condition, the Pod will not run at all, even if there are usable nodes left in the cluster. This limits where they can be used.
To address this, Kubernetes also provides affinity scheduling (Affinity). It extends NodeSelector so that, through configuration, nodes satisfying the conditions are preferred, but if none exist the Pod may still be scheduled onto a node that does not satisfy them, making scheduling more flexible.
Affinity comes in three flavors:

  • nodeAffinity (node affinity): targets nodes; decides which nodes a pod may be scheduled onto
  • podAffinity (pod affinity): targets pods; decides which existing pods a new pod may share a topology domain with
  • podAntiAffinity (pod anti-affinity): targets pods; decides which existing pods a new pod must not share a topology domain with

When to use affinity and anti-affinity:
Affinity: if two applications interact frequently, it is worth using affinity to place them as close together as possible and reduce the performance cost of network communication.
Anti-affinity: when an application is deployed with multiple replicas, anti-affinity spreads the instances across nodes, which improves the service's availability.

NodeAffinity

First, the configurable fields of NodeAffinity:

pod.spec.affinity.nodeAffinity
  requiredDuringSchedulingIgnoredDuringExecution  the node must satisfy all of the specified rules (hard constraint)
    nodeSelectorTerms  list of node selector terms
      matchFields   list of selector requirements by node field
      matchExpressions   list of selector requirements by node label (recommended)
        key    label key
        values label values
        operator relational operator; supports Exists, DoesNotExist, In, NotIn, Gt, Lt
  preferredDuringSchedulingIgnoredDuringExecution nodes that satisfy the rules are preferred (soft constraint / preference)
    preference   a node selector term, associated with a weight
      matchFields   list of selector requirements by node field
      matchExpressions   list of selector requirements by node label (recommended)
        key    label key
        values label values
        operator relational operator; supports In, NotIn, Exists, DoesNotExist, Gt, Lt
    weight preference weight, in the range 1-100
Notes on the operators:

- matchExpressions:
  - key: nodeenv              # match nodes that have a label with key nodeenv
    operator: Exists
  - key: nodeenv              # match nodes whose nodeenv label value is "xxx" or "yyy"
    operator: In
    values: ["xxx","yyy"]
  - key: nodeenv              # match nodes whose nodeenv label value is greater than "xxx" (Gt/Lt take a single numeric string)
    operator: Gt
    values: ["xxx"]

Let's first demonstrate requiredDuringSchedulingIgnoredDuringExecution.
Create pod-nodeaffinity-required.yaml

apiVersion: v1
kind: Pod
metadata:
  name: pod-nodeaffinity-required
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  affinity:  # affinity settings
    nodeAffinity: # node affinity
      requiredDuringSchedulingIgnoredDuringExecution: # hard constraint
        nodeSelectorTerms:
        - matchExpressions: # match labels whose nodeenv value is in ["xxx","yyy"]
          - key: nodeenv
            operator: In
            values: ["xxx","yyy"]

Demo:

# check the labels on the nodes
[root@k8smaster opt]# kubectl get nodes --show-labels
NAME        STATUS   ROLES    AGE   VERSION   LABELS
k8smaster   Ready    master   8d    v1.18.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8smaster,kubernetes.io/os=linux,node-role.kubernetes.io/master=
k8snode1    Ready    <none>   8d    v1.18.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8snode1,kubernetes.io/os=linux,nodeenv=pro
k8snode2    Ready    <none>   8d    v1.18.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8snode2,kubernetes.io/os=linux,nodeenv=test

# create the pod
[root@k8smaster k8s]# kubectl create -f pod-nodeaffinity-required.yaml 
pod/pod-nodeaffinity-required created

# check the pod status (it fails to run)
[root@k8smaster k8s]# kubectl get pods pod-nodeaffinity-required -n dev -o wide
NAME                        READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
pod-nodeaffinity-required   0/1     Pending   0          48s   <none>   <none>   <none>           <none>

# describe the Pod
# scheduling failed: no node matched the selector
[root@k8smaster k8s]# kubectl describe pod pod-nodeaffinity-required -n dev
......
Events:
  Type     Reason            Age        From               Message
  ----     ------            ----       ----               -------
  Warning  FailedScheduling  <unknown>  default-scheduler  0/3 nodes are available: 3 node(s) didn't match node selector.
  Warning  FailedScheduling  <unknown>  default-scheduler  0/3 nodes are available: 3 node(s) didn't match node selector.

# next, delete the pod
[root@k8smaster k8s]# kubectl delete -f pod-nodeaffinity-required.yaml
pod "pod-nodeaffinity-required" deleted

# edit the file, changing values: ["xxx","yyy"] to ["pro","yyy"]
[root@k8smaster k8s]# vim pod-nodeaffinity-required.yaml

# create it again
[root@k8smaster k8s]# kubectl create -f pod-nodeaffinity-required.yaml 
pod/pod-nodeaffinity-required created

# this time scheduling succeeds and the pod lands on k8snode1
[root@k8smaster k8s]# kubectl get pods pod-nodeaffinity-required -n dev -o wide
NAME                        READY   STATUS    RESTARTS   AGE     IP            NODE       NOMINATED NODE   READINESS GATES
pod-nodeaffinity-required   1/1     Running   0          2m11s   10.244.2.27   k8snode1   <none>           <none>

[root@k8smaster k8s]# kubectl describe pods pod-nodeaffinity-required -n dev
...
Events:
  Type    Reason     Age        From               Message
  ----    ------     ----       ----               -------
  Normal  Scheduled  <unknown>  default-scheduler  Successfully assigned dev/pod-nodeaffinity-required to k8snode1
  Normal  Pulled     11m        kubelet, k8snode1  Container image "nginx:1.17.1" already present on machine
  Normal  Created    11m        kubelet, k8snode1  Created container nginx
  Normal  Started    11m        kubelet, k8snode1  Started container nginx

Next, let's demonstrate preferredDuringSchedulingIgnoredDuringExecution.
Create pod-nodeaffinity-preferred.yaml

apiVersion: v1
kind: Pod
metadata:
  name: pod-nodeaffinity-preferred
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  affinity:  # affinity settings
    nodeAffinity: # node affinity
      preferredDuringSchedulingIgnoredDuringExecution: # soft constraint
      - weight: 1
        preference:
          matchExpressions: # prefer labels whose nodeenv value is in ["xxx","yyy"] (no such node in this environment)
          - key: nodeenv
            operator: In
            values: ["xxx","yyy"]

Demo:

# create the pod
[root@k8smaster k8s]# kubectl create -f pod-nodeaffinity-preferred.yaml 
pod/pod-nodeaffinity-preferred created

# check the pod status (it runs successfully)
[root@k8smaster k8s]# kubectl get pod pod-nodeaffinity-preferred -n dev
NAME                         READY   STATUS    RESTARTS   AGE
pod-nodeaffinity-preferred   1/1     Running   0          40s
Notes on NodeAffinity rules:
1 If both nodeSelector and nodeAffinity are defined, both must be satisfied for the Pod to run on a node
2 If nodeAffinity specifies multiple nodeSelectorTerms, matching any one of them is enough
3 If a single nodeSelectorTerms entry contains multiple matchExpressions, a node must satisfy all of them to match (see the sketch below)
4 If the labels of the node a Pod is running on change during the Pod's lifetime so that they no longer satisfy the Pod's node affinity, the change is ignored
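
A minimal sketch of rules 2 and 3 (the second label key and the values are made up for illustration): the two nodeSelectorTerms below are OR-ed together, while the two matchExpressions inside the second term are AND-ed.

affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:          # term 1: nodeenv=pro is enough on its own
        - key: nodeenv
          operator: In
          values: ["pro"]
      - matchExpressions:          # term 2: a node must satisfy BOTH expressions
        - key: nodeenv
          operator: In
          values: ["test"]
        - key: disktype            # hypothetical second label
          operator: Exists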

PodAffinity

PodAffinity uses running Pods as the reference point and makes a newly created Pod land in the same region (topology domain) as the reference pods.
First, the configurable fields of PodAffinity:

pod.spec.affinity.podAffinity
  requiredDuringSchedulingIgnoredDuringExecution  hard constraint
    namespaces       namespaces of the reference pods
    topologyKey      the scheduling scope
    labelSelector    label selector
      matchExpressions  list of selector requirements by label (recommended)
        key    label key
        values label values
        operator relational operator; supports In, NotIn, Exists, DoesNotExist
      matchLabels    a map equivalent to multiple matchExpressions
  preferredDuringSchedulingIgnoredDuringExecution soft constraint
    podAffinityTerm  the affinity term
      namespaces
      topologyKey
      labelSelector
        matchExpressions
          key    label key
          values label values
          operator
        matchLabels
    weight preference weight, in the range 1-100
topologyKey specifies the scope used during scheduling, for example:
    kubernetes.io/hostname means each node is its own scope
    beta.kubernetes.io/os means nodes are grouped by operating system type

Next, let's demonstrate requiredDuringSchedulingIgnoredDuringExecution.
1) First create a reference Pod, pod-podaffinity-target.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: pod-podaffinity-target
  namespace: dev
  labels:
    podenv: pro # set a label
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  nodeName: k8snode1 # explicitly pin the target pod to k8snode1
# start the target pod
[root@k8smaster k8s]# kubectl create -f pod-podaffinity-target.yaml 
pod/pod-podaffinity-target created

# check the pod
[root@k8smaster k8s]# kubectl get pods  pod-podaffinity-target -n dev -o wide --show-labels
NAME                     READY   STATUS    RESTARTS   AGE   IP            NODE       NOMINATED NODE   READINESS GATES   LABELS
pod-podaffinity-target   1/1     Running   0          45s   10.244.2.29   k8snode1   <none>           <none>            podenv=pro

2) Create pod-podaffinity-required.yaml with the following content:

apiVersion: v1
kind: Pod
metadata:
  name: pod-podaffinity-required
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  affinity:  # affinity settings
    podAffinity: # pod affinity
      requiredDuringSchedulingIgnoredDuringExecution: # hard constraint
      - labelSelector:
          matchExpressions: # match pods whose podenv label value is in ["xxx","yyy"]
          - key: podenv
            operator: In
            values: ["xxx","yyy"]
        topologyKey: kubernetes.io/hostname

The configuration above means: the new Pod must be on the same node as a pod that carries the label podenv=xxx or podenv=yyy. Obviously no such pod exists yet, so let's run it and see.

# start the pod
[root@k8smaster k8s]# kubectl create -f pod-podaffinity-required.yaml 
pod/pod-podaffinity-required created

# check the pod status: it is not running
[root@k8smaster k8s]# kubectl get pods pod-podaffinity-required -n dev
NAME                       READY   STATUS    RESTARTS   AGE
pod-podaffinity-required   0/1     Pending   0          28s

# describe it for details
[root@k8smaster k8s]# kubectl describe pods pod-podaffinity-required  -n dev
......
Events:
  Type     Reason            Age        From               Message
  ----     ------            ----       ----               -------
  Warning  FailedScheduling  <unknown>  default-scheduler  0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) didn't match pod affinity rules, 2 node(s) didn't match pod affinity/anti-affinity.
  Warning  FailedScheduling  <unknown>  default-scheduler  0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) didn't match pod affinity rules, 2 node(s) didn't match pod affinity/anti-affinity

# delete the pod
[root@k8smaster k8s]# kubectl delete -f  pod-podaffinity-required.yaml
pod "pod-podaffinity-required" deleted

# next, change values: ["xxx","yyy"] to values: ["pro","yyy"]
# meaning: the new Pod must be on the same node as a pod that carries the label podenv=pro or podenv=yyy
[root@k8smaster k8s]# vim pod-podaffinity-required.yaml

# then recreate the pod and check the result
[root@k8smaster k8s]# kubectl create -f pod-podaffinity-required.yaml 
pod/pod-podaffinity-required created

# this time the Pod runs normally
[root@k8smaster k8s]# kubectl get pods pod-podaffinity-required -n dev -o wide
NAME                       READY   STATUS    RESTARTS   AGE   IP            NODE       NOMINATED NODE   READINESS GATES
pod-podaffinity-required   1/1     Running   0          41s   10.244.2.30   k8snode1   <none>           <none>

We won't demonstrate PodAffinity's preferredDuringSchedulingIgnoredDuringExecution here, but a rough sketch of what it could look like follows.
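
For reference only, a minimal sketch of such a soft pod-affinity rule (the weight and pod name are made up): nodes already running a pod labeled podenv=pro are preferred, but other nodes remain schedulable.

apiVersion: v1
kind: Pod
metadata:
  name: pod-podaffinity-preferred   # hypothetical name
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  affinity:
    podAffinity:
      preferredDuringSchedulingIgnoredDuringExecution: # soft constraint
      - weight: 50
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: podenv
              operator: In
              values: ["pro"]
          topologyKey: kubernetes.io/hostname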

PodAntiAffinity

PodAntiAffinity uses running Pods as the reference point and keeps a newly created Pod out of the region occupied by the reference pods.
Its configuration fields are the same as PodAffinity's, so we won't go through them again and will jump straight to a test case.
1) Reuse the target pod from the previous case

[root@k8smaster k8s]# kubectl get pods pod-podaffinity-target -n dev --show-labels
NAME                     READY   STATUS    RESTARTS   AGE   LABELS
pod-podaffinity-target   1/1     Running   0          13m   podenv=pro

2) Create pod-podantiaffinity-required.yaml with the following content:

apiVersion: v1
kind: Pod
metadata:
  name: pod-podantiaffinity-required
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  affinity:  # affinity settings
    podAntiAffinity: # pod anti-affinity
      requiredDuringSchedulingIgnoredDuringExecution: # hard constraint
      - labelSelector:
          matchExpressions: # match pods whose podenv label value is in ["pro"]
          - key: podenv
            operator: In
            values: ["pro"]
        topologyKey: kubernetes.io/hostname

The configuration above means: the new Pod must not be on the same node as any pod that carries the label podenv=pro. Let's run it.

# create the pod
[root@k8smaster k8s]# kubectl create -f pod-podantiaffinity-required.yaml 
pod/pod-podantiaffinity-required created

# check the pod
# it was scheduled onto k8snode2
[root@k8smaster k8s]# kubectl get pods pod-podantiaffinity-required -n dev -o wide
NAME                           READY   STATUS    RESTARTS   AGE   IP            NODE       NOMINATED NODE   READINESS GATES
pod-podantiaffinity-required   1/1     Running   0          64s   10.244.1.29   k8snode2   <none>           <none>

1.3 Taints and Tolerations

Taints

The scheduling methods so far all take the Pod's point of view: attributes are added to the Pod to decide whether it gets scheduled onto a given node. We can also take the node's point of view and add taint attributes to a node to decide whether Pods are allowed to be scheduled onto it at all.
Once a node carries a taint, a repelling relationship exists between it and Pods: the node refuses to let Pods be scheduled onto it, and can even evict Pods that already exist on it.
A taint has the format key=value:effect. key and value are the taint's label, and effect describes what the taint does; three effects are supported:

  • PreferNoSchedule: Kubernetes will try to avoid placing Pods on a node with this taint, unless no other node is schedulable
  • NoSchedule: Kubernetes will not place new Pods on a node with this taint, but Pods already on the node are unaffected
  • NoExecute: Kubernetes will not place new Pods on a node with this taint, and Pods already on the node are evicted


Examples of setting and removing taints with kubectl:

# add a taint
kubectl taint nodes node1 key=value:effect

# remove a taint
kubectl taint nodes node1 key:effect-

# remove all taints with the given key
kubectl taint nodes node1 key-

Next, a demonstration of the taint effects:

  1. Prepare node k8snode2 (to make the effect more obvious, temporarily stop node k8snode1)
[root@k8smaster k8s]# kubectl get nodes
NAME        STATUS     ROLES    AGE   VERSION
k8smaster   Ready      master   8d    v1.18.0
k8snode1    NotReady   <none>   8d    v1.18.0
k8snode2    Ready      <none>   8d    v1.18.0
  2. Add a taint to k8snode2: tag=slfx:PreferNoSchedule, then create pod1 (pod1 can run)
  3. Change the taint on k8snode2 to tag=slfx:NoSchedule, then create pod2 (pod1 stays normal, pod2 fails)
  4. Change the taint on k8snode2 to tag=slfx:NoExecute, then create pod3 (all 3 pods fail)
# add the PreferNoSchedule taint to k8snode2
[root@k8smaster k8s]# kubectl taint nodes k8snode2 tag=slfx:PreferNoSchedule
node/k8snode2 tainted

# create pod1
[root@k8smaster k8s]# kubectl run taint1 --image=nginx:1.17.1 -n dev
pod/taint1 created
[root@k8smaster k8s]# kubectl get pods taint1 -n dev -o wide
NAME     READY   STATUS    RESTARTS   AGE   IP            NODE       NOMINATED NODE   READINESS GATES
taint1   1/1     Running   0          20s   10.244.1.30   k8snode2   <none>           <none>  

# change the taint on k8snode2 (remove PreferNoSchedule, add NoSchedule)
[root@k8smaster k8s]# kubectl taint nodes k8snode2 tag:PreferNoSchedule-
node/k8snode2 untainted
[root@k8smaster k8s]#  kubectl taint nodes k8snode2 tag=slfx:NoSchedule
node/k8snode2 tainted

# create pod2
[root@k8smaster k8s]# kubectl run taint2 --image=nginx:1.17.1 -n dev
pod/taint2 created

[root@k8smaster k8s]#  kubectl get pods taint2 -n dev -o wide
NAME     READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
taint2   0/1     Pending   0          16s   <none>   <none>   <none>           <none>

# describe the pod
[root@k8smaster k8s]# kubectl describe pods taint2 -n dev
...
Events:
  Type     Reason            Age        From               Message
  ----     ------            ----       ----               -------
  Warning  FailedScheduling  <unknown>  default-scheduler  0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had taint {node.kubernetes.io/unreachable: }, that the pod didn't tolerate, 1 node(s) had taint {tag: slfx}, that the pod didn't tolerate.
  Warning  FailedScheduling  <unknown>  default-scheduler  0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had taint {node.kubernetes.io/unreachable: }, that the pod didn't tolerate, 1 node(s) had taint {tag: slfx}, that the pod didn't tolerate.


# change the taint on k8snode2 (remove NoSchedule, add NoExecute)
[root@k8smaster k8s]# kubectl taint nodes k8snode2 tag:NoSchedule-
node/k8snode2 untainted
[root@k8smaster k8s]# kubectl taint nodes k8snode2 tag=slfx:NoExecute
node/k8snode2 tainted

# create pod3
[root@k8smaster k8s]# kubectl run taint3 --image=nginx:1.17.1 -n dev
[root@k8smaster ~]# kubectl get pods taint3 -n dev -o wide
NAME     READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
taint3   0/1     Pending   0          9s    <none>   <none>   <none>           <none>
Tip:
A cluster built with kubeadm adds a taint to the master node by default, which is why pods are not scheduled onto the master.

[root@k8smaster k8s]# kubectl describe node k8smaster
...
Taints:             node-role.kubernetes.io/master:NoSchedule

Tolerations

We have seen that taints let a node refuse pods. But what if we actually want a pod to be scheduled onto a node that has a taint? That is what tolerations are for.


A taint means refusal, a toleration means ignoring it: a node uses taints to refuse pods, and a pod uses tolerations to ignore that refusal.

Let's look at the effect through a case first:

  1. In the previous section the NoExecute taint was put on k8snode2, so pods cannot be scheduled onto it
[root@k8smaster ~]# kubectl get pods taint3 -n dev -o wide
NAME     READY   STATUS    RESTARTS   AGE     IP       NODE     NOMINATED NODE   READINESS GATES
taint3   0/1     Pending   0          7m32s   <none>   <none>   <none>           <none>

[root@k8smaster ~]# kubectl describe node k8snode2
...
Taints:             tag=slfx:NoExecute
  2. In this section we add a toleration to the pod so that it can be scheduled onto that node

Create pod-toleration.yaml with the following content

apiVersion: v1
kind: Pod
metadata:
  name: pod-toleration
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  tolerations:      # add a toleration
  - key: "tag"        # the key of the taint to tolerate
    operator: "Equal" # the operator
    value: "slfx"    # the value of the taint to tolerate
    effect: "NoExecute"   # the effect to tolerate; it must match the effect of the taint on the node
# the pods before the toleration is added
[root@k8smaster k8s]# kubectl get pods -n dev
NAME             READY   STATUS    RESTARTS   AGE
pod-toleration   0/1     Pending   0          8s
taint3           0/1     Pending   0          10m         

# the pods after the toleration is added
[root@k8smaster k8s]# kubectl get pods -n dev -o wide
NAME             READY   STATUS    RESTARTS   AGE    IP            NODE       NOMINATED NODE   READINESS GATES
pod-toleration   1/1     Running   0          2m3s   10.244.1.32   k8snode2   <none>           <none>
taint3           0/1     Pending   0          14m    <none>        <none>     <none>           <none>

Now the detailed configuration of tolerations:

[root@k8smaster k8s]# kubectl explain pod.spec.tolerations
......
FIELDS:
   key       # the key of the taint to tolerate; empty means match all keys
   value     # the value of the taint to tolerate
   operator  # the key-value operator; supports Equal and Exists (the default)
   effect    # the taint effect to match; empty means match all effects
   tolerationSeconds   # toleration period; only used when effect is NoExecute, it is how long the pod may stay on the node (example below)
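
For illustration only (this was not run above), a toleration that uses tolerationSeconds could look like the fragment below; once the node carries a matching NoExecute taint, the pod would be evicted roughly 30 seconds later instead of immediately.

  tolerations:
  - key: "tag"
    operator: "Equal"
    value: "slfx"
    effect: "NoExecute"
    tolerationSeconds: 30   # how long the pod may stay on the tainted node before eviction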

2. Pod Controllers in Detail

2.1 Introduction to Pod Controllers

A Pod is the smallest unit Kubernetes manages. Based on how they are created, Pods fall into two classes:

  • standalone pods: Pods created directly in Kubernetes; once deleted they are gone and are not recreated
  • controller-created pods: Pods created through a controller; if deleted they are automatically recreated

What is a Pod controller?
A Pod controller is a management layer for pods. With a controller you only need to tell it how many Pods of what kind you want; it creates Pods that satisfy the spec and keeps every Pod in the user's desired state. If a Pod fails while running, the controller re-orchestrates it according to the specified policy.

Kubernetes has many types of pod controllers, each with its own suitable scenarios. The common ones are:

  • ReplicationController: the original pod controller, now deprecated and replaced by ReplicaSet
  • ReplicaSet: keeps the number of replicas at the desired value; supports scaling the pod count and changing the image version
  • Deployment: controls Pods by controlling ReplicaSets; adds rolling updates and version rollback
  • Horizontal Pod Autoscaler: automatically scales the number of Pods horizontally based on cluster load, smoothing out peaks and troughs
  • DaemonSet: runs exactly one replica on each (or each selected) node; typically for daemon-like tasks such as log collection
  • Job: its pods exit as soon as the task completes, without restart or recreation; used for one-off tasks
  • CronJob: its pods run periodic tasks that do not need to stay running in the background (e.g. data backups)
  • StatefulSet: manages stateful applications

2.2 ReplicaSet(RS)

The main job of a ReplicaSet is to keep a given number of pods running normally. It continuously watches the pods' status and restarts or recreates them as soon as they fail. It also supports scaling the pod count and upgrading or downgrading the image version.

A ReplicaSet resource manifest:

apiVersion: apps/v1 # API version
kind: ReplicaSet # resource type       
metadata: # metadata
  name: # rs name 
  namespace: # namespace 
  labels: # labels
    controller: rs
spec: # detailed description
  replicas: 3 # number of replicas
  selector: # selector; tells the controller which pods it manages
    matchLabels:      # Labels match rules
      app: nginx-pod
    matchExpressions: # Expressions match rules
      - {key: app, operator: In, values: [nginx-pod]}
  template: # template used to create pod replicas when there are not enough
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.1
        ports:
        - containerPort: 80

The options under spec that are new here are:

  • replicas: the number of replicas, i.e. how many pods this rs creates; defaults to 1
  • selector: the selector that ties the controller to its pods, using the label selector mechanism: labels are defined in the pod template and a selector on the controller, which declares which pods this controller manages
  • template: the template the controller uses to create pods; its content is simply the pod definition covered in the previous chapter

Creating a ReplicaSet

Create a file pc-replicaset.yaml with the following content:

apiVersion: apps/v1
kind: ReplicaSet   
metadata:
  name: pc-replicaset
  namespace: dev
spec:
  replicas: 3
  selector: 
    matchLabels:
      app: nginx-pod
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.1
# create the rs
[root@k8smaster k8s]# kubectl create -f pc-replicaset.yaml 
replicaset.apps/pc-replicaset created

# check the rs
# DESIRED: desired number of replicas
# CURRENT: current number of replicas
# READY: number of replicas ready to serve
[root@k8smaster k8s]# kubectl get rs -n dev -o wide
NAME            DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES         SELECTOR
pc-replicaset   3         3         3       15s   nginx        nginx:1.17.1   app=nginx-pod

# check the pods created by this controller
# note that the pod names are the controller name plus a -xxxxx random suffix
[root@k8smaster k8s]# kubectl get pods -n dev 
NAME                  READY   STATUS    RESTARTS   AGE
pc-replicaset-22f2k   1/1     Running   0          52s
pc-replicaset-2lk6p   1/1     Running   0          52s
pc-replicaset-j29j9   1/1     Running   0          52s

Scaling

# edit the rs replica count: change spec: replicas: 6
[root@k8smaster k8s]# kubectl edit rs pc-replicaset -n dev
replicaset.apps/pc-replicaset edited

# check the pods
[root@k8smaster k8s]# kubectl get pods -n dev 
NAME                  READY   STATUS    RESTARTS   AGE
pc-replicaset-22f2k   1/1     Running   0          4m11s
pc-replicaset-2lk6p   1/1     Running   0          4m11s
pc-replicaset-5wf2n   1/1     Running   0          6s
pc-replicaset-c6g5q   1/1     Running   0          6s
pc-replicaset-j29j9   1/1     Running   0          4m11s
pc-replicaset-pbq4z   1/1     Running   0          6s

# of course, the same can be done with a single command
# use the scale command; --replicas=n sets the target count directly
[root@k8smaster k8s]# kubectl scale rs pc-replicaset --replicas=2 -n dev
replicaset.apps/pc-replicaset scaled

# right after the command completes, 4 pods are already terminating
[root@k8smaster k8s]# kubectl get pods -n dev 
NAME                  READY   STATUS        RESTARTS   AGE
pc-replicaset-22f2k   1/1     Running       0          4m48s
pc-replicaset-2lk6p   0/1     Terminating   0          4m48s
pc-replicaset-5wf2n   0/1     Terminating   0          43s
pc-replicaset-c6g5q   0/1     Terminating   0          43s
pc-replicaset-j29j9   1/1     Running       0          4m48s
pc-replicaset-pbq4z   0/1     Terminating   0          43s

# a moment later only 2 remain
[root@k8smaster k8s]# kubectl get pods -n dev 
NAME                  READY   STATUS    RESTARTS   AGE
pc-replicaset-22f2k   1/1     Running   0          5m15s
pc-replicaset-j29j9   1/1     Running   0          5m15s
[root@k8smaster k8s]# kubectl get rs -n dev
NAME            DESIRED   CURRENT   READY   AGE
pc-replicaset   2         2         2       6m15s

Updating the image

# edit the rs container image: - image: nginx:1.17.2
[root@k8smaster k8s]# kubectl edit rs pc-replicaset -n dev
replicaset.apps/pc-replicaset edited

# check again: the image version has changed
[root@k8smaster k8s]# kubectl get rs -n dev -o wide
NAME            DESIRED   CURRENT   READY   AGE     CONTAINERS   IMAGES         SELECTOR
pc-replicaset   2         2         2       7m51s   nginx        nginx:1.17.2   app=nginx-pod

# likewise, this can be done with a command
# kubectl set image rs <rs-name> <container>=<image> -n <namespace>
[root@k8smaster k8s]# kubectl set image rs pc-replicaset nginx=nginx:1.17.1  -n dev
replicaset.apps/pc-replicaset image updated

# check again: the image version has changed back
[root@k8smaster k8s]# kubectl get rs -n dev -o wide
NAME            DESIRED   CURRENT   READY   AGE     CONTAINERS   IMAGES         SELECTOR
pc-replicaset   2         2         2       8m30s   nginx        nginx:1.17.1   app=nginx-pod

Deleting a ReplicaSet

# kubectl delete removes the RS together with the Pods it manages
# before deleting the RS, Kubernetes first scales its replicas down to 0, waits for all the Pods to be deleted, and then deletes the RS object
[root@k8smaster k8s]# kubectl delete rs pc-replicaset -n dev
replicaset.apps "pc-replicaset" deleted
[root@k8smaster k8s]# kubectl get pod -n dev -o wide
No resources found in dev namespace.

# to delete only the RS object and keep its Pods, add the --cascade=false option to kubectl delete (not recommended)
[root@k8smaster k8s]# kubectl delete rs pc-replicaset -n dev --cascade=false
replicaset.apps "pc-replicaset" deleted
[root@k8smaster k8s]# kubectl get pods -n dev
NAME                  READY   STATUS    RESTARTS   AGE
pc-replicaset-cl82j   1/1     Running   0          75s
pc-replicaset-dslhb   1/1     Running   0          75s

# the yaml file can also be used for deletion (recommended)
[root@k8smaster k8s]# kubectl delete -f pc-replicaset.yaml
replicaset.apps "pc-replicaset" deleted

2.3 Deployment(Deploy)

To solve service orchestration better, Kubernetes introduced the Deployment controller in v1.2. Notably, this controller does not manage pods directly; it manages Pods indirectly through ReplicaSets: a Deployment manages ReplicaSets, and the ReplicaSets manage Pods. A Deployment is therefore more powerful than a ReplicaSet.

The main features of a Deployment:

  • supports everything a ReplicaSet supports
  • supports pausing and resuming a rollout
  • supports rolling updates and version rollback

A Deployment resource manifest:

apiVersion: apps/v1 # API version
kind: Deployment # resource type       
metadata: # metadata
  name: # deployment name 
  namespace: # namespace 
  labels: # labels
    controller: deploy
spec: # detailed description
  replicas: 3 # number of replicas
  revisionHistoryLimit: 3 # number of old revisions to keep; default is 10
  paused: false # pause the rollout; default is false
  progressDeadlineSeconds: 600 # rollout deadline in seconds; default is 600
  strategy: # update strategy
    type: RollingUpdate # rolling update strategy
    rollingUpdate: # rolling update parameters
      maxSurge: 30% # maximum number of extra replicas; a percentage or an absolute number
      maxUnavailable: 30% # maximum number of unavailable Pods; a percentage or an absolute number
  selector: # selector; tells the controller which pods it manages
    matchLabels:      # Labels match rules
      app: nginx-pod
    matchExpressions: # Expressions match rules
      - {key: app, operator: In, values: [nginx-pod]}
  template: # template used to create pod replicas when there are not enough
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.1
        ports:
        - containerPort: 80

Creating a Deployment

Create pc-deployment.yaml with the following content:

apiVersion: apps/v1
kind: Deployment      
metadata:
  name: pc-deployment
  namespace: dev
spec: 
  replicas: 3
  selector:
    matchLabels:
      app: nginx-pod
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.1
# create the deployment
[root@k8smaster k8s]# kubectl create -f pc-deployment.yaml 
deployment.apps/pc-deployment created

# check the deployment
# UP-TO-DATE: number of pods at the latest version
# AVAILABLE: number of pods currently available
[root@k8smaster k8s]# kubectl get deployment -n dev
NAME            READY   UP-TO-DATE   AVAILABLE   AGE
pc-deployment   3/3     3            3           29s

# check the rs
# the rs name is the original deployment name followed by a 10-character random string
[root@k8smaster k8s]# kubectl get rs -n dev
NAME                       DESIRED   CURRENT   READY   AGE
pc-deployment-858db84f89   3         3         3       45s

# check the pods
[root@k8smaster k8s]# kubectl get pods -n dev
NAME                             READY   STATUS    RESTARTS   AGE
pc-deployment-858db84f89-2hm27   1/1     Running   0          61s
pc-deployment-858db84f89-njn89   1/1     Running   0          61s
pc-deployment-858db84f89-x8n9g   1/1     Running   0          61s

Scaling

# change the replica count to 5
[root@k8smaster k8s]# kubectl scale deploy pc-deployment --replicas=5  -n dev
deployment.apps/pc-deployment scaled

# check the deployment
[root@k8smaster k8s]# kubectl get deploy pc-deployment -n dev
NAME            READY   UP-TO-DATE   AVAILABLE   AGE
pc-deployment   5/5     5            5           2m9s

# check the pods
[root@k8smaster k8s]# kubectl get pods -n dev
NAME                             READY   STATUS    RESTARTS   AGE
pc-deployment-858db84f89-2hm27   1/1     Running   0          2m23s
pc-deployment-858db84f89-n7fgx   1/1     Running   0          29s
pc-deployment-858db84f89-njn89   1/1     Running   0          2m23s
pc-deployment-858db84f89-tblrv   1/1     Running   0          29s
pc-deployment-858db84f89-x8n9g   1/1     Running   0          2m23s

# edit the deployment's replica count: change spec: replicas: 4
[root@k8smaster k8s]# kubectl edit deploy pc-deployment -n dev
deployment.apps/pc-deployment edited

# check the pods
[root@k8smaster k8s]# kubectl get pods -n dev
NAME                             READY   STATUS    RESTARTS   AGE
pc-deployment-858db84f89-2hm27   1/1     Running   0          3m8s
pc-deployment-858db84f89-n7fgx   1/1     Running   0          74s
pc-deployment-858db84f89-njn89   1/1     Running   0          3m8s
pc-deployment-858db84f89-x8n9g   1/1     Running   0          3m8s

Updating the image

A deployment supports two update strategies, recreate and rolling update, selected via strategy, which supports two attributes:

strategy: the strategy for replacing old Pods with new ones; two attributes:
  type: the strategy type; two strategies are supported
    Recreate: kill all existing Pods before creating the new ones
    RollingUpdate: rolling update, i.e. kill a portion and start a portion, so both Pod versions exist during the update
  rollingUpdate: only used when type is RollingUpdate; its parameters, two attributes:
    maxUnavailable: the maximum number of Pods that may be unavailable during the upgrade; default 25%
    maxSurge: the maximum number of Pods that may exist above the desired count during the upgrade; default 25%
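
For example, with replicas: 4 and the 25% defaults, maxSurge and maxUnavailable each work out to 1 Pod, so during the update there are at most 5 Pods in total and at least 3 of them stay available.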

Recreate update
  1. Edit pc-deployment.yaml and add the update strategy under spec
spec:
  strategy: # strategy
    type: Recreate # recreate update
[root@k8smaster k8s]# vim pc-deployment.yaml 
[root@k8smaster k8s]# kubectl apply -f pc-deployment.yaml 
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
deployment.apps/pc-deployment configured
  2. Change the image to verify
# change the image
[root@k8smaster k8s]# kubectl set image deployment pc-deployment nginx=nginx:1.17.2 -n dev
deployment.apps/pc-deployment image updated

# watch the update process from a new window
[root@k8smaster k8s]# kubectl get pods -n dev -w
NAME                             READY   STATUS    RESTARTS   AGE
pc-deployment-858db84f89-2hm27   1/1     Running   0          8m46s
pc-deployment-858db84f89-njn89   1/1     Running   0          8m46s
pc-deployment-858db84f89-x8n9g   1/1     Running   0          8m46s
pc-deployment-858db84f89-x8n9g   1/1     Terminating   0          9m18s
pc-deployment-858db84f89-njn89   1/1     Terminating   0          9m18s
pc-deployment-858db84f89-2hm27   1/1     Terminating   0          9m18s
pc-deployment-858db84f89-2hm27   0/1     Terminating   0          9m19s
pc-deployment-858db84f89-x8n9g   0/1     Terminating   0          9m19s
pc-deployment-858db84f89-njn89   0/1     Terminating   0          9m19s
pc-deployment-858db84f89-njn89   0/1     Terminating   0          9m26s
pc-deployment-858db84f89-njn89   0/1     Terminating   0          9m26s
pc-deployment-858db84f89-2hm27   0/1     Terminating   0          9m26s
pc-deployment-858db84f89-2hm27   0/1     Terminating   0          9m26s
pc-deployment-858db84f89-x8n9g   0/1     Terminating   0          9m26s
pc-deployment-858db84f89-x8n9g   0/1     Terminating   0          9m26s
pc-deployment-6c78d7875b-sskhk   0/1     Pending       0          0s
pc-deployment-6c78d7875b-sskhk   0/1     Pending       0          0s
pc-deployment-6c78d7875b-q9mk6   0/1     Pending       0          0s
pc-deployment-6c78d7875b-d2w4l   0/1     Pending       0          0s
pc-deployment-6c78d7875b-q9mk6   0/1     Pending       0          0s
pc-deployment-6c78d7875b-d2w4l   0/1     Pending       0          0s
pc-deployment-6c78d7875b-sskhk   0/1     ContainerCreating   0          1s
pc-deployment-6c78d7875b-q9mk6   0/1     ContainerCreating   0          1s
pc-deployment-6c78d7875b-d2w4l   0/1     ContainerCreating   0          2s
pc-deployment-6c78d7875b-q9mk6   1/1     Running             0          40s
pc-deployment-6c78d7875b-d2w4l   1/1     Running             0          56s
pc-deployment-6c78d7875b-sskhk   1/1     Running             0          72s

Rolling update
  1. Edit pc-deployment.yaml and add the update strategy under spec
spec:
  strategy: # strategy
    type: RollingUpdate # rolling update strategy
    rollingUpdate:
      maxSurge: 25% 
      maxUnavailable: 25%
[root@k8smaster k8s]# vim pc-deployment.yaml 
[root@k8smaster k8s]# kubectl apply -f pc-deployment.yaml 
deployment.apps/pc-deployment configured
  2. Change the image to verify
[root@k8smaster k8s]# kubectl set image deployment pc-deployment nginx=nginx:1.17.3 -n dev
deployment.apps/pc-deployment image updated

# watch the pods from another window
[root@k8smaster k8s]# kubectl get pods -n dev -w
NAME                             READY   STATUS    RESTARTS   AGE
pc-deployment-858db84f89-ksrs9   1/1     Running   0          46s
pc-deployment-858db84f89-xlrfl   1/1     Running   0          47s
pc-deployment-858db84f89-z8jqg   1/1     Running   0          49s
pc-deployment-57df6f8b8c-rkfqk   0/1     Pending   0          0s
pc-deployment-57df6f8b8c-rkfqk   0/1     Pending   0          0s
pc-deployment-57df6f8b8c-rkfqk   0/1     ContainerCreating   0          0s
pc-deployment-57df6f8b8c-rkfqk   1/1     Running             0          37s
pc-deployment-858db84f89-ksrs9   1/1     Terminating         0          97s
pc-deployment-57df6f8b8c-zxbvt   0/1     Pending             0          0s
pc-deployment-57df6f8b8c-zxbvt   0/1     Pending             0          0s
pc-deployment-57df6f8b8c-zxbvt   0/1     ContainerCreating   0          0s
pc-deployment-858db84f89-ksrs9   0/1     Terminating         0          99s
pc-deployment-858db84f89-ksrs9   0/1     Terminating         0          100s
pc-deployment-858db84f89-ksrs9   0/1     Terminating         0          100s
pc-deployment-57df6f8b8c-zxbvt   1/1     Running             0          3s
pc-deployment-858db84f89-xlrfl   1/1     Terminating         0          101s
pc-deployment-57df6f8b8c-pw4kt   0/1     Pending             0          0s
pc-deployment-57df6f8b8c-pw4kt   0/1     Pending             0          0s
pc-deployment-57df6f8b8c-pw4kt   0/1     ContainerCreating   0          0s
pc-deployment-858db84f89-xlrfl   0/1     Terminating         0          102s
pc-deployment-57df6f8b8c-pw4kt   1/1     Running             0          1s
pc-deployment-858db84f89-z8jqg   1/1     Terminating         0          104s
pc-deployment-858db84f89-z8jqg   0/1     Terminating         0          105s
pc-deployment-858db84f89-xlrfl   0/1     Terminating         0          103s
pc-deployment-858db84f89-xlrfl   0/1     Terminating         0          103s
pc-deployment-858db84f89-z8jqg   0/1     Terminating         0          114s
pc-deployment-858db84f89-z8jqg   0/1     Terminating         0          114s

# at this point the new-version pods have been created and the old-version pods destroyed
# the process is rolling: pods are destroyed and created side by side

The rolling update process is exactly what the watch output above shows: old pods are torn down while new pods are brought up, one batch at a time.



How the ReplicaSets change during an image update

# check the rs: the old rs still exist, only with their pod counts dropped to 0, and a new rs has been created that now holds the pods
# this is precisely what makes version rollback possible for a deployment; it is explained in detail below
[root@k8smaster k8s]# kubectl get rs -n dev
NAME                       DESIRED   CURRENT   READY   AGE
pc-deployment-57df6f8b8c   3         3         3       14h
pc-deployment-6c78d7875b   0         0         0       14h
pc-deployment-858db84f89   0         0         0       14h

Version rollback

A deployment supports pausing and resuming an upgrade, rolling back versions, and more; let's look at the details.
kubectl rollout: the rollout management command; it supports the following subcommands:

  • status   show the current rollout status
  • history  show the rollout history
  • pause    pause the rollout
  • resume   resume a paused rollout
  • restart  restart the rollout
  • undo     roll back to the previous revision (use --to-revision to roll back to a specific revision)
[root@k8smaster k8s]# kubectl delete -f pc-deployment.yaml 
deployment.apps "pc-deployment" deleted

# --record records the command that produced each revision
[root@k8smaster k8s]# kubectl create -f pc-deployment.yaml --record
deployment.apps/pc-deployment created

# check the resources
[root@k8smaster k8s]# kubectl get deploy,rs,pod -n dev
NAME                            READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/pc-deployment   3/3     3            3           20s

NAME                                       DESIRED   CURRENT   READY   AGE
replicaset.apps/pc-deployment-858db84f89   3         3         3       20s

NAME                                 READY   STATUS    RESTARTS   AGE
pod/pc-deployment-858db84f89-4ntf5   1/1     Running   0          20s
pod/pc-deployment-858db84f89-d2dt6   1/1     Running   0          20s
pod/pc-deployment-858db84f89-kzg77   1/1     Running   0          20s

# upgrade the image and watch how the rs and pods change
[root@k8smaster k8s]# kubectl set image deploy pc-deployment nginx=nginx:1.17.2 -n dev
deployment.apps/pc-deployment image updated

# open separate windows to watch the rs and the pods
[root@k8smaster ~]# kubectl get pods -n dev -w

[root@k8smaster ~]# kubectl get rs -n dev -w

# check the current rs
[root@k8smaster k8s]# kubectl get rs -n dev
NAME                       DESIRED   CURRENT   READY   AGE
pc-deployment-6c78d7875b   3         3         3       107s
pc-deployment-858db84f89   0         0         0       4m11s


# check the status of the current rollout
[root@k8smaster k8s]# kubectl rollout status deploy pc-deployment -n dev
deployment "pc-deployment" successfully rolled out

# check the rollout history
[root@k8smaster k8s]# kubectl rollout history deploy pc-deployment -n dev
deployment.apps/pc-deployment 
REVISION  CHANGE-CAUSE
1         kubectl create --filename=pc-deployment.yaml --record=true
2         kubectl create --filename=pc-deployment.yaml --record=true
3         kubectl create --filename=pc-deployment.yaml --record=true

# there are 3 revisions recorded, which means two upgrades have been performed

# check the current nginx version
[root@k8smaster k8s]# kubectl get deployment,rs -o wide -n dev
NAME                            READY   UP-TO-DATE   AVAILABLE   AGE    CONTAINERS   IMAGES         SELECTOR
deployment.apps/pc-deployment   3/3     3            3           9m7s   nginx        nginx:1.17.3   app=nginx-pod

NAME                                       DESIRED   CURRENT   READY   AGE     CONTAINERS   IMAGES         SELECTOR
replicaset.apps/pc-deployment-57df6f8b8c   3         3         3       83s     nginx        nginx:1.17.3   app=nginx-pod,pod-template-hash=57df6f8b8c
replicaset.apps/pc-deployment-6c78d7875b   0         0         0       6m43s   nginx        nginx:1.17.2   app=nginx-pod,pod-template-hash=6c78d7875b
replicaset.apps/pc-deployment-858db84f89   0         0         0       9m7s    nginx        nginx:1.17.1   app=nginx-pod,pod-template-hash=858db84f89

# roll back
# --to-revision=1 rolls back to revision 1; if the option is omitted, it rolls back to the previous revision, i.e. revision 2
[root@k8smaster k8s]# kubectl rollout undo deployment pc-deployment --to-revision=1 -n dev
deployment.apps/pc-deployment rolled back

# the nginx image version shows we are back on the first revision
[root@k8smaster k8s]# kubectl get deploy -n dev -o wide
NAME            READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES         SELECTOR
pc-deployment   3/3     3            3           11m   nginx        nginx:1.17.1   app=nginx-pod

# check the history: revision 1 has become revision 4
[root@k8smaster k8s]# kubectl rollout history deploy pc-deployment -n dev
deployment.apps/pc-deployment 
REVISION  CHANGE-CAUSE
2         kubectl create --filename=pc-deployment.yaml --record=true
3         kubectl create --filename=pc-deployment.yaml --record=true
4         kubectl create --filename=pc-deployment.yaml --record=true

# check the rs: the third rs now has 3 running pods
# a deployment can roll back versions precisely because it keeps the historical rs around:
# to roll back to a revision, it scales the current revision's pods down to 0 and scales the target revision's pods back up to the desired count
[root@k8smaster k8s]# kubectl get rs -n dev
NAME                       DESIRED   CURRENT   READY   AGE
pc-deployment-57df6f8b8c   0         0         0       4m23s
pc-deployment-6c78d7875b   0         0         0       9m43s
pc-deployment-858db84f89   3         3         3       12m

Canary release

The Deployment controller lets you control the update process itself, for example pausing (pause) and resuming (resume) an update.
For instance, pause the update as soon as one batch of new Pods has been created; at that point only a small part of the application runs the new version while the majority still runs the old one. Then route a small share of user requests to the new Pods and watch whether they run stably and as expected. If everything is fine, resume and finish the remaining rolling update; otherwise roll back immediately. This is what is called a canary release.

[root@k8smaster k8s]# kubectl get rs -n dev
NAME                       DESIRED   CURRENT   READY   AGE
pc-deployment-57df6f8b8c   0         0         0       8m30s
pc-deployment-6c78d7875b   0         0         0       13m
pc-deployment-858db84f89   3         3         3       16m

# update the deployment's image and immediately pause the deployment
[root@k8smaster k8s]# kubectl set image deploy pc-deployment nginx=nginx:1.17.4 -n dev && kubectl rollout pause deployment pc-deployment  -n dev
deployment.apps/pc-deployment image updated
deployment.apps/pc-deployment paused

# watch the rollout status
[root@k8smaster k8s]# kubectl rollout status deploy pc-deployment -n dev
Waiting for deployment "pc-deployment" rollout to finish: 1 out of 3 new replicas have been updated...

# watching the update: one new pod has been added, but the old pods have not been removed as they normally would, because the rollout was paused

[root@k8smaster k8s]#  kubectl get rs -n dev -o wide
NAME                       DESIRED   CURRENT   READY   AGE     CONTAINERS   IMAGES         SELECTOR
pc-deployment-57df6f8b8c   0         0         0       11m     nginx        nginx:1.17.3   app=nginx-pod,pod-template-hash=57df6f8b8c
pc-deployment-6c78d7875b   0         0         0       16m     nginx        nginx:1.17.2   app=nginx-pod,pod-template-hash=6c78d7875b
pc-deployment-849d4778f4   1         1         1       2m43s   nginx        nginx:1.17.4   app=nginx-pod,pod-template-hash=849d4778f4
pc-deployment-858db84f89   3         3         3       19m     nginx        nginx:1.17.1   app=nginx-pod,pod-template-hash=858db84f89

[root@k8smaster k8s]# kubectl get pods -n dev
NAME                             READY   STATUS    RESTARTS   AGE
pc-deployment-849d4778f4-qs4v4   1/1     Running   0          3m6s
pc-deployment-858db84f89-g9b4l   1/1     Running   0          8m5s
pc-deployment-858db84f89-vfcl4   1/1     Running   0          8m4s
pc-deployment-858db84f89-vpjpj   1/1     Running   0          8m7s


# once the updated pod is confirmed healthy, resume the rollout
[root@k8smaster k8s]# kubectl rollout resume deploy pc-deployment -n dev
deployment.apps/pc-deployment resumed


# check the final state of the update
[root@k8smaster k8s]# kubectl get rs -n dev -o wide
NAME                       DESIRED   CURRENT   READY   AGE     CONTAINERS   IMAGES         SELECTOR
pc-deployment-57df6f8b8c   0         0         0       12m     nginx        nginx:1.17.3   app=nginx-pod,pod-template-hash=57df6f8b8c
pc-deployment-6c78d7875b   0         0         0       17m     nginx        nginx:1.17.2   app=nginx-pod,pod-template-hash=6c78d7875b
pc-deployment-849d4778f4   3         3         3       3m46s   nginx        nginx:1.17.4   app=nginx-pod,pod-template-hash=849d4778f4
pc-deployment-858db84f89   0         0         0       20m     nginx        nginx:1.17.1   app=nginx-pod,pod-template-hash=858db84f89

[root@k8smaster k8s]# kubectl get pods -n dev
NAME                             READY   STATUS    RESTARTS   AGE
pc-deployment-849d4778f4-2ths4   1/1     Running   0          36s
pc-deployment-849d4778f4-qs4v4   1/1     Running   0          4m4s
pc-deployment-849d4778f4-rbcps   1/1     Running   0          38s

Deleting a Deployment

# deleting the deployment also deletes its rs and pods
[root@k8smaster k8s]# kubectl delete -f pc-deployment.yaml 
deployment.apps "pc-deployment" deleted

2.4 Horizontal Pod Autoscaler(HPA)

Earlier we scaled Pods by hand with kubectl scale, but that clearly does not match Kubernetes' goals of automation and intelligence. Kubernetes wants to watch Pod usage and adjust the number of pods automatically, which is what the Horizontal Pod Autoscaler (HPA) controller does.
The HPA obtains each Pod's utilization, compares it against the target defined in the HPA, computes the required scale change, and then adjusts the number of Pods. Like a Deployment, the HPA is itself a Kubernetes resource object: it tracks the load of all the Pods managed by the target controller to decide whether that controller's replica count needs to be adjusted. That is how the HPA works.



Next, let's run an experiment.

1 Install metrics-server

metrics-server collects resource usage data from across the cluster

# install git
[root@k8smaster k8s]# yum install git -y

# fetch metrics-server; note the version being used
[root@k8smaster k8s]# git clone -b v0.3.6 https://github.com/kubernetes-incubator/metrics-server

# the current directory
[root@k8smaster metrics-server]# pwd
/opt/k8s/metrics-server

# edit the deployment; what changes are the image and the startup arguments
[root@k8smaster metrics-server]# cd /opt/k8s/metrics-server/deploy/1.8+/

[root@k8smaster 1.8+]# vim metrics-server-deployment.yaml
Add the following settings to the metrics-server container spec:
hostNetwork: true
image: registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-server-amd64:v0.3.6
args:
- --kubelet-insecure-tls
- --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP


# install metrics-server
[root@k8smaster 1.8+]# kubectl apply -f ./
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
serviceaccount/metrics-server created
deployment.apps/metrics-server created
service/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created

# check that the pods are running
[root@k8smaster 1.8+]# kubectl get pod -n kube-system
NAME                                READY   STATUS    RESTARTS   AGE
coredns-7ff77c879f-cq2jh            1/1     Running   2          22d
coredns-7ff77c879f-nbdqx            1/1     Running   2          22d
etcd-k8smaster                      1/1     Running   6          31d
kube-apiserver-k8smaster            1/1     Running   7          31d
kube-controller-manager-k8smaster   1/1     Running   23         31d
kube-flannel-ds-4zrg2               1/1     Running   3          31d
kube-flannel-ds-l5thg               1/1     Running   3          31d
kube-proxy-9gpj9                    1/1     Running   3          31d
kube-proxy-jv67d                    1/1     Running   2          31d
kube-proxy-k97jk                    1/1     Running   3          31d
kube-scheduler-k8smaster            1/1     Running   24         31d
metrics-server-5f55b696bd-tx7k7     1/1     Running   0          16s

# use kubectl top node to view resource usage
[root@k8smaster 1.8+]# kubectl top node
NAME        CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
k8smaster   75m          3%     1144Mi          66%       
k8snode1    22m          1%     395Mi           45%       
k8snode2    15m          0%     217Mi           24%  

[root@k8smaster 1.8+]# kubectl top pod -n kube-system
NAME                                CPU(cores)   MEMORY(bytes)   
coredns-7ff77c879f-cq2jh            2m           19Mi            
coredns-7ff77c879f-nbdqx            2m           18Mi            
etcd-k8smaster                      9m           59Mi            
kube-apiserver-k8smaster            19m          374Mi           
kube-controller-manager-k8smaster   9m           75Mi            
kube-flannel-ds-4zrg2               2m           33Mi            
kube-flannel-ds-l5thg               2m           24Mi            
kube-proxy-9gpj9                    1m           26Mi            
kube-proxy-jv67d                    1m           18Mi            
kube-proxy-k97jk                    1m           30Mi            
kube-scheduler-k8smaster            2m           36Mi            
metrics-server-5f55b696bd-tx7k7     1m           12Mi  

# metrics-server is now installed

2 Prepare a deployment and a service

Create pc-hpa-pod.yaml with the following content:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: dev
spec:
  strategy: # strategy
    type: RollingUpdate # rolling update strategy
  replicas: 1
  selector:
    matchLabels:
      app: nginx-pod
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.1
        resources: # resource quota
          limits:  # resource limits (upper bound)
            cpu: "1" # CPU limit, in cores
          requests: # resource requests (lower bound)
            cpu: "100m"  # CPU request, in cores
# create the deploy
[root@k8smaster k8s]# kubectl create -f pc-hpa-pod.yaml 
deployment.apps/nginx created

# create the service
[root@k8smaster k8s]# kubectl expose deployment nginx --type=NodePort --port=80 -n dev
service/nginx exposed

# check the resources
[root@k8smaster k8s]# kubectl get deploy,pod,svc -n dev
NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx   1/1     1            1           58s

NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-7b849d6956-vwfl4   1/1     Running   0          58s

NAME            TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/nginx   NodePort   10.98.138.175   <none>        80:30865/TCP   6s

3 Deploy the HPA
Create pc-hpa.yaml with the following content:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: pc-hpa
  namespace: dev
spec:
  minReplicas: 1  # minimum number of pods
  maxReplicas: 10 # maximum number of pods
  targetCPUUtilizationPercentage: 3 # target CPU utilization (deliberately low for this demo)
  scaleTargetRef:   # the target to scale: the nginx deployment
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
# create the hpa
[root@k8smaster k8s]# kubectl create -f pc-hpa.yaml
horizontalpodautoscaler.autoscaling/pc-hpa created

# check the hpa
[root@k8smaster k8s]# kubectl get hpa -n dev
NAME     REFERENCE          TARGETS        MINPODS   MAXPODS   REPLICAS   AGE
pc-hpa   Deployment/nginx   <unknown>/3%   1         10        0          13s

4 Test
Use a load-testing tool against the service address 192.168.12.131:30865, then watch from the console how the HPA and the pods change.
The load here was generated with Postman.
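
Any simple load generator will do; for instance, a throwaway shell loop like the one below (a sketch, using the service IP and NodePort shown above) keeps requesting the page until you stop it with Ctrl+C:

# keep hitting the NodePort service to drive the pods' CPU usage up
while true; do curl -s http://192.168.12.131:30865 > /dev/null; done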



HPA changes

[root@k8smaster k8s]# kubectl get hpa -n dev -w
NAME     REFERENCE          TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
pc-hpa   Deployment/nginx   0%/3%     1         10        1          2m36s
pc-hpa   Deployment/nginx   0%/3%     1         10        1          5m19s
pc-hpa   Deployment/nginx   23%/3%    1         10        1          7m5s
pc-hpa   Deployment/nginx   23%/3%    1         10        4          7m20s
pc-hpa   Deployment/nginx   23%/3%    1         10        8          7m35s
pc-hpa   Deployment/nginx   1%/3%     1         10        8          8m5s
pc-hpa   Deployment/nginx   0%/3%     1         10        8          9m6s
pc-hpa   Deployment/nginx   0%/3%     1         10        8          12m
pc-hpa   Deployment/nginx   0%/3%     1         10        2          13m
pc-hpa   Deployment/nginx   0%/3%     1         10        2          13m
pc-hpa   Deployment/nginx   0%/3%     1         10        1          14m

Deployment changes

[root@k8smaster ~]# kubectl get deploy -n dev -w
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   1/1     1            1           8m29s
nginx   1/4     1            1           11m
nginx   1/4     1            1           11m
nginx   1/4     1            1           11m
nginx   1/4     4            1           11m
nginx   2/4     4            2           11m
nginx   3/4     4            3           11m
nginx   4/4     4            4           11m
nginx   4/8     4            4           11m
nginx   4/8     4            4           11m
nginx   4/8     4            4           11m
nginx   4/8     8            4           11m
nginx   5/8     8            5           11m
nginx   6/8     8            6           11m
nginx   7/8     8            7           11m
nginx   8/8     8            8           11m
nginx   8/2     8            8           17m
nginx   8/2     8            8           17m
nginx   2/2     2            2           17m
nginx   2/1     2            2           18m
nginx   2/1     2            2           18m
nginx   1/1     1            1           18m

Pod changes

[root@k8smaster ~]# kubectl get pods -n dev
NAME                     READY   STATUS    RESTARTS   AGE
nginx-7b849d6956-vwfl4   1/1     Running   0          6m57s
[root@k8smaster ~]# kubectl get pods -n dev -w
NAME                     READY   STATUS    RESTARTS   AGE
nginx-7b849d6956-vwfl4   1/1     Running   0          7m
nginx-7b849d6956-mr8wf   0/1     Pending   0          0s
nginx-7b849d6956-hx48q   0/1     Pending   0          0s
nginx-7b849d6956-dtft8   0/1     Pending   0          0s
nginx-7b849d6956-mr8wf   0/1     Pending   0          0s
nginx-7b849d6956-hx48q   0/1     Pending   0          0s
nginx-7b849d6956-dtft8   0/1     Pending   0          0s
nginx-7b849d6956-mr8wf   0/1     ContainerCreating   0          0s
nginx-7b849d6956-hx48q   0/1     ContainerCreating   0          0s
nginx-7b849d6956-dtft8   0/1     ContainerCreating   0          0s
nginx-7b849d6956-hx48q   1/1     Running             0          3s
nginx-7b849d6956-mr8wf   1/1     Running             0          3s
nginx-7b849d6956-dtft8   1/1     Running             0          3s
nginx-7b849d6956-bk8wn   0/1     Pending             0          0s
nginx-7b849d6956-bk8wn   0/1     Pending             0          0s
nginx-7b849d6956-98rhq   0/1     Pending             0          0s
nginx-7b849d6956-npdm8   0/1     Pending             0          0s
nginx-7b849d6956-98rhq   0/1     Pending             0          0s
nginx-7b849d6956-npdm8   0/1     Pending             0          0s
nginx-7b849d6956-k6bcn   0/1     Pending             0          0s
nginx-7b849d6956-k6bcn   0/1     Pending             0          0s
nginx-7b849d6956-bk8wn   0/1     ContainerCreating   0          0s
nginx-7b849d6956-98rhq   0/1     ContainerCreating   0          0s
nginx-7b849d6956-npdm8   0/1     ContainerCreating   0          0s
nginx-7b849d6956-k6bcn   0/1     ContainerCreating   0          0s
nginx-7b849d6956-k6bcn   1/1     Running             0          2s
nginx-7b849d6956-98rhq   1/1     Running             0          2s
nginx-7b849d6956-bk8wn   1/1     Running             0          3s
nginx-7b849d6956-npdm8   1/1     Running             0          3s
nginx-7b849d6956-k6bcn   1/1     Terminating         0          5m33s
nginx-7b849d6956-dtft8   1/1     Terminating         0          5m48s
nginx-7b849d6956-npdm8   1/1     Terminating         0          5m33s
nginx-7b849d6956-bk8wn   1/1     Terminating         0          5m33s
nginx-7b849d6956-98rhq   1/1     Terminating         0          5m33s
nginx-7b849d6956-hx48q   1/1     Terminating         0          5m48s
nginx-7b849d6956-hx48q   0/1     Terminating         0          5m49s
nginx-7b849d6956-npdm8   0/1     Terminating         0          5m34s
nginx-7b849d6956-bk8wn   0/1     Terminating         0          5m34s
nginx-7b849d6956-hx48q   0/1     Terminating         0          5m50s
nginx-7b849d6956-hx48q   0/1     Terminating         0          5m50s
nginx-7b849d6956-npdm8   0/1     Terminating         0          5m35s
nginx-7b849d6956-npdm8   0/1     Terminating         0          5m35s
nginx-7b849d6956-bk8wn   0/1     Terminating         0          5m35s
nginx-7b849d6956-bk8wn   0/1     Terminating         0          5m35s
nginx-7b849d6956-dtft8   0/1     Terminating         0          5m50s
nginx-7b849d6956-k6bcn   0/1     Terminating         0          5m36s
nginx-7b849d6956-98rhq   0/1     Terminating         0          5m37s
nginx-7b849d6956-98rhq   0/1     Terminating         0          5m37s
nginx-7b849d6956-98rhq   0/1     Terminating         0          5m37s
nginx-7b849d6956-k6bcn   0/1     Terminating         0          5m38s
nginx-7b849d6956-k6bcn   0/1     Terminating         0          5m38s
nginx-7b849d6956-dtft8   0/1     Terminating         0          5m53s
nginx-7b849d6956-dtft8   0/1     Terminating         0          5m53s
nginx-7b849d6956-mr8wf   1/1     Terminating         0          6m49s
nginx-7b849d6956-mr8wf   0/1     Terminating         0          6m49s
nginx-7b849d6956-mr8wf   0/1     Terminating         0          7m1s
nginx-7b849d6956-mr8wf   0/1     Terminating         0          7m1s

2.5 DaemonSet(DS)

A DaemonSet controller guarantees that one replica runs on every node (or every selected node) of the cluster. It is typically used for log collection, node monitoring and similar scenarios. In other words, when the functionality a Pod provides is node-level (every node needs exactly one instance), that Pod is a good fit for a DaemonSet.

Characteristics of the DaemonSet controller:

  • whenever a node is added to the cluster, the specified Pod replica is added to that node as well
  • when a node is removed from the cluster, its Pod is garbage-collected

First, the DaemonSet resource manifest

apiVersion: apps/v1 # API version
kind: DaemonSet # resource type
metadata: # metadata
  name: # DaemonSet name
  namespace: # namespace it belongs to
  labels: # labels
    controller: daemonset
spec: # detailed description
  revisionHistoryLimit: 3 # number of history revisions to keep
  updateStrategy: # update strategy
    type: RollingUpdate # rolling update strategy
    rollingUpdate: # rolling update settings
      maxUnavailable: 1 # maximum number of unavailable Pods; can be a percentage or an integer
  selector: # selector specifying which Pods this controller manages
    matchLabels:      # label matching rules
      app: nginx-pod
    matchExpressions: # expression matching rules
      - {key: app, operator: In, values: [nginx-pod]}
  template: # template used to create Pod replicas when needed
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.1
        ports:
        - containerPort: 80

Create pc-daemonset.yaml with the following content:

apiVersion: apps/v1
kind: DaemonSet      
metadata:
  name: pc-daemonset
  namespace: dev
spec: 
  selector:
    matchLabels:
      app: nginx-pod
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.1
# Create the DaemonSet
[root@k8smaster k8s]# kubectl create -f pc-daemonset.yaml 
daemonset.apps/pc-daemonset created

# Check the DaemonSet. If only one Pod shows up, it is because k8snode2 carries a taint that blocks scheduling.
# Inspect the taint:  kubectl describe node k8snode2
# Remove the taint:   kubectl taint nodes k8snode2 tag:NoExecute-
[root@k8smaster k8s]# kubectl get ds -n dev -o wide
NAME           DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE    CONTAINERS   IMAGES         SELECTOR
pc-daemonset   2         2         2       2            2           <none>          5m5s   nginx        nginx:1.17.1   app=nginx-pod

# Check the Pods: one Pod is running on every node
[root@k8smaster k8s]# kubectl get pods -n dev -o wide
NAME                 READY   STATUS    RESTARTS   AGE     IP            NODE       NOMINATED NODE   READINESS GATES
pc-daemonset-bk9zt   1/1     Running   0          117s    10.244.1.33   k8snode2   <none>           <none>
pc-daemonset-tt5k2   1/1     Running   0          5m24s   10.244.2.82   k8snode1   <none>           <none>
    
# Delete the DaemonSet
[root@k8smaster k8s]# kubectl delete -f pc-daemonset.yaml
daemonset.apps "pc-daemonset" deleted
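
Instead of removing the taint from k8snode2 (as in the note above), the DaemonSet's Pod template can declare a matching toleration so the Pod is allowed onto the tainted node. This is only a minimal sketch, assuming the taint key is tag with effect NoExecute as in the removal command above; the taint's value is not shown in the original, so operator Exists is used:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: pc-daemonset
  namespace: dev
spec:
  selector:
    matchLabels:
      app: nginx-pod
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      tolerations:           # tolerate the node taint instead of removing it
      - key: "tag"           # assumed taint key, taken from the removal command above
        operator: "Exists"   # match any value for this key
        effect: "NoExecute"
      containers:
      - name: nginx
        image: nginx:1.17.1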

2.6 Job

A Job is responsible for batch processing (a specified number of tasks processed in one go) of short-lived, one-off tasks (each task runs only once and then finishes). Its characteristics:

  • When a Pod created by the Job finishes successfully, the Job records the number of successfully completed Pods
  • When the number of successfully completed Pods reaches the specified count, the Job is complete


The Job resource manifest:

apiVersion: batch/v1 # API version
kind: Job # resource type
metadata: # metadata
  name: # Job name
  namespace: # namespace it belongs to
  labels: # labels
    controller: job
spec: # detailed description
  completions: 1 # number of Pods that must finish successfully for the Job to complete. Default: 1
  parallelism: 1 # number of Pods the Job should run concurrently at any time. Default: 1
  activeDeadlineSeconds: 30 # time limit for the Job; once exceeded, the system tries to terminate it
  backoffLimit: 6 # number of retries after a failure. Default: 6
  manualSelector: true # whether a selector can be used to select Pods. Default: false
  selector: # selector specifying which Pods this controller manages
    matchLabels:      # label matching rules
      app: counter-pod
    matchExpressions: # expression matching rules
      - {key: app, operator: In, values: [counter-pod]}
  template: # template used to create Pod replicas when needed
    metadata:
      labels:
        app: counter-pod
    spec:
      restartPolicy: Never # the restart policy can only be Never or OnFailure
      containers:
      - name: counter
        image: busybox:1.30
        command: ["/bin/sh","-c","for i in 9 8 7 6 5 4 3 2 1; do echo $i;sleep 2;done"]

Notes on the restart policy:
If it is set to OnFailure, the Job restarts the container in place when the Pod fails instead of creating a new Pod, and the failed count does not increase.
If it is set to Never, the Job creates a new Pod when the Pod fails; the failed Pod is neither removed nor restarted, and the failed count increases by 1.
Always is not allowed: it would keep restarting the container, which means the Job's task would run over and over again, so it cannot be used here. A minimal sketch of the OnFailure case follows.
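
The name and the deliberately failing command below are made up for illustration; they are not part of the original example:

apiVersion: batch/v1
kind: Job
metadata:
  name: pc-job-retry   # hypothetical name, for illustration only
  namespace: dev
spec:
  backoffLimit: 3      # give up after 3 failed attempts
  template:
    spec:
      restartPolicy: OnFailure   # restart the container inside the same Pod on failure
      containers:
      - name: counter
        image: busybox:1.30
        command: ["/bin/sh","-c","exit 1"]   # always fails, just to show the retry behavior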

Create pc-job.yaml with the following content:

apiVersion: batch/v1
kind: Job      
metadata:
  name: pc-job
  namespace: dev
spec:
  manualSelector: true
  selector:
    matchLabels:
      app: counter-pod
  template:
    metadata:
      labels:
        app: counter-pod
    spec:
      restartPolicy: Never
      containers:
      - name: counter
        image: busybox:1.30
        command: ["bin/sh","-c","for i in 9 8 7 6 5 4 3 2 1; do echo $i;sleep 3;done"]
# Create the Job
[root@k8smaster k8s]# kubectl create -f pc-job.yaml 
job.batch/pc-job created

# Check the Job
[root@k8smaster k8s]# kubectl get job -n dev -w
NAME     COMPLETIONS   DURATION   AGE
pc-job   0/1                      0s
pc-job   0/1           0s         0s
pc-job   1/1           29s        29s

# Watching the Pod status shows that the Pod moves to Completed once the task finishes
[root@k8smaster ~]# kubectl get pods -n dev -w
NAME           READY   STATUS    RESTARTS   AGE
pc-job-bktcv   0/1     Pending   0          0s
pc-job-bktcv   0/1     Pending   0          0s
pc-job-bktcv   0/1     ContainerCreating   0          0s
pc-job-bktcv   1/1     Running             0          2s
pc-job-bktcv   0/1     Completed           0          29s
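
The Pod's log was not captured in the original session, but since the container simply counts down, it can be verified with kubectl logs (the Pod name comes from the output above and will differ per run):

[root@k8smaster k8s]# kubectl logs pc-job-bktcv -n dev
9
8
7
6
5
4
3
2
1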


# Next, adjust the total number of Pods and the parallelism, i.e. set the following two options under spec
#  completions: 6 # the Job must run Pods successfully 6 times
#  parallelism: 3 # the Job runs 3 Pods concurrently
#  Re-run the Job and watch: it runs 3 Pods at a time, 6 Pods in total
#  (a sketch of the adjusted manifest appears after the delete step below)
[root@k8smaster ~]# kubectl get pods -n dev -w
NAME           READY   STATUS    RESTARTS   AGE
pc-job-hthqp   0/1     Pending   0          0s
pc-job-dqw99   0/1     Pending   0          0s
pc-job-hthqp   0/1     Pending   0          0s
pc-job-cpfqt   0/1     Pending   0          0s
pc-job-dqw99   0/1     Pending   0          0s
pc-job-cpfqt   0/1     Pending   0          0s
pc-job-dqw99   0/1     ContainerCreating   0          0s
pc-job-hthqp   0/1     ContainerCreating   0          0s
pc-job-cpfqt   0/1     ContainerCreating   0          0s
pc-job-dqw99   1/1     Running             0          2s
pc-job-cpfqt   1/1     Running             0          2s
pc-job-hthqp   1/1     Running             0          2s
pc-job-dqw99   0/1     Completed           0          28s
pc-job-sgvgf   0/1     Pending             0          0s
pc-job-sgvgf   0/1     Pending             0          0s
pc-job-sgvgf   0/1     ContainerCreating   0          0s
pc-job-sgvgf   1/1     Running             0          1s
pc-job-cpfqt   0/1     Completed           0          29s
pc-job-wrxvr   0/1     Pending             0          0s
pc-job-wrxvr   0/1     Pending             0          0s
pc-job-hthqp   0/1     Completed           0          29s
pc-job-zrx2g   0/1     Pending             0          0s
pc-job-zrx2g   0/1     Pending             0          0s
pc-job-wrxvr   0/1     ContainerCreating   0          1s
pc-job-zrx2g   0/1     ContainerCreating   0          1s
pc-job-zrx2g   1/1     Running             0          1s
pc-job-wrxvr   1/1     Running             0          1s
pc-job-sgvgf   0/1     Completed           0          28s
pc-job-zrx2g   0/1     Completed           0          29s
pc-job-wrxvr   0/1     Completed           0          29s

# Delete the Job
[root@k8smaster k8s]# kubectl delete -f pc-job.yaml 
job.batch "pc-job" deleted
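
For reference, the adjusted manifest for the parallel run described above is not shown in the original; it is simply pc-job.yaml with the two extra options under spec:

apiVersion: batch/v1
kind: Job
metadata:
  name: pc-job
  namespace: dev
spec:
  completions: 6        # the Job finishes after 6 Pods complete successfully
  parallelism: 3        # at most 3 Pods run at the same time
  manualSelector: true
  selector:
    matchLabels:
      app: counter-pod
  template:
    metadata:
      labels:
        app: counter-pod
    spec:
      restartPolicy: Never
      containers:
      - name: counter
        image: busybox:1.30
        command: ["/bin/sh","-c","for i in 9 8 7 6 5 4 3 2 1; do echo $i;sleep 3;done"]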

2.7 CronJob(CJ)

The CronJob controller uses Job resources as its managed objects and manages Pods through them. A Job runs as soon as its controller resource is created, whereas a CronJob controls when and how often it runs, similar to periodic cron jobs on a Linux system. In other words, a CronJob can run Job tasks (repeatedly) at specific points in time.

The CronJob resource manifest:

apiVersion: batch/v1beta1 # API version
kind: CronJob # resource type
metadata: # metadata
  name: # CronJob name
  namespace: # namespace it belongs to
  labels: # labels
    controller: cronjob
spec: # detailed description
  schedule: # cron-format schedule controlling when the task runs
  concurrencyPolicy: # concurrency policy: whether and how to run a new Job while the previous one has not finished
  failedJobsHistoryLimit: # number of failed Job runs to keep in history. Default: 1
  successfulJobsHistoryLimit: # number of successful Job runs to keep in history. Default: 3
  startingDeadlineSeconds: # deadline in seconds for starting the Job if it misses its scheduled time
  jobTemplate: # Job template the CronJob uses to create Job objects; below is simply a Job definition
    metadata:
    spec:
      completions: 1
      parallelism: 1
      activeDeadlineSeconds: 30
      backoffLimit: 6
      manualSelector: true
      selector:
        matchLabels:
          app: counter-pod
        matchExpressions: # expression matching rules
          - {key: app, operator: In, values: [counter-pod]}
      template:
        metadata:
          labels:
            app: counter-pod
        spec:
          restartPolicy: Never 
          containers:
          - name: counter
            image: busybox:1.30
            command: ["/bin/sh","-c","for i in 9 8 7 6 5 4 3 2 1; do echo $i;sleep 20;done"]
Key options that deserve explanation:
schedule: a cron expression specifying when the task runs
    */1      *      *         *       *
    <minute> <hour> <day>     <month> <day-of-week>

    minute: 0 to 59
    hour: 0 to 23
    day of month: 1 to 31
    month: 1 to 12
    day of week: 0 to 6, where 0 means Sunday
    Multiple values can be separated by commas; ranges can be given with a hyphen; * is a wildcard; / means "every ..."
concurrencyPolicy:
    Allow:   allow Jobs to run concurrently (default)
    Forbid:  forbid concurrent runs; if the previous run has not finished, skip the next one
    Replace: cancel the currently running Job and replace it with the new one
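
As a hedged example of these options (the name and the long-running command are invented for illustration, not taken from the original), the following CronJob is scheduled every 5 minutes, but because each run sleeps longer than the interval and concurrencyPolicy is Forbid, overlapping runs are skipped:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: pc-cronjob-forbid        # hypothetical name, for illustration only
  namespace: dev
spec:
  schedule: "*/5 * * * *"        # every 5 minutes; "0 2 * * *" would mean 02:00 every day
  concurrencyPolicy: Forbid      # skip the next run if the previous Job is still running
  successfulJobsHistoryLimit: 3  # keep the last 3 successful Jobs
  failedJobsHistoryLimit: 1      # keep the last failed Job
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: counter
            image: busybox:1.30
            command: ["/bin/sh","-c","echo running; sleep 600"]   # deliberately longer than the schedule interval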

Create pc-cronjob.yaml with the following content:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: pc-cronjob
  namespace: dev
  labels:
    controller: cronjob
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    metadata:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: counter
            image: busybox:1.30
            command: ["bin/sh","-c","for i in 9 8 7 6 5 4 3 2 1; do echo $i;sleep 3;done"]
# Create the CronJob
[root@k8smaster k8s]# kubectl create -f pc-cronjob.yaml 
cronjob.batch/pc-cronjob created

# Check the CronJob
[root@k8smaster k8s]# kubectl get cronjobs -n dev -w
NAME         SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
pc-cronjob   */1 * * * *   False     0        <none>          0s
pc-cronjob   */1 * * * *   False     1        6s              24s
pc-cronjob   */1 * * * *   False     0        36s             54s
pc-cronjob   */1 * * * *   False     1        6s              84s
pc-cronjob   */1 * * * *   False     0        36s             114s


# Check the Jobs
[root@k8smaster ~]# kubectl get jobs -n dev -w
NAME                    COMPLETIONS   DURATION   AGE
pc-cronjob-1647156480   0/1                      0s
pc-cronjob-1647156480   0/1           0s         0s
pc-cronjob-1647156480   1/1           28s        28s
pc-cronjob-1647156540   0/1                      0s
pc-cronjob-1647156540   0/1           0s         0s
pc-cronjob-1647156540   1/1           29s        29s


# Check the Pods
[root@k8smaster ~]# kubectl get pods -n dev -w
NAME                          READY   STATUS    RESTARTS   AGE
pc-cronjob-1647156480-kz8gf   0/1     Pending   0          0s
pc-cronjob-1647156480-kz8gf   0/1     Pending   0          0s
pc-cronjob-1647156480-kz8gf   0/1     ContainerCreating   0          0s
pc-cronjob-1647156480-kz8gf   1/1     Running             0          1s
pc-cronjob-1647156480-kz8gf   0/1     Completed           0          28s
pc-cronjob-1647156540-hp8zh   0/1     Pending             0          0s
pc-cronjob-1647156540-hp8zh   0/1     Pending             0          0s
pc-cronjob-1647156540-hp8zh   0/1     ContainerCreating   0          0s
pc-cronjob-1647156540-hp8zh   1/1     Running             0          1s
pc-cronjob-1647156540-hp8zh   0/1     Completed           0          29s
pc-cronjob-1647156600-gll2t   0/1     Pending             0          0s
pc-cronjob-1647156600-gll2t   0/1     Pending             0          0s
pc-cronjob-1647156600-gll2t   0/1     ContainerCreating   0          0s
pc-cronjob-1647156600-gll2t   1/1     Running             0          2s
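
The SUSPEND column in the output above hints that a CronJob can also be paused instead of deleted. This step is not part of the original session, but a standard way to do it is to patch spec.suspend:

# Pause future scheduling without deleting the CronJob
kubectl patch cronjob pc-cronjob -n dev -p '{"spec":{"suspend":true}}'

# Resume it later
kubectl patch cronjob pc-cronjob -n dev -p '{"spec":{"suspend":false}}'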

# Delete the CronJob
[root@k8smaster k8s]# kubectl  delete -f pc-cronjob.yaml
cronjob.batch "pc-cronjob" deleted

2.8 StatefulSets

Reference: https://kubernetes.io/zh/docs/concepts/workloads/controllers/statefulset/
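
The original only points to the official documentation. As a hedged, minimal sketch (all names below are illustrative, not from the original): a StatefulSet differs from a Deployment mainly in that it gives each Pod a stable, ordered identity and requires a headless Service:

apiVersion: v1
kind: Service
metadata:
  name: nginx-headless          # hypothetical headless Service required by the StatefulSet
  namespace: dev
spec:
  clusterIP: None               # headless: no virtual IP, each Pod gets its own DNS record
  selector:
    app: nginx-pod
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: pc-statefulset          # hypothetical name, for illustration only
  namespace: dev
spec:
  serviceName: nginx-headless   # must reference the headless Service above
  replicas: 3                   # Pods are named pc-statefulset-0, -1, -2 and created in order
  selector:
    matchLabels:
      app: nginx-pod
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.1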
