K8s from Beginner to Giving Up - Chapter 6: Pod Controllers in Detail


This chapter walks through the various Pod controllers and how to use them in detail.

6.1 Introduction to Pod controllers

In kubernetes, pods can be divided into two classes by how they are created:
●Standalone pods: pods created directly by the user; once deleted they are gone and are not recreated
●Controller-managed pods: pods created through a controller; if such a pod is deleted, it is automatically recreated

What is a Pod controller?
A Pod controller is an intermediate layer that manages pods. With a controller, we only need to tell it how many pods of what kind we want; it then creates pods that satisfy those conditions and keeps every pod in the state the user expects. If a pod fails while running, the controller restarts or recreates it according to the configured policy.

Kubernetes offers many types of pod controllers, each suited to its own scenarios. The common ones are:
●ReplicationController: the original pod controller, now deprecated and replaced by ReplicaSet
●ReplicaSet: keeps a specified number of pods running, and supports changing the pod count and the image version
●Deployment: controls pods indirectly by managing ReplicaSets, and adds rolling upgrades and version rollback
●Horizontal Pod Autoscaler: automatically adjusts the number of pods based on cluster load, smoothing out peaks and troughs
●DaemonSet: runs exactly one replica on every (or every specified) Node in the cluster, typically for daemon-style tasks
●Job: the pods it creates exit as soon as their task completes, used for one-off tasks
●CronJob: the pods it creates run periodically, used for recurring tasks
●StatefulSet: manages stateful applications
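Most of these controllers live in the apps and batch API groups. As a quick sanity check (the exact output depends on your cluster version, so treat this as illustrative), you can ask the API server which controller kinds it serves:

#list the workload controllers exposed by the apps and batch API groups
[root@master ~]# kubectl api-resources --api-group=apps
[root@master ~]# kubectl api-resources --api-group=batch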

6.2 ReplicaSet(RS)

The main job of a ReplicaSet is to keep a given number of pods running correctly. It continuously watches the state of these pods and, whenever a pod fails, restarts or recreates it. It also supports scaling the pod count and upgrading the image version.
A ReplicaSet resource manifest:

apiVersion: apps/v1  # API version
kind: ReplicaSet # resource type
metadata:  # metadata
  name: # rs name
  namespace: # namespace it belongs to
  labels: # labels
    controller: rs
spec: # detailed specification
  replicas: 3 # number of replicas
  selector: # selector; specifies which pods this controller manages
    matchLabels: # Labels matching rules
      app: nginx-pod
    matchExpressions: # Expressions matching rules
      - {key: app, operator: In, values: [nginx-pod]}
  template: # template used to create pod replicas when the replica count is insufficient
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.1
        ports:
        - containerPort: 80

The options that are new here all sit under spec:
●replicas: the number of replicas, i.e. how many pods this rs creates; defaults to 1
●selector: the selector, which establishes the association between the pod controller and its pods using the Label Selector mechanism:
define labels on the pod template and a selector on the controller, and the controller knows which pods it manages
●template: the template the controller uses to create pods; its content is simply the pod definition covered in the previous chapter
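Because the association is made purely through labels, you can preview which pods a given selector would match with a plain label query before creating the controller (a small illustrative check using the app=nginx-pod label from this chapter; adjust the label and namespace to your own environment):

#show the pods carrying the label that the rs selector matches on
[root@master ~]# kubectl get pods -l app=nginx-pod -n dev --show-labels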
Creating a ReplicaSet
Create a file pc-replicaset.yaml with the following content:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pc-replicaset
  namespace: dev
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-pod
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.1
# create the rs
[root@master ~]# kubectl create -f pc-replicaset.yaml 
replicaset.apps/pc-replicaset created

# check the rs
# DESIRED: desired number of replicas
# CURRENT: current number of replicas
# READY: number of replicas ready to serve traffic
[root@master ~]# kubectl get rs pc-replicaset -n dev -o wide 
NAME            DESIRED   CURRENT   READY   AGE     CONTAINERS   IMAGES         SELECTOR
pc-replicaset   3         3         3       4m45s   nginx        nginx:1.17.1   app=nginx-pod

# check the pods created by this controller
# note that each pod name is the controller name with a -xxxxx random suffix appended
[root@master ~]# kubectl get pods -n dev
NAME                      READY   STATUS    RESTARTS   AGE
pc-replicaset-9szxk       1/1     Running   0          5m45s
pc-replicaset-bkp9c       1/1     Running   0          5m45s
pc-replicaset-fhk46       1/1     Running   0          5m45s

Scaling up and down

# edit the rs replica count: change spec.replicas to 6
[root@master ~]# kubectl edit rs pc-replicaset -n dev
replicaset.apps/pc-replicaset edited
[root@master ~]# kubectl get rs pc-replicaset -n dev -o wide 
NAME            DESIRED   CURRENT   READY   AGE     CONTAINERS   IMAGES         SELECTOR
pc-replicaset   6         6         6       9m45s   nginx        nginx:1.17.1   app=nginx-pod

# the same thing can of course be done directly with a command
# use the scale command and pass the target count with --replicas=n
[root@master ~]# kubectl scale rs pc-replicaset --replicas=2 -n dev
replicaset.apps/pc-replicaset scaled
[root@master ~]# kubectl get rs pc-replicaset -n dev -o wide 
NAME            DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES         SELECTOR
pc-replicaset   2         2         2       11m   nginx        nginx:1.17.1   app=nginx-pod

[root@master ~]# kubectl get pods -n dev
NAME                      READY   STATUS    RESTARTS   AGE
pc-replicaset-bkp9c       1/1     Running   0          12m
pc-replicaset-fhk46       1/1     Running   0          12m

Image upgrade

# edit the rs container image: set image: nginx:1.17.2
[root@master ~]# kubectl edit rs pc-replicaset -n dev
replicaset.apps/pc-replicaset edited
[root@master ~]# kubectl get rs pc-replicaset -n dev -o wide 
NAME            DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES         SELECTOR
pc-replicaset   2         2         2       16m   nginx        nginx:1.17.2   app=nginx-pod

# likewise, this can also be done with a command
# kubectl set image rs <rs-name> <container>=<image:tag> -n <namespace>
[root@master ~]# kubectl set image rs pc-replicaset nginx=nginx:1.17.1 -n dev
replicaset.apps/pc-replicaset image updated
[root@master ~]# kubectl get rs pc-replicaset -n dev -o wide 
NAME            DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES         SELECTOR
pc-replicaset   2         2         2       18m   nginx        nginx:1.17.1   app=nginx-pod

Deleting a ReplicaSet

# kubectl delete removes this RS together with the Pods it manages
# before deleting the RS, kubernetes first scales its replicas down to 0, waits for all Pods to be deleted, and only then deletes the RS object itself
[root@master ~]# kubectl delete rs pc-replicaset -n dev
replicaset.apps "pc-replicaset" deleted
[root@master ~]# kubectl get rs pc-replicaset -n dev
Error from server (NotFound): replicasets.apps "pc-replicaset" not found

# to delete only the RS object and keep its Pods, add the --cascade=false option to kubectl delete (not recommended)
[root@master ~]# kubectl delete rs pc-replicaset -n dev --cascade=false
replicaset.apps "pc-replicaset" deleted
[root@master ~]# kubectl get pods -n dev
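Note: on newer kubectl versions (v1.20 and later) the boolean form of this flag is deprecated; the equivalent, assuming a recent client, is --cascade=orphan:

#orphan the pods instead of deleting them (newer kubectl syntax)
[root@master ~]# kubectl delete rs pc-replicaset -n dev --cascade=orphan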

# you can also delete directly via the yaml file (recommended)
[root@master ~]# kubectl delete -f pc-replicaset.yaml
replicaset.apps "pc-replicaset" deleted

6.3 Deployment(Deploy)

To better solve the problem of service orchestration, kubernetes introduced the Deployment controller in version v1.2. Notably, this controller does not manage pods directly; instead it manages them indirectly through ReplicaSets: a Deployment manages ReplicaSets, and the ReplicaSets manage Pods. As a result, a Deployment is more powerful than a ReplicaSet.
The main features of a Deployment are:
●everything a ReplicaSet supports
●pausing and resuming a rollout
●rolling upgrades and version rollback
A Deployment resource manifest:

apiVersion: apps/v1 # API version
kind: Deployment # resource type
metadata: # metadata
  name: # deploy name
  namespace: # namespace it belongs to
  labels: # labels
    controller: deploy
spec: # detailed specification
  replicas: 3 # number of replicas
  revisionHistoryLimit: 3 # number of history revisions to keep, default 10
  paused: false # pause the deployment, default false
  progressDeadlineSeconds: 600 # deployment timeout in seconds, default 600
  strategy: # strategy
    type: RollingUpdate # rolling update strategy
    rollingUpdate: # rolling update parameters
      maxSurge: 30% # maximum number of extra replicas allowed, as a percentage or an integer
      maxUnavailable: 30% # maximum number of unavailable Pods, as a percentage or an integer
  selector: # selector; specifies which pods this controller manages
    matchLabels: # Labels matching rules
      app: nginx-pod
    matchExpressions: # Expressions matching rules
      - {key: app, operator: In, values: [nginx-pod]}
  template: # template used to create pod replicas when the replica count is insufficient
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.1
        ports:
        - containerPort: 80

Creating a deployment
Create pc-deployment.yaml with the following content:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: pc-deployment
  namespace: dev
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-pod
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.1
# create the deployment
# --record=true records each change, so it shows up in the rollout history
[root@master ~]# kubectl create -f pc-deployment.yaml --record=true
deployment.apps/pc-deployment created

# check the deployment
# UP-TO-DATE: number of pods at the latest version
# AVAILABLE: number of pods currently available
[root@master ~]# kubectl get deploy -n dev -o wide
NAME            READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES         SELECTOR
pc-deployment   3/3     3            3           90m   nginx        nginx:1.17.1   app=nginx-pod

# check the rs
# note that the rs name is the deployment name with a random string appended
[root@master ~]# kubectl get rs -n dev
NAME                       DESIRED   CURRENT   READY   AGE
pc-deployment-5d89bdfbf9   3         3         3       2m12s

# check the pods
[root@master ~]# kubectl get pods -n dev
NAME                             READY   STATUS    RESTARTS   AGE
pc-deployment-5d89bdfbf9-66dv8   1/1     Running   0          2m20s
pc-deployment-5d89bdfbf9-x6vn7   1/1     Running   0          2m20s
pc-deployment-5d89bdfbf9-xvrxz   1/1     Running   0          2m20s

Scaling up and down

# change the replica count to 5
[root@master ~]# kubectl scale deploy pc-deployment --replicas=5 -n dev
deployment.apps/pc-deployment scaled

# check the deployment
[root@master ~]# kubectl get deploy pc-deployment -n dev
NAME            READY   UP-TO-DATE   AVAILABLE   AGE
pc-deployment   5/5     5            5           9m51s

# check the pods
[root@master ~]# kubectl get pods -n dev
NAME                             READY   STATUS    RESTARTS   AGE
pc-deployment-5d89bdfbf9-5dp4g   1/1     Running   0          31s
pc-deployment-5d89bdfbf9-66dv8   1/1     Running   0          6m24s
pc-deployment-5d89bdfbf9-777zl   1/1     Running   0          31s
pc-deployment-5d89bdfbf9-x6vn7   1/1     Running   0          6m24s
pc-deployment-5d89bdfbf9-xvrxz   1/1     Running   0          6m24s

# edit the deployment in place and change spec.replicas to 3
[root@master ~]# kubectl edit deploy pc-deployment -n dev
Edit cancelled, no changes made.

# check the pods
[root@master ~]# kubectl get pods -n dev
NAME                             READY   STATUS        RESTARTS   AGE
pc-deployment-5d89bdfbf9-66dv8   1/1     Running       0          7m46s
pc-deployment-5d89bdfbf9-777zl   0/1     Terminating   0          113s
pc-deployment-5d89bdfbf9-x6vn7   1/1     Running       0          7m46s
pc-deployment-5d89bdfbf9-xvrxz   1/1     Running       0          7m46s

Image updates
Deployment supports two image update strategies: recreate and rolling update (the default), configurable via the strategy option.

strategy: the strategy for replacing old Pods with new ones; it supports two fields:
  type: the strategy type, one of:
    Recreate: kill all existing Pods before creating new ones
    RollingUpdate: rolling update, i.e. kill a portion and start a portion, so two Pod versions coexist during the update
  rollingUpdate: takes effect when type is RollingUpdate and sets its parameters; it supports two fields:
    maxUnavailable: the maximum number of Pods allowed to be unavailable during the upgrade, default 25%
    maxSurge: the maximum number of Pods allowed to exceed the desired count during the upgrade, default 25%
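A quick worked example with the values used in this chapter: for replicas: 3 and both settings at 25%, maxSurge evaluates to ceil(3 x 0.25) = 1 extra pod and maxUnavailable to floor(3 x 0.25) = 0, so during the rollout there are never more than 4 pods in total and never fewer than 3 available, which matches the one-new-pod-up, one-old-pod-down pattern in the watch output later in this section.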

Recreate update
1) Edit pc-deployment.yaml and add the update strategy under the spec node

spec:
  strategy: # strategy
    type: Recreate # recreate update strategy

apiVersion: apps/v1
kind: Deployment
metadata:
  name: pc-deployment
  namespace: dev
spec:
  strategy: # strategy
    type: Recreate # recreate update strategy
  replicas: 3
  selector:
    matchLabels:
      app: nginx-pod
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.1

2) Create the deploy and verify

[root@master ~]# kubectl apply -f pc-deployment.yaml 
deployment.apps/pc-deployment created


# open a new window to watch the pod status
[root@master ~]# kubectl get pods -n dev -w
NAME                             READY   STATUS    RESTARTS   AGE
pc-deployment-5d89bdfbf9-7bfz8   1/1     Running   0          38s
pc-deployment-5d89bdfbf9-f4bp2   1/1     Running   0          38s
pc-deployment-5d89bdfbf9-kww9m   1/1     Running   0          38s


pc-deployment-5d89bdfbf9-7bfz8   1/1     Terminating   0          2m21s
pc-deployment-5d89bdfbf9-f4bp2   1/1     Terminating   0          2m21s
pc-deployment-5d89bdfbf9-kww9m   1/1     Terminating   0          2m21s
pc-deployment-5d89bdfbf9-kww9m   0/1     Terminating   0          2m23s
pc-deployment-5d89bdfbf9-7bfz8   0/1     Terminating   0          2m23s
pc-deployment-5d89bdfbf9-kww9m   0/1     Terminating   0          2m24s
pc-deployment-5d89bdfbf9-kww9m   0/1     Terminating   0          2m24s
pc-deployment-5d89bdfbf9-kww9m   0/1     Terminating   0          2m24s
pc-deployment-5d89bdfbf9-7bfz8   0/1     Terminating   0          2m24s
pc-deployment-5d89bdfbf9-7bfz8   0/1     Terminating   0          2m24s
pc-deployment-5d89bdfbf9-7bfz8   0/1     Terminating   0          2m24s
pc-deployment-5d89bdfbf9-f4bp2   0/1     Terminating   0          2m24s
pc-deployment-5d89bdfbf9-f4bp2   0/1     Terminating   0          2m30s
pc-deployment-5d89bdfbf9-f4bp2   0/1     Terminating   0          2m30s
pc-deployment-675d469f8b-x9dqc   0/1     Pending       0          0s
pc-deployment-675d469f8b-42xf9   0/1     Pending       0          0s
pc-deployment-675d469f8b-ndqcw   0/1     Pending       0          0s
pc-deployment-675d469f8b-x9dqc   0/1     Pending       0          0s
pc-deployment-675d469f8b-42xf9   0/1     Pending       0          0s
pc-deployment-675d469f8b-ndqcw   0/1     Pending       0          0s
pc-deployment-675d469f8b-x9dqc   0/1     ContainerCreating   0          0s
pc-deployment-675d469f8b-42xf9   0/1     ContainerCreating   0          0s
pc-deployment-675d469f8b-ndqcw   0/1     ContainerCreating   0          0s
pc-deployment-675d469f8b-x9dqc   1/1     Running             0          37s
pc-deployment-675d469f8b-42xf9   1/1     Running             0          58s
pc-deployment-675d469f8b-ndqcw   1/1     Running             0          73s

# update the image in the original window
[root@master ~]# kubectl set image deploy pc-deployment nginx=nginx:1.17.2 -n dev
deployment.apps/pc-deployment image updated

Rolling update
1) Edit pc-deployment.yaml and add the update strategy under the spec node

strategy: # strategy
  type: RollingUpdate # rolling update strategy
  rollingUpdate:
    maxUnavailable: 25%
    maxSurge: 25% 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pc-deployment
  namespace: dev
spec:
  strategy: # strategy
    type: RollingUpdate # rolling update strategy
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25% 
  replicas: 3
  selector:
    matchLabels:
      app: nginx-pod
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.1

2) Create the deploy and verify

[root@master ~]# kubectl apply -f pc-deployment.yaml 
deployment.apps/pc-deployment unchanged


# open a new window
[root@master ~]# kubectl get pods -n dev -w
NAME                             READY   STATUS    RESTARTS   AGE
pc-deployment-5d89bdfbf9-9vhgf   1/1     Running   0          59s
pc-deployment-5d89bdfbf9-hmtml   1/1     Running   0          61s
pc-deployment-5d89bdfbf9-hmvpz   1/1     Running   0          56s
pc-deployment-7865c58bdf-hj9tv   0/1     Pending   0          0s
pc-deployment-7865c58bdf-hj9tv   0/1     Pending   0          0s
pc-deployment-7865c58bdf-hj9tv   0/1     ContainerCreating   0          0s
pc-deployment-7865c58bdf-hj9tv   1/1     Running             0          41s
pc-deployment-5d89bdfbf9-hmvpz   1/1     Terminating         0          2m20s
pc-deployment-7865c58bdf-fs4rs   0/1     Pending             0          0s
pc-deployment-7865c58bdf-fs4rs   0/1     Pending             0          0s
pc-deployment-7865c58bdf-fs4rs   0/1     ContainerCreating   0          0s
pc-deployment-5d89bdfbf9-hmvpz   0/1     Terminating         0          2m23s
pc-deployment-5d89bdfbf9-hmvpz   0/1     Terminating         0          2m25s
pc-deployment-5d89bdfbf9-hmvpz   0/1     Terminating         0          2m25s
pc-deployment-7865c58bdf-fs4rs   1/1     Running             0          5s
pc-deployment-5d89bdfbf9-9vhgf   1/1     Terminating         0          2m28s
pc-deployment-7865c58bdf-hjbmm   0/1     Pending             0          0s
pc-deployment-7865c58bdf-hjbmm   0/1     Pending             0          0s
pc-deployment-7865c58bdf-hjbmm   0/1     ContainerCreating   0          0s
pc-deployment-7865c58bdf-hjbmm   1/1     Running             0          3s
pc-deployment-5d89bdfbf9-9vhgf   0/1     Terminating         0          2m31s
pc-deployment-5d89bdfbf9-hmtml   1/1     Terminating         0          2m33s
pc-deployment-5d89bdfbf9-9vhgf   0/1     Terminating         0          2m31s
pc-deployment-5d89bdfbf9-hmtml   0/1     Terminating         0          2m35s
pc-deployment-5d89bdfbf9-hmtml   0/1     Terminating         0          2m36s
pc-deployment-5d89bdfbf9-hmtml   0/1     Terminating         0          2m36s
pc-deployment-5d89bdfbf9-9vhgf   0/1     Terminating         0          2m39s
pc-deployment-5d89bdfbf9-9vhgf   0/1     Terminating         0          2m39s


# update the image in the original window
[root@master ~]# kubectl set image deploy pc-deployment nginx=nginx:1.17.3 -n dev
deployment.apps/pc-deployment image updated

The rolling upgrade process
Changes to the rs during the image update:

# check the rs: the original rs still exist, but their pod counts have dropped to 0, and a new rs has been created that now holds the 3 pods
# this is exactly what makes version rollback possible in a deployment, explained in detail below
[root@master ~]# kubectl get rs -n dev
NAME                       DESIRED   CURRENT   READY   AGE
pc-deployment-5d89bdfbf9   0         0         0       17m
pc-deployment-675d469f8b   0         0         0       14m
pc-deployment-7865c58bdf   3         3         3       7m45s

[root@master ~]# kubectl delete -f pc-deployment.yaml 
deployment.apps "pc-deployment" deleted
[root@master ~]# kubectl create -f pc-deployment.yaml --record
deployment.apps/pc-deployment created
[root@master ~]# kubectl get deploy,rs,pod -n dev
NAME                            READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/pc-deployment   3/3     3            3           65s

NAME                                       DESIRED   CURRENT   READY   AGE
replicaset.apps/pc-deployment-5d89bdfbf9   3         3         3       65s

NAME                                 READY   STATUS    RESTARTS   AGE
pod/pc-deployment-5d89bdfbf9-6rdgg   1/1     Running   0          65s
pod/pc-deployment-5d89bdfbf9-mpdh7   1/1     Running   0          65s
pod/pc-deployment-5d89bdfbf9-vhkhj   1/1     Running   0          65s

To see the effect clearly, open two more master windows first

#update the image to create a new revision
[root@master ~]# kubectl set image deploy pc-deployment nginx=nginx:1.17.2 -n dev
deployment.apps/pc-deployment image updated
[root@master ~]# kubectl get rs -n dev
NAME                       DESIRED   CURRENT   READY   AGE
pc-deployment-5d89bdfbf9   0         0         0       7m46s
pc-deployment-675d469f8b   3         3         3       85s

# watch the rs changes in real time
[root@master ~]# kubectl get rs -n dev -w
NAME                       DESIRED   CURRENT   READY   AGE
pc-deployment-5d89bdfbf9   3         3         3       4m47s
pc-deployment-675d469f8b   1         0         0       0s
pc-deployment-675d469f8b   1         0         0       0s
pc-deployment-675d469f8b   1         1         0       0s
pc-deployment-675d469f8b   1         1         1       2s
pc-deployment-5d89bdfbf9   2         3         3       6m23s
pc-deployment-5d89bdfbf9   2         3         3       6m23s
pc-deployment-5d89bdfbf9   2         2         2       6m23s
pc-deployment-675d469f8b   2         1         1       2s
pc-deployment-675d469f8b   2         1         1       2s
pc-deployment-675d469f8b   2         2         1       2s
pc-deployment-675d469f8b   2         2         2       5s
pc-deployment-5d89bdfbf9   1         2         2       6m26s
pc-deployment-675d469f8b   3         2         2       5s
pc-deployment-5d89bdfbf9   1         2         2       6m26s
pc-deployment-675d469f8b   3         2         2       5s
pc-deployment-5d89bdfbf9   1         1         1       6m26s
pc-deployment-675d469f8b   3         3         2       5s
pc-deployment-675d469f8b   3         3         3       8s
pc-deployment-5d89bdfbf9   0         1         1       6m29s
pc-deployment-5d89bdfbf9   0         1         1       6m29s
pc-deployment-5d89bdfbf9   0         0         0       6m29s

# watch the pod changes in real time
[root@master ~]# kubectl get pods -n dev -w
NAME                             READY   STATUS    RESTARTS   AGE
pc-deployment-5d89bdfbf9-6rdgg   1/1     Running   0          4m58s
pc-deployment-5d89bdfbf9-mpdh7   1/1     Running   0          4m58s
pc-deployment-5d89bdfbf9-vhkhj   1/1     Running   0          4m58s
pc-deployment-675d469f8b-nfrqs   0/1     Pending   0          0s
pc-deployment-675d469f8b-nfrqs   0/1     Pending   0          0s
pc-deployment-675d469f8b-nfrqs   0/1     ContainerCreating   0          0s
pc-deployment-675d469f8b-nfrqs   1/1     Running             0          2s
pc-deployment-5d89bdfbf9-6rdgg   1/1     Terminating         0          6m23s
pc-deployment-675d469f8b-jrmhc   0/1     Pending             0          0s
pc-deployment-675d469f8b-jrmhc   0/1     Pending             0          0s
pc-deployment-675d469f8b-jrmhc   0/1     ContainerCreating   0          0s
pc-deployment-5d89bdfbf9-6rdgg   0/1     Terminating         0          6m26s
pc-deployment-675d469f8b-jrmhc   1/1     Running             0          3s
pc-deployment-5d89bdfbf9-vhkhj   1/1     Terminating         0          6m26s
pc-deployment-675d469f8b-xzs4c   0/1     Pending             0          0s
pc-deployment-675d469f8b-xzs4c   0/1     Pending             0          0s
pc-deployment-675d469f8b-xzs4c   0/1     ContainerCreating   0          0s
pc-deployment-5d89bdfbf9-6rdgg   0/1     Terminating         0          6m28s
pc-deployment-5d89bdfbf9-6rdgg   0/1     Terminating         0          6m28s
pc-deployment-675d469f8b-xzs4c   1/1     Running             0          3s
pc-deployment-5d89bdfbf9-vhkhj   0/1     Terminating         0          6m29s
pc-deployment-5d89bdfbf9-mpdh7   1/1     Terminating         0          6m29s
pc-deployment-5d89bdfbf9-vhkhj   0/1     Terminating         0          6m30s
pc-deployment-5d89bdfbf9-vhkhj   0/1     Terminating         0          6m30s
pc-deployment-5d89bdfbf9-mpdh7   0/1     Terminating         0          6m31s
pc-deployment-5d89bdfbf9-mpdh7   0/1     Terminating         0          6m32s
pc-deployment-5d89bdfbf9-mpdh7   0/1     Terminating         0          6m32s

Version rollback
A deployment supports pausing and resuming an upgrade as well as rolling back to a previous version. Let's look at these in detail.
kubectl rollout: version-upgrade related commands, supporting the following subcommands:
●status   show the current rollout status
●history  show the rollout history
●pause    pause the rollout
●resume   resume a paused rollout
●restart  restart the rollout
●undo     roll back to the previous version (use --to-revision to roll back to a specific revision)
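An individual revision can also be inspected in detail; for example (a hedged illustration, the revision number depends on your own history):

#show the pod template recorded for a specific revision
[root@master ~]# kubectl rollout history deploy pc-deployment --revision=1 -n dev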

# check the status of the current rollout
[root@master ~]# kubectl rollout status deploy pc-deployment -n dev
deployment "pc-deployment" successfully rolled out

# check the rollout history
[root@master ~]# kubectl rollout history deploy pc-deployment -n dev
deployment.apps/pc-deployment 
REVISION  CHANGE-CAUSE
1         kubectl create --filename=pc-deployment.yaml --record=true
2         kubectl create --filename=pc-deployment.yaml --record=true
# there are two history records, which means one upgrade has taken place


# check the current version
[root@master ~]# kubectl get deployment -o wide -n dev
NAME            READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES         SELECTOR
pc-deployment   3/3     3            3           24m   nginx        nginx:1.17.3   app=nginx-pod
[root@master ~]# kubectl get deployment,rs  -n dev
NAME                            READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/pc-deployment   3/3     3            3           25m

NAME                                       DESIRED   CURRENT   READY   AGE
replicaset.apps/pc-deployment-5d89bdfbf9   0         0         0       25m
replicaset.apps/pc-deployment-675d469f8b   0         0         0       19m
replicaset.apps/pc-deployment-7865c58bdf   3         3         3       4m19s

# version rollback
# here --to-revision=1 rolls back to revision 1; if the option is omitted, the rollback goes to the previous version, i.e. revision 2
[root@master ~]# kubectl rollout undo deployment pc-deployment --to-revision=1 -n dev
deployment.apps/pc-deployment rolled back
[root@master ~]# kubectl get deployment  -n dev -o wide
NAME            READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES         SELECTOR
pc-deployment   3/3     3            3           29m   nginx        nginx:1.17.1   app=nginx-pod
[root@master ~]# kubectl get deployment,rs  -n dev
NAME                            READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/pc-deployment   3/3     3            3           30m
NAME                                       DESIRED   CURRENT   READY   AGE
replicaset.apps/pc-deployment-5d89bdfbf9   3         3         3       30m
replicaset.apps/pc-deployment-675d469f8b   0         0         0       23m
replicaset.apps/pc-deployment-7865c58bdf   0         0         0       9m1s

# check the history
[root@master ~]# kubectl rollout history deploy pc-deployment -n dev
deployment.apps/pc-deployment 
REVISION  CHANGE-CAUSE
2         kubectl create --filename=pc-deployment.yaml --record=true
3         kubectl create --filename=pc-deployment.yaml --record=true
4         kubectl create --filename=pc-deployment.yaml --record=true

# check the rs: the first rs now has 3 running pods, while the rs of the later two versions have no pods running
# this is exactly how a deployment implements version rollback: it keeps the historical rs around,
# and to roll back it simply scales the current version's pods down to 0 and scales the target version's pods up to the desired count
[root@master ~]# kubectl get rs -n dev
NAME                       DESIRED   CURRENT   READY   AGE
pc-deployment-5d89bdfbf9   3         3         3       31m
pc-deployment-675d469f8b   0         0         0       25m
pc-deployment-7865c58bdf   0         0         0       10m

Canary release
Deployment supports controlling the update process, for example "pause" and "resume" operations.
For instance, pause the update right after the first batch of new Pod resources is created. At that point only a small part of the application runs the new version while the bulk of it still runs the old one. Then route a small portion of user requests to the new-version Pods and observe whether they run stably as expected. If everything looks good, continue rolling out the remaining Pods; otherwise roll back immediately. This is what is known as a canary release.

# update the deployment's image and immediately pause the deployment
[root@master ~]# kubectl set image deploy pc-deployment nginx=nginx:1.17.4 -n dev && kubectl rollout pause deployment pc-deployment -n dev
deployment.apps/pc-deployment image updated
deployment.apps/pc-deployment paused
[root@master ~]# kubectl get deployment,rs -n dev
NAME                            READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/pc-deployment   3/3     1            3           37m

NAME                                       DESIRED   CURRENT   READY   AGE
replicaset.apps/pc-deployment-5d89bdfbf9   3         3         3       37m
replicaset.apps/pc-deployment-675d469f8b   0         0         0       31m
replicaset.apps/pc-deployment-6c9f56fcfb   1         1         0       14s
replicaset.apps/pc-deployment-7865c58bdf   0         0         0       16m

# observe the rollout status
[root@master ~]# kubectl rollout status deployment pc-deployment -n dev
Waiting for deployment "pc-deployment" rollout to finish: 1 out of 3 new replicas have been updated...

# once you are sure the updated pod is fine, resume the rollout
[root@master ~]# kubectl rollout resume deployment pc-deployment -n dev
deployment.apps/pc-deployment resumed

[root@master ~]# kubectl get rs -n dev -o wide
NAME                       DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES         SELECTOR
pc-deployment-5d89bdfbf9   0         0         0       42m   nginx        nginx:1.17.1   app=nginx-pod,pod-template-hash=5d89bdfbf9
pc-deployment-675d469f8b   0         0         0       35m   nginx        nginx:1.17.2   app=nginx-pod,pod-template-hash=675d469f8b
pc-deployment-6c9f56fcfb   3         3         3       5m    nginx        nginx:1.17.4   app=nginx-pod,pod-template-hash=6c9f56fcfb
pc-deployment-7865c58bdf   0         0         0       21m   nginx        nginx:1.17.3   app=nginx-pod,pod-template-hash=7865c58bdf

[root@master ~]# kubectl get pods -n dev
NAME                             READY   STATUS    RESTARTS   AGE
pc-deployment-6c9f56fcfb-q5h8v   1/1     Running   0          2m
pc-deployment-6c9f56fcfb-qsjxj   1/1     Running   0          5m25s
pc-deployment-6c9f56fcfb-zxfsc   1/1     Running   0          2m3s

Deleting a Deployment

# deleting the deployment also deletes its rs and pods
[root@master ~]# kubectl delete -f pc-deployment.yaml 
deployment.apps "pc-deployment" deleted

6.4 Horizontal Pod Autoscaler(HPA)

In the previous sections we scaled Pods by manually running kubectl scale, which clearly does not fit Kubernetes' goal of automation and intelligence. Kubernetes would rather monitor Pod usage and adjust the pod count automatically, which is what the HPA controller was created for.
The HPA obtains the utilization of each pod, compares it against the target defined in the HPA, computes how much to scale, and adjusts the number of pods accordingly. Like the Deployment, the HPA is itself a Kubernetes resource object: it tracks and analyses the load of the target pods to decide whether their replica count needs to be adjusted.
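The scaling rule the HPA applies is essentially proportional: it compares the current metric value with the target and computes

desiredReplicas = ceil( currentReplicas x currentMetricValue / desiredMetricValue )

For example, with a CPU target of 3% and pods currently averaging 9%, a single replica would be scaled up to ceil(1 x 9 / 3) = 3 replicas.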
1. Install metrics-server
metrics-server collects resource usage data from the cluster

# install git
[root@master ~]# yum install git -y
# fetch metrics-server; note the version being used
[root@master ~]# git clone -b v0.3.6 https://github.com/kubernetes-incubator/metrics-server
[root@master ~]# ls metrics-server/
cmd                 CONTRIBUTING.md  Gopkg.lock  hack     Makefile  OWNERS_ALIASES  README.md          vendor
code-of-conduct.md  deploy           Gopkg.toml  LICENSE  OWNERS    pkg             SECURITY_CONTACTS  version

# modify the deployment; note that what changes are the image and the startup arguments
[root@master ~]# cd metrics-server/deploy/1.8+/
[root@master 1.8+]# vim metrics-server-deployment.yaml 

hostNetwork: true
image: registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-server-amd64:v0.3.6
args:
- --kubelet-insecure-tls
- --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP


# install metrics-server
[root@master 1.8+]# kubectl apply -f ./
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
serviceaccount/metrics-server created
deployment.apps/metrics-server created
service/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created

# check the started pod
[root@master 1.8+]# kubectl get pods -n kube-system
NAME                              READY   STATUS    RESTARTS   AGE
metrics-server-6b976979db-fjh6q   1/1     Running   0          81s

# use kubectl top nodes to check resource usage (wait a short while before running it)
[root@master 1.8+]# kubectl top nodes
NAME     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
master   328m         16%    1099Mi          58%       
node1    40m          2%     682Mi           36%       
node2    77m          3%     350Mi           18%   

[root@master 1.8+]# kubectl top pod -n kube-system
NAME                              CPU(cores)   MEMORY(bytes)   
coredns-6955765f44-jmr9b          3m           8Mi             
coredns-6955765f44-wrzpn          3m           10Mi            
etcd-master                       23m          135Mi           
kube-apiserver-master             43m          304Mi           
kube-controller-manager-master    26m          43Mi            
kube-flannel-ds-amd64-ltfjq       4m           8Mi             
kube-flannel-ds-amd64-xqrqj       4m           11Mi            
kube-proxy-fdp9p                  1m           14Mi            
kube-proxy-lqxxn                  1m           29Mi            
kube-proxy-w7xwm                  2m           15Mi            
kube-scheduler-master             5m           15Mi            
metrics-server-6b976979db-fjh6q   1m           11Mi   

# at this point, metrics-server is installed

2. Prepare a deployment and a service
To keep things simple, create them directly with commands

# create the deployment
[root@master 1.8+]# kubectl run nginx --image=nginx:1.17.1 --requests=cpu=100m -n dev

# create the service (--type=NodePort makes it reachable from outside the cluster)
[root@master 1.8+]# kubectl expose deployment nginx --type=NodePort --port=80 -n dev

# check the created resources
[root@master 1.8+]# kubectl get deploy,pod,svc -n dev
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx    1/1     1            1           2m6s

NAME                          READY   STATUS    RESTARTS   AGE
pod/nginx-778cb5fb7b-vhlfn    1/1     Running   0          2m6s

NAME            TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
service/nginx   NodePort   10.99.6.142   <none>        80:30135/TCP   6s

# the service can now be accessed at the master's IP on port 30135
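The --requests flag has been removed from kubectl run in newer releases, so if your client rejects the command above, an equivalent (minimal, illustrative) declarative manifest that sets the same CPU request looks roughly like this; the CPU request matters because the HPA's CPU utilization percentage is measured against it:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.1
        resources:
          requests:
            cpu: 100m  # the request that targetCPUUtilizationPercentage is calculated against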

3. Deploy the HPA

Create pc-hpa.yaml

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: pc-hpa
  namespace: dev
spec:
  minReplicas: 1 # minimum number of pods
  maxReplicas: 10 # maximum number of pods
  targetCPUUtilizationPercentage: 3 # target CPU utilization percentage
  scaleTargetRef: # the nginx resource to be scaled
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
# create the hpa
[root@master ~]# kubectl create -f pc-hpa.yaml 
horizontalpodautoscaler.autoscaling/pc-hpa created
# check the hpa
[root@master ~]# kubectl get hpa -n dev
NAME     REFERENCE          TARGETS        MINPODS   MAXPODS   REPLICAS   AGE
pc-hpa   Deployment/nginx   <unknown>/3%   1         10        0          8s
[root@master ~]# kubectl get hpa -n dev
NAME     REFERENCE          TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
pc-hpa   Deployment/nginx   0%/3%     1         10        1          94s

To make the effect obvious, open three more master windows to watch the changes in real time:

one window watching the deployment,

one window watching the pods,

one window watching the hpa.

Then run a load test with ab:

# install the ab load-testing tool
[root@node2 ~]# yum install httpd-tools -y
[root@node2 ~]# ab -n 100000 -c 1000 http://10.0.0.103:30135/
[root@master ~]# kubectl get deploy -n dev -w
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
nginx    1/1     1            1           55m
taint1   1/1     1            1           12d
taint2   1/1     1            1           12d
taint3   1/1     1            1           12d
nginx    1/4     1            1           79m
nginx    1/4     1            1           79m
nginx    1/4     1            1           79m
nginx    1/4     4            1           79m
nginx    2/4     4            2           80m
nginx    3/4     4            3           80m
nginx    4/4     4            4           80m
nginx    4/6     4            4           80m
nginx    4/6     4            4           80m
nginx    4/6     4            4           80m
nginx    4/6     6            4           80m
nginx    5/6     6            5           80m
nginx    6/6     6            6           80m
nginx    6/10    6            6           80m
nginx    6/10    6            6           80m
nginx    6/10    6            6           80m
nginx    6/10    10           6           80m
nginx    7/10    10           7           80m
nginx    8/10    10           8           80m
nginx    9/10    10           9           80m
nginx    10/10   10           10          80m
nginx    10/1    10           10          86m
nginx    10/1    10           10          86m
nginx    1/1     1            1           86m

[root@master ~]# kubectl get pods -n dev -w
NAME                      READY   STATUS    RESTARTS   AGE
nginx-778cb5fb7b-vhlfn    1/1     Running   0          55m
pod-toleration            1/1     Running   0          12d
taint1-766c47bf55-rxzbx   1/1     Running   0          12d
taint2-84946958cf-5m6nd   1/1     Running   0          12d
taint3-57d45f9d4c-s8xbv   1/1     Running   0          12d
nginx-778cb5fb7b-47n2f    0/1     Pending   0          0s
nginx-778cb5fb7b-frbst    0/1     Pending   0          0s
nginx-778cb5fb7b-47n2f    0/1     Pending   0          0s
nginx-778cb5fb7b-km65w    0/1     Pending   0          0s
nginx-778cb5fb7b-frbst    0/1     Pending   0          0s
nginx-778cb5fb7b-km65w    0/1     Pending   0          0s
nginx-778cb5fb7b-47n2f    0/1     ContainerCreating   0          0s
nginx-778cb5fb7b-frbst    0/1     ContainerCreating   0          0s
nginx-778cb5fb7b-km65w    0/1     ContainerCreating   0          0s
nginx-778cb5fb7b-frbst    1/1     Running             0          13s
nginx-778cb5fb7b-km65w    1/1     Running             0          13s
nginx-778cb5fb7b-47n2f    1/1     Running             0          13s
nginx-778cb5fb7b-2vkqj    0/1     Pending             0          0s
nginx-778cb5fb7b-9mr68    0/1     Pending             0          0s
nginx-778cb5fb7b-2vkqj    0/1     Pending             0          0s
nginx-778cb5fb7b-9mr68    0/1     Pending             0          0s
nginx-778cb5fb7b-2vkqj    0/1     ContainerCreating   0          0s
nginx-778cb5fb7b-9mr68    0/1     ContainerCreating   0          0s
nginx-778cb5fb7b-2vkqj    1/1     Running             0          9s
nginx-778cb5fb7b-9mr68    1/1     Running             0          10s
nginx-778cb5fb7b-9hw7f    0/1     Pending             0          0s
nginx-778cb5fb7b-jr5qk    0/1     Pending             0          0s
nginx-778cb5fb7b-9hw7f    0/1     Pending             0          0s
nginx-778cb5fb7b-nvzbz    0/1     Pending             0          0s
nginx-778cb5fb7b-jr5qk    0/1     Pending             0          0s
nginx-778cb5fb7b-4r5hl    0/1     Pending             0          0s
nginx-778cb5fb7b-nvzbz    0/1     Pending             0          0s
nginx-778cb5fb7b-9hw7f    0/1     ContainerCreating   0          0s
nginx-778cb5fb7b-4r5hl    0/1     Pending             0          0s
nginx-778cb5fb7b-jr5qk    0/1     ContainerCreating   0          1s
nginx-778cb5fb7b-nvzbz    0/1     ContainerCreating   0          1s
nginx-778cb5fb7b-4r5hl    0/1     ContainerCreating   0          1s
nginx-778cb5fb7b-jr5qk    1/1     Running             0          6s
nginx-778cb5fb7b-4r5hl    1/1     Running             0          6s
nginx-778cb5fb7b-9hw7f    1/1     Running             0          6s
nginx-778cb5fb7b-nvzbz    1/1     Running             0          6s
nginx-778cb5fb7b-47n2f    1/1     Terminating         0          6m35s
nginx-778cb5fb7b-jr5qk    1/1     Terminating         0          5m50s
nginx-778cb5fb7b-9hw7f    1/1     Terminating         0          5m50s
nginx-778cb5fb7b-nvzbz    1/1     Terminating         0          5m50s
nginx-778cb5fb7b-9mr68    1/1     Terminating         0          6m20s
nginx-778cb5fb7b-km65w    1/1     Terminating         0          6m35s
nginx-778cb5fb7b-frbst    1/1     Terminating         0          6m35s
nginx-778cb5fb7b-4r5hl    1/1     Terminating         0          5m50s
nginx-778cb5fb7b-2vkqj    1/1     Terminating         0          6m20s
nginx-778cb5fb7b-jr5qk    0/1     Terminating         0          5m55s
nginx-778cb5fb7b-4r5hl    0/1     Terminating         0          5m55s
nginx-778cb5fb7b-nvzbz    0/1     Terminating         0          5m55s
nginx-778cb5fb7b-9hw7f    0/1     Terminating         0          5m55s
nginx-778cb5fb7b-frbst    0/1     Terminating         0          6m40s
nginx-778cb5fb7b-2vkqj    0/1     Terminating         0          6m25s
nginx-778cb5fb7b-47n2f    0/1     Terminating         0          6m41s
nginx-778cb5fb7b-nvzbz    0/1     Terminating         0          5m56s
nginx-778cb5fb7b-nvzbz    0/1     Terminating         0          5m56s
nginx-778cb5fb7b-jr5qk    0/1     Terminating         0          5m57s
nginx-778cb5fb7b-47n2f    0/1     Terminating         0          6m42s
nginx-778cb5fb7b-km65w    0/1     Terminating         0          6m43s
nginx-778cb5fb7b-9hw7f    0/1     Terminating         0          5m58s
nginx-778cb5fb7b-9hw7f    0/1     Terminating         0          5m59s
nginx-778cb5fb7b-9hw7f    0/1     Terminating         0          5m59s
nginx-778cb5fb7b-frbst    0/1     Terminating         0          6m44s
nginx-778cb5fb7b-frbst    0/1     Terminating         0          6m44s
nginx-778cb5fb7b-frbst    0/1     Terminating         0          6m44s
nginx-778cb5fb7b-9mr68    0/1     Terminating         0          6m30s
nginx-778cb5fb7b-9mr68    0/1     Terminating         0          6m30s
nginx-778cb5fb7b-9mr68    0/1     Terminating         0          6m30s
nginx-778cb5fb7b-4r5hl    0/1     Terminating         0          6m
nginx-778cb5fb7b-4r5hl    0/1     Terminating         0          6m
nginx-778cb5fb7b-4r5hl    0/1     Terminating         0          6m
nginx-778cb5fb7b-2vkqj    0/1     Terminating         0          6m31s
nginx-778cb5fb7b-2vkqj    0/1     Terminating         0          6m31s
nginx-778cb5fb7b-2vkqj    0/1     Terminating         0          6m31s
nginx-778cb5fb7b-km65w    0/1     Terminating         0          6m47s
nginx-778cb5fb7b-km65w    0/1     Terminating         0          6m47s
nginx-778cb5fb7b-47n2f    0/1     Terminating         0          6m50s
nginx-778cb5fb7b-47n2f    0/1     Terminating         0          6m50s
nginx-778cb5fb7b-jr5qk    0/1     Terminating         0          6m5s
nginx-778cb5fb7b-jr5qk    0/1     Terminating         0          6m5s
[root@master ~]# kubectl get hpa -n dev -w
NAME     REFERENCE          TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
pc-hpa   Deployment/nginx   0%/3%     1         10        1          3m52s
pc-hpa   Deployment/nginx   0%/3%     1         10        1          5m20s
pc-hpa   Deployment/nginx   1%/3%     1         10        1          18m
pc-hpa   Deployment/nginx   0%/3%     1         10        1          19m
pc-hpa   Deployment/nginx   0%/3%     1         10        1          24m
pc-hpa   Deployment/nginx   17%/3%    1         10        1          27m
pc-hpa   Deployment/nginx   17%/3%    1         10        4          27m
pc-hpa   Deployment/nginx   17%/3%    1         10        6          27m
pc-hpa   Deployment/nginx   139%/3%   1         10        6          28m
pc-hpa   Deployment/nginx   139%/3%   1         10        10         28m
pc-hpa   Deployment/nginx   0%/3%     1         10        10         29m
pc-hpa   Deployment/nginx   0%/3%     1         10        10         33m
pc-hpa   Deployment/nginx   0%/3%     1         10        1          34m

6.5 DaemonSet(DS)

A DaemonSet controller guarantees that one replica runs on every node (or every specified node) in the cluster, which makes it a good fit for log collection, node monitoring and similar scenarios. In other words, if a pod provides node-level functionality (every node needs exactly one), that kind of Pod is best created with a DaemonSet controller.
Characteristics of the DaemonSet controller:
●whenever a node is added to the cluster, a pod replica is added to that node as well
●when a node is removed from the cluster, its pod is garbage collected

First, the DaemonSet resource manifest:

apiVersion: apps/v1 # API version
kind: DaemonSet # resource type
metadata: # metadata
  name: # ds name
  namespace: # namespace it belongs to
  labels: # labels
    controller: daemonset
spec: # detailed specification
  revisionHistoryLimit: 3 # number of history revisions to keep
  updateStrategy: # update strategy
    type: RollingUpdate # rolling update strategy
    rollingUpdate: # rolling update parameters
      maxUnavailable: 1 # maximum number of unavailable Pods, as a percentage or an integer
  selector: # selector; specifies which pods this controller manages
    matchLabels: # Labels matching rules
      app: nginx-pod
    matchExpressions: # Expressions matching rules
      - {key: app, operator: In, values: [nginx-pod]}
  template: # template used to create pod replicas when needed
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.1
        ports:
        - containerPort: 80
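If the daemon pods should only run on a subset of nodes (the "specified nodes" case mentioned at the start of this section), the usual approach is a nodeSelector in the pod template. A minimal sketch, assuming you have labelled the target nodes yourself with something like node-role=logging:

  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      nodeSelector:        # only schedule the daemon pod on nodes carrying this label
        node-role: logging
      containers:
      - name: nginx
        image: nginx:1.17.1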

Create pc-daemonset.yaml with the following content:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: pc-daemonset
  namespace: dev
spec:
  selector:
    matchLabels:
      app: nginx-pod
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.1

# create the daemonset
[root@master ~]# kubectl create -f pc-daemonset.yaml
# check the daemonset
[root@master ~]# kubectl get ds pc-daemonset -n dev
NAME           DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
pc-daemonset   2         2         2       2            2           <none>          10m

[root@master ~]# kubectl get pods -n dev -o wide
# (if there were a node3, a pod would immediately start on node3 as well)

# delete the daemonset
[root@master ~]# kubectl delete -f pc-daemonset.yaml 
daemonset.apps "pc-daemonset" deleted

6.6 Job

A Job is mainly responsible for batch processing (processing a specified number of tasks in one go) of short-lived, one-off tasks (each task runs only once and then finishes). Its characteristics:
●when a pod created by the Job completes successfully, the Job records the number of successfully finished pods
●when the number of successfully finished pods reaches the specified count, the Job is complete

apiVersion: batch/v1 # API version
kind: Job # resource type
metadata: # metadata
  name: # job name
  namespace: # namespace it belongs to
  labels: # labels
    controller: job
spec: # detailed specification
  completions: 1 # number of times the job's Pods must run successfully; default 1
  parallelism: 1 # number of Pods the job should run concurrently at any time; default 1
  activeDeadlineSeconds: 30 # time limit for the job; if it has not finished by then, the system tries to terminate it
  backoffLimit: 6 # number of retries after the job fails; default 6
  manualSelector: true # whether a selector can be used to select pods; default false
  selector: # selector; specifies which pods this controller manages
    matchLabels: # Labels matching rules
      app: counter-pod
    matchExpressions: # Expressions matching rules
      - {key: app, operator: In, values: [counter-pod]}
  template: # template used to create pod replicas when needed
    metadata:
      labels:
        app: counter-pod
    spec:
      restartPolicy: Never # the restart policy can only be Never or OnFailure
      containers:
      - name: counter
        image: busybox:1.30
        command: [ "/bin/sh", "-c", "for i in 9 8 7 6 5 4 3 2 1; do echo $i; sleep 2; done" ]

Notes on the restart policy:
If it is set to OnFailure, the job restarts the container when the pod fails instead of creating a new pod, and the failed count does not change.
If it is set to Never, the job creates a new pod when the pod fails; the failed pod is neither removed nor restarted, and the failed count increases by 1.
Always is not allowed: it would keep restarting the container, meaning the job's task would run over and over again, which is obviously wrong.

Create pc-job.yaml with the following content:

apiVersion: batch/v1
kind: Job
metadata:
  name: pc-job
  namespace: dev
spec:
  manualSelector: true
  selector:
    matchLabels:
      app: counter-pod
  template:
    metadata:
      labels:
        app: counter-pod
    spec:
      restartPolicy: Never
      containers:
      - name: counter
        image: busybox:1.30
        command: ["bin/sh","-c","for i in 9 8 7 6 5 4 3 2 1; do echo $i;sleep 3 ;done" ]

# create the job
[root@master ~]# kubectl create -f pc-job.yaml 
job.batch/pc-job created

To see the effect clearly, open two more master windows, one watching the job and one watching the pods

[root@master ~]# kubectl get job -n dev -w
NAME     COMPLETIONS   DURATION   AGE
pc-job   0/1                      0s
pc-job   0/1           0s         0s
pc-job   1/1           30s        30s

[root@master ~]# kubectl get pod -n dev -w
NAME           READY   STATUS    RESTARTS   AGE
pc-job-rl6qq   0/1     Pending   0          0s
pc-job-rl6qq   0/1     Pending   0          0s
pc-job-rl6qq   0/1     ContainerCreating   0          0s
pc-job-rl6qq   1/1     Running             0          3s
pc-job-rl6qq   0/1     Completed           0          30s

Testing the completions and parallelism parameters

# delete the job first
[root@master ~]# kubectl delete -f pc-job.yaml 
job.batch "pc-job" deleted

# modify the configuration in pc-job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pc-job
  namespace: dev
spec:
  manualSelector: true
  completions: 6 # number of times the job's Pods must run successfully; default 1
  parallelism: 3 # number of Pods the job should run concurrently at any time; default 1
  selector:
    matchLabels:
      app: counter-pod
  template:
    metadata:
      labels:
        app: counter-pod
    spec:
      restartPolicy: Never
      containers:
      - name: counter
        image: busybox:1.30
        command: ["bin/sh","-c","for i in 9 8 7 6 5 4 3 2 1; do echo $i;sleep 3 ;done" ]

# window watching the job (open this first)
[root@master ~]# kubectl get job -n dev -w
NAME     COMPLETIONS   DURATION   AGE
pc-job   0/6                      0s
pc-job   0/6           0s         0s
pc-job   1/6           30s        30s
pc-job   2/6           30s        30s
pc-job   3/6           31s        31s
pc-job   4/6           61s        61s
pc-job   5/6           61s        61s
pc-job   6/6           61s        61s

# next, adjust the total number of pods to run and the parallelism, i.e. set these two options under spec:
# completions: 6  -- the job must run Pods to success 6 times
# parallelism: 3  -- the job runs 3 Pods concurrently
# then re-run the job and observe: the job now runs 3 pods at a time, 6 pods in total

# window watching the pods (open this first)
[root@master ~]# kubectl get pod -n dev -w
NAME           READY   STATUS    RESTARTS   AGE
pc-job-7wxg7   0/1     Pending   0          0s
pc-job-ktg88   0/1     Pending   0          0s
pc-job-mj987   0/1     Pending   0          0s
pc-job-7wxg7   0/1     Pending   0          0s
pc-job-ktg88   0/1     Pending   0          0s
pc-job-mj987   0/1     Pending   0          0s
pc-job-7wxg7   0/1     ContainerCreating   0          0s
pc-job-ktg88   0/1     ContainerCreating   0          0s
pc-job-mj987   0/1     ContainerCreating   0          0s
pc-job-7wxg7   1/1     Running             0          3s
pc-job-ktg88   1/1     Running             0          3s
pc-job-mj987   1/1     Running             0          3s
pc-job-7wxg7   0/1     Completed           0          30s
pc-job-7nrvk   0/1     Pending             0          0s
pc-job-7nrvk   0/1     Pending             0          0s
pc-job-ktg88   0/1     Completed           0          30s
pc-job-fk6ml   0/1     Pending             0          0s
pc-job-fk6ml   0/1     Pending             0          0s
pc-job-7nrvk   0/1     ContainerCreating   0          0s
pc-job-fk6ml   0/1     ContainerCreating   0          0s
pc-job-mj987   0/1     Completed           0          31s
pc-job-dw8hp   0/1     Pending             0          0s
pc-job-dw8hp   0/1     Pending             0          0s
pc-job-dw8hp   0/1     ContainerCreating   0          0s
pc-job-fk6ml   1/1     Running             0          3s
pc-job-7nrvk   1/1     Running             0          3s
pc-job-dw8hp   1/1     Running             0          2s
pc-job-fk6ml   0/1     Completed           0          30s
pc-job-7nrvk   0/1     Completed           0          31s
pc-job-dw8hp   0/1     Completed           0          30s

# create the new job (run this after the watch windows are open)
[root@master ~]# kubectl create -f pc-job.yaml 
job.batch/pc-job created

# delete the job
[root@master ~]# kubectl delete -f pc-job.yaml 
job.batch "pc-job" deleted

6.7 CronJob(cj)

A CronJob controller manages Job controller resources and, through them, pod resources. A job defined directly by a Job controller runs as soon as its controller resource is created, whereas a CronJob controls the time at which its jobs run and lets them repeat, much like the periodic task schedules of a Linux operating system. In other words, a CronJob can run a job task (repeatedly) at specific points in time.
A CronJob resource manifest:

apiVersion: batch/v1beta1 # API version
kind: CronJob # resource type
metadata: # metadata
  name: # cronjob name
  namespace: # namespace it belongs to
  labels: # labels
    controller: cronjob
spec: # detailed specification
  schedule: # cron-format schedule that controls when the task runs
  concurrencyPolicy: # concurrency policy: whether and how to run the next job if the previous one has not finished
  failedJobsHistoryLimit: # number of failed job runs to keep in history, default 1
  successfulJobsHistoryLimit: # number of successful job runs to keep in history, default 3
  startingDeadlineSeconds: # deadline in seconds for starting a job that missed its scheduled time
  jobTemplate: # job controller template used by the cronjob controller to generate job objects; below is simply a job definition
    metadata:
    spec:
      completions: 1
      parallelism: 1
      activeDeadlineSeconds: 30
      backoffLimit: 6
      manualSelector: true
      selector:
        matchLabels:
          app: counter-pod
        matchExpressions: # Expressions matching rules
          - {key: app, operator: In, values: [counter-pod]}
      template:
        metadata:
          labels:
            app: counter-pod
        spec:
          restartPolicy: Never
          containers:
          - name: counter
            image: busybox:1.30
            command: [ "bin/sh","-c","for i in 9 8 7 6 5 4 3 2 1; do echo $i;sleep 2;done" ]

Options that deserve extra explanation:
schedule: a cron expression that specifies when the task runs
    */1      *      *     *       *
  <minute> <hour> <day> <month> <day of week>
  minute: 0-59
  hour: 0-23
  day: 1-31
  month: 1-12
  day of week: 0-6, where 0 is Sunday
  Multiple values can be separated by commas; ranges can be given with a hyphen; * is a wildcard; / means "every ..."
concurrencyPolicy:
  Allow:   allow Jobs to run concurrently (default)
  Forbid:  forbid concurrent runs; if the previous run has not finished, skip the next one
  Replace: cancel the currently running job and replace it with the new one
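A few illustrative schedule values in this format:

*/1 * * * *    # every minute (used in the example below)
0 2 * * *      # every day at 02:00
0 0 * * 0      # every Sunday at midnight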

Create pc-crjob.yaml

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: pc-cronjob
  namespace: dev
  labels:
    controller: cronjob
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    metadata:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: counter
            image: busybox:1.30
            command: ["bin/sh","-c","for i in 9 8 7 6 5 4 3 2 1; do echo $i;sleep 3 ;done" ]

Open a window watching the cj

[root@master ~]# kubectl get cj -n dev -w
NAME         SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
pc-cronjob   */1 * * * *   False     0        <none>          0s
pc-cronjob   */1 * * * *   False     1        6s              32s
pc-cronjob   */1 * * * *   False     0        36s             62s
pc-cronjob   */1 * * * *   False     1        6s              92s
pc-cronjob   */1 * * * *   False     0        36s             2m2s
pc-cronjob   */1 * * * *   False     0        65s             2m31s

Open a window watching the pods

[root@master ~]# kubectl get pod -n dev -w
NAME                          READY   STATUS    RESTARTS   AGE
pc-cronjob-1619249520-vxwx7   0/1     Pending   0          0s
pc-cronjob-1619249520-vxwx7   0/1     Pending   0          0s
pc-cronjob-1619249520-vxwx7   0/1     ContainerCreating   0          0s
pc-cronjob-1619249520-vxwx7   1/1     Running             0          2s
pc-cronjob-1619249520-vxwx7   0/1     Completed           0          29s
pc-cronjob-1619249580-9lmr2   0/1     Pending             0          0s
pc-cronjob-1619249580-9lmr2   0/1     Pending             0          0s
pc-cronjob-1619249580-9lmr2   0/1     ContainerCreating   0          0s
pc-cronjob-1619249580-9lmr2   1/1     Running             0          2s
pc-cronjob-1619249580-9lmr2   0/1     Completed           0          29s
pc-cronjob-1619249520-vxwx7   0/1     Terminating         0          119s
pc-cronjob-1619249580-9lmr2   0/1     Terminating         0          59s
pc-cronjob-1619249580-9lmr2   0/1     Terminating         0          59s
pc-cronjob-1619249520-vxwx7   0/1     Terminating         0          119s

Open a window watching the jobs

[root@master ~]# kubectl get job -n dev -w
NAME                    COMPLETIONS   DURATION   AGE
pc-cronjob-1619249520   0/1                      0s
pc-cronjob-1619249520   0/1           0s         0s
pc-cronjob-1619249520   1/1           29s        29s
pc-cronjob-1619249580   0/1                      0s
pc-cronjob-1619249580   0/1           0s         0s
pc-cronjob-1619249580   1/1           29s        29s
pc-cronjob-1619249580   1/1           29s        59s
pc-cronjob-1619249520   1/1           29s        119s

# create the cronjob
[root@master ~]# kubectl create -f pc-crjob.yaml 
cronjob.batch/pc-cronjob created

# delete the cronjob
[root@master ~]# kubectl delete -f pc-crjob.yaml 
cronjob.batch "pc-cronjob" deleted
