【Kubernetes 008】Differences Between Controller Types, with Hands-on Walkthroughs (RS, Deployment, DaemonSet, Job, CronJob)

The pod is the basic building block of k8s, and we have already created one in an earlier article. In production, however, you rarely create pods by hand; instead you create controllers that manage pods, so that pods are created and maintained automatically in batches. In this article we look at the controllers k8s offers and walk through the basic operations for each.

I am T-shaped Xiaofu, an internet practitioner committed to lifelong learning. If you like my blog, follow me on CSDN, and feel free to discuss in the comments below. Thanks.

Controller Types

k8s provides the following kinds of controllers:

  • ReplicationController (deprecated) and ReplicaSet
  • Deployment
  • DaemonSet
  • StatefulSet
  • Job/CronJob
  • Horizontal Pod Autoscaling

Each is described in detail below.

ReplicationController and ReplicaSet

RC and RS serve the same purpose: keep the number of pod replicas at the desired level. The difference is that an RS can do set-based selection over pod labels, which an RC cannot, so RS has superseded RC.
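
For example, an RS selector can do set-based matching with matchExpressions (a snippet for illustration only; the key and values here are made up):

selector:
  matchExpressions:
    - key: app
      operator: In              # also supported: NotIn, Exists, DoesNotExist
      values: ['frontend', 'backend']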

Deployment

A Deployment provides a declarative way to manage pods and RSs, replacing the older RC-based approach to application management.

Imperative programming: tell the program, step by step, which commands to execute
Declarative programming: declare only the desired result, without giving concrete steps, and let the computer figure out how to get there
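
A quick illustration of the two styles with kubectl (a sketch; the names here are placeholders):

# Imperative: issue a concrete command for each step
kubectl run my-nginx --image=nginx

# Declarative: describe the desired state in a file and let k8s converge to it
kubectl apply -f my-deployment.yaml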

Typical use cases:

  • Define a Deployment to create pods and an RS
  • Rolling upgrades and rollbacks of applications
    An upgrade creates a new RS that spins up new pods while the old RS is stopped; a rollback restarts the old RS and stops the new one.
  • Scaling out and in
  • Pausing and resuming a Deployment

When either an RS or a Deployment would do, prefer the Deployment.

DaemonSet

A DaemonSet ensures that all (or some) nodes run one copy of a pod. When a new node joins the cluster, a pod replica is created on that node as well.

Typical use cases:

  • Running a cluster storage daemon, e.g. glusterd, ceph
  • Running a log collection daemon, e.g. fluentd, logstash
  • Running a monitoring daemon, e.g. Prometheus Node Exporter, collectd

Job

A Job runs a one-off task, such as a script, inside pods and ensures those pods terminate successfully.

CronJob

Similar to cron on Linux; used to run tasks at a given point in time or on a recurring schedule.

StatefulSet

A StatefulSet gives each pod a unique, stable identity and guarantees the order of deployment and scaling. In contrast to the stateless services an RS manages, a StatefulSet exists to run stateful services. Typical use cases:

  • Stable persistent storage: after a pod is rescheduled it still reaches exactly the same storage
  • Stable network identity: after a pod is rescheduled its pod name and hostname stay the same
  • Ordered deployment and scaling, from 0 to N-1, implemented with the init containers discussed earlier
  • Ordered shrinking and deletion, from N-1 to 0: the last pod started is stopped first, avoiding errors

HPA

HPA (Horizontal Pod Autoscaling) is something of a companion to the controllers above: driven by metrics such as CPU utilization, it adjusts the controllers mentioned earlier to scale automatically. HPA does not control pods directly.
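
For a quick taste, an HPA can be created imperatively (a sketch only; it assumes a Deployment named test-deployment and a working metrics pipeline such as metrics-server):

kubectl autoscale deployment test-deployment --min=2 --max=5 --cpu-percent=80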

Hands-on Practice

All the YAML files below are hosted on GitHub:
https://github.com/Victor2Code/centos-k8s-init/tree/master/test%20yaml

RS in Practice

Just as with the pod we created earlier, an RS is defined through a YAML file. A detailed explanation of every field is available via kubectl explain rs or kubectl explain ReplicaSet.

A few important fields, summarized:

| Field | Type | Description |
| --- | --- | --- |
| apiVersion | string | extensions/v1beta1 |
| kind | string | ReplicaSet |
| spec | object | |
| spec.replicas | integer | Desired replica count, defaults to 1 |
| spec.selector | object | Selection criteria for pods |
| spec.selector.matchLabels | object | |
| spec.template | object | Describes the pods managed by the RS |
| spec.template.metadata | object | Same as a pod definition, but note the labels must match the selector above |
| spec.template.spec | object | Same as a pod definition |

As a supplement, the apiVersion reference manual: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.15/

Alternatively, run kubectl explain xx to find out what to put in apiVersion.
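
For example (output will vary with your cluster version):

kubectl explain rs                 # the VERSION line shows the apiVersion to use
kubectl explain rs.spec.replicas   # drill down into a single field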

Create an RS from the YAML file test-rs.yaml:

apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
  name: test-rs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: rs-app
  template:
    metadata:
      labels:
        app: rs-app
    spec:
      containers:
        - name: mynginx
          image: mynginx:v1
          ports:
            - containerPort: 80

Here the pod holds a single container, built from an nginx image I re-tagged.

On every node that pods might be scheduled to, re-tag the image with docker tag nginx mynginx:v1. The point is to stop k8s from pulling the latest image every time.
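
An alternative to re-tagging (a sketch, not what this walkthrough uses) is to set the container's pull policy so that a locally present image is reused:

containers:
  - name: mynginx
    image: nginx
    imagePullPolicy: IfNotPresent  # pull only when the image is absent locally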

After creation you can see 3 pods running:

[root@k8s-master k8s-test]# kubectl get pod -o wide
NAME                    READY   STATUS    RESTARTS   AGE     IP            NODE         NOMINATED NODE   READINESS GATES
curl-6bf6db5c4f-kljp4   1/1     Running   1          3d18h   10.244.1.2    k8s-node1    <none>           <none>
hellok8s                2/2     Running   0          2d23h   10.244.1.6    k8s-node1    <none>           <none>
test-init-main          1/1     Running   31         2d1h    10.244.1.8    k8s-node1    <none>           <none>
test-rs-74mw2           1/1     Running   0          64m     10.244.1.17   k8s-node1    <none>           <none>
test-rs-89l2s           1/1     Running   0          64m     10.244.1.16   k8s-node1    <none>           <none>
test-rs-w8zw8           1/1     Running   0          64m     10.244.0.4    k8s-master   <none>           <none>
test-start-stop         1/1     Running   0          37h     10.244.1.15   k8s-node1    <none>           <none>

There are 7 pods in total. The 3 prefixed with test-rs were created automatically by the RS: 2 landed on node1 and 1 on the master. The other 4 pods are left over from earlier experiments and are not managed by any controller.

Now let's try deleting all the pods. This command only deletes pods in the default namespace; it does not touch kube-system:

[root@k8s-master k8s-test]# kubectl delete pod --all
pod "curl-6bf6db5c4f-kljp4" deleted
pod "hellok8s" deleted
pod "test-init-main" deleted
pod "test-rs-74mw2" deleted
pod "test-rs-89l2s" deleted
pod "test-rs-w8zw8" deleted
pod "test-start-stop" deleted

Then check the pods again:

[root@k8s-master k8s-test]# kubectl get pod -o wide
NAME            READY   STATUS    RESTARTS   AGE     IP            NODE         NOMINATED NODE   READINESS GATES
test-rs-gtxln   1/1     Running   0          2m27s   10.244.0.5    k8s-master   <none>           <none>
test-rs-hn4g5   1/1     Running   0          2m27s   10.244.1.20   k8s-node1    <none>           <none>
test-rs-wrrh2   1/1     Running   0          2m27s   10.244.1.19   k8s-node1    <none>           <none>

The pods that were not created by the RS stay gone, while the 3 RS-managed pods have reappeared under new names. This is the biggest difference between standalone pods and controller-managed pods.

Now take a look at the labels of these three pods:

[root@k8s-master k8s-test]# kubectl get pod -o wide --show-labels
NAME            READY   STATUS    RESTARTS   AGE   IP            NODE         NOMINATED NODE   READINESS GATES   LABELS
test-rs-gtxln   1/1     Running   0          11h   10.244.0.5    k8s-master   <none>           <none>            app=rs-app
test-rs-hn4g5   1/1     Running   0          11h   10.244.1.20   k8s-node1    <none>           <none>            app=rs-app
test-rs-wrrh2   1/1     Running   0          11h   10.244.1.19   k8s-node1    <none>           <none>            app=rs-app

All three pods carry the same label, app=rs-app. Let's modify the label on one of them:

[root@k8s-master k8s-test]# kubectl label pod test-rs-wrrh2 --overwrite app=rs-app1
pod/test-rs-wrrh2 labeled
[root@k8s-master k8s-test]# kubectl get pod --show-labels
NAME            READY   STATUS    RESTARTS   AGE    LABELS
test-rs-hn4g5   1/1     Running   0          11h    app=rs-app
test-rs-n5zxb   1/1     Running   0          5s     app=rs-app
test-rs-pph4b   1/1     Running   0          101s   app=rs-app
test-rs-wrrh2   1/1     Running   0          11h    app=rs-app1

kubectl label --help lists the label-related commands; note that overwriting a label that already exists on a pod requires --overwrite.

We find that the RS has created a fresh pod labeled app=rs-app for us.

This is exactly how an RS works: it uses labels to decide which pods it owns. When one pod's label changed, the RS saw that it was one pod short and created a new one to reach the desired count of 3. Meanwhile the pod labeled app=rs-app1 became a standalone pod; delete it and nothing will recreate it.

Now delete the RS we just created:

[root@k8s-master k8s-test]# kubectl delete rs test-rs
replicaset.extensions "test-rs" deleted
[root@k8s-master k8s-test]# kubectl get pod --show-labels
NAME            READY   STATUS    RESTARTS   AGE   LABELS
test-rs-wrrh2   1/1     Running   0          11h   app=rs-app1

The 3 pods belonging to the RS were deleted along with it, while the standalone pod was unaffected.

Deployment in Practice

The following figure nicely shows how a Deployment performs a rolling update, and a rollback, by creating a new RS:
(figure: 1-deployment.png)

Let's verify this process hands-on.

Create a Deployment from the YAML file test-deployment.yaml:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: deployment-app
    spec:
      containers:
        - name: mynginx
          image: mynginx:v1
          ports:
            - containerPort: 80

The fields here are almost the same as in the RS, except there is no selector; in a moment we will see why it isn't needed.

Once the Deployment is created, you will find it has created an RS, named after the Deployment plus a hash, with 3 pods underneath it, each named after the RS plus another hash:

[root@k8s-master k8s-test]# kubectl apply -f test-deployment.yaml
deployment.extensions/test-deployment created
[root@k8s-master k8s-test]# kubectl get deployment
NAME              READY   UP-TO-DATE   AVAILABLE   AGE
test-deployment   3/3     3            3           5s
[root@k8s-master k8s-test]# kubectl get rs
NAME                        DESIRED   CURRENT   READY   AGE
test-deployment-d796d98d4   3         3         3       16s
[root@k8s-master k8s-test]# kubectl get pod -o wide
NAME                              READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
test-deployment-d796d98d4-kkm25   1/1     Running   0          21s   10.244.1.29   k8s-node1   <none>           <none>
test-deployment-d796d98d4-qknvs   1/1     Running   0          21s   10.244.1.30   k8s-node1   <none>           <none>
test-deployment-d796d98d4-v5v85   1/1     Running   0          21s   10.244.1.28   k8s-node1   <none>           <none>

Looking at each pod's labels, you will find the Deployment automatically added a label called pod-template-hash to every pod:

[root@k8s-master k8s-test]# kubectl get pod --show-labels
NAME                              READY   STATUS    RESTARTS   AGE   LABELS
test-deployment-d796d98d4-kkm25   1/1     Running   0          12m   app=deployment-app,pod-template-hash=d796d98d4
test-deployment-d796d98d4-qknvs   1/1     Running   0          12m   app=deployment-app,pod-template-hash=d796d98d4
test-deployment-d796d98d4-v5v85   1/1     Running   0          12m   app=deployment-app,pod-template-hash=d796d98d4

So even without a selector of our own, the Deployment still selects its pods using this generated label.
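
You can confirm this by printing the generated RS's selector (the RS name below comes from this run; yours will have a different hash):

kubectl get rs test-deployment-d796d98d4 -o jsonpath='{.spec.selector.matchLabels}'
# prints both app=deployment-app and the pod-template-hash label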

Scaling

Next, let's scale the 3 pod replicas up and down.

A single command specifying the new replica count is all it takes; the format is:

kubectl scale deployment <deployment_name> --replicas=n

For example, scale out to 5 replicas:

[root@k8s-master k8s-test]# kubectl scale deployment test-deployment --replicas=5
deployment.extensions/test-deployment scaled
[root@k8s-master k8s-test]# kubectl get pod -o wide
NAME                              READY   STATUS    RESTARTS   AGE   IP            NODE         NOMINATED NODE   READINESS GATES
test-deployment-d796d98d4-kkm25   1/1     Running   0          18m   10.244.1.29   k8s-node1    <none>           <none>
test-deployment-d796d98d4-pwz46   1/1     Running   0          13s   10.244.0.8    k8s-master   <none>           <none>
test-deployment-d796d98d4-qknvs   1/1     Running   0          18m   10.244.1.30   k8s-node1    <none>           <none>
test-deployment-d796d98d4-v5v85   1/1     Running   0          18m   10.244.1.28   k8s-node1    <none>           <none>
test-deployment-d796d98d4-w2k2w   1/1     Running   0          13s   10.244.1.31   k8s-node1    <none>           <none>

Now try scaling down to 1:

[root@k8s-master k8s-test]# kubectl scale deployment test-deployment --replicas=1
deployment.extensions/test-deployment scaled
[root@k8s-master k8s-test]# kubectl get pod -o wide
NAME                              READY   STATUS    RESTARTS   AGE   IP           NODE         NOMINATED NODE   READINESS GATES
test-deployment-d796d98d4-pwz46   1/1     Running   0          65s   10.244.0.8   k8s-master   <none>           <none>

Rolling Update and Rollback

Now let's try the rolling update mentioned earlier.

First restore the replica count to 5, which makes the intermediate states easier to observe:

[root@k8s-master k8s-test]# kubectl scale deployment test-deployment --replicas=5
deployment.extensions/test-deployment scaled
[root@k8s-master k8s-test]# kubectl get pod -o wide
NAME                              READY   STATUS    RESTARTS   AGE     IP            NODE         NOMINATED NODE   READINESS GATES
test-deployment-d796d98d4-9x5c2   1/1     Running   0          2s      10.244.1.34   k8s-node1    <none>           <none>
test-deployment-d796d98d4-pwz46   1/1     Running   0          2m41s   10.244.0.8    k8s-master   <none>           <none>
test-deployment-d796d98d4-r2dt7   1/1     Running   0          2s      10.244.1.35   k8s-node1    <none>           <none>
test-deployment-d796d98d4-rrlkc   1/1     Running   0          2s      10.244.1.33   k8s-node1    <none>           <none>
test-deployment-d796d98d4-rs4zx   1/1     Running   0          2s      10.244.1.32   k8s-node1    <none>           <none>

An update needs a new image for the container, so create the following Dockerfile:

FROM mynginx:v1
RUN echo 'this is mynginx v2' > /usr/share/nginx/html/index.html

This changes index.html on top of v1, so a curl will easily verify the version later. Build the v2 mynginx:

[root@k8s-master k8s-test]# docker build -t mynginx:v2 .
Sending build context to Docker daemon   12.8kB
Step 1/2 : FROM mynginx:v1
 ---> 602e111c06b6
Step 2/2 : RUN echo 'this is mynginx v2' > /usr/share/nginx/html/index.html
 ---> Running in bdba2f09126d
Removing intermediate container bdba2f09126d
 ---> 418ab2e19eb5
Successfully built 418ab2e19eb5
Successfully tagged mynginx:v2
[root@k8s-master k8s-test]# docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
mynginx                              v2                  418ab2e19eb5        5 seconds ago       127MB
mynginx                              v1                  602e111c06b6        10 days ago         127MB
nginx                                latest              602e111c06b6        10 days ago         127MB

Note that the new image must be reachable from every node, otherwise pods will fail with image-pull errors.
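
In a small lab without an image registry, one option (a sketch; it assumes SSH access from the master to the node) is to stream the image over SSH:

docker save mynginx:v2 | ssh k8s-node1 'docker load'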

The command format for updating a Deployment's image is:

kubectl set image deployment/<deployment_name> <container_name>=<new_image_name>

Now update mynginx to v2:

[root@k8s-master k8s-test]# kubectl set image deployment/test-deployment mynginx=mynginx:v2
deployment.extensions/test-deployment image updated

Check the pods right away. The names reveal that the old pods are being terminated while a new RS is creating new ones:

[root@k8s-master k8s-test]# kubectl get pod -o wide
NAME                               READY   STATUS        RESTARTS   AGE   IP            NODE         NOMINATED NODE   READINESS GATES
test-deployment-64484c94f7-4xwk4   1/1     Running       0          6s    10.244.0.9    k8s-master   <none>           <none>
test-deployment-64484c94f7-646lx   1/1     Running       0          6s    10.244.1.39   k8s-node1    <none>           <none>
test-deployment-64484c94f7-7rg5q   1/1     Running       0          97s   10.244.1.37   k8s-node1    <none>           <none>
test-deployment-64484c94f7-dbthf   1/1     Running       0          8s    10.244.1.38   k8s-node1    <none>           <none>
test-deployment-64484c94f7-lctwk   1/1     Running       0          97s   10.244.1.36   k8s-node1    <none>           <none>
test-deployment-d796d98d4-pwz46    0/1     Terminating   0          19m   10.244.0.8    k8s-master   <none>           <none>
test-deployment-d796d98d4-rrlkc    0/1     Terminating   0          16m   10.244.1.33   k8s-node1    <none>           <none>
test-deployment-d796d98d4-rs4zx    0/1     Terminating   0          16m   10.244.1.32   k8s-node1    <none>           <none>

A moment later the old pods have been fully replaced by new ones:

[root@k8s-master k8s-test]# kubectl get pod -o wide
NAME                               READY   STATUS    RESTARTS   AGE    IP            NODE         NOMINATED NODE   READINESS GATES
test-deployment-64484c94f7-4xwk4   1/1     Running   0          16s    10.244.0.9    k8s-master   <none>           <none>
test-deployment-64484c94f7-646lx   1/1     Running   0          16s    10.244.1.39   k8s-node1    <none>           <none>
test-deployment-64484c94f7-7rg5q   1/1     Running   0          107s   10.244.1.37   k8s-node1    <none>           <none>
test-deployment-64484c94f7-dbthf   1/1     Running   0          18s    10.244.1.38   k8s-node1    <none>           <none>
test-deployment-64484c94f7-lctwk   1/1     Running   0          107s   10.244.1.36   k8s-node1    <none>           <none>

Check whether it is serving the new image:

[root@k8s-master k8s-test]# curl 10.244.0.9
this is mynginx v2

The upgrade succeeded, and just as the earlier figure showed, the Deployment has indeed created a new RS:

[root@k8s-master k8s-test]# kubectl get rs
NAME                         DESIRED   CURRENT   READY   AGE
test-deployment-64484c94f7   5         5         5       9m14s
test-deployment-d796d98d4    0         0         0       45m

The old RS has not disappeared; it is kept around to make rollback easy.

The rollback command has the following format and reverts to the previous revision:

kubectl rollout undo deployment/<deployment_name>

Roll back to the v1 mynginx:

[root@k8s-master k8s-test]# kubectl rollout undo deployment/test-deployment
deployment.extensions/test-deployment rolled back
[root@k8s-master k8s-test]# kubectl get pod -o wide
NAME                               READY   STATUS        RESTARTS   AGE   IP            NODE         NOMINATED NODE   READINESS GATES
test-deployment-64484c94f7-646lx   0/1     Terminating   0          11m   10.244.1.39   k8s-node1    <none>           <none>
test-deployment-64484c94f7-lctwk   0/1     Terminating   0          12m   10.244.1.36   k8s-node1    <none>           <none>
test-deployment-d796d98d4-4qjpj    1/1     Running       0          7s    10.244.1.43   k8s-node1    <none>           <none>
test-deployment-d796d98d4-5kvz7    1/1     Running       0          10s   10.244.1.41   k8s-node1    <none>           <none>
test-deployment-d796d98d4-g4stw    1/1     Running       0          8s    10.244.0.10   k8s-master   <none>           <none>
test-deployment-d796d98d4-rpv8w    1/1     Running       0          10s   10.244.1.40   k8s-node1    <none>           <none>
test-deployment-d796d98d4-twrbk    1/1     Running       0          8s    10.244.1.42   k8s-node1    <none>           <none>
[root@k8s-master k8s-test]# kubectl get pod -o wide
NAME                              READY   STATUS    RESTARTS   AGE   IP            NODE         NOMINATED NODE   READINESS GATES
test-deployment-d796d98d4-4qjpj   1/1     Running   0          10s   10.244.1.43   k8s-node1    <none>           <none>
test-deployment-d796d98d4-5kvz7   1/1     Running   0          13s   10.244.1.41   k8s-node1    <none>           <none>
test-deployment-d796d98d4-g4stw   1/1     Running   0          11s   10.244.0.10   k8s-master   <none>           <none>
test-deployment-d796d98d4-rpv8w   1/1     Running   0          13s   10.244.1.40   k8s-node1    <none>           <none>
test-deployment-d796d98d4-twrbk   1/1     Running   0          11s   10.244.1.42   k8s-node1    <none>           <none>

Worth noting: throughout a rolling update, k8s keeps the number of running pods at or near the desired count, here dropping at most one pod below it.
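
This guarantee comes from the Deployment's update strategy. A sketch of the relevant fields (the values shown are, as far as I know, the defaults of the extensions/v1beta1 API used here):

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1  # at most this many pods below the desired count
      maxSurge: 1        # at most this many pods above the desired count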

Revision History

As mentioned above, undo only returns to the previous revision, and running undo again brings you back to the current one. Can we roll back to an even earlier revision? Of course.

Check the rollout history:

[root@k8s-master k8s-test]# kubectl rollout history deployment/test-deployment
deployment.extensions/test-deployment
REVISION  CHANGE-CAUSE
2         <none>
3         <none>

We are currently at revision 3: revision 1 was the v1 mynginx, revision 2 the v2 mynginx, and revision 3 the v1 mynginx we rolled back to. Revision 1 no longer appears in the list because the rollback re-used its pod template, promoting it to revision 3; and the CHANGE-CAUSE column shows <none> because the commands were not run with --record. The field deployment.spec.revisionHistoryLimit controls how many revisions to keep; by default all of them are kept.

To roll back to a specific revision, use:

kubectl rollout undo deployment/<deployment_name> --to-revision=n
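
For instance, returning to the v2 image (revision 2 in the history above) would be:

kubectl rollout undo deployment/test-deployment --to-revision=2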

I won't run it here. And although k8s offers this revision mechanism, it is not very transparent; in production you should still keep proper change records and configuration backups, and roll back via configuration files.

You can also watch the status of a rollout:

[root@k8s-master k8s-test]# kubectl set image deployment/test-deployment mynginx=mynginx:v2
deployment.extensions/test-deployment image updated
[root@k8s-master k8s-test]# kubectl rollout status deployment/test-deployment
Waiting for deployment "test-deployment" rollout to finish: 4 out of 5 new replicas have been updated...
Waiting for deployment "test-deployment" rollout to finish: 2 old replicas are pending termination...
Waiting for deployment "test-deployment" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "test-deployment" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "test-deployment" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "test-deployment" rollout to finish: 4 of 5 updated replicas are available...
deployment "test-deployment" successfully rolled out

DaemonSet in Practice

The only difference from an RS is that no replica count is specified, because by default every node runs exactly one copy.

Create a DaemonSet from the YAML file test-daemonset.yaml:

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: test-daemonset
spec:
  selector:
    matchLabels:
      app: daemonset-app
  template:
    metadata:
      labels:
        app: daemonset-app
    spec:
      containers:
        - name: mynginx
          image: mynginx:v2

It is created successfully, and each node gets one pod replica:

[root@k8s-master k8s-test]# vim test-daemonset.yaml
[root@k8s-master k8s-test]# kubectl apply -f test-daemonset.yaml
daemonset.extensions/test-daemonset created
[root@k8s-master k8s-test]# kubectl get pod -o wide
NAME                   READY   STATUS    RESTARTS   AGE   IP            NODE         NOMINATED NODE   READINESS GATES
test-daemonset-7qp6p   1/1     Running   0          10s   10.244.0.12   k8s-master   <none>           <none>
test-daemonset-ml9c7   1/1     Running   0          10s   10.244.1.48   k8s-node1    <none>           <none>

Later we will cover in detail how to taint a node so that the DaemonSet does not create a pod on it.

Job in Practice

Create a run-once task from the YAML file test-job.yaml:

apiVersion: batch/v1
kind: Job
metadata:
  name: test-job
spec:
  template:
    spec:
      nodeName: k8s-master
      containers:
        - name: pi
          image: perl
          command: ['perl','-Mbignum=bpi','-wle','print bpi(1000)']
      restartPolicy: Never

The nodeName key pins the Job's pod to a specific target node; nodeSelector can instead select nodes in bulk by label. If neither is given, k8s picks any suitable node to run it.
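
By contrast, a nodeSelector sketch would look like this (the disktype label is hypothetical; you would attach it to a node first):

# label a node so the selector below can match it
kubectl label node k8s-node1 disktype=ssd

and then, in the pod template:

spec:
  nodeSelector:
    disktype: ssd   # schedule only onto nodes carrying this label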

The job is created and runs to completion:

[root@k8s-master k8s-test]# kubectl get job
NAME       COMPLETIONS   DURATION   AGE
test-job   1/1           29s        29s
[root@k8s-master k8s-test]# kubectl get pod -o wide
NAME             READY   STATUS      RESTARTS   AGE   IP            NODE         NOMINATED NODE   READINESS GATES
test-job-8qvlr   0/1     Completed   0          34s   10.244.0.13   k8s-master   <none>           <none>

The result can be seen in the pod's log:

[root@k8s-master k8s-test]# kubectl logs test-job-8qvlr
3.141592653589793238462643383279502884197169399375105820974944592307816406286208998628034825342117067982148086513282306647093844609550582231725359408128481117450284102701938521105559644622948954930381964428810975665933446128475648233786783165271201909145648566923460348610454326648213393607260249141273724587006606315588174881520920962829254091715364367892590360011330530548820466521384146951941511609433057270365759591953092186117381932611793105118548074462379962749567351885752724891227938183011949129833673362440656643086021394946395224737190702179860943702770539217176293176752384674818467669405132000568127145263560827785771342757789609173637178721468440901224953430146549585371050792279689258923542019956112129021960864034418159813629774771309960518707211349999998372978049951059731732816096318595024459455346908302642522308253344685035261931188171010003137838752886587533208381420617177669147303598253490428755468731159562863882353787593751957781857780532171226806613001927876611195909216420199

CronJob in Practice

A CronJob creates and manages Jobs. A few fields under cronjob.spec.jobTemplate.spec (they are ordinary Job spec fields) deserve extra attention:

  • completions - the number of pods that must finish successfully for the job to be complete, default 1
  • parallelism - how many pods may run concurrently, default 1
  • activeDeadlineSeconds - a timeout for the running job, in seconds

There are also fields under cronjob.spec itself worth noting; a combined sketch follows this list:

  • schedule - same syntax as crontab on Linux
  • jobTemplate - nests the Job format described above
  • startingDeadlineSeconds - a deadline for starting the job
  • concurrencyPolicy - what to do when a new job comes due while the previous one is still running
  • successfulJobsHistoryLimit - how many successful jobs to keep in the history, default 3
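
Here is a sketch of where these fields sit in a CronJob manifest (illustration only; the name and values are arbitrary assumptions, and this file is not applied below):

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: fields-demo              # hypothetical name
spec:
  schedule: '*/5 * * * *'        # crontab syntax: every 5 minutes
  startingDeadlineSeconds: 60    # give up if the job cannot start within 60s
  concurrencyPolicy: Forbid      # skip this run if the previous job is still active
  successfulJobsHistoryLimit: 3  # keep the 3 most recent successful jobs
  jobTemplate:
    spec:
      completions: 1             # one successfully finished pod completes the job
      parallelism: 1             # run at most one pod at a time
      activeDeadlineSeconds: 120 # kill the job if it runs longer than 120s
      template:
        spec:
          containers:
            - name: demo
              image: busybox
              command: ['sh','-c','echo demo']
          restartPolicy: OnFailure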

Now create a scheduled task from the YAML file test-cronjob.yaml:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: test-conjob
spec:
  schedule: '* * * * *'
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: hello
              image: busybox
              command: ['sh','-c','date;echo hello from the k8s cluster']
          restartPolicy: OnFailure

After creating it, wait a few minutes and you will see 3 jobs appear, along with 3 corresponding pods:

[root@k8s-master k8s-test]# kubectl get job
NAME                     COMPLETIONS   DURATION   AGE
test-conjob-1588577280   1/1           5s         2m34s
test-conjob-1588577340   1/1           6s         94s
test-conjob-1588577400   1/1           5s         34s
[root@k8s-master k8s-test]# kubectl get pod -o wide
NAME                           READY   STATUS      RESTARTS   AGE     IP            NODE        NOMINATED NODE   READINESS GATES
test-conjob-1588577280-9jp6t   0/1     Completed   0          2m37s   10.244.1.60   k8s-node1   <none>           <none>
test-conjob-1588577340-jzhw6   0/1     Completed   0          97s     10.244.1.62   k8s-node1   <none>           <none>
test-conjob-1588577400-95xc7   0/1     Completed   0          37s     10.244.1.64   k8s-node1   <none>           <none>

Even if you keep waiting, only 3 history entries will remain, governed by the successfulJobsHistoryLimit parameter described above.

Summary

In this article we studied the various controllers in k8s and, through hands-on examples, learned the concrete usage of each. Real-world YAML files will certainly not be as simple as the ones here, so keep using kubectl explain to learn what each controller's fields mean, until it becomes second nature.

Hands-on practice for StatefulSet and HPA is more involved and was not covered here; we will fill it in when we actually need them later.
