Canary Releases of Applications in Kubernetes with the Deployment Controller

1. What a canary release means

Mining foreman R. Thornburg shows a small cage with a canary used for testing carbon monoxide gas in 1928. (George McCaa, U.S. Bureau of Mines)
The photo above shows a miner taking a canary down into the mine.

The canary is a small songbird of the finch family. At the beginning of the 20th century, coal miners usually took a canary with them when they went down into a mine to detect dangerous gas in the shaft. If the canary died on the way down, the miners had to turn back.

In application deployment, a canary release means that when an already deployed application is upgraded, only one instance or a small fraction of the instances is updated first; the rollout is then paused while the newly deployed instances are observed to see whether they serve traffic correctly. If they do, the rollout resumes and the update is completed; if they do not, the update is rolled back. In this way the new version is tested in production at minimal cost.

The canary release process is illustrated in the figure below:

[Figure: canary release]
In a canary release, the small number of updated instances play the role of the miner's canary.

2. How a canary release is carried out

How is a canary release done in a Kubernetes cluster? Because a canary release involves a rolling update and possibly a rollback, a controller is needed to manage rolling updates and rollbacks of the pods. In the experiments in this section we will use a Deployment controller to manage the pod resources and to roll out, pause, and roll back application updates.
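Besides readiness probes (the approach used below), Kubernetes also lets you hold and resume a rollout by hand, which is another common way to keep a canary at a small number of pods. A minimal sketch, assuming the Deployment name and image tags introduced later in this section:

kubectl set image deploy kubia-deploy-canary kubia=mindnhand/kubia:v3   # start the update
kubectl rollout pause deploy kubia-deploy-canary                        # hold the rollout while only a few new pods exist
# ... observe the canary pods ...
kubectl rollout resume deploy kubia-deploy-canary                       # continue if the canary is healthy
kubectl rollout undo deploy kubia-deploy-canary                         # or roll everything back if it is not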

The image run by the pods' containers is a small JavaScript (Node.js) program; the image is built and pushed to Docker Hub. To support the canary release experiment, three versions of the docker image are built: the first (v2) works correctly, the second (v3) contains a bug, and the last one (v4) works correctly and is used to demonstrate a normal update.

2.1. Building the docker images

Build the working v2 image, the broken v3 image, and the working v4 image, and push them to the Docker Hub registry.

2.1.1. Building the v2 docker image

Since the application is a JavaScript program, create a file named app.js in the v2 directory with the following content:

const http = require('http');
const os = require('os');

console.log("Kubia server starting...");

var handler = function(request, response) {
  console.log("Received request from " + request.connection.remoteAddress);
  response.writeHead(200);
  response.end("This is v2 running in pod " + os.hostname() + "\n");
};

var www = http.createServer(handler);
www.listen(8080);

When it runs, the server answers every HTTP request with its version (v2) and the hostname of the pod it is running in.

Next, create a Dockerfile in the same v2 directory with the following content:

FROM node:7
ADD app.js /app.js
ENTRYPOINT ["node", "app.js"]

With these two files in place, the image can be built:

[root@c7u6s5 v2]# ls -lh
total 8.0K
-rw-r--r-- 1 root root 376 Jan 16:32 app.js
-rw-r--r-- 1 root root 16:32 Dockerfile
[root@c7u6s5 v2]# docker build -t mindnhand/kubia:v2 $(pwd)
Sending build context to Docker daemon
Step 1/3 : FROM node:7
---> d9aed20b68a4
Step 2/3 : ADD app.js /app.js
---> 337ad47dcab9
Step 3/3 : ENTRYPOINT ["node", "app.js"]
---> Running in d52488f8e237
Removing intermediate container d52488f8e237
---> 6cdeebffba36
Successfully built 6cdeebffba36
Successfully tagged mindnhand/kubia:v2
[root@c7u6s5 v2]#

This produces a working image.
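As a quick local sanity check before pushing (not part of the original session; the host port 18080 is an arbitrary choice), the image can be run and queried directly with docker:

docker run -d --name kubia-v2-test -p 18080:8080 mindnhand/kubia:v2
curl http://localhost:18080        # should print: This is v2 running in pod <container id>
docker rm -f kubia-v2-test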

Note: when building images, avoid tags like latest and use tags that distinguish versions, so that image changes are clearly marked. If latest were used and a v1 image tagged latest already existed on a worker node, then tagging the v2 image latest as well would mean the update would not pull the new image, because the node already has an image with the latest tag. To force the node to refresh the image you would have to set imagePullPolicy: Always in the containers section of the pod template spec, which always pulls the image regardless of whether it already exists on the worker node. That in turn adds waiting time, because even images that are already present are pulled again.
With tags that clearly distinguish versions, the pull policy can instead be set to IfNotPresent, which pulls the image only when the worker node does not already have it. For images that are already present, this policy lets the pod's container start quickly with no extra waiting.
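For reference, a minimal sketch of the relevant fragment of the pod template (the same image and policy are used in the Deployment later in this section):

    spec:
      containers:
      - name: kubia
        image: mindnhand/kubia:v2        # version-specific tag, so IfNotPresent is safe
        imagePullPolicy: IfNotPresent    # pull only if the node does not already have this tag
        # imagePullPolicy: Always        # needed if a mutable tag such as latest were used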

Next, push the built image to the Docker Hub registry:

[root@c7u6s5 v2]# docker push mindnhand/kubia:v2
The push refers to repository [docker.io/mindnhand/kubia]
8bc5e87d4723: Layer already exists
ab90d83fa34a: Layer already exists
8ee318e54723: Layer already exists
e6695624484e: Layer already exists
da59b99bbd3b: Layer already exists
5616a6292c16: Mounted from mindnhand/kubia-unhealthy
f3ed6cb59ab0: Layer already exists
654f45ecb7e3: Mounted from library/node
2c40c66f7667: Mounted from mindnhand/kubia-unhealthy
v2: digest: sha256:8400f1f571eae71e4e8a9f7e083361596fef8599c101f944cc8bf240a5ded2e9 size: 2213
[root@c7u6s5 v2]#

The image can now be used in a Deployment resource definition file.

2.1.2. Building the v3 docker image

Starting from the v2 image built above, make one change that introduces a bug: the server handles only the first four requests, and from the fifth request onward it returns an internal server error (HTTP status code 500). Add an if statement to v3/app.js and then build the v3 kubia image, as shown below.

Create a new app.js file in the v3 directory with the following content:

const http = require('http');
const os = require('os');

var requestCount = 0;

console.log("Kubia server starting...");

var handler = function(request, response) { 
  console.log("Received request from " + request.connection.remoteAddress);
  if (++requestCount >= 5) { 
    response.writeHead(500);
    response.end("Some internal error has occurred! This is pod " + os.hostname() + "\n");
    return;
  }
  response.writeHead(200);
  response.end("This is v3 running in pod " + os.hostname() + "\n");
};

var www = http.createServer(handler);
www.listen(8080);

Compared with v2, the added if statement means the server handles only the first four requests; from the fifth request onward it no longer responds successfully.

Next, create a Dockerfile in the same v3 directory with the following content:

FROM node:7
ADD app.js /app.js
ENTRYPOINT ["node", "app.js"]

With these files in place, build the v3 image:

[root@c7u6s5 v3]# docker build -t mindnhand/kubia:v3 $(pwd)
Sending build context to Docker daemon  3.584kB
Step 1/3 : FROM node:7
---> d9aed20b68a4
Step 2/3 : ADD app.js /app.js
---> 35b95a9d0e78
Step 3/3 : ENTRYPOINT ["node", "app.js"]
---> Running in f537afca1fad
Removing intermediate container f537afca1fad
---> 3ecb54fa06b2
Successfully built 3ecb54fa06b2
Successfully tagged mindnhand/kubia:v3

That completes the build; now push the image to the Docker Hub registry:

[root@c7u6s5 v3]# docker push mindnhand/kubia:v3
The push refers to repository [docker.io/mindnhand/kubia]
d5c2e82e517c: Pushed
ab90d83fa34a: Layer already exists
8ee318e54723: Layer already exists
e6695624484e: Layer already exists
da59b99bbd3b: Layer already exists
5616a6292c16: Layer already exists
f3ed6cb59ab0: Layer already exists
654f45ecb7e3: Mounted from library/node
2c40c66f7667: Layer already exists
v3: digest: sha256:98fde81970cbbf4bf3075b391272eaf64dc5d665a56355739c67579256a9839e size: 2213
[root@c7u6s5 v3]#

This completes the v3 version of the application.
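As a quick check of the injected bug (again outside the original session; the host port is an arbitrary choice), running the v3 image locally and sending five requests should show the fifth one fail with a 500:

docker run -d --name kubia-v3-test -p 18080:8080 mindnhand/kubia:v3
for i in 1 2 3 4 5; do curl -s -o /dev/null -w "%{http_code}\n" http://localhost:18080; done
# expected output: 200 200 200 200 500 (one status code per line)
docker rm -f kubia-v3-test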

2.1.3. Building the v4 docker image

Create an app.js file in the v4 directory; the content of v4/app.js is as follows:

const http = require('http');
const os = require('os');

console.log("Kubia server starting...");


var handler = function(request, response) {
  console.log("Received request from " + request.connection.remoteAddress);
  response.writeHead(200);
  response.end("This is v4 running in pod " + os.hostname() + "\n");
};

var www = http.createServer(handler);
www.listen(8080);

Next, create a Dockerfile in the same v4 directory with the following content:

FROM node:7
ADD app.js /app.js
ENTRYPOINT ["node", "app.js"]

Build the image in the v4 directory:

[root@c7u6s5 v4]# docker build -t mindnhand/kubia:v4 $(pwd)
Sending build context to Docker daemon  3.072kB
Step 1/3 : FROM node:7
---> d9aed20b68a4
Step 2/3 : ADD app.js /app.js
---> 689d9e0effdc
Step 3/3 : ENTRYPOINT ["node", "app.js"]
---> Running in 34d0943c8f80
Removing intermediate container 34d0943c8f80
---> 35988b0d81d4
Successfully built 35988b0d81d4
Successfully tagged mindnhand/kubia:v4

[root@c7u6s5 v4]#

That completes the docker image build; now push the image to the Docker Hub registry:

[root@c7u6s5 v4]# docker push mindnhand/kubia:v4
The push refers to repository [docker.io/mindnhand/kubia]
e0759f04e2b1: Pushed
ab90d83fa34a: Layer already exists
8ee318e54723: Mounted from mindnhand/kubia-unhealthy
e6695624484e: Layer already exists
da59b99bbd3b: Mounted from mindnhand/kubia-unhealthy
5616a6292c16: Mounted from mindnhand/kubia-unhealthy
f3ed6cb59ab0: Layer already exists
654f45ecb7e3: Layer already exists
2c40c66f7667: Layer already exists
v4: digest: sha256:2d5cf93b9d96c15f7cb6a45ff75a4aa5857de846b403dc690467c66d86dead48 size: 2213

[root@c7u6s5 v4]#

At this point all three image versions have been built.

2.2. Deploying the initial version of the application

The application is deployed from a resource definition file that supports a canary release. Create a Deployment resource definition file as follows:

[root@c7u6s5:09.RollingUpdate]# vim canary_release.yml
[root@c7u6s5:09.RollingUpdate]# cat canary_release.yml

The contents of canary_release.yml are:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubia-deploy-canary
  labels:
    rtype: deploy
    rel: canary
    used4: kubia
spec:
  replicas: 3
  selector:
    matchLabels:
      app: kubia
  minReadySeconds: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      name: kubia-pod
      labels:
        rtype: pod
        app: kubia
    spec:
      containers:
      - name: kubia
        image: mindnhand/kubia:v2
        imagePullPolicy: IfNotPresent
        ports:
        - name: web-port
          containerPort: 8080
        readinessProbe:
          periodSeconds: 1
          httpGet: 
            path: /
            port: 8080

In this resource definition, the Deployment controller manages three pod replicas (replicas: 3), and the label selector it uses to select its pods is app: kubia.

Two settings are key to the rolling update and the canary release: minReadySeconds: 10 and the strategy section. The former requires a newly created pod to stay ready for at least 10 seconds after starting before it is counted as truly ready and allowed to serve traffic; until then the pod is treated as not yet available.

The strategy section specifies how the Deployment is updated. The type field has two possible values; RollingUpdate, used here, performs a rolling update. The other value is Recreate: all existing pods are deleted first (in essence, the old ReplicaSet that the Deployment implicitly created, and which directly manages the pods, is scaled down), and then the number of pods given by replicas is created again (in essence, under a new ReplicaSet created by the Deployment). With Recreate the service is briefly unavailable during the update, which also distinguishes it from a blue-green release, where the new version runs alongside the old one before traffic is switched over.
Within strategy, the rollingUpdate block sets two fields, maxSurge and maxUnavailable. maxSurge: 1 allows one pod more than the configured replicas during the update, i.e. at most four pods may exist at the same time; maxUnavailable: 0 means no pod fewer than replicas may be available, i.e. at least three pods must be running throughout the update.
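For comparison, a minimal sketch of the strategy section using Recreate (not used in this experiment) would look like this:

  strategy:
    type: Recreate    # delete all old pods first, then create the new ones; the service is briefly down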

These settings alone are not enough for a canary release. The pod template also defines a readiness probe (readinessProbe) for the container: the probe checks the container with an httpGet request, and periodSeconds: 1 sets the probe interval so the readiness check runs once per second.

Now create the Deployment from this resource definition file:

[root@c7u6s5:09.RollingUpdate]# kubectl create -f canary_release.yml --dry-run=server
deployment.apps/kubia-deploy-canary created (server dry run)
[root@c7u6s5:09.RollingUpdate]# 
[root@c7u6s5:09.RollingUpdate]# kubectl apply -f canary_release.yml --record=true
deployment.apps/kubia-deploy-canary created
[root@c7u6s5:09.RollingUpdate]# 

The first command uses the --dry-run=server option: the request is sent to the API server, which validates it as a server-side dry run without actually persisting the resource; it is typically used to catch errors in the resource definition file.

Only the second command actually creates the resource. The --record=true option records the command that was used, which is useful later when rolling back and when inspecting the rollout history.
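Under the hood, --record stores the command line in the kubernetes.io/change-cause annotation of the Deployment, which is what kubectl rollout history later displays as CHANGE-CAUSE. A quick way to confirm this (a sketch, not taken from the original session):

kubectl get deploy kubia-deploy-canary -o yaml | grep change-cause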

With the Deployment created, inspect the resulting resources:

[root@c7u6s5:09.RollingUpdate]# kubectl get deploy
NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
kubia-deploy-canary   0/3     3            0           4s
[root@c7u6s5:09.RollingUpdate]# kubectl get rs
NAME                            DESIRED   CURRENT   READY   AGE
kubia-deploy-canary-b49dcdcff   3         3         0       7s
[root@c7u6s5:09.RollingUpdate]# kubectl get po
NAME                                  READY   STATUS              RESTARTS   AGE
kubia-deploy-canary-b49dcdcff-8t7wq   0/1     ContainerCreating   0          9s
kubia-deploy-canary-b49dcdcff-l2p86   0/1     ContainerCreating   0          9s
kubia-deploy-canary-b49dcdcff-qvsk8   0/1     ContainerCreating   0          9s
[root@c7u6s5:09.RollingUpdate]# 
[root@c7u6s5:09.RollingUpdate]# kubectl get po
NAME                                  READY   STATUS    RESTARTS   AGE
kubia-deploy-canary-b49dcdcff-8t7wq   1/1     Running   0          2m4s
kubia-deploy-canary-b49dcdcff-l2p86   1/1     Running   0          2m4s
kubia-deploy-canary-b49dcdcff-qvsk8   1/1     Running   0          2m4s
[root@c7u6s5:09.RollingUpdate]# 

After a short wait: if the node a pod was scheduled to does not already have the image, it is pulled from Docker Hub; if the image is already present, a container is started from it directly. The output above shows that the pods are now running.

Note also that the resource definition file does not define a ReplicaSet, yet the Deployment controller created one implicitly; a Deployment actually manages its pods through a ReplicaSet. This becomes visible during the rolling update later: the Deployment resource itself does not change; instead a new ReplicaSet is created, and that new ReplicaSet creates the new pods, which is how the pods are updated.
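The ownership chain can be confirmed from the ReplicaSet's metadata; a sketch using the ReplicaSet name from the output above:

kubectl get rs kubia-deploy-canary-b49dcdcff -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}{"\n"}'
# expected output: Deployment/kubia-deploy-canary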

The service provided by the pods cannot be reached yet; a Service resource is needed to expose it. Here the Service is created from the command line:

[root@c7u6s5:09.RollingUpdate]# kubectl expose deploy kubia-deploy-canary --name=kubia-deploy-canary-svc --protocol=TCP --port=80 --target-port=8080
service/kubia-deploy-canary-svc exposed
[root@c7u6s5:09.RollingUpdate]# kubectl get svc
NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes                ClusterIP   10.96.0.1        <none>        443/TCP        33d
kubia-deploy-canary-svc   ClusterIP   10.100.138.230   <none>        80/TCP         4s
[root@c7u6s5:09.RollingUpdate]# 

This creates a Service named kubia-deploy-canary-svc with port 80 mapped to port 8080 of the containers in the pods: traffic sent to port 80 of the Service is forwarded to port 8080 inside the containers. Since the command does not specify a Service type, the default ClusterIP type is used, which means only pods inside the Kubernetes cluster can reach the target pods through this Service; it cannot be reached from outside the cluster.
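For reference, an equivalent declarative definition of this Service (a sketch; kubectl expose derives the selector from the Deployment's pod labels, here app: kubia) would be:

---
apiVersion: v1
kind: Service
metadata:
  name: kubia-deploy-canary-svc
spec:
  type: ClusterIP
  selector:
    app: kubia
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080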

To verify that the Deployment and the Service work, create a temporary pod and, from inside it, access the target pods through the Service's ClusterIP (10.100.138.230) and port (80). (The ClusterIP is a virtual IP address: it cannot be pinged and is meaningless on its own; it only makes sense together with the port.) The session looks like this:

[root@c7u6s5:09.RollingUpdate]# kubectl run kubia-deploy-canary-test -it --restart=Never --image=mindnhand/curl:7.78 --rm -- bash                                                   
If you don't see a command prompt, try pressing enter.
bash-5.1# curl 10.100.138.230:80
This is v2 running in pod kubia-deploy-canary-b49dcdcff-8t7wq
bash-5.1# curl 10.100.138.230:80
^C
bash-5.1# curl http://10.100.138.230:80
This is v2 running in pod kubia-deploy-canary-b49dcdcff-qvsk8
bash-5.1# curl http://10.100.138.230:80
This is v2 running in pod kubia-deploy-canary-b49dcdcff-8t7wq
bash-5.1# 
bash-5.1# curl http://10.100.138.230:80
This is v2 running in pod kubia-deploy-canary-b49dcdcff-l2p86
bash-5.1# curl http://10.100.138.230:80
This is v2 running in pod kubia-deploy-canary-b49dcdcff-8t7wq
bash-5.1# curl http://10.100.138.230:80
This is v2 running in pod kubia-deploy-canary-b49dcdcff-l2p86
bash-5.1# curl http://10.100.138.230:80
This is v2 running in pod kubia-deploy-canary-b49dcdcff-8t7wq
bash-5.1# curl http://10.100.138.230:80
This is v2 running in pod kubia-deploy-canary-b49dcdcff-l2p86

This starts a temporary pod and runs bash interactively in its container; from there, the service provided by the target pods is reachable through the IP address and port exposed by the Service.

2.3. Canary release of a broken new version

Next, the application is updated by changing the container image in the Deployment's pod template. Before doing so, check the state of the pods and of the Deployment:

[root@c7u6s5:~]# kubectl get po
NAME                                  READY   STATUS    RESTARTS   AGE
kubia-deploy-canary-b49dcdcff-8t7wq   1/1     Running   0          19m
kubia-deploy-canary-b49dcdcff-l2p86   1/1     Running   0          19m
kubia-deploy-canary-b49dcdcff-qvsk8   1/1     Running   0          19m
kubia-deploy-canary-test              1/1     Running   0          9m18s
[root@c7u6s5:~]# kubectl describe pod kubia-deploy-canary-b49dcdcff-8t7wq 
Name:         kubia-deploy-canary-b49dcdcff-8t7wq
Namespace:    default
Priority:     0
Node:         c7u6s8/192.168.122.27
Start Time:   Sun, 05 Sep 2021 22:13:34 +0800
Labels:       app=kubia
              pod-template-hash=b49dcdcff
              rtype=pod
Annotations:  cni.projectcalico.org/containerID: 600d75c841d45c42825400d9b9fae894e0673158add923a800cd29673ea264f7
              cni.projectcalico.org/podIP: 10.244.141.170/32
              cni.projectcalico.org/podIPs: 10.244.141.170/32
Status:       Running
IP:           10.244.141.170
IPs:
  IP:           10.244.141.170
Controlled By:  ReplicaSet/kubia-deploy-canary-b49dcdcff
Containers:
  kubia:
    Container ID:   docker://241bdccadc1a151d0762d9ddb1fb7cec10b021eee32e0be1d85552f81e669442
    Image:          mindnhand/kubia:v2
    Image ID:       docker-pullable://mindnhand/kubia@sha256:8400f1f571eae71e4e8a9f7e083361596fef8599c101f944cc8bf240a5ded2e9
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Sun, 05 Sep 2021 22:14:11 +0800
    Ready:          True
    Restart Count:  0
    Readiness:      http-get http://:8080/ delay=0s timeout=1s period=1s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x6trp (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kube-api-access-x6trp:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  19m   default-scheduler  Successfully assigned default/kubia-deploy-canary-b49dcdcff-8t7wq to c7u6s8
  Normal  Pulling    19m   kubelet            Pulling image "mindnhand/kubia:v2"
  Normal  Pulled     18m   kubelet            Successfully pulled image "mindnhand/kubia:v2" in 35.210249383s
  Normal  Created    18m   kubelet            Created container kubia
  Normal  Started    18m   kubelet            Started container kubia
[root@c7u6s5:~]# kubectl get deploy
NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
kubia-deploy-canary   3/3     3            3           19m

Everything is healthy: the pods are Running and the Deployment shows all replicas available. Next, check the rollout status:

[root@c7u6s5:~]# kubectl rollout status deploy kubia-deploy-canary 
deployment "kubia-deploy-canary" successfully rolled out
[root@c7u6s5:~]# 

This output means the initial rollout of the Deployment has completed and no update has been performed yet.

Now update the image used by the containers. Because the strategy section of the Deployment allows one extra pod during a rolling update (at most four pods in total), changing the container image causes a new ReplicaSet to be created under the Deployment, and that ReplicaSet creates one new pod:

[root@c7u6s5:~]# kubectl set image deploy kubia-deploy-canary kubia=mindnhand/kubia:v3 --record=true
deployment.apps/kubia-deploy-canary image updated
[root@c7u6s5:~]# kubectl get po
NAME                                   READY   STATUS              RESTARTS   AGE
kubia-deploy-canary-7449f84874-9bc8v   0/1     ContainerCreating   0          3s
kubia-deploy-canary-b49dcdcff-8t7wq    1/1     Running             0          20m
kubia-deploy-canary-b49dcdcff-l2p86    1/1     Running             0          20m
kubia-deploy-canary-b49dcdcff-qvsk8    1/1     Running             0          20m
kubia-deploy-canary-test               1/1     Running             0          10m
[root@c7u6s5:~]# kubectl get rs
NAME                             DESIRED   CURRENT   READY   AGE
kubia-deploy-canary-7449f84874   1         1         0       8s
kubia-deploy-canary-b49dcdcff    3         3         3       20m
[root@c7u6s5:~]# kubectl get deploy
NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
kubia-deploy-canary   3/3     1            3           20m
[root@c7u6s5:~]# kubectl get rs
NAME                             DESIRED   CURRENT   READY   AGE
kubia-deploy-canary-7449f84874   1         1         0       19s
kubia-deploy-canary-b49dcdcff    3         3         3       20m
[root@c7u6s5:~]# kubectl get po
NAME                                   READY   STATUS              RESTARTS   AGE
kubia-deploy-canary-7449f84874-9bc8v   0/1     ContainerCreating   0          26s
kubia-deploy-canary-b49dcdcff-8t7wq    1/1     Running             0          20m
kubia-deploy-canary-b49dcdcff-l2p86    1/1     Running             0          20m
kubia-deploy-canary-b49dcdcff-qvsk8    1/1     Running             0          20m
kubia-deploy-canary-test               1/1     Running             0          11m
[root@c7u6s5:~]# 

The kubectl set image command above changed the container image in the pod template. The newly created pod must pull the updated image and start a new container. Wait for the container to start:

[root@c7u6s5:~]# kubectl describe po kubia-deploy-canary-7449f84874-9bc8v 
Name:         kubia-deploy-canary-7449f84874-9bc8v
Namespace:    default
Priority:     0
Node:         c7u6s7/192.168.122.26
Start Time:   Sun, 05 Sep 2021 22:34:06 +0800
Labels:       app=kubia
              pod-template-hash=7449f84874
              rtype=pod
Annotations:  cni.projectcalico.org/containerID: 77953892c21d0d574b45802f2e2d3bc105c1493729bec24a26e36bd264c6818e
              cni.projectcalico.org/podIP: 10.244.227.186/32
              cni.projectcalico.org/podIPs: 10.244.227.186/32
Status:       Pending
IP:           10.244.227.186
IPs:
  IP:           10.244.227.186
Controlled By:  ReplicaSet/kubia-deploy-canary-7449f84874
Containers:
  kubia:
    Container ID:   
    Image:          mindnhand/kubia:v3
    Image ID:       
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ErrImagePull
    Ready:          False
    Restart Count:  0
    Readiness:      http-get http://:8080/ delay=0s timeout=1s period=1s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c5w75 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  kube-api-access-c5w75:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age   From               Message
  ----     ------     ----  ----               -------
  Normal   Scheduled  34s   default-scheduler  Successfully assigned default/kubia-deploy-canary-7449f84874-9bc8v to c7u6s7
  Normal   Pulling    33s   kubelet            Pulling image "mindnhand/kubia:v3"
  Warning  Failed     4s    kubelet            Failed to pull image "mindnhand/kubia:v3": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1
.docker.io/v2/mindnhand/kubia/manifests/sha256:98fde81970cbbf4bf3075b391272eaf64dc5d665a56355739c67579256a9839e: net/http: TLS handshake timeout
  Warning  Failed     4s    kubelet            Error: ErrImagePull
  Normal   BackOff    3s    kubelet            Back-off pulling image "mindnhand/kubia:v3"
  Warning  Failed     3s    kubelet            Error: ImagePullBackOff
[root@c7u6s5:~]# 
[root@c7u6s5:~]# kubectl get po
NAME                                   READY   STATUS             RESTARTS   AGE
kubia-deploy-canary-7449f84874-9bc8v   0/1     ImagePullBackOff   0          100s
kubia-deploy-canary-b49dcdcff-8t7wq    1/1     Running            0          22m
kubia-deploy-canary-b49dcdcff-l2p86    1/1     Running            0          22m
kubia-deploy-canary-b49dcdcff-qvsk8    1/1     Running            0          22m
kubia-deploy-canary-test               1/1     Running            0          12m
[root@c7u6s5:~]# kubectl get po
NAME                                   READY   STATUS    RESTARTS   AGE
kubia-deploy-canary-7449f84874-9bc8v   1/1     Running   0          101s
kubia-deploy-canary-b49dcdcff-8t7wq    1/1     Running   0          22m
kubia-deploy-canary-b49dcdcff-l2p86    1/1     Running   0          22m
kubia-deploy-canary-b49dcdcff-qvsk8    1/1     Running   0          22m
kubia-deploy-canary-test               1/1     Running   0          12m
[root@c7u6s5:~]# 

The output shows that the new pod is now running. Because the v3 image can only handle the first four requests and the readiness probe hits the pod once per second, the pod should shortly drop out of the ready state:

[root@c7u6s5:~]# kubectl get po
NAME                                   READY   STATUS    RESTARTS   AGE
kubia-deploy-canary-7449f84874-9bc8v   0/1     Running   0          111s
kubia-deploy-canary-b49dcdcff-8t7wq    1/1     Running   0          22m
kubia-deploy-canary-b49dcdcff-l2p86    1/1     Running   0          22m
kubia-deploy-canary-b49dcdcff-qvsk8    1/1     Running   0          22m
kubia-deploy-canary-test               1/1     Running   0          12m
[root@c7u6s5:~]# kubectl describe pod kubia-deploy-canary-7449f84874-9bc8v 
Name:         kubia-deploy-canary-7449f84874-9bc8v
Namespace:    default
Priority:     0
Node:         c7u6s7/192.168.122.26
Start Time:   Sun, 05 Sep 2021 22:34:06 +0800
Labels:       app=kubia
              pod-template-hash=7449f84874
              rtype=pod
Annotations:  cni.projectcalico.org/containerID: 77953892c21d0d574b45802f2e2d3bc105c1493729bec24a26e36bd264c6818e
              cni.projectcalico.org/podIP: 10.244.227.186/32
              cni.projectcalico.org/podIPs: 10.244.227.186/32
Status:       Running
IP:           10.244.227.186
IPs:
  IP:           10.244.227.186
Controlled By:  ReplicaSet/kubia-deploy-canary-7449f84874
Containers:
  kubia:
    Container ID:   docker://5a3eb4068048dede442c83815aa8ee3ac34a4238881844f6a046924c810cbfb5
    Image:          mindnhand/kubia:v3
    Image ID:       docker-pullable://mindnhand/kubia@sha256:98fde81970cbbf4bf3075b391272eaf64dc5d665a56355739c67579256a9839e
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Sun, 05 Sep 2021 22:35:46 +0800
    Ready:          False
    Restart Count:  0
    Readiness:      http-get http://:8080/ delay=0s timeout=1s period=1s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c5w75 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  kube-api-access-c5w75:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                 From               Message
  ----     ------     ----                ----               -------
  Normal   Scheduled  2m5s                default-scheduler  Successfully assigned default/kubia-deploy-canary-7449f84874-9bc8v to c7u6s7
  Warning  Failed     95s                 kubelet            Failed to pull image "mindnhand/kubia:v3": rpc error: code = Unknown desc = Error response from daemon: Get http
s://registry-1.docker.io/v2/mindnhand/kubia/manifests/sha256:98fde81970cbbf4bf3075b391272eaf64dc5d665a56355739c67579256a9839e: net/http: TLS handshake timeout
  Warning  Failed     95s                 kubelet            Error: ErrImagePull
  Normal   BackOff    94s                 kubelet            Back-off pulling image "mindnhand/kubia:v3"
  Warning  Failed     94s                 kubelet            Error: ImagePullBackOff
  Normal   Pulling    82s (x2 over 2m4s)  kubelet            Pulling image "mindnhand/kubia:v3"
  Normal   Pulled     25s                 kubelet            Successfully pulled image "mindnhand/kubia:v3" in 56.048181599s
  Normal   Created    25s                 kubelet            Created container kubia
  Normal   Started    25s                 kubelet            Started container kubia
  Warning  Unhealthy  6s (x16 over 21s)   kubelet            Readiness probe failed: HTTP probe failed with statuscode: 500
[root@c7u6s5:~]# kubectl get po
NAME                                   READY   STATUS    RESTARTS   AGE
kubia-deploy-canary-7449f84874-9bc8v   0/1     Running   0          2m11s
kubia-deploy-canary-b49dcdcff-8t7wq    1/1     Running   0          22m
kubia-deploy-canary-b49dcdcff-l2p86    1/1     Running   0          22m
kubia-deploy-canary-b49dcdcff-qvsk8    1/1     Running   0          22m
kubia-deploy-canary-test               1/1     Running   0          12m
[root@c7u6s5:~]# 

The output shows the readiness probe failing: although the pod's STATUS is still Running, its READY column shows 0/1, i.e. the pod is not ready.

This is the canary release at work: because the updated application is broken, the rollout is blocked, preventing the faulty version from replacing the version that is already running correctly. The healthy pods keep serving traffic and are not affected.

2.4. Rolling back the new version

Check the rollout status of the Deployment:

[root@c7u6s5:~]# kubectl rollout status deploy kubia-deploy-canary 
Waiting for deployment "kubia-deploy-canary" rollout to finish: 1 out of 3 new replicas have been updated...

^C[root@c7u6s5:~]# 

The output shows that one replica has not finished updating; the rollout is stuck. Next, look at the rollout history:

[root@c7u6s5:~]# kubectl rollout history deploy kubia-deploy-canary 
deployment.apps/kubia-deploy-canary 
REVISION  CHANGE-CAUSE
1         kubectl apply --filename=canary_release.yml --record=true
2         kubectl set image deploy kubia-deploy-canary kubia=mindnhand/kubia:v3 --record=true
[root@c7u6s5:~]# 

There are two revisions: the first corresponds to the command that originally created the Deployment, the second to the kubectl set image command used for the canary update of the container image.

The update can now be rolled back:

[root@c7u6s5:~]# kubectl rollout undo deploy kubia-deploy-canary --to-revision=1
deployment.apps/kubia-deploy-canary rolled back
[root@c7u6s5:~]# 

This rolls the updated application back. Check the state of the pods:

[root@c7u6s5:~]# kubectl get po
NAME                                   READY   STATUS        RESTARTS   AGE
kubia-deploy-canary-7449f84874-9bc8v   0/1     Terminating   0          10m
kubia-deploy-canary-b49dcdcff-8t7wq    1/1     Running       0          31m
kubia-deploy-canary-b49dcdcff-l2p86    1/1     Running       0          31m
kubia-deploy-canary-b49dcdcff-qvsk8    1/1     Running       0          31m
kubia-deploy-canary-test               1/1     Running       0          21m
[root@c7u6s5:~]# kubectl get po
NAME                                  READY   STATUS    RESTARTS   AGE
kubia-deploy-canary-b49dcdcff-8t7wq   1/1     Running   0          49m
kubia-deploy-canary-b49dcdcff-l2p86   1/1     Running   0          49m
kubia-deploy-canary-b49dcdcff-qvsk8   1/1     Running   0          49m
kubia-deploy-canary-test              1/1     Running   0          39m
[root@c7u6s5:~]# 
[root@c7u6s5:09.RollingUpdate]# kubectl get deploy
NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
kubia-deploy-canary   3/3     3            3           19h
[root@c7u6s5:09.RollingUpdate]# kubectl get rs
NAME                             DESIRED   CURRENT   READY   AGE
kubia-deploy-canary-7449f84874   0         0         0       18h
kubia-deploy-canary-b49dcdcff    3         3         3       19h
[root@c7u6s5:09.RollingUpdate]# 

With the rollback complete, check the rollout history again:

[root@c7u6s5:09.RollingUpdate]# kubectl rollout history deploy kubia-deploy-canary 
deployment.apps/kubia-deploy-canary 
REVISION  CHANGE-CAUSE
2         kubectl set image deploy kubia-deploy-canary kubia=mindnhand/kubia:v3 --record=true
3         kubectl apply --filename=canary_release.yml --record=true
[root@c7u6s5:09.RollingUpdate]# 

The output shows that the REVISION number keeps increasing with each rollout while the original revision 1 has disappeared: rolling back to revision 1 re-applied that configuration as a new revision 3.
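The configuration recorded for any single revision, including the image it used, can still be inspected; a sketch (not from the original session):

kubectl rollout history deploy kubia-deploy-canary --revision=3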

2.5. Canary release of a working new version

This time the Deployment is updated with the v4 docker image, which handles requests correctly. After the container starts, the readiness probe keeps succeeding throughout the 10-second minReadySeconds window, so the pod stays ready and the rolling update of the pods can complete normally.

Update the Deployment with the v4 image:

[root@c7u6s5:09.RollingUpdate]# kubectl set image deploy kubia-deploy-canary kubia=mindnhand/kubia:v4 --record=true
deployment.apps/kubia-deploy-canary image updated
[root@c7u6s5:09.RollingUpdate]# 

Check the state of the Deployment, the ReplicaSets, and the pods after the update:

[root@c7u6s5:09.RollingUpdate]# kubectl get deploy
NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
kubia-deploy-canary   3/3     1            3           19h
[root@c7u6s5:09.RollingUpdate]# kubectl get rs
NAME                             DESIRED   CURRENT   READY   AGE
kubia-deploy-canary-66c7f8db9c   1         1         0       14s
kubia-deploy-canary-7449f84874   0         0         0       19h
kubia-deploy-canary-b49dcdcff    3         3         3       19h
[root@c7u6s5:09.RollingUpdate]# kubectl get po
NAME                                   READY   STATUS              RESTARTS   AGE
kubia-deploy-canary-66c7f8db9c-d7vlw   0/1     ContainerCreating   0          21s
kubia-deploy-canary-b49dcdcff-8t7wq    1/1     Running             0          19h
kubia-deploy-canary-b49dcdcff-l2p86    1/1     Running             0          19h
kubia-deploy-canary-b49dcdcff-qvsk8    1/1     Running             0          19h
kubia-deploy-canary-test               1/1     Running             0          19h
[root@c7u6s5:09.RollingUpdate]# 
[root@c7u6s5:09.RollingUpdate]# kubectl get po
NAME                                   READY   STATUS         RESTARTS   AGE
kubia-deploy-canary-66c7f8db9c-d7vlw   0/1     ErrImagePull   0          99s
kubia-deploy-canary-b49dcdcff-8t7wq    1/1     Running        0          19h
kubia-deploy-canary-b49dcdcff-l2p86    1/1     Running        0          19h
kubia-deploy-canary-b49dcdcff-qvsk8    1/1     Running        0          19h
kubia-deploy-canary-test               1/1     Running        0          19h
[root@c7u6s5:09.RollingUpdate]# kubectl get po
NAME                                   READY   STATUS    RESTARTS   AGE
kubia-deploy-canary-66c7f8db9c-9hkjb   1/1     Running   0          105s
kubia-deploy-canary-66c7f8db9c-d7vlw   1/1     Running   0          3m41s
kubia-deploy-canary-66c7f8db9c-trk25   1/1     Running   0          94s
kubia-deploy-canary-test               1/1     Running   0          19h
[root@c7u6s5:09.RollingUpdate]# 

The first new pod took longer to start because its image had to be pulled, so this step of the update lasted a little longer. The rolling update then replaced the three old pods one by one, eventually updating all of them to the new version; the AGE column in the output above also shows that they were replaced one at a time.

After the update completes, check the Deployment and ReplicaSet resources:

[root@c7u6s5:09.RollingUpdate]# kubectl get deploy
NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
kubia-deploy-canary   3/3     3            3           19h
[root@c7u6s5:09.RollingUpdate]# kubectl get rs
NAME                             DESIRED   CURRENT   READY   AGE
kubia-deploy-canary-66c7f8db9c   3         3         3       13m
kubia-deploy-canary-7449f84874   0         0         0       19h
kubia-deploy-canary-b49dcdcff    0         0         0       19h
[root@c7u6s5:09.RollingUpdate]# 

A new ReplicaSet has been created and now owns all of the new pods.

Check the rollout history:

[root@c7u6s5:09.RollingUpdate]# kubectl rollout history deploy kubia-deploy-canary 
deployment.apps/kubia-deploy-canary 
REVISION  CHANGE-CAUSE
2         kubectl set image deploy kubia-deploy-canary kubia=mindnhand/kubia:v3 --record=true
3         kubectl apply --filename=canary_release.yml --record=true
4         kubectl set image deploy kubia-deploy-canary kubia=mindnhand/kubia:v4 --record=true
[root@c7u6s5:09.RollingUpdate]# 

Each REVISION records the command, and therefore the image version, used for that rollout, which makes the history easy to follow.

The minimum ready time configured above is rather short, only 10 seconds; to observe the pod-by-pod update more clearly, simply set minReadySeconds to a larger value.
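For example, minReadySeconds could be raised to 30 seconds with a patch instead of editing the file (a sketch; the value is arbitrary):

kubectl patch deploy kubia-deploy-canary -p '{"spec":{"minReadySeconds":30}}'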

From the temporary pod created earlier, access the service provided by the target pods through the previously created Service:

[root@c7u6s5:09.RollingUpdate]# kubectl get svc
NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
kubernetes                ClusterIP   10.96.0.1        <none>        443/TCP   34d
kubia-deploy-canary-svc   ClusterIP   10.100.138.230   <none>        80/TCP    20h
[root@c7u6s5:09.RollingUpdate]# 
[root@c7u6s5:09.RollingUpdate]# kubectl exec -it kubia-deploy-canary-test -- bash
bash-5.1# curl http://10.100.138.230:80
This is v4 running in pod kubia-deploy-canary-66c7f8db9c-d7vlw
bash-5.1# curl http://10.100.138.230:80
This is v4 running in pod kubia-deploy-canary-66c7f8db9c-trk25
bash-5.1# curl http://10.100.138.230:80
This is v4 running in pod kubia-deploy-canary-66c7f8db9c-d7vlw
bash-5.1# 

This completes the canary release walkthrough: releasing a broken new version and then a working new version.

