k8s Rolling Updates and Health Checks

Updates and Rollbacks

Rolling Update

A rolling update replaces only a small batch of replicas at a time; once that batch succeeds it moves on to the next, until every replica has been updated. Its biggest benefit is zero downtime.
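The declarative way used below is to edit the manifest and re-apply it; the same update can also be triggered and watched imperatively. A quick sketch, using the deployment and container names created later in this example:

kubectl set image deployment/httpd httpd=httpd:2.4
kubectl rollout status deployment/httpd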

Example: deploy httpd 2.2 and update it to 2.4

Pull the images on every node

docker pull httpd:2.2
docker pull httpd:2.4

Edit the yml file

mkdir rolingup
cd rolingup/
vim httpd-deploy.yml


apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpd
  labels:
    run: httpd
spec:
  replicas: 3
  selector:
    matchLabels:
      run: httpd
  template:
    metadata:
      labels:
        run: httpd
    spec:
      containers:
      - name: httpd
        image: httpd:2.2
        ports:
        - containerPort: 80

Run

kubectl apply -f httpd-deploy.yml

Check
The image version is currently 2.2

kubectl get deployments.apps httpd -o wide
NAME    READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES      SELECTOR
httpd   3/3     3            3           17s   httpd        httpd:2.2   run=httpd

Check the pods

kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE     IP             NODE    NOMINATED NODE   READINESS GATES
httpd-74d5cc74db-htrkb   1/1     Running   0          2m40s   10.244.3.89    node2   <none>           <none>
httpd-74d5cc74db-wdgb4   1/1     Running   0          2m40s   10.244.1.104   node1   <none>           <none>
httpd-74d5cc74db-xflx4   1/1     Running   0          2m40s   10.244.1.103   node1   <none>           <none>

Edit the yml file and change the image to httpd:2.4

apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpd
  labels:
    run: httpd
spec:
  replicas: 3
  selector:
    matchLabels:
      run: httpd
  template:
    metadata:
      labels:
        run: httpd
    spec:
      containers:
      - name: httpd
        image: httpd:2.4
        ports:
        - containerPort: 80

Apply the yml again

kubectl apply -f httpd-deploy.yml 
deployment.apps/httpd configured

Check
It has been updated to 2.4

kubectl get deployments.apps httpd -o wide
NAME    READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES      SELECTOR
httpd   3/3     3            3           82s   httpd        httpd:2.4   run=httpd

View details

kubectl describe deployments.apps httpd
Name:                   httpd
Namespace:              default
CreationTimestamp:      Wed, 22 Jul 2020 10:44:52 +0800
Labels:                 run=httpd
Annotations:            deployment.kubernetes.io/revision: 2
                        kubectl.kubernetes.io/last-applied-configuration:
                          {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"run":"httpd"},"name":"httpd","namespace":"default"},"s...
Selector:               run=httpd
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  run=httpd
  Containers:
   httpd:
    Image:        httpd:2.4
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   httpd-74d5cc74db (3/3 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  117s  deployment-controller  Scaled up replica set httpd-cd94c9b6d to 3
  Normal  ScalingReplicaSet  61s   deployment-controller  Scaled up replica set httpd-74d5cc74db to 1
  Normal  ScalingReplicaSet  60s   deployment-controller  Scaled down replica set httpd-cd94c9b6d to 2
  Normal  ScalingReplicaSet  60s   deployment-controller  Scaled up replica set httpd-74d5cc74db to 2
  Normal  ScalingReplicaSet  59s   deployment-controller  Scaled down replica set httpd-cd94c9b6d to 1
  Normal  ScalingReplicaSet  59s   deployment-controller  Scaled up replica set httpd-74d5cc74db to 3
  Normal  ScalingReplicaSet  58s   deployment-controller  Scaled down replica set httpd-cd94c9b6d to 0

Rollback

Every time you update the application with kubectl apply, the current configuration is recorded and saved as a revision, so you can later roll back to a specific revision.
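Before rolling back, the configuration recorded for any single revision can be inspected (standard kubectl; the deployment name follows this example, and the revision number is only an illustration):

kubectl rollout history deployment httpd --revision=3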

Example: roll httpd back to a specific revision

Pull the images:

docker pull httpd:2.4.37
docker pull httpd:2.4.38
docker pull httpd:2.4.39

Create three yml files, v1, v2 and v3, one for each httpd version

cp httpd-deploy.yml httpd-v1.yml
cp httpd-v1.yml httpd-v2.yml 
cp httpd-v1.yml httpd-v3.yml 

Edit each yml file and change the image version

vim httpd-v1.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpd
  labels:
    run: httpd
spec:
  replicas: 3
  selector:
    matchLabels:
      run: httpd
  template:
    metadata:
      labels:
        run: httpd
    spec:
      containers:
      - name: httpd
        image: httpd:2.4.37
        ports:
        - containerPort: 80

For v2 and v3, only the image field is changed to the corresponding httpd version.

v2

...
    spec:
      containers:
      - name: httpd
        image: httpd:2.4.38
        ports:
        - containerPort: 80



v3

...
    spec:
      containers:
      - name: httpd
        image: httpd:2.4.39
        ports:
        - containerPort: 80

Apply each file and check the version

v1

kubectl apply -f httpd-v1.yml --record

## --record    writes the command used into the revision's CHANGE-CAUSE, making it easy to tell which file each revision corresponds to when rolling back

kubectl get deployments.apps httpd -o wide
NAME    READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES         SELECTOR
httpd   3/3     3            3           9m34s   httpd        httpd:2.4.37   run=httpd

v2

kubectl apply -f httpd-v2.yml --record

kubectl get deployments.apps httpd -o wide
NAME    READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES         SELECTOR
httpd   3/3     3            3           11m   httpd        httpd:2.4.38   run=httpd

v3

kubectl apply -f httpd-v3.yml --record

kubectl get deployments.apps httpd -o wide
NAME    READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES         SELECTOR
httpd   3/3     3            3           11m   httpd        httpd:2.4.39   run=httpd

Check the revisions

kubectl rollout history deployment httpd
deployment.apps/httpd 
REVISION  CHANGE-CAUSE
1         <none>
2         <none>
3         kubectl apply --filename=httpd-v1.yml --record=true
4         kubectl apply --filename=httpd-v2.yml --record=true
5         kubectl apply --filename=httpd-v3.yml --record=true

Roll back to the v1 configuration (httpd 2.4.37), which is revision 3. Revisions 1 and 2 show <none> because the earlier applies in this article were run without --record.

kubectl rollout undo deployment httpd --to-revision=3
deployment.apps/httpd rolled back

##--to-revision=   the revision number from the rollout history

Check

kubectl get deployments.apps httpd -o wide
NAME    READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES         SELECTOR
httpd   3/3     2            3           15m   httpd        httpd:2.4.37   run=httpd

Health Check

Self-healing is one of Kubernetes' most powerful features. By default it works by automatically restarting containers that fail; on top of that, Liveness and Readiness probes let you configure much finer-grained health checks.

This makes it possible to implement:

  • Zero-downtime deployments
  • Avoiding the rollout of broken images
  • Safer rolling updates

Default health check

Every container runs a process when it starts, specified by the Dockerfile's CMD or ENTRYPOINT. If that process exits with a return code of zero it is considered a normal exit; a non-zero return code means the container has failed, and Kubernetes restarts the container according to the configured restart policy.

Example

Simulate a failure: start a busybox pod whose process exits after 10 seconds. Because the process does not exit normally (its return code is non-zero), Kubernetes considers the container failed and restarts it (restartPolicy is set to OnFailure, i.e. restart on failure; the default is Always).

mkdir health
cd health/
vim health.yml


apiVersion: v1
kind: Pod
metadata:
  labels:
    test: healthcheck
  name: healthcheck
spec:
  restartPolicy: OnFailure
  containers:
  - name: healthcheck
    image: busybox
    args:
    - /bin/sh
    - -c
    - sleep 10;exit 1

Run

kubectl apply -f health.yml
pod/healthcheck created   

Check the pod after a while

It has already been restarted twice

kubectl get pod
NAME          READY   STATUS   RESTARTS   AGE
healthcheck   0/1     Error    2          107s
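To confirm that the restarts were caused by the non-zero return code, the exit code of the previous container run can be read back. A sketch assuming the single-container pod above; for this example it should print 1:

kubectl get pod healthcheck -o jsonpath='{.status.containerStatuses[0].lastState.terminated.exitCode}'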

Liveness probes

In many cases an application fails without its process exiting. For example, a web server returning 500 internal errors might be overloaded or deadlocked on a resource while the httpd process never exits abnormally. Restarting the container is often the most direct and effective fix, and that is exactly what a Liveness probe is for.

Example

A Liveness probe lets you define your own condition for deciding whether a container is healthy. If the probe fails, the container is restarted automatically.

vim livenss.yml

apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness
spec:
  restartPolicy: OnFailure
  containers:
  - name: liveness
    image: busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy;sleep 30;rm -rf /tmp/healthy;sleep 600
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 10          ## start probing 10 seconds after the container starts
      periodSeconds: 5                 ## probe every 5 seconds

This starts a busybox pod that creates the file /tmp/healthy, deletes it after 30 seconds, and then sleeps for 600 seconds.

The liveness probe runs cat /tmp/healthy (initialDelaySeconds starts probing 10 seconds after the container starts, periodSeconds probes every 5 seconds). If the probe fails 3 times in a row, Kubernetes restarts the container.

Run

kubectl apply -f livenss.yml 
pod/liveness created

Use watch to observe it in real time

watch kubectl get pod

Every 2.0s: kubectl get pod                                                                    Wed Jul 22 11:17:53 2020

NAME          READY   STATUS             RESTARTS   AGE
healthcheck   0/1     CrashLoopBackOff   6          12m
liveness      1/1     Running            3          6m11s

You can see it has been restarted 3 times.
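exec is only one probe type; the same idea can also be expressed as an HTTP check. A minimal sketch, assuming a hypothetical application that serves a /healthz endpoint on port 8080 (not the busybox example above):

    livenessProbe:
      httpGet:
        path: /healthz            ## hypothetical health endpoint
        port: 8080                ## hypothetical application port
      initialDelaySeconds: 10
      periodSeconds: 5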

Readiness probes

A Liveness probe tells Kubernetes when to restart a container to heal it; a Readiness probe tells Kubernetes when a container can be added to a Service's load-balancing pool and start serving traffic.

Example

Readiness is configured in essentially the same way as liveness

vim readiness.yml 	

apiVersion: v1
kind: Pod
metadata:
  labels:
    test: readiness
  name: readiness
spec:
  restartPolicy: OnFailure
  containers:
  - name: readiness
    image: busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy;sleep 30;rm -rf /tmp/healthy;sleep 600
    readinessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 10
      periodSeconds: 5

Run

kubectl apply -f readiness.yml 
pod/readiness created

Check the Pod (it is not ready right after creation; around the 19-second mark the probe succeeds and the pod is marked ready; once /tmp/healthy is removed the probe fails 3 times in a row and the pod is marked not ready again)

kubectl get pod readiness
NAME        READY   STATUS    RESTARTS   AGE
readiness   0/1     Running   0          89s
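To watch the READY column flip as the probe first succeeds and later starts failing, the standard watch flag can be used:

kubectl get pod readiness -w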

View details

kubectl describe pod readiness
Name:         readiness
Namespace:    default
Priority:     0
Node:         node2/192.168.1.30
Start Time:   Wed, 22 Jul 2020 11:22:14 +0800
Labels:       test=readiness
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"test":"readiness"},"name":"readiness","namespace":"default"},"spec...
Status:       Running
IP:           10.244.3.101
IPs:
  IP:  10.244.3.101
Containers:
  readiness:
    Container ID:  docker://f6322cfd82f9b32d90e6a4f59687a8d300fdbe3edabe489ee9430435a4141868
    Image:         busybox
    Image ID:      docker-pullable://busybox@sha256:9ddee63a712cea977267342e8750ecbc60d3aab25f04ceacfa795e6fce341793
    Port:          <none>
    Host Port:     <none>
    Args:
      /bin/sh
      -c
      touch /tmp/healthy;sleep 30;rm -rf /tmp/healthy;sleep 600
    State:          Running
      Started:      Wed, 22 Jul 2020 11:22:31 +0800
    Ready:          False
    Restart Count:  0
    Readiness:      exec [cat /tmp/healthy] delay=10s timeout=1s period=5s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-hvq4p (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  default-token-hvq4p:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-hvq4p
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  <unknown>          default-scheduler  Successfully assigned default/readiness to node2
  Normal   Pulling    119s               kubelet, node2     Pulling image "busybox"
  Normal   Pulled     104s               kubelet, node2     Successfully pulled image "busybox"
  Normal   Created    104s               kubelet, node2     Created container readiness
  Normal   Started    103s               kubelet, node2     Started container readiness
  Warning  Unhealthy  1s (x15 over 71s)  kubelet, node2     Readiness probe failed: cat: can't open '/tmp/healthy': No such file or directory

Liveness: when the probe fails, the container is restarted.
Readiness: when the probe fails, the pod is marked not ready.
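The two probes are independent and are often declared together on the same container. A minimal sketch reusing the exec check from the examples above:

    livenessProbe:                ## restart the container after 3 consecutive failures
      exec:
        command: ["cat", "/tmp/healthy"]
      initialDelaySeconds: 10
      periodSeconds: 5
    readinessProbe:               ## take the pod out of Service endpoints while failing
      exec:
        command: ["cat", "/tmp/healthy"]
      initialDelaySeconds: 10
      periodSeconds: 5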

Using health checks with scale up

Using health checks when scaling up prevents requests from being sent to pods that are not yet ready.

Example

vim health-httpd.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpd
  labels:
    run: httpd
spec:
  replicas: 3
  selector:
    matchLabels:
      run: httpd
  template:
    metadata:
      labels:
        run: httpd
    spec:
      containers:
      - name: httpd
        image: httpd:2.4
        ports:
        - containerPort: 8080
        readinessProbe:
          httpGet:
            scheme: HTTP
            path: /root/healthy
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 5
---                  ## separates the Deployment from the Service
apiVersion: v1
kind: Service
metadata:
  name: httpd-svc
spec:
  selector:
    run: httpd
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 80

Run

kubectl apply -f health-httpd.yml 
deployment.apps/httpd created
service/httpd-svc created

Because nothing serves the probed path, the pods never pass the readiness check and therefore never receive requests from the Service. The probe repeats every 5 seconds until it succeeds, i.e. until an HTTP GET against the configured path and port returns a status code in the 200-399 range.

Check

kubectl get pod -o wide
NAME                    READY   STATUS    RESTARTS   AGE    IP             NODE    NOMINATED NODE   READINESS GATES
httpd-8bcf44865-4kxfn   0/1     Running   0          3m8s   10.244.1.117   node1   <none>           <none>
httpd-8bcf44865-7f4gz   0/1     Running   0          3m8s   10.244.3.103   node2   <none>           <none>
httpd-8bcf44865-k6ph2   0/1     Running   0          3m8s   10.244.3.102   node2   <none>           <none>
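For comparison, a readiness probe that would actually pass against the stock httpd:2.4 image has to target something the container really serves, for example the default index page on port 80. A sketch, not part of the manifest above:

        readinessProbe:
          httpGet:
            scheme: HTTP
            path: /               ## httpd serves its default index page here
            port: 80              ## the port httpd actually listens on
          initialDelaySeconds: 10
          periodSeconds: 5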

Using health checks in rolling updates

During a rolling update, Kubernetes starts new replicas to gradually replace the old ones, but starting new replicas can run into the following problems:

  • A new replica cannot serve requests until it has finished starting up
  • A configuration error keeps the new replica from becoming ready (for example it cannot connect to a cache or a database)

Because the new replicas never exit abnormally, the default health check considers them ready, so Kubernetes keeps replacing old replicas with new ones; once every old replica has been replaced, the whole system becomes unusable.

Configuring a Readiness probe ensures a new replica is only added to the Service after it passes the check.
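Because the Deployment controller also waits for new pods to become Ready before scaling down more old ones, a stalled rollout can be watched and, if necessary, aborted (standard kubectl commands; the deployment name matches this example):

kubectl rollout status deployment httpd
kubectl rollout undo deployment httpd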

Example

vim health-roll.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpd
  labels:
    run: httpd
spec:
  replicas: 10
  selector:
    matchLabels:
      run: httpd
  template:
    metadata:
      labels:
        run: httpd
    spec:
      containers:
      - name: busy
        image: busybox
        args:
        - /bin/sh
        - -c
        - sleep 10;touch /tmp/healthy;sleep 30000
        readinessProbe:
          exec:
            command:
            - cat
            - /tmp/healthy
          initialDelaySeconds: 10
          periodSeconds: 5

Run

kubectl apply -f health-roll.yml 
deployment.apps/httpd created

Because /tmp/healthy exists, the Readiness probe succeeds after 10 seconds and the deployment completes

kubectl get deployments.apps httpd 
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
httpd   10/10   10           10          96s

kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
httpd-5f68d89dff-29jgj   1/1     Running   0          113s
httpd-5f68d89dff-7wtlh   1/1     Running   0          113s
httpd-5f68d89dff-95z8d   1/1     Running   0          113s
httpd-5f68d89dff-bphrx   1/1     Running   0          113s
httpd-5f68d89dff-d8ddg   1/1     Running   0          113s
httpd-5f68d89dff-gxqlh   1/1     Running   0          113s
httpd-5f68d89dff-jpvjv   1/1     Running   0          113s
httpd-5f68d89dff-r8xng   1/1     Running   0          113s
httpd-5f68d89dff-wg4c9   1/1     Running   0          113s
httpd-5f68d89dff-x8cl7   1/1     Running   0          113s

Now create a second yml file whose container never creates /tmp/healthy

vim health-roll-v1.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpd
  labels:
    run: httpd
spec:
  replicas: 10
  selector:
    matchLabels:
      run: httpd
  template:
    metadata:
      labels:
        run: httpd
    spec:
      containers:
      - name: busy
        image: busybox
        args:
        - /bin/sh
        - -c
        - sleep 30000
        readinessProbe:
          exec:
            command:
            - cat
            - /tmp/healthy
          initialDelaySeconds: 10
          periodSeconds: 5

Run

kubectl apply -f health-roll-v1.yml --record 
deployment.apps/httpd configured

Since the file is never created, the readiness probe can never pass

kubectl get deployments.apps httpd
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
httpd   8/10    5            8           6m42s

5 new replicas have been created (UP-TO-DATE) but none of them is ready; the 8 ready replicas are all old ones, for a total of 8 + 5 = 13 pods.
This happens because of

maxSurge and maxUnavailable (both default to 25%)
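The old and new ReplicaSets behind those numbers can be listed directly (standard command; the hash suffixes differ per cluster):

kubectl get rs -l run=httpd -o wide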

Roll back to the previous revision

kubectl rollout undo deployment httpd --to-revision=1
deployment.apps/httpd rolled back

Check

kubectl get deployments.apps httpd
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
httpd   10/10   10           10          11m

Setting maxSurge and maxUnavailable manually

maxSurge:
controls how many pods the update may create above DESIRED, i.e. the maximum total number of pods, rounded up.
With the default 25% and 10 replicas: 10 + 10*25% = 12.5, rounded up to 13 pods in total.

maxUnavailable:
controls how many pods may be unavailable during the update, rounded down.
With the default 25% and 10 replicas: 10*25% = 2.5, rounded down to 2 unavailable, so 10 - 2 = 8 pods must stay available.

vim health-roll-v2.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpd
  labels:
    run: httpd
spec:
  strategy:
    rollingUpdate:
      maxSurge: 35%
      maxUnavailable: 35%
  replicas: 10
  selector:
    matchLabels:
      run: httpd
  template:
    metadata:
      labels:
        run: httpd
    spec:
      containers:
      - name: busy
        image: busybox
        args:
        - /bin/sh
        - -c
        - sleep 30000
        readinessProbe:
          exec:
            command:
            - cat
            - /tmp/healthy
          initialDelaySeconds: 10
          periodSeconds: 5

Run

kubectl apply -f health-roll-v2.yml --record 
deployment.apps/httpd configured

Check

kubectl get deployments.apps httpd -o wide
NAME    READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES    SELECTOR
httpd   7/10    7            7           13m   busy         busybox   run=httpd

7 new replicas have been created (UP-TO-DATE) but none of them is ready; the 7 ready replicas are all old ones.
Maximum total: 10 + 10*35% = 13.5, rounded up to 14 pods (7 old + 7 new, as the pod list below shows).
Minimum available: 10*35% = 3.5, rounded down to 3 unavailable, so 10 - 3 = 7 pods remain available.

kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
httpd-5f68d89dff-29jgj   1/1     Running   0          15m
httpd-5f68d89dff-7wtlh   1/1     Running   0          15m
httpd-5f68d89dff-gxqlh   1/1     Running   0          15m
httpd-5f68d89dff-jpvjv   1/1     Running   0          15m
httpd-5f68d89dff-r8xng   1/1     Running   0          15m
httpd-5f68d89dff-wg4c9   1/1     Running   0          15m
httpd-5f68d89dff-x8cl7   1/1     Running   0          15m
httpd-94f7d5684-4sjcl    0/1     Running   0          2m41s
httpd-94f7d5684-6t5tk    0/1     Running   0          2m41s
httpd-94f7d5684-8thnp    0/1     Running   0          2m41s
httpd-94f7d5684-kqk75    0/1     Running   0          2m41s
httpd-94f7d5684-npwvh    0/1     Running   0          2m41s
httpd-94f7d5684-pfvdw    0/1     Running   0          2m41s
httpd-94f7d5684-s88l2    0/1     Running   0          2m41s
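The strategy actually in effect can be read back from the Deployment spec (a quick check; jsonpath output formatting varies slightly between kubectl versions):

kubectl get deployment httpd -o jsonpath='{.spec.strategy.rollingUpdate}'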