Rolling upgrades and rollbacks with Deployments on Kubernetes 1.8+

A single file is enough to upgrade and roll back between two versions.

kubectl apply is supported on Kubernetes 1.8 and above.

I'll use the tmanager service from our test environment as the example.
In tmanager.yaml I only change the image, so I'll show just that part of the file.

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tmanager
  namespace: paas
  labels:
    app: tmanager
spec:
  replicas: 2
  selector:
    matchLabels:
      app: tmanager
  template:
    metadata:
      labels:
        app: tmanager
    spec:
      containers:
      - name: tmanager
        image: registry:5000/wecloud/new-tm:1.2.4.33-1
        imagePullPolicy: Always
        env:
          - name: ImageName
            valueFrom:
              configMapKeyRef:
                name: wellcloud-configmap
                key: wellcloud_config_web_tmanager_imageName
   

First, create the Deployment:
kubectl apply -f tmanager.yaml

[root@kube-m ~]# kubectl get pod --all-namespaces | grep tman
paas          tmanager-5784b4f9cb-qxfjt               1/1     Running                      0          29m
paas          tmanager-5784b4f9cb-sd2hd               1/1     Running                      0          29m

Describe one of the pods and note the image:

[root@kube-m ~]# kubectl describe pod tmanager-5784b4f9cb-sd2hd -n paas
Name:               tmanager-5784b4f9cb-sd2hd
Namespace:          paas
Priority:           0
PriorityClassName:  <none>
Node:               kube-node7/192.168.40.112
Start Time:         Fri, 12 Jul 2019 17:01:10 +0800
Labels:             app=tmanager
                    pod-template-hash=5784b4f9cb
Annotations:        <none>
Status:             Running
IP:                 172.17.89.10
Controlled By:      ReplicaSet/tmanager-5784b4f9cb
Containers:
  tmanager:
    Container ID:   docker://219d01489647743660be27cc317c362430745b0a11ab5c77dc1b9b1b44f94c31
    Image:          registry:5000/wecloud/new-tm:1.2.4.33-1
    Image ID:       docker-pullable://registry:5000/wecloud/new-tm@sha256:88a057058b02cf07f0bee3cadcd949180c0e545b1619857020e577c688
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Fri, 12 Jul 2019 17:01:13 +0800
    Ready:          True
    Restart Count:  0
    Environment:
      ImageName:        <set to the key 'wellcloud_config_web_tmanager_imageName' of config map 'wellcloud-configmap'>        Optional: false
      TM_CLIENT:        <set to the key 'wellcloud_config_web_tmanager_tmClient' of config map 'wellcloud-configmap'>         Optional: false
      CLIENT_ENV:       <set to the key 'wellcloud_config_web_tmanager_clientEnv' of config map 'wellcloud-configmap'>        Optional: false
      hidePhoneNumber:  <set to the key 'wellcloud_config_web_tmanager_hidePhoneNumber' of config map 'wellcloud-configmap'>  Optional: false
      NODE_ENV:         <set to the key 'wellcloud_config_web_tmanager_nodeEnv' of config map 'wellcloud-configmap'>          Optional: false
      ivrUrl:           <set to the key 'wellcloud_config_web_tmanager_ivrUrl' of config map 'wellcloud-configmap'>           Optional: false
      ocm_host:         <set to the key 'wellcloud_config_web_tmanager_ocmHost' of config map 'wellcloud-configmap'>          Optional: false
      ocm_port:         <set to the key 'wellcloud_config_web_tmanager_ocmPort' of config map 'wellcloud-configmap'>          Optional: false
    Mounts:
      /home/listen/Apps/logs from logs (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-l8lpx (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  logs:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  paas-log-nfs-pvc
    ReadOnly:   false
  default-token-l8lpx:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-l8lpx
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason   Age   From                 Message
  ----    ------   ----  ----                 -------
  Normal  Pulling  35s   kubelet, kube-node7  pulling image "registry:5000/wecloud/new-tm:1.2.4.33-1"
  Normal  Pulled   35s   kubelet, kube-node7  Successfully pulled image "registry:5000/wecloud/new-tm:1.2.4.33-1"
  Normal  Created  35s   kubelet, kube-node7  Created container
  Normal  Started  35s   kubelet, kube-node7  Started container

Now change the image in tmanager.yaml:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tmanager
  namespace: paas
  labels:
    app: tmanager
spec:
  replicas: 2
  selector:
    matchLabels:
      app: tmanager
  template:
    metadata:
      labels:
        app: tmanager
    spec:
      containers:
      - name: tmanager
        image: registry:5000/wecloud/new-tm:1.2.4.33
        #image: registry:5000/wecloud/new-tm:1.2.3.213
        imagePullPolicy: Always

Rolling upgrade:

[root@kube-m ~]# kubectl apply -f tmanager.yaml
service/tmanager unchanged
deployment.apps/tmanager configured

The file actually contains both the Deployment and a Service; I only changed the Deployment's image, so the Service is reported as unchanged.
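As an aside, the same rolling update can be triggered without editing the YAML at all, by patching the image on the live Deployment. A sketch using the names from this example; note that the next kubectl apply of tmanager.yaml will overwrite whatever set image changed:

```shell
# Patch the container image in place; this starts the same rolling update
# as editing the image field in tmanager.yaml and re-applying it
kubectl set image deployment/tmanager tmanager=registry:5000/wecloud/new-tm:1.2.4.33 -n paas
```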

Check the pods after the upgrade:

[root@kube-m ~]# kubectl get pod --all-namespaces | grep tman
paas          tmanager-5784b4f9cb-qxfjt               1/1     Running                      0          29m
paas          tmanager-5784b4f9cb-sd2hd               1/1     Running                      0          29m
[root@kube-m ~]# kubectl get pod --all-namespaces | grep tman
paas          tmanager-7fc7685ddb-4ztx4               1/1     Running                      0          27s
paas          tmanager-7fc7685ddb-p6smx               1/1     Running                      0          25s
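Rather than polling kubectl get pod, the rollout can be watched directly; kubectl rollout status blocks until every new replica is ready (same deployment name and namespace as above):

```shell
# Block until the rolling update completes (or report why it is stuck)
kubectl rollout status deployment/tmanager -n paas

# The old and new ReplicaSets can also be compared while the rollout runs
kubectl get rs -n paas -l app=tmanager
```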

Details:

Name:               tmanager-7fc7685ddb-p6smx
Namespace:          paas
Priority:           0
PriorityClassName:  <none>
Node:               kube-node7/192.168.40.112
Start Time:         Fri, 12 Jul 2019 17:43:30 +0800
Labels:             app=tmanager
                    pod-template-hash=7fc7685ddb
Annotations:        <none>
Status:             Running
IP:                 172.17.89.10
Controlled By:      ReplicaSet/tmanager-7fc7685ddb
Containers:
  tmanager:
    Container ID:   docker://536f35b3fbd19674f2e8b12916f4a94fd977b858f7142341a8bcb8570f53de4c
    Image:          registry:5000/wecloud/new-tm:1.2.4.33
    Image ID:       docker-pullable://registry:5000/wecloud/new-tm@sha256:88a057058b02cf07f0bee3cadcd949180c0e545b1619857020e577c6881e2876
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Fri, 12 Jul 2019 17:43:33 +0800
    Ready:          True
    Restart Count:  0
    Environment:
      ImageName:        <set to the key 'wellcloud_config_web_tmanager_imageName' of config map 'wellcloud-configmap'>        Optional: false
      TM_CLIENT:        <set to the key 'wellcloud_config_web_tmanager_tmClient' of config map 'wellcloud-configmap'>         Optional: false
      CLIENT_ENV:       <set to the key 'wellcloud_config_web_tmanager_clientEnv' of config map 'wellcloud-configmap'>        Optional: false
      hidePhoneNumber:  <set to the key 'wellcloud_config_web_tmanager_hidePhoneNumber' of config map 'wellcloud-configmap'>  Optional: false
      NODE_ENV:         <set to the key 'wellcloud_config_web_tmanager_nodeEnv' of config map 'wellcloud-configmap'>          Optional: false
      ivrUrl:           <set to the key 'wellcloud_config_web_tmanager_ivrUrl' of config map 'wellcloud-configmap'>           Optional: false
      ocm_host:         <set to the key 'wellcloud_config_web_tmanager_ocmHost' of config map 'wellcloud-configmap'>          Optional: false
      ocm_port:         <set to the key 'wellcloud_config_web_tmanager_ocmPort' of config map 'wellcloud-configmap'>          Optional: false
    Mounts:
      /home/listen/Apps/logs from logs (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-l8lpx (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  logs:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  paas-log-nfs-pvc
    ReadOnly:   false
  default-token-l8lpx:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-l8lpx
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason   Age   From                 Message
  ----    ------   ----  ----                 -------
  Normal  Pulling  117s  kubelet, kube-node7  pulling image "registry:5000/wecloud/new-tm:1.2.4.33"
  Normal  Pulled   117s  kubelet, kube-node7  Successfully pulled image "registry:5000/wecloud/new-tm:1.2.4.33"
  Normal  Created  116s  kubelet, kube-node7  Created container
  Normal  Started  116s  kubelet, kube-node7  Started container

The image has changed, and tmanager stayed available the whole time I was using it.

Rollback

kubectl rollout undo deployment tmanager -n paas
(general form: kubectl rollout undo deployment <name> -n <namespace>)

To make the switching easier to observe, I'll now point the YAML at an image tag that doesn't exist:

spec:
      containers:
      - name: tmanager
        image: registry:5000/wecloud/new-tm:1.2.4.33-111111
        #image: registry:5000/wecloud/new-tm:1.2.3.213
        imagePullPolicy: Always

kubectl apply -f tmanager.yaml

[root@kube-m ~]# kubectl get pod --all-namespaces | grep tman
paas          tmanager-5bf78cc467-nkw5d               0/1     ErrImagePull                 0          17s
paas          tmanager-7fc7685ddb-4ztx4               1/1     Running                      0          13m
paas          tmanager-7fc7685ddb-p6smx               1/1     Running                      0          13m

The new pod can't start because the image doesn't exist.
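The failure reason can be confirmed from the pod's events (using the failing pod name from the listing above; names differ on every run):

```shell
# The Events section at the end shows the failed pull of the nonexistent tag
kubectl describe pod tmanager-5bf78cc467-nkw5d -n paas
```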
Roll back:

[root@kube-m ~]# kubectl rollout undo deployment tmanager -n paas
deployment.extensions/tmanager rolled back
[root@kube-m ~]# kubectl get pod --all-namespaces | grep tman
paas          tmanager-7fc7685ddb-4ztx4               1/1     Running                      0          15m
paas          tmanager-7fc7685ddb-p6smx               1/1     Running                      0          14m

The pods are back on the previous registry:5000/wecloud/new-tm:1.2.4.33 version. Then I roll back once more:

[root@kube-m ~]# kubectl rollout undo deployment tmanager -n paas
deployment.extensions/tmanager rolled back
[root@kube-m ~]# kubectl get pod --all-namespaces | grep tman
paas          tmanager-5bf78cc467-4p2pr               0/1     ErrImagePull                 0          3s
paas          tmanager-7fc7685ddb-4ztx4               1/1     Running                      0          16m
paas          tmanager-7fc7685ddb-p6smx               1/1     Running                      0          16m

Without a revision argument, undo simply toggles between the two most recent versions. To roll back to a specific version you need the revision parameter.

First prepare a few YAML files with different images:

tmanager-11.yaml  image new-tm:1.2.4.33-11
tmanager-1.yaml   image new-tm:1.2.4.33-1
tmanager.yaml     image new-tm:1.2.4.33

I first clear the existing pods with kubectl delete -f tmanager.yaml, then apply the upgrades in order:

kubectl apply -f tmanager.yaml --record
kubectl apply -f tmanager-1.yaml --record
kubectl apply -f tmanager-11.yaml --record

Check the revision history:

[root@kube-m tm-test]# kubectl rollout history deployment tmanager -n paas
deployment.extensions/tmanager 
REVISION  CHANGE-CAUSE
1         kubectl apply --filename=tmanager.yaml --record=true
2         kubectl apply --filename=tmanager-1.yaml --record=true
3         kubectl apply --filename=tmanager-11.yaml --record=true
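Each recorded revision's pod template, including its image, can be inspected before deciding which one to roll back to (assuming the history shown above):

```shell
# Show the pod template saved for revision 2 (image new-tm:1.2.4.33-1)
kubectl rollout history deployment tmanager -n paas --revision=2
```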

Roll back to the first revision:

[root@kube-m tm-test]# kubectl rollout undo deployment tmanager -n paas --to-revision=1
deployment.extensions/tmanager rolled back

[root@kube-m ~]# kubectl get pod --all-namespaces | grep tman
paas          tmanager-5784b4f9cb-d5skh               1/1     Terminating                  0          5m28s
paas          tmanager-5784b4f9cb-zv2bw               1/1     Running                      0          5m32s
paas          tmanager-7fc7685ddb-frv8h               0/1     ContainerCreating            0          2s
paas          tmanager-7fc7685ddb-vllmp               1/1     Running                      0          5s

[root@kube-m ~]# kubectl get pod --all-namespaces | grep tman
paas          tmanager-7fc7685ddb-frv8h               1/1     Running                      0          14s
paas          tmanager-7fc7685ddb-vllmp               1/1     Running                      0          17s
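Instead of describing a pod, the current image can also be read straight off the Deployment itself (a quick check, same names as above):

```shell
# Print only the container image the Deployment is currently set to
kubectl get deployment tmanager -n paas \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
```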

Check that the image is back to the first version:
[root@kube-m ~]# kubectl describe pod  tmanager-7fc7685ddb-vllmp -n paas 
Name:               tmanager-7fc7685ddb-vllmp
Namespace:          paas
Priority:           0
PriorityClassName:  <none>
Node:               kube-node7/192.168.40.112
Start Time:         Fri, 12 Jul 2019 18:16:51 +0800
Labels:             app=tmanager
                    pod-template-hash=7fc7685ddb
Annotations:        <none>
Status:             Running
IP:                 172.17.89.12
Controlled By:      ReplicaSet/tmanager-7fc7685ddb
Containers:
  tmanager:
    Container ID:   docker://7f201a37b3249722c89cdf8046dc8543030f857c9ec1b861270deac127a8912b
    Image:          registry:5000/wecloud/new-tm:1.2.4.33
    Image ID:       docker-pullable://registry:5000/wecloud/new-tm@sha256:88a057058b02cf07f0bee3cadcd949180c0e545b1619857020e577c6881e2876