Commonly Used Kubernetes Workload Controllers

What Is a Workload Controller?

Workload controllers are a K8s abstraction: higher-level objects used to deploy and manage Pods.


Common workload controllers

  • Deployment: stateless application deployment
  • StatefulSet: stateful application deployment
  • DaemonSet: ensures every Node runs a copy of the same Pod
  • Job: one-off tasks
  • CronJob: scheduled tasks

What controllers do

  • Manage Pod objects
  • Associate with Pods through labels (see the example below)
  • Implement Pod operations such as rolling updates, scaling, replica management, and keeping Pods in the desired state
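A quick way to see the label-based association in practice (a command sketch; the test / app=b1 names come from the Deployment example further below):

kubectl get deployment test -o jsonpath='{.spec.selector.matchLabels}'   # the controller's label selector
kubectl get pods -l app=b1 --show-labels                                 # the Pods that selector matches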

Deployment

Deployment features

  • Manages Pods and ReplicaSets
  • Supports rollout, replica count configuration, rolling upgrades, and rollback
  • Provides declarative updates, e.g. changing only the container image

Use cases

  • Websites, APIs, microservices

Deployment workflow


Creating a Deployment

Deploy an image

Method 1: kubectl apply -f xxx.yaml
Method 2: kubectl create deployment web --image=nginx
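A handy variant of method 2 (not shown in the original text): let kubectl generate a starting manifest and edit it before applying.

kubectl create deployment web --image=nginx --dry-run=client -o yaml > web.yaml   # nothing is created, only YAML is printed
kubectl apply -f web.yaml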
[root@master test]# cat deploy.yml 
apiVersion: apps/v1
kind: Deployment
metadata: 
  name: test
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: b1
  template:
    metadata:
      labels:
        app: b1
    spec:
      containers:
      - name: b1
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ["bin/sh","-c","sleep 9000"]
            

[root@master ~]# kubectl apply -f deploy.yml 
deployment.apps/test created
[root@master ~]# kubectl get pod
NAME                   READY   STATUS    RESTARTS   AGE
test-7964df7b4-6mnng   1/1     Running   0          17s
test-7964df7b4-7fvzb   1/1     Running   0          17s
test-7964df7b4-mbjc4   1/1     Running   0          17s


[root@master ~]# kubectl get pod -o wide
NAME                   READY   STATUS    RESTARTS   AGE   IP            NODE    NOMINATED NODE   READINESS GATES
test-7964df7b4-6mnng   1/1     Running   0          38s   10.244.1.72   node1   <none>           <none>
test-7964df7b4-7fvzb   1/1     Running   0          38s   10.244.2.56   node2   <none>           <none>
test-7964df7b4-mbjc4   1/1     Running   0          38s   10.244.1.73   node1   <none>           <none>
Deployment upgrades
Application upgrade (three ways to update the image)
kubectl apply -f xxx.yaml
kubectl set image deployment/web nginx=nginx    # <deployment> <container>=<new image>
kubectl edit deployment/web		# editing the live object directly is not recommended
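For the web Deployment used in the rolling-update example below (container name web, image jiejiehao/httpd), the set-image form would look like this; treat it as a sketch rather than part of the original walkthrough:

kubectl set image deployment/web web=jiejiehao/httpd:v2    # update only the container image
kubectl rollout status deployment/web                      # watch the rollout until it finishes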

[root@master ~]# kubectl edit deployment/test
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"name":"test","namespace":"default"},"spec":{"replicas":3,"selector":{"matchLabels":{"app":"b1"}},"template":{"metadata":{"labels":{"app":"b1"}},"spec":{"containers":[{"command":["bin/sh","-c","sleep 9000"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"b1"}]}}}}
  creationTimestamp: "2021-12-24T16:51:43Z"
  generation: 1
  name: test
  namespace: default
  resourceVersion: "103741"
  uid: 51f69c70-b576-4db7-986d-8308e666a50f
spec:
  progressDeadlineSeconds: 600
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: b1
Rolling updates

Rolling update is K8s's default Pod upgrade strategy: old Pods are gradually replaced with new-version Pods, giving zero-downtime releases that users never notice.

How rolling updates are implemented in K8s

  • 1 Deployment
  • 2 ReplicaSets


Rolling update strategy

  • maxSurge: the maximum number of extra Pods during a rolling update; by default at most 25% more Pods than the desired replica count (replicas) are started
  • maxUnavailable: the maximum number of unavailable Pods during a rolling update; by default at most 25% of Pods may be unavailable, i.e. at least 75% stay available
  • Percentages are rounded according to the replica count: maxSurge rounds up (e.g. 2.1 counts as 3), while maxUnavailable rounds down (see the worked example after this list)
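A worked example of the rounding (the replicas = 4 value comes from the manifest below; the replicas = 3 case is hypothetical):

replicas = 4, maxSurge = 25%, maxUnavailable = 25%:
  maxSurge       = ceil(4 × 0.25)  = 1  → at most 5 Pods exist during the update
  maxUnavailable = floor(4 × 0.25) = 1  → at least 3 Pods stay available
replicas = 3, maxSurge = 25%, maxUnavailable = 25%:
  maxSurge       = ceil(3 × 0.25)  = 1  → at most 4 Pods exist during the update
  maxUnavailable = floor(3 × 0.25) = 0  → all 3 Pods stay available while new ones come up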
Example
[root@master ~]# vi test.yml 
[root@master ~]# cat test.yml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  namespace: default
spec:
  replicas: 4
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: jiejiehao/httpd:v1
        imagePullPolicy: IfNotPresent
[root@master ~]# kubectl apply -f test.yml 
deployment.apps/web created


[root@master ~]# kubectl get pod
NAME                   READY   STATUS    RESTARTS   AGE
web-578dc96fcd-4sxqp   1/1     Running   0          3m24s
web-578dc96fcd-5rj4h   1/1     Running   0          3m24s
web-578dc96fcd-bzvd5   1/1     Running   0          3m24s
web-578dc96fcd-wqgbg   1/1     Running   0          3m24s


// Update the manifest
[root@master ~]# cat test.yml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  namespace: default
spec:
  replicas: 4
  selector:
    matchLabels:
      app: web
  strategy:
    rollingUpdate:
      maxSurge: 50%           # default is 25%
      maxUnavailable: 50%     # default is 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: jiejiehao/httpd:v2   # updated image
        imagePullPolicy: IfNotPresent


// Apply the update: 2 old Pods are terminated while 4 new Pods are created
[root@master ~]# kubectl apply -f test.yml 
deployment.apps/web configured
[root@master ~]# kubectl get pod
NAME                   READY   STATUS              RESTARTS   AGE
web-578dc96fcd-4sxqp   1/1     Running             0          10m
web-578dc96fcd-5rj4h   1/1     Running             0          10m
web-578dc96fcd-bzvd5   1/1     Terminating         0          10m
web-578dc96fcd-wqgbg   1/1     Terminating         0          7m16s
web-58b97c8959-88kpk   0/1     ContainerCreating   0          1s
web-58b97c8959-htvpq   0/1     ContainerCreating   0          1s
web-58b97c8959-hxdwg   0/1     ContainerCreating   0          1s
web-58b97c8959-vwpjj   0/1     ContainerCreating   0          1s


[root@master ~]# kubectl get pod
NAME                   READY   STATUS    RESTARTS   AGE
web-58b97c8959-88kpk   1/1     Running   0          73s
web-58b97c8959-htvpq   1/1     Running   0          73s
web-58b97c8959-hxdwg   1/1     Running   0          73s
web-58b97c8959-vwpjj   1/1     Running   0          73s
Horizontal scaling

Horizontal scaling (run more instances to handle higher concurrency)

  • Modify the replicas value in the YAML, then apply it again
  • kubectl scale deployment web --replicas=10 (imperative form, see the sketch after the note below)

Note: the replicas field controls the number of Pod replicas.
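An imperative alternative (a sketch, not part of the original run): scale the existing web Deployment directly and confirm the new count.

kubectl scale deployment web --replicas=5
kubectl get deployment web        # READY should show 5/5 once the new Pods are running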

[root@master ~]# cat test.yml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  namespace: default
spec:
  replicas: 2   # determines the number of Pods
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: jiejiehao/httpd:v2
        imagePullPolicy: IfNotPresent

[root@master test]# kubectl apply -f test.yml 
deployment.apps/web configured


[root@master ~]# kubectl get pod
NAME                   READY   STATUS    RESTARTS   AGE
web-58b97c8959-88kpk   1/1     Running   0          11m
web-58b97c8959-hxdwg   1/1     Running   0          11m


[root@master ~]# cat test.yml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  namespace: default
spec:
  replicas: 5   # changed to 5
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: jiejiehao/httpd:v2
        imagePullPolicy: IfNotPresent
            
// Apply the change
[root@master ~]# kubectl apply -f test.yml 
deployment.apps/web configured

// 3 new Pods are created
[root@master ~]# kubectl get pod
NAME                   READY   STATUS              RESTARTS   AGE
web-58b97c8959-67l82   0/1     ContainerCreating   0          2s
web-58b97c8959-87k4t   0/1     ContainerCreating   0          2s
web-58b97c8959-88kpk   1/1     Running             0          12m
web-58b97c8959-dg8zp   0/1     ContainerCreating   0          2s
web-58b97c8959-hxdwg   1/1     Running             0          12m
Deployment rollback

Rollback (restore a previous good version after a failed release)

kubectl rollout history deployment/web 				# view the rollout history
kubectl rollout undo deployment/web					# roll back to the previous revision
kubectl rollout undo deployment/web --to-revision=2 # roll back to a specific revision; a rollback redeploys the full configuration recorded for that revision
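By default kubectl rollout history shows CHANGE-CAUSE as <none> (as in the output further below). Two ways to record a cause, shown here as a sketch rather than part of the original steps:

kubectl apply -f test.yml --record                                                 # records the apply command as the change cause
kubectl annotate deployment/web kubernetes.io/change-cause="upgrade image to v2"   # or set it explicitly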

// v1
[root@master ~]# cat test.yml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: jiejiehao/httpd:v1
        imagePullPolicy: IfNotPresent
[root@master ~]# kubectl apply -f test.yml 
deployment.apps/web configured
[root@master ~]# kubectl get pod -o wide
NAME                   READY   STATUS        RESTARTS   AGE   IP            NODE    NOMINATED NODE   READINESS GATES
web-578dc96fcd-fd7nn   1/1     Running       0          7s    10.244.2.64   node2   <none>           <none>
web-578dc96fcd-zvxfp   1/1     Running       0          8s    10.244.1.78   node1   <none>           <none>
[root@master ~]# curl 10.244.2.64
test page on v1
[root@master ~]# curl 10.244.1.78
test page on v1

// v2
[root@master ~]# cat test.yml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: jiejiehao/httpd:v2
        imagePullPolicy: IfNotPresent
[root@master ~]# kubectl apply -f test.yml 
deployment.apps/web configured
[root@master ~]# kubectl get pod -o wide
NAME                   READY   STATUS        RESTARTS   AGE   IP            NODE    NOMINATED NODE   READINESS GATES
web-58b97c8959-lx67f   1/1     Running       0          8s    10.244.2.65   node2   <none>           <none>
web-58b97c8959-td88l   1/1     Running       0          9s    10.244.1.79   node1   <none>           <none>
[root@master ~]# curl 10.244.2.65
test page on v2
[root@master ~]# curl 10.244.1.79
test page on v2


// v3
[root@master ~]# cat test.yml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: jiejiehao/httpd:v3
        imagePullPolicy: IfNotPresent
[root@master ~]# kubectl apply -f test.yml 
deployment.apps/web configured
[root@master ~]# kubectl get pod -o wide
NAME                   READY   STATUS        RESTARTS   AGE     IP            NODE    NOMINATED NODE   READINESS GATES
web-5bbdcb8db8-v29jb   1/1     Running       0          1s      10.244.1.80   node1   <none>           <none>
web-5bbdcb8db8-xp9z5   1/1     Running       0          3s      10.244.2.66   node2   <none>           <none>
[root@master ~]# curl 10.244.1.80
test page on v3
[root@master ~]# curl 10.244.2.66
test page on v3


// Roll back to the previous revision
[root@master ~]# kubectl rollout undo deploy/web
deployment.apps/web rolled back
[root@master ~]# kubectl get pod -o wide
NAME                   READY   STATUS        RESTARTS   AGE     IP            NODE    NOMINATED NODE   READINESS GATES
web-58b97c8959-cncdp   1/1     Running       0          4s      10.244.2.67   node2   <none>           <none>
web-58b97c8959-tb82x   1/1     Running       0          3s      10.244.1.81   node1   <none>           <none>
[root@master ~]# curl 10.244.2.67
test page on v2
[root@master ~]# curl 10.244.1.81
test page on v2

// View the rollout history
[root@master ~]# kubectl rollout history deploy/web
deployment.apps/web 
REVISION  CHANGE-CAUSE
3         <none>
4         <none>
5         <none>

// Roll back to a specific revision
[root@master ~]# kubectl rollout undo deployment/web --to-revision=3
deployment.apps/web rolled back
[root@master ~]# kubectl get pod -o wide
NAME                   READY   STATUS        RESTARTS   AGE    IP            NODE    NOMINATED NODE   READINESS GATES
web-578dc96fcd-29s52   1/1     Running       0          6s     10.244.2.68   node2   <none>           <none>
web-578dc96fcd-xf6v8   1/1     Running       0          4s     10.244.1.82   node1   <none>           <none>
[root@master ~]# curl 10.244.2.68
test page on v1
[root@master ~]# curl 10.244.1.82
test page on v1
Taking a Deployment offline
kubectl delete -f test.yml       # delete everything defined in the manifest

kubectl delete deploy/web        # delete the Deployment directly

kubectl delete svc/web           # delete the associated Service, if one was created

kubectl delete pods --all        # delete all Pods in the current namespace

Deployment and ReplicaSet


What the ReplicaSet controller does

  • Manages the Pod replica count, continuously reconciling the current number of Pods with the desired number
  • Every Deployment rollout creates a new ReplicaSet as a record, which is what makes rollback possible
  • The default replica count is 1
[root@master ~]# kubectl get pod
NAME                   READY   STATUS    RESTARTS   AGE
web-578dc96fcd-29s52   1/1     Running   0          5m3s
web-578dc96fcd-xf6v8   1/1     Running   0          5m1s

[root@master ~]# kubectl get rs
NAME             DESIRED   CURRENT   READY   AGE
web-578dc96fcd   2         2         2       69m
web-58b97c8959   0         0         0       58m
web-5bbdcb8db8   0         0         0       11m
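Each ReplicaSet above corresponds to one image version of the web Deployment. A quick way (not part of the original output) to see which image each ReplicaSet was created for:

kubectl get rs -o wide        # adds CONTAINERS, IMAGES and SELECTOR columns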

DaemonSet


DaemonSet features

  • Runs exactly one Pod on every Node
  • A newly joined Node automatically gets a copy of the Pod as well

Use cases

  • Network components (kube-proxy, Calico) and other per-node agents

Example: deploy a log collection agent

[root@master ~]# cat daemon.yaml 
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: filebeat
  template:
    metadata:
      labels:
        app: filebeat
    spec:
      containers:
      - name: log
        image: elastic/filebeat:7.16.2
        imagePullPolicy: IfNotPresent
[root@master ~]# kubectl apply -f daemon.yaml 
daemonset.apps/filebeat created

// List the cluster nodes
[root@master ~]# kubectl get node
NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   7d    v1.20.0
node1    Ready    <none>                 7d    v1.20.0
node2    Ready    <none>                 7d    v1.20.0


// The DaemonSet creates a Pod on every node except the master (the master carries a NoSchedule taint by default)
[root@master ~]# kubectl get pod -n kube-system -o wide
NAME                             READY   STATUS    RESTARTS   AGE     IP               NODE     NOMINATED NODE   READINESS GATES
filebeat-msjtt                   1/1     Running   0          2m22s   10.244.1.83      node1    <none>           <none>
filebeat-vq57z                   1/1     Running   0          2m22s   10.244.2.69      node2    <none>           <none>
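No filebeat Pod lands on the master because the control-plane node is tainted NoSchedule by default. If the agent is also needed there, a minimal sketch (not part of the original manifest) is to add a matching toleration to the Pod template:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: filebeat
  template:
    metadata:
      labels:
        app: filebeat
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master   # taint carried by the master node in v1.20
        effect: NoSchedule
      containers:
      - name: log
        image: elastic/filebeat:7.16.2
        imagePullPolicy: IfNotPresent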



// Join a new node to the cluster
// Comment out the swap partition in /etc/fstab
[root@node3 ~]# vim /etc/fstab
[root@node3 ~]# cat /etc/fstab 

#
# /etc/fstab
# Created by anaconda on Tue Dec 14 18:51:02 2021
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
/dev/mapper/cs-root     /                       xfs     defaults        0 0
UUID=fa349a7f-bc23-44bd-a673-fcc0c057038b /boot                   xfs     defaults        0 0
#/dev/mapper/cs-swap     none                    swap    defaults        0 0


// Disable the firewall and SELinux
[root@node3 ~]# systemctl disable --now firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@node3 ~]# sed -i 's/enforcing/disabled/' /etc/selinux/config


// Reboot
[root@node3 ~]# reboot


Install Docker
// Add the Docker repository
[root@node3 ~]# yum -y install wget

[root@node3 ~]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
[root@node3 ~]# yum -y install docker-ce
[root@node3 ~]# systemctl enable --now docker
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.

[root@node3 ~]# docker --version
Docker version 20.10.12, build e91ed57
[root@master ~]# ls /etc/docker/
key.json


// Configure a registry mirror
[root@master ~]# cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF


Add the Aliyun Kubernetes YUM repository
[root@node3 ~]# cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
[root@node3 ~]# yum clean all && yum makecache


// Install kubeadm, kubelet, and kubectl; versions change frequently, so pin the version explicitly
[root@node3 ~]# yum install -y kubelet-1.20.0 kubeadm-1.20.0 kubectl-1.20.0
[root@node3 ~]# systemctl enable  kubelet
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.

[root@node3 ~]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: inactive (dead)
     Docs: https://kubernetes.io/docs/
                

                
// Add host entries on the master
[root@master ~]# cat /etc/hosts 
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.47.115 master master.example.com
192.168.47.120 node1 node1.example.com
192.168.47.121 node2 node2.example.com
192.168.47.131 node3 node3.example.com

// Set up passwordless SSH login
[root@master ~]# ssh-copy-id node3


!!!
Note: do not reuse an old join command (see the end of this article for how to generate a new one)
// Join the cluster
[root@node3 ~]# kubeadm join 192.168.47.115:6443 --token wtx1cm.z2xq3swccpniimns     --discovery-token-ca-cert-hash sha256:dcb277d24911daee60c35d3d2d53268c7d4df24b8f29f7ceccc49236d1f7ede9 
[preflight] Running pre-flight checks
        [WARNING FileExisting-tc]: tc not found in system path
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.12. Latest validated version: 19.03
        [WARNING Hostname]: hostname "node3" could not be reached
        [WARNING Hostname]: hostname "node3": lookup node3 on 192.168.47.2:53: no such host
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.



[root@master ~]# kubectl get node
NAME     STATUS   ROLES                  AGE     VERSION
master   Ready    control-plane,master   7d1h    v1.20.0
node1    Ready    <none>                 7d1h    v1.20.0
node2    Ready    <none>                 7d1h    v1.20.0
node3    Ready    <none>                 4m29s   v1.20.0


// A filebeat Pod is automatically created on the newly joined node
[root@master ~]# kubectl get pod -n kube-system -o wide
NAME                             READY   STATUS    RESTARTS   AGE     IP               NODE     NOMINATED NODE   READINESS GATES
filebeat-f8c85                   1/1     Running   0          4m47s   10.88.0.2        node3    <none>           <none>
filebeat-msjtt                   1/1     Running   0          77m     10.244.1.83      node1    <none>           <none>
filebeat-vq57z                   1/1     Running   0          77m     10.244.2.69      node2    <none>           <none>

Job

Jobs come in two forms: one-off tasks (Job) and scheduled tasks (CronJob); a Job runs a task to completion once.
Use cases: offline data processing, video transcoding, and similar batch work.

[root@master ~]# cat job.yaml 
apiVersion: batch/v1
kind: Job
metadata:
  name: b1
spec:
  template:
    spec:
      containers:
      - name: b1
        image: busybox
        imagePullPolicy: IfNotPresent
        command: [" /bin/sh" , "-c" , "echo hello world"]
      restartPolicy: Never
  backoffLimit: 2

[root@master ~]# kubectl apply -f job.yaml 
job.batch/b1 created

[root@master ~]# kubectl get pod 
NAME                   READY   STATUS               RESTARTS   AGE
b1-2ph26               0/1     ContainerCannotRun   0          8s
b1-jm9lc               0/1     ContainerCannotRun   0          6s

Note: the Pods fail with ContainerCannotRun because of the stray leading space in " /bin/sh" in the command above; writing it as "/bin/sh" lets the Job run to completion.


[root@master ~]# kubectl describe job/b1
Name:           b1
Namespace:      default
Selector:       controller-uid=381e852d-f3a9-450d-b03a-0865aa6af25e
Labels:         controller-uid=381e852d-f3a9-450d-b03a-0865aa6af25e
                job-name=b1
Annotations:    <none>
Parallelism:    1
Completions:    1
Start Time:     Sat, 25 Dec 2021 03:14:56 +0800
Pods Statuses:  1 Running / 0 Succeeded / 2 Failed
Pod Template:
  Labels:  controller-uid=381e852d-f3a9-450d-b03a-0865aa6af25e
           job-name=b1
  Containers:
   b1:
    Image:      busybox
    Port:       <none>
    Host Port:  <none>
    Command:
       /bin/sh
      -c
      echo hello world
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:
  Type    Reason            Age   From            Message
  ----    ------            ----  ----            -------
  Normal  SuccessfulCreate  21s   job-controller  Created pod: b1-2ph26
  Normal  SuccessfulCreate  19s   job-controller  Created pod: b1-jm9lc
  Normal  SuccessfulCreate  9s    job-controller  Created pod: b1-d57gq
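Once the command is fixed (see the note under the Pod listing above), a quick way to verify the result; these commands are a sketch, not part of the original output:

kubectl get job b1          # COMPLETIONS should read 1/1
kubectl logs job/b1         # prints "hello world" from the completed Pod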

CronJob

A CronJob implements scheduled tasks, much like crontab on Linux.
Use cases: notifications, backups.

[root@master ~]# cat cronjob.yaml 
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            imagePullPolicy: IfNotPresent
            command: 
            - /bin/sh
            - "-c"
            - "date;echo Hello world"
          restartPolicy: OnFailure
            
[root@master ~]# kubectl apply -f cronjob.yaml 
cronjob.batch/hello created


[root@master test]# kubectl get pod 
NAME                     READY   STATUS      RESTARTS   AGE
hello-1640373480-p52l8   0/1     Completed   0          9s

[root@master test]# kubectl get pod 		# one minute later
NAME                     READY   STATUS              RESTARTS   AGE
hello-1640373480-p52l8   0/1     Completed           0          62s
hello-1640373540-k97tx   0/1     ContainerCreating   0          1s
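To check the schedule and the last run time, and to read the output of a finished run (commands are a sketch, not part of the original output):

kubectl get cronjob hello                   # shows SCHEDULE, SUSPEND, ACTIVE and LAST SCHEDULE
kubectl logs hello-1640373480-p52l8         # prints the date and "Hello world"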

Errors encountered

(hit while joining the new node to the cluster)

Problem 1

// The command hangs for a long time
[root@node3 ~]# kubeadm join 192.168.47.115:6443 --token eq122f.vlb4b7xormnafq0d     --discovery-token-ca-cert-hash sha256:dcb277d24911daee60c35d3d2d53268c7d4df24b8f29f7ceccc49236d1f7ede9
[preflight] Running pre-flight checks
        [WARNING FileExisting-tc]: tc not found in system path
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.12. Latest validated version: 19.03
        [WARNING Hostname]: hostname "node3" could not be reached
        [WARNING Hostname]: hostname "node3": lookup node3 on 192.168.47.2:53: no such host
error execution phase preflight: couldn't validate the identity of the API Server: could not find a JWS signature in the cluster-info ConfigMap for token ID "eq122f"
To see the stack trace of this error execute with --v=5 or higher


Solution

There are many possible causes of this error, but two main ones:

1. The token has expired

In that case, regenerate the token with kubeadm:

// Generate a new `token`
[root@master ~]# kubeadm token generate
wtx1cm.z2xq3swccpniimns

[root@master ~]# kubeadm token create wtx1cm.z2xq3swccpniimns  --print-join-command --ttl=0
kubeadm join 192.168.47.115:6443 --token wtx1cm.z2xq3swccpniimns     --discovery-token-ca-cert-hash sha256:dcb277d24911daee60c35d3d2d53268c7d4df24b8f29f7ceccc49236d1f7ede9 

2. The K8s API server is unreachable

In that case, check and disable firewalld and SELinux on all servers:

setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
systemctl disable firewalld --now

Problem 2

[ERROR FileContent--proc-sys-net-ipv4-ip_forward]: /proc/sys/net/ipv4/ip_forward contents are not set to 1

Solution

modprobe br_netfilter
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 > /proc/sys/net/ipv4/ip_forward      # this is the sysctl the error message actually complains about
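To make these kernel settings survive a reboot, a common approach (not from the original article) is to persist them with sysctl:

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system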