[Ops Knowledge, Master Level] Kubernetes, the Super Tool of Operations, Tutorial 12 (Endpoints Resources + the Four Service Types in Detail + Five Ways to Schedule Pods + One-off Tasks with the Job Controller + Periodic Tasks with CronJob)

This article continues the Kubernetes series. The material is still fairly demanding, but high difficulty means high pay, so keep at it and conquer K8s! This article covers the Endpoints resource, the four Service types in detail, five ways to schedule Pods (taints, taint tolerations, the node selector, affinity, and the DaemonSet controller), one-off tasks with the Job controller, and periodic tasks with the CronJob controller.

Table of Contents

The Endpoints Resource

I. Example

II. Hands-on Demo

Service in Detail

I. Service Types

II. The LoadBalancer Type

III. The ExternalName Type

Mastering Pod Scheduling

I. Taints

1. NoSchedule Example

2. PreferNoSchedule Example

3. NoExecute Example

II. Taint Tolerations

III. Node Selector

IV. Affinity

1. Node Affinity

2. Pod Affinity

3. Pod Anti-Affinity

V. The DaemonSet Controller

The Job Controller

I. Example

The CronJob Controller

I. Example


The Endpoints Resource

When services are already deployed outside the K8s cluster (Hadoop, ElasticSearch, ClickHouse, MySQL and so on) and we want the Pods inside the cluster to communicate with them, the way to do it is with an Endpoints resource: the Endpoints object connects a Service to the external service, and the Service in turn talks to the Pods. In short, Endpoints maps backend services and lets you define the IPs and ports of services either inside or outside the K8s cluster.

I. Example

Writing an ep resource, using an ELK cluster as the example

[root@Master231 endpoints]# cat 01-ep.yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: koten-ep-es-cluster
# Configure the backend IP addresses and ports for the Endpoints
subsets:
  # Configure the IP addresses
- addresses:
  - ip: 10.0.0.101
  - ip: 10.0.0.102
  - ip: 10.0.0.103
  # Configure the ports
  ports:
  - port: 9200
    name: http
  - port: 9300
    name: tcp

Apply the manifest; each configured IP is exposed on the configured ports.

[root@Master231 endpoints]# kubectl apply -f 01-ep.yaml 
endpoints/koten-ep-es-cluster created
[root@Master231 endpoints]# kubectl get ep
NAME                  ENDPOINTS                                                     AGE
koten-ep-es-cluster   10.0.0.101:9200,10.0.0.102:9200,10.0.0.103:9200 + 3 more...   5s
kubernetes            10.0.0.231:6443                                               8s
[root@Master231 endpoints]# kubectl describe ep koten-ep-es-cluster 
Name:         koten-ep-es-cluster
Namespace:    default
Labels:       <none>
Annotations:  <none>
Subsets:
  Addresses:          10.0.0.101,10.0.0.102,10.0.0.103
  NotReadyAddresses:  <none>
  Ports:
    Name  Port  Protocol
    ----  ----  --------
    http  9200  TCP
    tcp   9300  TCP

Events:  <none>
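
For the Endpoints above to actually be consumed by Pods, a ClusterIP Service with the same name would normally sit in front of it; a selector-less Service binds to the Endpoints object that shares its name, and for multi-port objects the port names must match as well. A minimal sketch (not applied in this demo):

apiVersion: v1
kind: Service
metadata:
  # must match the Endpoints object's name exactly
  name: koten-ep-es-cluster
spec:
  type: ClusterIP
  ports:
  - port: 9200
    name: http
  - port: 9300
    name: tcp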

II. Hands-on Demo

Using Endpoints to map a service outside the K8s cluster

1. Deploy a MySQL service outside K8s; here it is simply run on the harbor host

[root@Harbor250 ~]# docker run -e MYSQL_ROOT_PASSWORD=123456 -d --name tomcat-db --network host --restart always harbor.koten.com/koten-db/mysql:5.7
[root@Harbor250 ~]# ss -ntl|grep 3306
LISTEN     0      80        [::]:3306                  [::]:*                  

2. Write the ep manifest, creating the communication bridge between the inside and outside of the K8s cluster

[root@Master231 endpoints]# cat 02-ep-mysql57.yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: koten-mysql-ep
subsets:
- addresses:
  - ip: 10.0.0.250
  # Configure the port
  ports:
  - port: 3306
    name: mysql-ep

3. Write the svc that links the ep and the Pods. Because the backend here is an ep resource, there is no label selector; the Service is matched to the Endpoints resource by metadata: the resource name, the configured port, and the port name.

[root@Master231 endpoints]# cat 03-svc-mysql57.yaml
apiVersion: v1
kind: Service
metadata:
  name: koten-mysql-ep
spec:
  type: ClusterIP
  ports:
  - port: 3306
    name: mysql-ep

4. Write the Deployment for tomcat inside the K8s cluster; it also relies on coreDNS, since the MySQL host is passed in as the Service name

[root@Master231 endpoints]# cat 04-deploy-tomcat.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: koten-tomcat-app
spec:
  replicas: 1
  selector:
    matchExpressions:
    - key: app
      operator: Exists
  template:
    metadata:
      labels:
        app: koten-tomcat-app
    spec:
      containers:
        - name: tomcat
          image: harbor.koten.com/koten-web/tomcat:v1
          ports:
          - containerPort: 8080
          env:
          - name: MYSQL_SERVICE_HOST
            value: koten-mysql-ep
          - name: MYSQL_SERVICE_PORT
            value: '3306'

5. Write the svc for tomcat to expose its port

[root@Master231 endpoints]# cat 05-svc-tomcat.yaml
apiVersion: v1
kind: Service
metadata:
  name: koten-tomcat-app
spec:
  type: NodePort
  selector:
     app: koten-tomcat-app
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 8080

6. Apply everything and test: the Pod runs normally and the svc port is mapped. Opening any worker node's address plus the port in a browser shows that tomcat is deployed correctly

[root@Master231 endpoints]# kubectl apply -f .
endpoints/koten-ep-es-cluster configured
endpoints/koten-mysql-ep created
service/koten-mysql-ep created
deployment.apps/koten-tomcat-app created
service/koten-tomcat-app created
[root@Master231 endpoints]# kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
koten-tomcat-app-76488c448d-hdf4w   1/1     Running   0          4s
[root@Master231 endpoints]# kubectl get svc
NAME               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
koten-mysql-ep     ClusterIP   10.200.194.145   <none>        3306/TCP        27s
koten-tomcat-app   NodePort    10.200.73.182    <none>        8080:8080/TCP   27s
kubernetes         ClusterIP   10.200.0.1       <none>        443/TCP         18m
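
A quick way to double-check that the selector-less Service really picked up the manually created Endpoints is to look at the Endpoints field of the Service; it should list the external MySQL address 10.0.0.250:3306 (standard kubectl commands, output not captured here):

kubectl get ep koten-mysql-ep
kubectl describe svc koten-mysql-ep | grep -i endpoints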

7. On the tomcat page that talks to the database, add a record; it is added successfully. Then delete all the resources we started and bring them back up: the data is not lost, even though no data persistence was configured inside K8s, because the application is connected to the MySQL outside the cluster.

Delete all the resources, then re-create them

[root@Master231 endpoints]# kubectl delete -f .
endpoints "koten-ep-es-cluster" deleted
endpoints "koten-mysql-ep" deleted
service "koten-mysql-ep" deleted
deployment.apps "koten-tomcat-app" deleted
service "koten-tomcat-app" deleted
[root@Master231 endpoints]# kubectl apply -f .
endpoints/koten-ep-es-cluster created
endpoints/koten-mysql-ep created
service/koten-mysql-ep created
deployment.apps/koten-tomcat-app created
service/koten-tomcat-app created
[root@Master231 endpoints]# kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
koten-tomcat-app-76488c448d-gzq2l   1/1     Running   0          30s

Refresh the tomcat database page: the Pod name has changed, but the data is still there.

Service in Detail

A Service mainly deals with the dynamic nature of Pods by providing a stable access entry point. It does two things: it associates Pods by label, which provides service discovery, and it load balances across them using iptables or ipvs. When an svc is created or deleted, an Endpoints resource with the same name is automatically created or removed along with it.
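
This pairing is easy to observe on any Service that does have a selector, for example the tomcat Service from the previous section: the same-named Endpoints object appears and disappears together with the Service and lists the matched Pod IPs.

kubectl get svc,ep koten-tomcat-app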

I. Service Types

ClusterIP exposes a service to other Kubernetes Services or Pods inside the cluster. Access by Service name is possible, which relies on the coreDNS component working properly. You can also think of it as a VIP provided for access inside the K8s cluster.

NodePort is mainly used so that clients outside the K8s cluster can actively reach services running inside it. On top of ClusterIP, it listens on a port on every worker node.

LoadBalancer is used to expose services in public cloud environments; in other words, it is generally used with K8s clusters running on cloud providers.

ExternalName maps a service outside the K8s cluster into the cluster, so that Pods inside can reach the external service through a fixed Service name. Put differently, a request to the svc is resolved to another domain name, similar to a CNAME record, and also somewhat similar to the Endpoints approach above. It is sometimes also used to let Pods access a Service in a different namespace.

ClusterIP and NodePort were already covered in earlier posts; see the articles below:

【运维知识大神篇】运维界的超神器Kubernetes教程5(初始化容器+configMap进阶详解+RC控制器控制Pod+svc暴露pod端口实现负载均衡+健康状态检查探针livenessProbe)_我是koten的博客-CSDN博客

【运维知识大神篇】运维界的超神器Kubernetes教程7(NodePort+名称空间+ipvs工作模式+可用探针和启动探针+静态Pod+Pod优雅终止+Pod创建删除流程+rc和svc实现应用升级)_我是koten的博客-CSDN博客

The remaining two types are introduced next.

II. The LoadBalancer Type

The prerequisite for this type is a cloud environment: once K8s is deployed on any cloud platform, such as Alibaba Cloud, Tencent Cloud or JD Cloud, this type can be used. I am not using a cloud platform here, so virtual machines are used for a simple demonstration.

1. Create the svc manifest using the LoadBalancer type

[root@Master231 svc]# cat 07-svc-loadbalance.yaml
kind: Service
apiVersion: v1
metadata:
  name: svc-loadbalancer
spec:
  # Set the Service type to LoadBalancer; note that this is generally used in cloud environments
  type: LoadBalancer
  selector:
     app: web
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30080

2. Configure the cloud application load balancer

Add a listener rule, for example: requests to port 80 of the load balancer are reverse proxied to port 30080. In other words, whichever port of the cloud load balancer is exposed to users, reverse proxy it to nodePort 30080 of the K8s cluster. I will not demonstrate this here; I have previously written an article on Alibaba Cloud load balancing that can be used as a reference.

【运维知识进阶篇】用阿里云部署kod可道云网盘项目(HTTPS证书+负载均衡+两台web)_我是koten的博客-CSDN博客

3. Users access the load balancer's port

Users simply access port 80 of the cloud load balancer, and the request is automatically forwarded to nodePort 30080 of the cluster.
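
On a bare-metal or VM cluster like the one used here there is no cloud controller to provision the load balancer, so the Service falls back to NodePort-like behaviour and its EXTERNAL-IP stays <pending>. A quick check (the output shape below is what is typically seen, not captured from this run):

kubectl get svc svc-loadbalancer
# NAME               TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
# svc-loadbalancer   LoadBalancer   10.200.x.x     <pending>     80:30080/TCP   ...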

III. The ExternalName Type

1. Write an svc manifest of type ExternalName to map an external service into K8s so it can be accessed from inside

[root@Master231 svc]# cat 08-svc-externalname.yaml
apiVersion: v1
kind: Service
metadata:
  name: svc-externalname
spec:
  # svc type
  type: ExternalName
  # Specify the external domain name
  externalName: www.baidu.com
  selector:
     app: web

2. Also write and apply a Pod manifest with the label app=web so that the svc can discover it.

[root@Master231 pod]# cat 36-pods-nginx.yaml
apiVersion: v1
kind: Pod
metadata:
  name: linux-web
  labels:
    app: web
spec:
  containers:
  - name: web
    image: harbor.koten.com/koten-web/nginx:1.25.1-alpine
[root@Master231 pod]# kubectl apply -f 36-pods-nginx.yaml
pod/linux-web created

3. Apply the svc manifest; the EXTERNAL-IP column of the svc now shows the external domain name

[root@Master231 svc]# kubectl get svc
NAME               TYPE           CLUSTER-IP   EXTERNAL-IP     PORT(S)   AGE
kubernetes         ClusterIP      10.200.0.1   <none>          443/TCP   18m
svc-externalname   ExternalName   <none>       www.baidu.com   <none>    31s
[root@Master231 svc]# kubectl describe svc svc-externalname
Name:              svc-externalname
Namespace:         default
Labels:            <none>
Annotations:       <none>
Selector:          app=web
Type:              ExternalName
IP Families:       <none>
IP:                
IPs:               <none>
External Name:     www.baidu.com
Session Affinity:  None
Events:            <none>

4. Exec into the linux-web Pod and ping the svc name: it resolves to one of Baidu's IPs. Pinging baidu.com from the host confirms the address really belongs to Baidu; the leading octets match while the last one differs, probably because of CDN.

[root@Master231 svc]# kubectl exec -it linux-web -- sh
/ # ping svc-externalname.default
PING svc-externalname.default (110.242.68.4): 56 data bytes
64 bytes from 110.242.68.4: seq=0 ttl=127 time=11.825 ms
64 bytes from 110.242.68.4: seq=1 ttl=127 time=15.469 ms
^C
--- svc-externalname.default ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 11.825/13.647/15.469 ms
/ # 
[root@Master231 svc]# ping baidu.com
PING baidu.com (110.242.68.66) 56(84) bytes of data.
64 bytes from 110.242.68.66 (110.242.68.66): icmp_seq=1 ttl=128 time=12.3 ms
^C
--- baidu.com ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 12.378/12.378/12.378/0.000 ms

5. You can also run a reverse lookup on the IP with dig; it confirms the address belongs to Baidu's range, which shows that a request to our svc name is effectively CNAMEd to Baidu's record.

[root@Master231 svc]# dig -x 110.242.68.4 @8.8.8.8
......
;; AUTHORITY SECTION:
110.in-addr.arpa.	1196	IN	SOA	ns1.apnic.net. read-txt-record-of-zone-first-dns-admin.apnic.net. 3006098519 7200 1800 604800 3600
......

6. This usage is not very common; for name resolution it is more typical to configure DNS directly, so it is enough just to know this type exists.
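
As mentioned earlier, ExternalName is also occasionally used to give Pods a short local name for a Service living in another namespace. A minimal sketch, assuming a Service named backend exists in a namespace called team-b (both names are hypothetical):

apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  type: ExternalName
  # in-cluster DNS name of the Service in the other namespace
  externalName: backend.team-b.svc.cluster.local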

Mastering Pod Scheduling

There are many ways to schedule Pods. The simplest is pinning a Pod to a node with nodeName; here I cover five more: taints, taint tolerations, the node selector, affinity, and the DaemonSet controller. This split follows the fields shown by kubectl explain; you could also split them by name into nodeName, taints and tolerations, node selector, affinity and anti-affinity, and the DaemonSet controller.

They all influence how Pods are scheduled, and the same requirement can often be met in several ways. What we need to keep in mind is to minimize the impact on other scheduling decisions, much like permission management: grant as little permission as possible, and keep the scheduling impact as small as possible. Each method is introduced below.

I. Taints

1. Taint overview

Taints are normally applied to worker nodes and influence how Pods are scheduled.

The syntax is: key[=value]:effect

Field descriptions

1. key               Starts with a letter or digit; may contain letters, digits, hyphens (-), dots (.) and underscores (_), up to 253 characters; it may also start with a DNS subdomain prefix and a single "/".
2. value             Optional. If given, it must start with a letter or digit and may contain letters, digits, hyphens, dots and underscores, up to 63 characters.
3. effect            Must be NoSchedule, PreferNoSchedule or NoExecute.
4. NoSchedule        The node no longer accepts new Pods, but Pods already scheduled to it are not evicted.
5. PreferNoSchedule  The node can still accept Pods, but the scheduler tries to place them elsewhere; in other words, the node's scheduling priority is lowered.
6. NoExecute         The node no longer accepts new Pods, and Pods already scheduled to it are evicted immediately.

Notes
1. On K8s 1.23.17, the effect of PreferNoSchedule is not particularly obvious in testing.
2. Both PreferNoSchedule and NoSchedule taints can be bypassed with nodeName, because nodeName does not go through the default scheduler.
3. A NoExecute taint is not bypassed by nodeName: the Pod lands on the node but is immediately evicted because of the taint, and since nodeName keeps placing it back, the schedule-and-evict cycle repeats, leaving many Pods stuck waiting to terminate.
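
A few syntax examples for reference (illustrative only; the key-only form is the one matched later by tolerations that specify just a key such as hobby):

# key=value with an effect
kubectl taint node worker232 author=koten:NoSchedule
# key only, no value
kubectl taint node worker232 hobby:NoSchedule
# remove every taint whose key is author, regardless of value and effect
kubectl taint node worker232 author-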

1. NoSchedule Example

The master node carries the node-role.kubernetes.io/master:NoSchedule taint by default, which is why Pods are normally not scheduled there, unless nodeName pins them to it, a toleration is configured, and so on.

[root@Master231 manifests]# kubectl get no
NAME        STATUS   ROLES                  AGE   VERSION
master231   Ready    control-plane,master   8d    v1.23.17
worker232   Ready    <none>                 8d    v1.23.17
worker233   Ready    <none>                 8d    v1.23.17
[root@Master231 manifests]# kubectl describe nodes | grep -i taints
Taints:             node-role.kubernetes.io/master:NoSchedule
Taints:             <none>
Taints:             <none>

1. Apply a taint

[root@Master231 manifests]# kubectl taint node worker232 author=koten:NoSchedule
node/worker232 tainted
[root@Master231 manifests]# kubectl describe nodes | grep -i taints
Taints:             node-role.kubernetes.io/master:NoSchedule
Taints:             author=koten:NoSchedule
Taints:             <none>

2. Write and apply the manifest, then check the scheduling: because 232 is tainted, the Pods can only be scheduled to 233

[root@Master231 pods-scheduler]# cat 01-taint.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-taint-hobby
spec:
  replicas: 5
  selector:
    matchExpressions:
    - key: apps
      values: 
      - "v1"
      - "v2"
      operator: NotIn
  template:
    metadata:
      labels:
        hobby: linux
    spec:
      containers:
      - name: v1
        image: harbor.koten.com/koten-web/nginx:1.24.0-alpine
[root@Master231 pods-scheduler]# kubectl apply -f 01-taint.yaml 
deployment.apps/deploy-taint-hobby created
[root@Master231 pods-scheduler]# kubectl get pods -o wide
NAME                                 READY   STATUS    RESTARTS   AGE   IP             NODE        NOMINATED NODE   READINESS GATES
deploy-taint-hobby-d7f9d4885-4cphk   1/1     Running   0          72s   10.100.2.230   worker233   <none>           <none>
deploy-taint-hobby-d7f9d4885-d25rb   1/1     Running   0          72s   10.100.2.234   worker233   <none>           <none>
deploy-taint-hobby-d7f9d4885-fjvfw   1/1     Running   0          72s   10.100.2.231   worker233   <none>           <none>
deploy-taint-hobby-d7f9d4885-h8549   1/1     Running   0          72s   10.100.2.232   worker233   <none>           <none>
deploy-taint-hobby-d7f9d4885-v6pr2   1/1     Running   0          72s   10.100.2.233   worker233   <none>           <none>

3. Remove the taint

[root@Master231 pods-scheduler]# kubectl taint node worker232 author-
node/worker232 untainted
[root@Master231 pods-scheduler]# kubectl describe nodes | grep -i taints
Taints:             node-role.kubernetes.io/master:NoSchedule
Taints:             <none>
Taints:             <none>

2. PreferNoSchedule Example

PreferNoSchedule means that after the node is tainted, Pods are not run on it unless necessary. This state is hard to test: the scheduler simply avoids the PreferNoSchedule-tainted node whenever it can.

1. Taint worker232

[root@Master231 pods-scheduler]# kubectl taint node worker232 author=koten:PreferNoSchedule
node/worker232 tainted
[root@Master231 pods-scheduler]# kubectl describe nodes | grep -i taints
Taints:             node-role.kubernetes.io/master:NoSchedule
Taints:             author=koten:PreferNoSchedule
Taints:             <none>

2. Write the manifest and verify: everything lands on 233, because nothing yet forces the scheduler to use 232

[root@Master231 pods-scheduler]# cat 02-taint.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-taint
spec:
  replicas: 5
  selector:
    matchExpressions:
    - key: apps
      values: 
      - "v1"
      - "v2"
      operator: NotIn
  template:
    metadata:
      labels:
        author: koten
    spec:
      # If the effect is hard to see, use nodeName to test
      #nodeName: worker232
      containers:
      - name: v1
        image: harbor.koten.com/koten-web/nginx:1.24.0-alpine
[root@Master231 pods-scheduler]# kubectl apply -f 02-taint.yaml
deployment.apps/deploy-taint created
[root@Master231 pods-scheduler]# kubectl get pods -o wide
NAME                           READY   STATUS    RESTARTS   AGE   IP             NODE        NOMINATED NODE   READINESS GATES
deploy-taint-6f84f9cb7-d75dx   1/1     Running   0          14s   10.100.2.245   worker233   <none>           <none>
deploy-taint-6f84f9cb7-gtqhf   1/1     Running   0          14s   10.100.2.248   worker233   <none>           <none>
deploy-taint-6f84f9cb7-mpmjp   1/1     Running   0          14s   10.100.2.246   worker233   <none>           <none>
deploy-taint-6f84f9cb7-pkxgs   1/1     Running   0          14s   10.100.2.249   worker233   <none>           <none>
deploy-taint-6f84f9cb7-vd956   1/1     Running   0          14s   10.100.2.247   worker233   <none>           <none>

3. Uncomment nodeName in the manifest and apply it again: the Pods on 233 are rolled over onto 232, because nodeName ignores the PreferNoSchedule taint

[root@Master231 pods-scheduler]# kubectl apply -f 02-taint.yaml
deployment.apps/deploy-taint configured

[root@Master231 pods-scheduler]# kubectl get pods -o wide
NAME                            READY   STATUS    RESTARTS   AGE   IP             NODE        NOMINATED NODE   READINESS GATES
deploy-taint-75cb5dccdd-gsqn5   1/1     Running   0          7s    10.100.1.153   worker232   <none>           <none>
deploy-taint-75cb5dccdd-hxx5n   1/1     Running   0          10s   10.100.1.151   worker232   <none>           <none>
deploy-taint-75cb5dccdd-qdc4h   1/1     Running   0          10s   10.100.1.150   worker232   <none>           <none>
deploy-taint-75cb5dccdd-st8gv   1/1     Running   0          7s    10.100.1.152   worker232   <none>           <none>
deploy-taint-75cb5dccdd-vxsbx   1/1     Running   0          10s   10.100.1.149   worker232   <none>           <none>

3. NoExecute Example

NoExecute is the strongest effect: once the taint is applied, not only are no new Pods scheduled onto the node, the existing Pods are also moved to other nodes. If on top of that you pin Pods to the node with nodeName, you are in real trouble: the schedule-then-evict cycle repeats endlessly and may even bring the node down!

1. First remove the taints from 232 and 233, then apply the manifest to create 10 Pods; they are distributed evenly across the 232 and 233 nodes

[root@Master231 pods-scheduler]# cat 03-taint.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-taint
spec:
  replicas: 10
  selector:
    matchExpressions:
    - key: apps
      values: 
      - "v1"
      - "v2"
      operator: NotIn
  template:
    metadata:
      labels:
        author: koten
    spec:
      # If the effect is hard to see, use nodeName to test
      containers:
      - name: v1
        image: harbor.koten.com/koten-web/nginx:1.24.0-alpine
[root@Master231 pods-scheduler]# kubectl apply -f 03-taint.yaml 
deployment.apps/deploy-taint created
[root@Master231 pods-scheduler]# kubectl get pods -o wide
NAME                           READY   STATUS    RESTARTS   AGE   IP             NODE        NOMINATED NODE   READINESS GATES
deploy-taint-6f84f9cb7-8x6k5   1/1     Running   0          9s    10.100.1.169   worker232   <none>           <none>
deploy-taint-6f84f9cb7-cgqcl   1/1     Running   0          9s    10.100.2.2     worker233   <none>           <none>
deploy-taint-6f84f9cb7-dq9sm   1/1     Running   0          9s    10.100.1.170   worker232   <none>           <none>
deploy-taint-6f84f9cb7-fzkjt   1/1     Running   0          9s    10.100.1.171   worker232   <none>           <none>
deploy-taint-6f84f9cb7-gqhv2   1/1     Running   0          9s    10.100.1.172   worker232   <none>           <none>
deploy-taint-6f84f9cb7-gqx5r   1/1     Running   0          9s    10.100.1.173   worker232   <none>           <none>
deploy-taint-6f84f9cb7-grfff   1/1     Running   0          9s    10.100.2.3     worker233   <none>           <none>
deploy-taint-6f84f9cb7-htxh8   1/1     Running   0          9s    10.100.2.6     worker233   <none>           <none>
deploy-taint-6f84f9cb7-sbsbr   1/1     Running   0          9s    10.100.2.5     worker233   <none>           <none>
deploy-taint-6f84f9cb7-tnxw9   1/1     Running   0          9s    10.100.2.4     worker233   <none>           <none>

2. Apply a NoExecute taint to 232 and watch the scheduling: the ages alone show that the Pods previously scheduled there have been rescheduled onto 233

[root@Master231 pods-scheduler]# kubectl taint node worker232 authpr=koten:NoExecute
node/worker232 tainted
[root@Master231 pods-scheduler]# kubectl get pods -o wide
NAME                           READY   STATUS    RESTARTS   AGE    IP            NODE        NOMINATED NODE   READINESS GATES
deploy-taint-6f84f9cb7-55vf5   1/1     Running   0          8s     10.100.2.9    worker233   <none>           <none>
deploy-taint-6f84f9cb7-5ch6t   1/1     Running   0          8s     10.100.2.8    worker233   <none>           <none>
deploy-taint-6f84f9cb7-cbfwh   1/1     Running   0          8s     10.100.2.10   worker233   <none>           <none>
deploy-taint-6f84f9cb7-cgqcl   1/1     Running   0          3m7s   10.100.2.2    worker233   <none>           <none>
deploy-taint-6f84f9cb7-grfff   1/1     Running   0          3m7s   10.100.2.3    worker233   <none>           <none>
deploy-taint-6f84f9cb7-htxh8   1/1     Running   0          3m7s   10.100.2.6    worker233   <none>           <none>
deploy-taint-6f84f9cb7-p6xwx   1/1     Running   0          8s     10.100.2.7    worker233   <none>           <none>
deploy-taint-6f84f9cb7-sbsbr   1/1     Running   0          3m7s   10.100.2.5    worker233   <none>           <none>
deploy-taint-6f84f9cb7-tnxw9   1/1     Running   0          3m7s   10.100.2.4    worker233   <none>           <none>
deploy-taint-6f84f9cb7-z5t6t   1/1     Running   0          8s     10.100.2.11   worker233   <none>           <none>

4. If during testing you both apply a NoExecute taint and pin Pods to that node with nodeName, a huge number of Pods may pile up, and past a certain point the node will crash. You can try rebooting that node and, while the Pods are stuck terminating, force-delete them from etcd with the command below

kubectl delete pods --all --force

II. Taint Tolerations

As the name suggests, a toleration is used to tolerate a taint. When a node is tainted but we still want to schedule onto it, and removing the taint might affect other Pods, a toleration is the answer. Tolerations make Pod scheduling more flexible: the Pod can be placed wherever we want it.

1. Write the manifest and configure the tolerations; the format is somewhat similar to how an rs resource matches rules

[root@Master231 tolerations]# cat 01-tolerations.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-tolerations
spec:
  replicas: 10
  selector:
    matchExpressions:
    - key: apps
      values: 
      - "v1"
      - "v2"
      operator: NotIn
  template:
    metadata:
      labels:
        author: koten
    spec:
      # Configure taint tolerations
      tolerations:
      # - key: author
      #   value: koten
      #   effect: NoExecute
      # - key: author
      #   value: koten
      #   effect: PreferNoSchedule
      # - key: hobby
      #   effect: NoSchedule
      # - key: node-role.kubernetes.io/master
      #   effect: NoSchedule
      # 
      # If key is omitted, operator must be Exists, which matches every key; in that case value is not needed either.
      # If effect is omitted, every effect is matched, i.e. NoExecute, PreferNoSchedule and NoSchedule.
      # Putting these together, the configuration below ignores taints entirely. Not recommended; use it only for testing!
      # 
      # operator can be Exists or Equal. Exists means the Pod tolerates any taint with the specified key; Equal means the Pod tolerates a taint with the specified key and value.
      - operator: Exists    
      containers:
      - name: v1
        image: harbor.koten.com/koten-web/nginx:1.24.0-alpine

2. Apply the manifest: the Pods are spread evenly across all three nodes, even though taints are present on 231 and 232, which shows the taints are being tolerated

[root@Master231 tolerations]# kubectl apply -f 01-tolerations.yaml 
deployment.apps/deploy-tolerations created
[root@Master231 tolerations]# kubectl get pods -o wide
NAME                                  READY   STATUS    RESTARTS   AGE   IP             NODE        NOMINATED NODE   READINESS GATES
deploy-tolerations-658cd4c7b8-9mtjk   1/1     Running   0          5s    10.100.1.174   worker232   <none>           <none>
deploy-tolerations-658cd4c7b8-b8h6z   1/1     Running   0          5s    10.100.2.12    worker233   <none>           <none>
deploy-tolerations-658cd4c7b8-j5b8n   1/1     Running   0          5s    10.100.0.2     master231   <none>           <none>
deploy-tolerations-658cd4c7b8-j7xrt   1/1     Running   0          5s    10.100.1.176   worker232   <none>           <none>
deploy-tolerations-658cd4c7b8-lnvmj   1/1     Running   0          5s    10.100.1.177   worker232   <none>           <none>
deploy-tolerations-658cd4c7b8-mxzns   1/1     Running   0          5s    10.100.2.13    worker233   <none>           <none>
deploy-tolerations-658cd4c7b8-pnpz7   1/1     Running   0          5s    10.100.1.175   worker232   <none>           <none>
deploy-tolerations-658cd4c7b8-rdt4b   1/1     Running   0          5s    10.100.0.4     master231   <none>           <none>
deploy-tolerations-658cd4c7b8-s8jhr   1/1     Running   0          5s    10.100.2.14    worker233   <none>           <none>
deploy-tolerations-658cd4c7b8-wvxmq   1/1     Running   0          5s    10.100.0.3     master231   <none>           <none>
[root@Master231 tolerations]# kubectl describe nodes | grep -i taints
Taints:             node-role.kubernetes.io/master:NoSchedule
Taints:             authpr=koten:NoExecute
Taints:             <none>
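
One related field worth knowing: for the NoExecute effect a toleration can also carry tolerationSeconds, which lets a Pod stay on a newly tainted node for a limited time before being evicted. A minimal fragment (the value is illustrative):

      tolerations:
      - key: author
        operator: Equal
        value: koten
        effect: NoExecute
        # stay for 60 seconds after the taint appears, then get evicted
        tolerationSeconds: 60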

III. Node Selector

If a node has a taint, we can still schedule onto it with a toleration or by pinning with nodeName. But what if we want to schedule onto two specific nodes? That is where the node selector comes in. It is similar to the label selector, except that the label selector associates Pods by their labels while the node selector picks nodes by theirs.

Example: 231 and 233 use SSD storage, 232 uses HDD, and we want the Pods scheduled onto the SSD nodes.

1. Label the nodes

[root@Master231 tolerations]# kubectl label nodes master231 type=ssd
node/master231 labeled
[root@Master231 tolerations]# kubectl label nodes worker232 type=hdd
node/worker232 labeled
[root@Master231 tolerations]# kubectl label nodes worker233 type=ssd
node/worker233 labeled

[root@Master231 tolerations]# kubectl get nodes --show-labels
NAME        STATUS   ROLES                  AGE   VERSION    LABELS
master231   Ready    control-plane,master   8d    v1.23.17   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master231,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=,type=ssd
worker232   Ready    <none>                 8d    v1.23.17   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=worker232,kubernetes.io/os=linux,type=hdd
worker233   Ready    <none>                 8d    v1.23.17   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=worker233,kubernetes.io/os=linux,type=ssd

2. Write and apply the manifest: scheduling follows the configured ssd label, and the Pods land on 231 and 233

[root@Master231 nodeSelector]# cat 01-nodeSelector.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-tolerations-nodeselector
spec:
  replicas: 10
  selector:
    matchExpressions:
    - key: apps
      values: 
      - "v1"
      - "v2"
      operator: NotIn
  template:
    metadata:
      labels:
        author: koten
    spec:
      # Schedule based on node labels: place the Pods on nodes labelled with key type and value ssd.
      nodeSelector:
        type: ssd
      # Configure taint tolerations
      tolerations:
      - key: author
        operator: Exists
      - key: hobby
        effect: NoSchedule
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: v1
        image: harbor.koten.com/koten-web/nginx:1.24.0-alpine
[root@Master231 nodeSelector]# kubectl apply -f 01-nodeSelector.yaml
deployment.apps/deploy-tolerations-nodeselector created
[root@Master231 nodeSelector]# kubectl get pods -o wide
NAME                                               READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
deploy-tolerations-nodeselector-5574bdddb4-6j56v   1/1     Running   0          6s    10.100.0.44   master231   <none>           <none>
deploy-tolerations-nodeselector-5574bdddb4-6lb4m   1/1     Running   0          6s    10.100.0.46   master231   <none>           <none>
deploy-tolerations-nodeselector-5574bdddb4-9xmbg   1/1     Running   0          6s    10.100.2.52   worker233   <none>           <none>
deploy-tolerations-nodeselector-5574bdddb4-dmp45   1/1     Running   0          6s    10.100.2.55   worker233   <none>           <none>
deploy-tolerations-nodeselector-5574bdddb4-f7b95   1/1     Running   0          6s    10.100.0.43   master231   <none>           <none>
deploy-tolerations-nodeselector-5574bdddb4-jbbqp   1/1     Running   0          6s    10.100.2.54   worker233   <none>           <none>
deploy-tolerations-nodeselector-5574bdddb4-k2srk   1/1     Running   0          6s    10.100.0.45   master231   <none>           <none>
deploy-tolerations-nodeselector-5574bdddb4-s8f57   1/1     Running   0          6s    10.100.0.42   master231   <none>           <none>
deploy-tolerations-nodeselector-5574bdddb4-sxw7p   1/1     Running   0          6s    10.100.2.53   worker233   <none>           <none>
deploy-tolerations-nodeselector-5574bdddb4-xsfw6   1/1     Running   0          6s    10.100.2.58   worker233   <none>           <none>
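
For completeness, the node labels can be inspected and removed at any time; taking the label off a node simply removes it from the candidate set of this selector (standard kubectl usage):

# list only the SSD nodes
kubectl get nodes -l type=ssd
# remove the type label from a node
kubectl label nodes worker232 type-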

IV. Affinity

Affinity is one of the more brain-bending parts of Pod scheduling. It comes in three kinds, node affinity, Pod affinity and Pod anti-affinity, each with a different effect.

1. Node Affinity

Node affinity declares affinity labels in the manifest: if a node carries a matching label, either the key alone or both key and value, the Pod is scheduled onto it. It is similar to the node selector, except that the matching supports expressions, much like the matching rules of an rs resource.

1. Label the nodes

[root@Master231 daemonsets]# kubectl label nodes master231 author=koten1
node/master231 labeled
[root@Master231 daemonsets]# kubectl label nodes worker232 author=koten2
node/worker232 labeled
[root@Master231 daemonsets]# kubectl label nodes worker233 author=koten3
node/worker233 labeled

[root@Master231 daemonsets]# kubectl get nodes --show-labels 
NAME        STATUS   ROLES                  AGE   VERSION    LABELS
master231   Ready    control-plane,master   8d    v1.23.17   author=koten1,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master231,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=,type=ssd
worker232   Ready    <none>                 8d    v1.23.17   author=koten2,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=worker232,kubernetes.io/os=linux,type=hdd
worker233   Ready    <none>                 8d    v1.23.17   author=koten3,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=worker233,kubernetes.io/os=linux,type=ssd

2. Write and apply the manifest, then check the scheduling: the Pods are spread evenly over 231 and 232, the nodes whose labels we matched.

[root@Master231 nodeAffinity]# cat 01-nodeAffinity.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-tolerations-nodeaffinity-demo02
spec:
  replicas: 10
  selector:
    matchExpressions:
    - key: apps
      values: 
      - "v1"
      - "v2"
      operator: NotIn
  template:
    metadata:
      labels:
        author: koten
    spec:
      # Schedule based on node labels: place the Pods on nodes labelled with key type and value ssd.
      # nodeSelector:
      #   type: ssd
      #
      # Configure affinity
      affinity:
        # Configure node affinity
        nodeAffinity:
           # Hard limit; there is also a soft limit, which is rarely used and is a bit like PreferNoSchedule, i.e. only a preference
          requiredDuringSchedulingIgnoredDuringExecution:
            # Specify how the node selector terms match
            nodeSelectorTerms:
              # Match based on labels
            - matchExpressions:
                # Specify the key name
              - key: author
                # Specify the list of values; note the following:
                #    1. When operator is Exists or DoesNotExist, this list must be empty.
                #    2. When operator is In or NotIn, this list must be non-empty.
                #    3. When operator is Gt or Lt, this list must be a single integer.
                values: 
                - koten1
                - koten2
                # Specify how the key relates to the values; valid operators are:
                #    In:
                #      The key's value is in the values list.
                #    NotIn:
                #      The key's value is not in the values list.
                #    Exists:
                #      The key only has to exist; the value can be anything and may be omitted.
                #    DoesNotExist:
                #      Matches nodes that do not have the specified key.
                #    Gt:
                #      Greater than.
                #    Lt:
                #      Less than.
                operator: In
                #operator: Exists
      # Configure taint tolerations
      tolerations:
      - key: author
        operator: Exists
      - key: hobby
        effect: NoSchedule
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: v1
        image: harbor.koten.com/koten-web/nginx:1.24.0-alpine

[root@Master231 nodeAffinity]# kubectl apply -f 01-nodeAffinity.yaml
deployment.apps/deploy-tolerations-nodeaffinity-demo02 created
[root@Master231 nodeAffinity]# kubectl get pods -o wide
NAME                                                      READY   STATUS    RESTARTS   AGE   IP             NODE        NOMINATED NODE   READINESS GATES
deploy-tolerations-nodeaffinity-demo02-844fd84d54-5n5bn   1/1     Running   0          39s   10.100.1.198   worker232   <none>           <none>
deploy-tolerations-nodeaffinity-demo02-844fd84d54-6hzmb   1/1     Running   0          39s   10.100.1.197   worker232   <none>           <none>
deploy-tolerations-nodeaffinity-demo02-844fd84d54-84kc6   1/1     Running   0          39s   10.100.0.55    master231   <none>           <none>
deploy-tolerations-nodeaffinity-demo02-844fd84d54-jrh89   1/1     Running   0          39s   10.100.1.199   worker232   <none>           <none>
deploy-tolerations-nodeaffinity-demo02-844fd84d54-jvxwl   1/1     Running   0          39s   10.100.0.56    master231   <none>           <none>
deploy-tolerations-nodeaffinity-demo02-844fd84d54-nbmjk   1/1     Running   0          39s   10.100.0.57    master231   <none>           <none>
deploy-tolerations-nodeaffinity-demo02-844fd84d54-rwlzl   1/1     Running   0          39s   10.100.1.202   worker232   <none>           <none>
deploy-tolerations-nodeaffinity-demo02-844fd84d54-xwz58   1/1     Running   0          39s   10.100.1.201   worker232   <none>           <none>
deploy-tolerations-nodeaffinity-demo02-844fd84d54-zdbmc   1/1     Running   0          39s   10.100.1.200   worker232   <none>           <none>
deploy-tolerations-nodeaffinity-demo02-844fd84d54-zz6kp   1/1     Running   0          39s   10.100.0.54    master231   <none>           <none>
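
The manifest above uses the hard requirement (requiredDuringSchedulingIgnoredDuringExecution). The soft variant mentioned in the comments, preferredDuringSchedulingIgnoredDuringExecution, expresses a weighted preference instead of a mandate; a minimal sketch of that block (weight and values are illustrative):

      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 50
            preference:
              matchExpressions:
              - key: author
                operator: In
                values:
                - koten1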

2. Pod Affinity

Pod affinity means that once the first Pod has been scheduled to some node, subsequent Pods carrying the affinity rule are scheduled to the same place. The Pods end up in the desired topology domain, which is determined by where the first Pod landed.

1. Reuse the node labels applied in the node affinity section above

2. Write the manifest, apply it and watch: all the Pods are scheduled onto a single node

[root@Master231 podAffinity]# cat 01-podAffinity.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-tolerations-podaffinity
spec:
  replicas: 10
  selector:
    matchExpressions:
    - key: apps
      values: 
      - "v1"
      - "v2"
      operator: In
  template:
    metadata:
      labels:
        apps: v1
    spec:
      # Configure affinity
      affinity:
        # Configure Pod affinity
        podAffinity:
           # Hard limit
           requiredDuringSchedulingIgnoredDuringExecution:
             # Specify the topology key; think of it as the node label that defines the domain to group by
           - topologyKey: author
             # Configure the label selector
             labelSelector:
                # Match based on Pod labels
                matchLabels:
                  apps: v1 
      # Configure taint tolerations
      tolerations:
      - key: author
        operator: Exists
      - key: hobby
        effect: NoSchedule
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: v1
        image: harbor.koten.com/koten-web/nginx:1.24.0-alpine

[root@Master231 podAffinity]# kubectl apply -f 01-podAffinity.yaml
deployment.apps/deploy-tolerations-podaffinity created
[root@Master231 podAffinity]# kubectl get pods -o wide
NAME                                              READY   STATUS    RESTARTS   AGE   IP             NODE        NOMINATED NODE   READINESS GATES
deploy-tolerations-podaffinity-6846c97b48-2lktv   1/1     Running   0          14s   10.100.1.212   worker232   <none>           <none>
deploy-tolerations-podaffinity-6846c97b48-45mb9   1/1     Running   0          14s   10.100.1.208   worker232   <none>           <none>
deploy-tolerations-podaffinity-6846c97b48-6v2q9   1/1     Running   0          14s   10.100.1.211   worker232   <none>           <none>
deploy-tolerations-podaffinity-6846c97b48-kdrc9   1/1     Running   0          14s   10.100.1.213   worker232   <none>           <none>
deploy-tolerations-podaffinity-6846c97b48-p5rtz   1/1     Running   0          14s   10.100.1.210   worker232   <none>           <none>
deploy-tolerations-podaffinity-6846c97b48-pnpw5   1/1     Running   0          14s   10.100.1.209   worker232   <none>           <none>
deploy-tolerations-podaffinity-6846c97b48-vgn8l   1/1     Running   0          14s   10.100.1.217   worker232   <none>           <none>
deploy-tolerations-podaffinity-6846c97b48-vxsk9   1/1     Running   0          14s   10.100.1.215   worker232   <none>           <none>
deploy-tolerations-podaffinity-6846c97b48-wmlz7   1/1     Running   0          14s   10.100.1.216   worker232   <none>           <none>
deploy-tolerations-podaffinity-6846c97b48-wrbhc   1/1     Running   0          14s   10.100.1.214   worker232   <none>           <none>
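
Here topologyKey: author groups nodes by the author label, and since every node carries a different value, each domain is effectively a single node. In practice the most common topology key for same-node co-location is the built-in kubernetes.io/hostname label; a hedged fragment of how that would look:

        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - topologyKey: kubernetes.io/hostname
            labelSelector:
              matchLabels:
                apps: v1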

3. Pod Anti-Affinity

Pod anti-affinity means that once the first Pod has been scheduled to some node, subsequent Pods carrying the anti-affinity rule are not scheduled onto nodes that already host a matching Pod. It likewise places Pods into the desired topology domain.

1. Reuse the node labels applied in the node affinity section above

2. Write and apply the manifest: only three Pods end up running, because there are only three nodes and, due to the anti-affinity, the remaining Pods have no node left to run on

[root@Master231 podAntiAffinity]# cat 01-podAntiAffinity.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-tolerations-podantiaffinity
spec:
  replicas: 10
  selector:
    matchExpressions:
    - key: apps
      values: 
      - "v1"
      - "v2"
      operator: In
  template:
    metadata:
      labels:
        apps: v1
    spec:
      # Schedule based on node labels: place the Pods on nodes labelled with key type and value ssd.
      # nodeSelector:
      #   type: ssd
      #
      # Configure affinity
      affinity:
        # Configure Pod anti-affinity
        podAntiAffinity:
           # Hard limit
           requiredDuringSchedulingIgnoredDuringExecution:
             # Specify the topology key; think of it as the node label that defines the domain to group by
           - topologyKey: author
             # Configure the label selector
             labelSelector:
                # Match based on Pod labels
                matchLabels:
                  apps: v1 
      # Configure taint tolerations
      tolerations:
      - key: author
        operator: Exists
      - key: hobby
        effect: NoSchedule
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: v1
        image: harbor.koten.com/koten-web/nginx:1.24.0-alpine
[root@Master231 podAntiAffinity]# kubectl apply -f 01-podAntiAffinity.yaml 
deployment.apps/deploy-tolerations-podantiaffinity created
[root@Master231 podAntiAffinity]# kubectl get pods -o wide
NAME                                                  READY   STATUS              RESTARTS   AGE   IP             NODE        NOMINATED NODE   READINESS GATES
deploy-tolerations-podantiaffinity-5b6b749bd4-2mfrt   0/1     ContainerCreating   0          3s    <none>         master231   <none>           <none>
deploy-tolerations-podantiaffinity-5b6b749bd4-bshsd   0/1     ContainerCreating   0          3s    <none>         master231   <none>           <none>
deploy-tolerations-podantiaffinity-5b6b749bd4-j5z25   0/1     ContainerCreating   0          3s    <none>         worker232   <none>           <none>
deploy-tolerations-podantiaffinity-5b6b749bd4-jtncw   0/1     ContainerCreating   0          3s    <none>         master231   <none>           <none>
deploy-tolerations-podantiaffinity-5b6b749bd4-jvdf4   0/1     ContainerCreating   0          3s    <none>         worker232   <none>           <none>
deploy-tolerations-podantiaffinity-5b6b749bd4-khltm   1/1     Running             0          3s    10.100.1.240   worker232   <none>           <none>
deploy-tolerations-podantiaffinity-5b6b749bd4-ldkbq   0/1     ContainerCreating   0          3s    <none>         worker232   <none>           <none>
deploy-tolerations-podantiaffinity-5b6b749bd4-nf9hn   0/1     ContainerCreating   0          3s    <none>         worker233   <none>           <none>
deploy-tolerations-podantiaffinity-5b6b749bd4-p8xf2   1/1     Running             0          3s    10.100.2.72    worker233   <none>           <none>
deploy-tolerations-podantiaffinity-5b6b749bd4-qcmks   1/1     Running             0          3s    10.100.2.74    worker233   <none>           <none>
[root@Master231 podAntiAffinity]# 

V. The DaemonSet Controller

The DaemonSet controller makes sure every node runs exactly one copy of the Pod. It is useful for deploying a zabbix agent, an nfs client, filebeat for log collection, and similar per-node workloads. The DaemonSet controller's priority is fairly low: when a node carries a taint, it will not schedule onto that node.

Some people wrongly conflate anti-affinity with the DaemonSet controller. My understanding: the controller's goal is to put a Pod everywhere, exactly one per node, whereas anti-affinity's goal is to avoid any node that already has a matching Pod. With 2 replicas and 3 nodes, anti-affinity deploys onto only 2 of them, so it is not "one per node on every node".

1. Write the DaemonSet controller manifest

[root@Master231 daemonsets]# cat 01-ds-yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: koten-ds
spec:
  selector:
    matchLabels:
      apps: v1
  template:
    metadata:
      labels:
        apps: v1
    spec:
      # nodeSelector:
      #   type: ssd
      #
      # affinity:
      #   nodeAffinity:
      #      requiredDuringSchedulingIgnoredDuringExecution:
      #        nodeSelectorTerms:
      #        - matchExpressions:
      #          - key: author
      #            values: 
      #            - koten
      #            - yitiantain
      #            operator: In
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      containers:
      - name: web
        image: harbor.koten.com/koten-web/nginx:1.24.0-alpine

2. Apply the manifest and check the result: one Pod is deployed on each of the three nodes

[root@Master231 daemonsets]# kubectl apply -f 01-ds-yaml 
daemonset.apps/koten-ds created
[root@Master231 daemonsets]# kubectl get pods -o wide
NAME             READY   STATUS    RESTARTS   AGE   IP             NODE        NOMINATED NODE   READINESS GATES
koten-ds-c254n   1/1     Running   0          20s   10.100.1.195   worker232   <none>           <none>
koten-ds-mgspq   1/1     Running   0          20s   10.100.2.65    worker233   <none>           <none>
koten-ds-qz5k9   1/1     Running   0          20s   10.100.0.53    master231   <none>           <none>
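
The DaemonSet object itself also reports the per-node rollout status; a quick check (the column layout shown in the comment is the usual kubectl output, written from memory rather than captured here):

kubectl get ds koten-ds
# NAME       DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE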

3. Remove the toleration for the master taint from the manifest (saved as 02-ds-yaml) and re-apply: the Pod scheduled on node 231 disappears, which again shows that its priority is not high

[root@Master231 daemonsets]# kubectl apply -f 02-ds-yaml
daemonset.apps/koten-ds configured
[root@Master231 daemonsets]# kubectl get pods -o wide
NAME             READY   STATUS    RESTARTS   AGE   IP             NODE        NOMINATED NODE   READINESS GATES
koten-ds-lxt2j   1/1     Running   0          14s   10.100.1.196   worker232   <none>           <none>
koten-ds-zv2jt   1/1     Running   0          9s    10.100.2.66    worker233   <none>           <none>

The Job Controller

The Job controller is typically used for one-off tasks such as data backups, log cleanup and scheduled work. Once the task finishes, the Job controller stops the Pod. Honestly it is not used all that much in companies; I always feel it is limited, because the Pod runs its task right after starting, and an image that contains nothing to begin with has nothing that needs cleaning up or backing up.

I. Example

1. Make the image available by pushing it to harbor. I have prepared a perl image that uses the bignum module and can print the digits of pi

[root@Master231 svc]# docker push harbor.koten.com/koten-tools/perl:5.34

2. Write the Job manifest so that the Pod exits once the command has finished

[root@Master231 jobs]# cat 01-jobs.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: koten-pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: harbor.koten.com/koten-tools/perl:5.34
        command: ["perl",  "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
  # Number of retries before this Job is marked as failed. Defaults to 6.
  backoffLimit: 4

3. Apply the manifest. Once it has run, the Pod sits in Completed; the logs show pi to 2000 digits, and pasting the output into a text editor confirms it really is 3 followed by 2000 digits

[root@Master231 jobs]# kubectl get pods
NAME             READY   STATUS      RESTARTS   AGE
koten-pi-pf8lf   0/1     Completed   0          5m8s

[root@Master231 jobs]# kubectl logs -f koten-pi-pf8lf 
3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679821480865132823066470938446095505822317253594081284811174502841027019385211055596446229489549303819644288109756659334461284756482337867831652712019091456485669234603486104543266482133936072602491412737245870066063155881748815209209628292540917153643678925903600113305305488204665213841469519415116094330572703657595919530921861173819326117931051185480744623799627495673518857527248912279381830119491298336733624406566430860213949463952247371907021798609437027705392171762931767523846748184676694051320005681271452635608277857713427577896091736371787214684409012249534301465495853710507922796892589235420199561121290219608640344181598136297747713099605187072113499999983729780499510597317328160963185950244594553469083026425223082533446850352619311881710100031378387528865875332083814206171776691473035982534904287554687311595628638823537875937519577818577805321712268066130019278766111959092164201989380952572010654858632788659361533818279682303019520353018529689957736225994138912497217752834791315155748572424541506959508295331168617278558890750983817546374649393192550604009277016711390098488240128583616035637076601047101819429555961989467678374494482553797747268471040475346462080466842590694912933136770289891521047521620569660240580381501935112533824300355876402474964732639141992726042699227967823547816360093417216412199245863150302861829745557067498385054945885869269956909272107975093029553211653449872027559602364806654991198818347977535663698074265425278625518184175746728909777727938000816470600161452491921732172147723501414419735685481613611573525521334757418494684385233239073941433345477624168625189835694855620992192221842725502542568876717904946016534668049886272327917860857843838279679766814541009538837863609506800642251252051173929848960841284886269456042419652850222106611863067442786220391949450471237137869609563643719172874677646575739624138908658326459958133904780275901
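
For scripting around one-off Jobs, completion can also be checked or waited on against the Job object instead of the Pod (standard kubectl, shown as a sketch):

kubectl get job koten-pi
# block until the Job reports the Complete condition, or give up after 2 minutes
kubectl wait --for=condition=complete job/koten-pi --timeout=120s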

The CronJob Controller

The CronJob controller is a Kubernetes controller type for running tasks on a schedule. It takes a cron expression that defines when the task runs, and it creates and removes Pods automatically so the task executes at the specified time; under the hood it periodically invokes the Job controller. Note that it periodically creates new Pods: the task does not run periodically inside one Pod, nor is the same Pod started periodically.

I. Example

1. Write the cj manifest

[root@Master231 cronjobs]# cat 01-cj.yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: koten-cj
spec:
  # Define the schedule: minute, hour, day of month, month, day of week. Here a Job controller is created every minute.
  schedule: "*/1 * * * *"
  # Define the Job controller template
  jobTemplate:
    spec:
      template:
        spec:
          nodeName: worker233
          volumes:
          - name: data
            hostPath: 
              path: /koten-linux-cj
          containers:
          - name: c1
            image: harbor.koten.com/koten-linux/alpine:latest
            imagePullPolicy: IfNotPresent
            volumeMounts:
            - name: data
              mountPath: /data
            command:
            - /bin/sh
            - -c
            - date >> /data/apps-cj.log
          restartPolicy: OnFailure

2. Apply the cj manifest and wait about a minute, then check the log file on the hostPath of the designated worker node as well as the state of the cj, job and Pod. Right on the minute a log line is written, a Job has been added, and a Pod has been created and has completed its task

[root@Master231 cronjobs]# kubectl apply -f 01-cj.yaml
cronjob.batch/koten-cj created

[root@Worker233 ~]# cat /koten-linux-cj/apps-cj.log 
Thu Jun 22 14:28:01 UTC 2023

[root@Master231 cronjobs]# kubectl get cj,job,pod
NAME                     SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
cronjob.batch/koten-cj   */1 * * * *   False     0        34s             94s

NAME                          COMPLETIONS   DURATION   AGE
job.batch/koten-cj-28124068   1/1           4s         34s

NAME                          READY   STATUS      RESTARTS   AGE
pod/koten-cj-28124068-7xqj4   0/1     Completed   0          34s

3. At the second minute a new log line appears, a new Job has been created and has already completed, and a new Pod shows up in the Completed state while the previous Pod still exists. And so on: later minutes look much like the second one.

[root@Worker233 ~]# cat /koten-linux-cj/apps-cj.log 
Thu Jun 22 14:28:01 UTC 2023
Thu Jun 22 14:29:01 UTC 2023

[root@Master231 cronjobs]# kubectl get cj,job,pod
NAME                     SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
cronjob.batch/koten-cj   */1 * * * *   False     0        40s             2m40s

NAME                          COMPLETIONS   DURATION   AGE
job.batch/koten-cj-28124068   1/1           4s         100s
job.batch/koten-cj-28124069   1/1           3s         40s

NAME                          READY   STATUS      RESTARTS   AGE
pod/koten-cj-28124068-7xqj4   0/1     Completed   0          100s
pod/koten-cj-28124069-hdmcs   0/1     Completed   0          40s
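
Since every tick leaves a Completed Job and Pod behind, it is worth knowing that the CronJob spec has fields to cap how many finished Jobs are kept, plus a concurrency policy; a hedged fragment (the values are illustrative, the field names are standard CronJob spec fields):

spec:
  schedule: "*/1 * * * *"
  # keep at most 3 successful and 1 failed Jobs around (these are also the defaults)
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
  # skip a new run if the previous one is still running
  concurrencyPolicy: Forbid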

I am koten, with 10 years of ops experience, continuously sharing practical ops knowledge. Thanks for reading and following!
