Table of Contents
1 What Is a Controller
Official docs: Workload Management | Kubernetes
A controller is one means of managing pods:
- Standalone pods: if the pod exits or is shut down unexpectedly, it is not recreated
- Controller-managed pods: throughout the controller's lifecycle, the desired number of pod replicas is always maintained
A pod controller is an intermediate layer for managing pods. With a controller, you only need to declare how many pods of what kind you want; the controller creates pods that satisfy those conditions and ensures every pod stays in the state the user desires. If a pod fails while running, the controller re-creates or reschedules it according to the specified policy.
When a controller is created, the desired state is written to etcd. Through the apiserver, the control loop reads the desired state saved in etcd, compares it with the pods' current state, and automatically reconciles any difference back toward the desired state.
2 Common Controller Types
Controller | Purpose |
---|---|
ReplicationController | The original pod controller; deprecated and replaced by ReplicaSet |
ReplicaSet | Ensures that a specified number of Pod replicas are running at any given time |
Deployment | Provides declarative updates for Pods and ReplicaSets |
DaemonSet | Ensures that all (or some) specified nodes run a copy of a Pod |
StatefulSet | The workload API object used to manage stateful applications |
Job | Runs batch tasks that execute once, ensuring one or more Pods of the task finish successfully |
CronJob | Creates Jobs on a time-based schedule |
HPA (Horizontal Pod Autoscaler) | Automatically adjusts the number of Pods behind a service based on resource utilization, providing horizontal pod autoscaling |
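Unlike the other controllers in this table, the HPA is declared as its own API object that points at an existing workload. A minimal sketch (the target deployment name `myapp` and the 60% threshold are illustrative; the fields follow the autoscaling/v2 API):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa
spec:
  scaleTargetRef:              # the workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 60 # add pods when average CPU passes 60%
```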
3 ReplicaSet Controller
3.1 ReplicaSet Functionality
- ReplicaSet is the next generation of ReplicationController; ReplicaSet is the officially recommended replacement
- The only difference between ReplicaSet and ReplicationController is selector support: ReplicaSet supports the newer set-based selector requirements
- A ReplicaSet ensures that a specified number of Pod replicas are running at any given time
- Although ReplicaSets can be used on their own, today they are mainly used by Deployments as the mechanism that orchestrates Pod creation, deletion, and updates
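The set-based selectors mentioned above are written with matchExpressions rather than matchLabels. A minimal sketch (the name and label values are illustrative):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: replicaset-set-selector
spec:
  replicas: 2
  selector:
    matchExpressions:          # set-based selector
    - key: app
      operator: In             # In, NotIn, Exists, DoesNotExist
      values:
      - myapp
      - myapp-canary           # pods with either label value are matched
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - image: myapp:v1
        name: myapp
```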
3.2 ReplicaSet Parameters
Parameter | Type | Description |
---|---|---|
spec | Object | Detailed definition of the object; always written as spec |
spec.replicas | integer | The number of pods to maintain |
spec.selector | Object | A label query over pods; matching pods count toward the replica total |
spec.selector.matchLabels | string | The label name and value the selector queries, given as key: value |
spec.template | Object | The pod description: labels, container definitions, and so on |
spec.template.metadata | Object | Pod attributes |
spec.template.metadata.labels | string | Pod labels |
spec.template.spec | Object | Detailed definition of the pod |
spec.template.spec.containers | list | The container list in the pod spec |
spec.template.spec.containers.name | string | Container name |
spec.template.spec.containers.image | string | Container image |
3.3 ReplicaSet Example
#Generate the yml file
[root@k8s-master ~]# kubectl create deployment replicaset --image myapp:v1 --dry-run=client -o yaml > replicaset.yml
[root@k8s-master ~]# vim replicaset.yml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: replicaset       #pod name; must be lowercase, uppercase letters cause an error
spec:
  replicas: 2            #maintain 2 pod replicas
  selector:              #how matching pods are detected
    matchLabels:         #match by labels
      app: myapp         #match the label app=myapp
  template:              #template used to create pod replicas when the count falls short
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - image: myapp:v1
        name: myapp
[root@k8s-master ~]# kubectl apply -f replicaset.yml
replicaset.apps/replicaset created
[root@k8s-master ~]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
replicaset-nsnhd 1/1 Running 0 45s app=myapp
replicaset-tnr4b 1/1 Running 0 4m4s app=myapp
#A ReplicaSet matches pods by their labels
[root@k8s-master ~]# kubectl label pod replicaset-nsnhd app=superhowe --overwrite
pod/replicaset-nsnhd labeled
[root@k8s-master ~]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
replicaset-nsnhd 1/1 Running 0 2m43s app=superhowe
replicaset-tnr4b 1/1 Running 0 6m2s app=myapp
replicaset-xstsn 1/1 Running 0 2s app=myapp
#Remove the overwritten label; the pod is no longer matched by the ReplicaSet
[root@k8s-master ~]# kubectl label pod replicaset-nsnhd app-
pod/replicaset-nsnhd unlabeled
[root@k8s-master ~]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
replicaset-nsnhd 1/1 Running 0 3m21s <none>
replicaset-tnr4b 1/1 Running 0 6m40s app=myapp
replicaset-xstsn 1/1 Running 0 40s app=myapp
#The ReplicaSet maintains the replica count automatically, so pods self-heal
[root@k8s-master ~]# kubectl delete pods replicaset-nsnhd
pod "replicaset-nsnhd" deleted
[root@k8s-master ~]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
replicaset-tnr4b 1/1 Running 0 10m app=myapp
replicaset-xstsn 1/1 Running 0 4m11s app=myapp
#Clean up the resources
[root@k8s-master ~]# kubectl delete -f replicaset.yml
replicaset.apps "replicaset" deleted
4 Deployment Controller
4.1 Deployment Controller Functionality
- To better solve the service-orchestration problem, Kubernetes introduced the Deployment controller in v1.2
- A Deployment does not manage pods directly; it manages them indirectly through ReplicaSets
- Deployment manages ReplicaSet, and ReplicaSet manages Pod
- A Deployment provides a declarative way to define Pods and ReplicaSets
- Within a Deployment, each ReplicaSet corresponds to one version
Typical use cases:
- Creating Pods and ReplicaSets
- Rolling updates and rollbacks
- Scaling out and scaling in
- Pausing and resuming rollouts
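Because each ReplicaSet corresponds to one version, the number of old versions retained for rollback can be bounded in the manifest. A sketch reusing the myapp image from this section's examples (`revisionHistoryLimit` is a standard apps/v1 field, defaulting to 10):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment
spec:
  replicas: 4
  revisionHistoryLimit: 3  # keep at most 3 old ReplicaSets (versions) for rollback
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - image: myapp:v1
        name: myapp
```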
4.2 Deployment Controller Example
#Generate the yaml file
[root@k8s-master ~]# kubectl create deployment deployment --image myapp:v1 --dry-run=client -o yaml > deployment.yml
[root@k8s-master ~]# vim deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment
spec:
  replicas: 4
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - image: myapp:v1
        name: myapp
#Create the pods
[root@k8s-master ~]# kubectl apply -f deployment.yml
deployment.apps/deployment created
#Check pod information
[root@k8s-master ~]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
deployment-5d886954d4-dlvgl 1/1 Running 0 15s app=myapp,pod-template-hash=5d886954d4
deployment-5d886954d4-dv7xj 1/1 Running 0 15s app=myapp,pod-template-hash=5d886954d4
deployment-5d886954d4-mjk9l 1/1 Running 0 15s app=myapp,pod-template-hash=5d886954d4
deployment-5d886954d4-rg75l 1/1 Running 0 15s app=myapp,pod-template-hash=5d886954d4
4.2.1 Rolling Out a New Version
[root@k8s-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deployment-5d886954d4-dlvgl 1/1 Running 0 2m20s 10.244.2.12 k8s-node2.exam.com <none> <none>
deployment-5d886954d4-dv7xj 1/1 Running 0 2m20s 10.244.1.13 k8s-node1.exam.com <none> <none>
deployment-5d886954d4-mjk9l 1/1 Running 0 2m20s 10.244.2.11 k8s-node2.exam.com <none> <none>
deployment-5d886954d4-rg75l 1/1 Running 0 2m20s 10.244.1.14 k8s-node1.exam.com <none> <none>
#The pods are running container version v1
[root@k8s-master ~]# curl 10.244.2.12
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@k8s-master ~]# curl 10.244.2.11
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@k8s-master ~]# kubectl describe deployments.apps deployment
Name: deployment
Namespace: default
CreationTimestamp: Thu, 05 Sep 2024 14:34:08 +0800
Labels: <none>
Annotations: deployment.kubernetes.io/revision: 1
Selector: app=myapp
Replicas: 4 desired | 4 updated | 4 total | 4 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge #by default at most 25% of the pods are replaced at a time
#Update the container image version
[root@k8s-master ~]# vim deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment
spec:
  replicas: 4
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - image: myapp:v2   #update to version 2
        name: myapp
[root@k8s-master ~]# kubectl apply -f deployment.yml
deployment.apps/deployment configured
#Watch the update process
[root@k8s-master ~]# kubectl get pods -w
NAME READY STATUS RESTARTS AGE
deployment-7f4786db9c-5587r 1/1 Running 0 2m54s
deployment-7f4786db9c-df2hl 1/1 Running 0 2m54s
deployment-7f4786db9c-nv5kz 1/1 Running 0 2m54s
deployment-7f4786db9c-rp75p 1/1 Running 0 2m54s
#Verify the update
[root@k8s-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deployment-7f4786db9c-5587r 1/1 Running 0 3m44s 10.244.1.18 k8s-node1.exam.com <none> <none>
deployment-7f4786db9c-df2hl 1/1 Running 0 3m44s 10.244.2.16 k8s-node2.exam.com <none> <none>
deployment-7f4786db9c-nv5kz 1/1 Running 0 3m44s 10.244.2.15 k8s-node2.exam.com <none> <none>
deployment-7f4786db9c-rp75p 1/1 Running 0 3m44s 10.244.1.17 k8s-node1.exam.com <none> <none>
[root@k8s-master ~]# curl 10.244.1.18
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
[root@k8s-master ~]# curl 10.244.2.16
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
[root@k8s-master ~]# curl 10.244.2.15
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
[root@k8s-master ~]# curl 10.244.1.17
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
Note: during an update, a new ReplicaSet is created for the new version; the new ReplicaSet re-creates the pods, and the old ReplicaSet is then scaled down.
4.2.2 Rollback
[root@k8s-master ~]# vim deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment
spec:
  replicas: 4
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - image: myapp:v1   #roll back to the previous version
        name: myapp
[root@k8s-master ~]# kubectl apply -f deployment.yml
deployment.apps/deployment configured
#Verify the rollback
[root@k8s-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deployment-5d886954d4-2z8hx 1/1 Running 0 8s 10.244.1.20 k8s-node1.exam.com <none> <none>
deployment-5d886954d4-5tjkd 1/1 Running 0 10s 10.244.1.19 k8s-node1.exam.com <none> <none>
deployment-5d886954d4-g6xrb 1/1 Running 0 9s 10.244.2.18 k8s-node2.exam.com <none> <none>
deployment-5d886954d4-xcfmr 1/1 Running 0 10s 10.244.2.17 k8s-node2.exam.com <none> <none>
[root@k8s-master ~]# curl 10.244.1.20
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@k8s-master ~]# curl 10.244.1.19
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@k8s-master ~]# curl 10.244.2.18
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@k8s-master ~]# curl 10.244.2.17
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
4.2.3 Rolling Update Strategy
[root@k8s-master ~]# vim deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment
spec:
  minReadySeconds: 5     #minimum time a new pod must stay Ready before it counts as available
  replicas: 4
  strategy:              #update strategy
    rollingUpdate:
      maxSurge: 1        #how many pods above the desired count may exist during the update
      maxUnavailable: 0  #how many pods below the desired count may be unavailable during the update
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - image: myapp:v1
        name: myapp
[root@k8s-master ~]# kubectl apply -f deployment.yml
deployment.apps/deployment created
[root@k8s-master ~]# kubectl describe deployments.apps deployment
[root@k8s-master ~]# kubectl get pods -o wide
4.2.4 Pause and Resume
In a real production environment, the changes we make are rarely limited to one place; if applying the first change already triggered a rollout, we would get an unwanted intermediate update.
What we want is to make all of the changes and then trigger the rollout once at the end.
Pausing the rollout avoids triggering unnecessary live updates.
[root@k8s-master ~]# kubectl create deployment deployment-example --image myapp:v1 --dry-run=client -o yaml > deployment.yml
[root@k8s-master ~]# kubectl rollout pause deployment deployment-example
[root@k8s-master ~]# vim deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-example
spec:
  minReadySeconds: 5
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  replicas: 6
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: nginx
        resources:
          limits:
            cpu: 0.5
            memory: 200Mi
          requests:
            cpu: 0.5
            memory: 200Mi
[root@k8s-master ~]# kubectl apply -f deployment.yml
deployment.apps/deployment-example created
#Scaling the replica count still takes effect while paused
[root@k8s-master ~]# kubectl describe pods deployment-example-86665b6fdb-6fdvx
#but the image and resource changes did not trigger an update
[root@k8s-master ~]# kubectl rollout history deployment deployment-example
#After resuming, the update is triggered
[root@k8s-master ~]# kubectl rollout resume deployment deployment-example
[root@k8s-master ~]# kubectl rollout history deployment deployment-example
deployment.apps/deployment-example
REVISION CHANGE-CAUSE
1 <none>
#Clean up
[root@k8s-master ~]# kubectl delete -f deployment.yml
5 DaemonSet Controller
5.1 DaemonSet Functionality
A DaemonSet ensures that all (or some) nodes run a copy of a Pod. As nodes join the cluster, a Pod is added for them; as nodes are removed from the cluster, those Pods are garbage-collected. Deleting a DaemonSet deletes all the Pods it created.
Typical uses of a DaemonSet:
- Running a cluster storage daemon on every node, e.g. glusterd or ceph
- Running a log collection daemon on every node, e.g. fluentd or logstash
- Running a monitoring daemon on every node, e.g. Prometheus Node Exporter or the zabbix agent
- A simple usage is to start one DaemonSet covering all nodes for each type of daemon
- A slightly more complex usage is to run multiple DaemonSets for a single daemon type, with different flags and with different CPU and memory requirements for different hardware types
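To target only some nodes instead of all of them, a nodeSelector can be added to the DaemonSet's pod template. A sketch (the `disk: ssd` label is illustrative; nodes would first need to be labeled, e.g. with kubectl label nodes <node> disk=ssd):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemonset-ssd-only
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      nodeSelector:      # only schedule onto nodes carrying this label
        disk: ssd
      containers:
      - name: nginx
        image: nginx
```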
5.2 DaemonSet Example
[root@k8s-master ~]# vim daemonset-dm.yml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemonset-example
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      tolerations:       #tolerate tainted nodes, so the pod also runs on the master
      - effect: NoSchedule
        operator: Exists
      containers:
      - name: nginx
        image: nginx
[root@k8s-master ~]# kubectl apply -f daemonset-dm.yml
daemonset.apps/daemonset-example created
[root@k8s-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
daemonset-example-9z2vp 1/1 Running 0 16s 10.244.1.23 k8s-node1.exam.com <none> <none>
daemonset-example-jdrcg 1/1 Running 0 16s 10.244.0.4 k8s-master.exam.com <none> <none>
daemonset-example-l7bhn 1/1 Running 0 16s 10.244.2.21 k8s-node2.exam.com <none> <none>
6 Job Controller
6.1 Job Controller Functionality
A Job is responsible for batch processing: short-lived, one-off tasks (each task runs once to completion) that process a specified number of work items.
Job characteristics:
- When a pod created by the Job finishes successfully, the Job records the number of successfully completed pods
- When the number of successfully completed pods reaches the specified count, the Job is complete
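Besides counting completions, a Job can bound its own runtime and clean up after itself. A sketch using two standard batch/v1 fields (the name, image, and values are illustrative):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: bounded-job
spec:
  activeDeadlineSeconds: 300   # fail the Job if it runs longer than 5 minutes
  ttlSecondsAfterFinished: 60  # delete the Job and its pods 60s after it finishes
  template:
    spec:
      containers:
      - name: worker
        image: busybox
        command: ["sh", "-c", "echo working; sleep 5"]
      restartPolicy: Never
```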
6.2 Job Controller Example
[root@k8s-master ~]# kubectl create job howejob --image perl:5.34.0 --dry-run=client -o yaml > job.yml
[root@k8s-master ~]# vim job.yml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  completions: 6       #6 successful completions in total
  parallelism: 2       #run 2 pods in parallel at a time
  template:
    spec:
      containers:
      - name: perl
        image: perl:5.34.0
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never   #do not restart the container after it exits
  backoffLimit: 4      #retry up to 4 times after failures
[root@k8s-master ~]# kubectl apply -f job.yml
job.batch/pi created
[root@k8s-master ~]# kubectl get pods
NAME       READY   STATUS      RESTARTS   AGE
pi-2rccn 0/1 Completed 0 8s
pi-62gnh 0/1 Completed 0 45s
pi-brqch 0/1 Completed 0 15s
pi-jm4gc 0/1 Completed 0 15s
pi-kw2h6 0/1 Completed 0 8s
pi-nx5ns 0/1 Completed 0 46s
Note on the restart policy setting:
- If set to OnFailure, the Job restarts the container when a pod fails, rather than creating a new pod, and the failed count does not increase
- If set to Never, the Job creates a new pod when one fails; the failed pod is neither removed nor restarted, and the failed count increases by 1
- Always is not allowed for a Job: it would mean restarting forever, so the job task would be executed over and over
7 CronJob Controller
7.1 CronJob Controller Functionality
- A CronJob creates Jobs on a time-based schedule
- The CronJob controller manages Job resources as its controlled objects, and manages pods through them
- A CronJob controls when its workload runs and how it repeats, much like a periodic cron task on a Linux system
- A CronJob can run a job task (repeatedly) at specific points in time
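The schedule uses the standard five-field cron format (minute, hour, day of month, month, day of week). A sketch of the scheduling-related fields, all standard batch/v1 CronJob fields (the name and values are illustrative):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-report
spec:
  schedule: "0 2 * * *"          # 02:00 every day
  concurrencyPolicy: Forbid      # skip a run if the previous one is still going
  successfulJobsHistoryLimit: 3  # keep the last 3 successful Jobs
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: report
            image: busybox
            command: ["sh", "-c", "date"]
          restartPolicy: OnFailure
```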
7.2 CronJob Controller Example
[root@k8s-master ~]# kubectl create cronjob hello --image busybox --schedule "* * * * *" --dry-run=client -o yaml > cronjob.yml
[root@k8s-master ~]# vim cronjob.yml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "* * * * *"    #run every minute
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
[root@k8s-master ~]# kubectl apply -f cronjob.yml
cronjob.batch/hello created
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-28759022-pdxn8 0/1 Completed 0 22s
#Watch: a new pod is created every minute, and older completed pods are eventually cleaned up
[root@k8s-master ~]# watch -n 1 kubectl get pods