Table of Contents
1. Definition of a Controller
Kubernetes has many built-in controllers. Each one acts like a state machine that drives Pods toward a desired state and manages their behavior.
2. Controller Types
- ReplicationController and ReplicaSet
- Deployment
- DaemonSet
- StatefulSet
- Job/CronJob
- Horizontal Pod Autoscaling
(1) ReplicationController and ReplicaSet
A ReplicationController (RC) ensures that the number of replicas of a containerized application always matches the user-defined count: if a container exits abnormally, a new Pod is automatically created to replace it, and any surplus Pods are automatically reclaimed. In newer versions of Kubernetes, ReplicaSet is recommended in place of ReplicationController. A ReplicaSet is not fundamentally different from a ReplicationController apart from the name, except that a ReplicaSet additionally supports set-based selectors.
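As an illustration of what set-based selection adds, a ReplicaSet selector may use `matchExpressions`; this fragment is only a sketch, and the label key and values are illustrative, not taken from the walkthrough below:

```yaml
# Set-based selector: matches Pods whose tier is frontend OR cache,
# something an RC's equality-based selector cannot express.
selector:
  matchExpressions:
  - key: tier
    operator: In          # other operators: NotIn, Exists, DoesNotExist
    values:
    - frontend
    - cache
```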
[root@k8s-master rs]# vim rs.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: myapp
        image: hushensong.com/library/myapp:v1
        env:
        - name: GET_HOSTS_FROM
          value: dns
# Create the RS, whose Pods carry the label tier=frontend
[root@k8s-master rs]# kubectl create -f rs.yaml
replicaset.apps/frontend created
[root@k8s-master rs]# kubectl get rs
NAME DESIRED CURRENT READY AGE
frontend 3 3 3 8s
[root@k8s-master rs]# kubectl get pod
NAME READY STATUS RESTARTS AGE
frontend-hvjlv 1/1 Running 0 34s
frontend-mfg4x 1/1 Running 0 34s
frontend-tltzs 1/1 Running 0 34s
[root@k8s-master rs]# kubectl get pod --show-labels
NAME READY STATUS RESTARTS AGE LABELS
frontend-hvjlv 1/1 Running 0 45s tier=frontend
frontend-mfg4x 1/1 Running 0 45s tier=frontend
frontend-tltzs 1/1 Running 0 45s tier=frontend
# Change one Pod's label to tier=frontend1; the RS immediately creates a new Pod to restore three replicas
[root@k8s-master rs]# kubectl label pod frontend-hvjlv tier=frontend1
error: 'tier' already has a value (frontend), and --overwrite is false
[root@k8s-master rs]# kubectl label pod frontend-hvjlv tier=frontend1 --overwrite=True
pod/frontend-hvjlv labeled
[root@k8s-master rs]# kubectl get pod --show-labels
NAME READY STATUS RESTARTS AGE LABELS
frontend-hvjlv 1/1 Running 0 2m58s tier=frontend1
frontend-mfg4x 1/1 Running 0 2m58s tier=frontend
frontend-sjgfn 1/1 Running 0 4s tier=frontend
frontend-tltzs 1/1 Running 0 2m58s tier=frontend
# Deleting the RS only deletes the Pods it controls (those labeled tier=frontend); the relabeled Pod is unaffected
[root@k8s-master rs]# kubectl delete rs --all
replicaset.apps "frontend" deleted
[root@k8s-master rs]# kubectl get pod --show-labels
NAME READY STATUS RESTARTS AGE LABELS
frontend-hvjlv 1/1 Running 0 3m42s tier=frontend1
frontend-mfg4x 0/1 Terminating 0 3m42s tier=frontend
frontend-sjgfn 0/1 Terminating 0 48s tier=frontend
frontend-tltzs 0/1 Terminating 0 3m42s tier=frontend
[root@k8s-master rs]# kubectl get pod --show-labels
NAME READY STATUS RESTARTS AGE LABELS
frontend-hvjlv 1/1 Running 0 4m1s tier=frontend1
[root@k8s-master rs]# kubectl delete pod --all
pod "frontend-hvjlv" deleted
(2) Deployment
A Deployment provides a declarative way to define Pods and ReplicaSets, replacing the older ReplicationController for convenient application management. Typical use cases include:
- Defining a Deployment to create Pods and a ReplicaSet
- Rolling upgrades and rollbacks of an application
- Scaling out and scaling in
- Pausing and resuming a Deployment
[root@k8s-master deployment]# vim deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: hushensong.com/library/myapp:v1
        ports:
        - containerPort: 80
[root@k8s-master deployment]# kubectl apply -f deployment.yaml --record
deployment.apps/nginx-deployment created
# Creating the Deployment also creates an RS
[root@k8s-master deployment]# kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 3/3 3 3 19s
[root@k8s-master deployment]# kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-deployment-6cc5d9ff74 3 3 3 30s
[root@k8s-master deployment]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-deployment-6cc5d9ff74-5s5xc 1/1 Running 0 38s
nginx-deployment-6cc5d9ff74-bhcrl 1/1 Running 0 38s
nginx-deployment-6cc5d9ff74-qk4wj 1/1 Running 0 38s
[root@k8s-master deployment]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-6cc5d9ff74-5s5xc 1/1 Running 0 43s 10.244.1.12 k8s-node01 <none> <none>
nginx-deployment-6cc5d9ff74-bhcrl 1/1 Running 0 43s 10.244.1.11 k8s-node01 <none> <none>
nginx-deployment-6cc5d9ff74-qk4wj 1/1 Running 0 43s 10.244.2.9 k8s-node02 <none> <none>
[root@k8s-master deployment]# curl 10.244.1.12
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@k8s-master deployment]# curl 10.244.1.11
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@k8s-master deployment]# curl 10.244.2.9
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
# Scale the Pods out
[root@k8s-master deployment]# kubectl scale deployment nginx-deployment --replicas=10
deployment.apps/nginx-deployment scaled
[root@k8s-master deployment]# kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 10/10 10 10 2m45s
[root@k8s-master deployment]# kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-deployment-6cc5d9ff74 10 10 10 2m49s
[root@k8s-master deployment]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-deployment-6cc5d9ff74-5s5xc 1/1 Running 0 2m21s
nginx-deployment-6cc5d9ff74-6zg9d 1/1 Running 0 6s
nginx-deployment-6cc5d9ff74-8m9jn 1/1 Running 0 6s
nginx-deployment-6cc5d9ff74-9gbdk 1/1 Running 0 6s
nginx-deployment-6cc5d9ff74-bhcrl 1/1 Running 0 2m21s
nginx-deployment-6cc5d9ff74-jdmd6 1/1 Running 0 6s
nginx-deployment-6cc5d9ff74-lssbc 1/1 Running 0 6s
nginx-deployment-6cc5d9ff74-pv9m4 1/1 Running 0 6s
nginx-deployment-6cc5d9ff74-qk4wj 1/1 Running 0 2m21s
nginx-deployment-6cc5d9ff74-wzd46 1/1 Running 0 6s
# Upgrade the Pods' image
[root@k8s-master deployment]# kubectl set image deployment/nginx-deployment nginx=hushensonglinux/myapp:v2
deployment.apps/nginx-deployment image updated
# During the upgrade, a new RS is created while the old one is scaled down to 0
[root@k8s-master deployment]# kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-deployment-6cc5d9ff74 0 0 0 4m28s
nginx-deployment-b4f989c86 10 10 8 27s
[root@k8s-master deployment]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-b4f989c86-2nghl 1/1 Running 0 23s 10.244.1.20 k8s-node01 <none> <none>
nginx-deployment-b4f989c86-4w7tt 1/1 Running 0 43s 10.244.2.14 k8s-node02 <none> <none>
nginx-deployment-b4f989c86-flnb7 1/1 Running 0 23s 10.244.2.18 k8s-node02 <none> <none>
nginx-deployment-b4f989c86-hkg5d 1/1 Running 0 26s 10.244.2.17 k8s-node02 <none> <none>
nginx-deployment-b4f989c86-hxkwl 1/1 Running 0 25s 10.244.1.19 k8s-node01 <none> <none>
nginx-deployment-b4f989c86-mzkhv 1/1 Running 0 43s 10.244.2.15 k8s-node02 <none> <none>
nginx-deployment-b4f989c86-qsmqg 1/1 Running 0 43s 10.244.1.17 k8s-node01 <none> <none>
nginx-deployment-b4f989c86-rxhmb 1/1 Running 0 29s 10.244.1.18 k8s-node01 <none> <none>
nginx-deployment-b4f989c86-x9msp 1/1 Running 0 43s 10.244.1.16 k8s-node01 <none> <none>
nginx-deployment-b4f989c86-zj5m9 1/1 Running 0 43s 10.244.2.16 k8s-node02 <none> <none>
[root@k8s-master deployment]# curl 10.244.1.20
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
# Roll back
[root@k8s-master deployment]# kubectl rollout undo deployment/nginx-deployment
deployment.apps/nginx-deployment rolled back
[root@k8s-master deployment]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-6cc5d9ff74-4f9t5 1/1 Running 0 14s 10.244.2.23 k8s-node02 <none> <none>
nginx-deployment-6cc5d9ff74-6lgkx 1/1 Running 0 15s 10.244.2.22 k8s-node02 <none> <none>
nginx-deployment-6cc5d9ff74-b262j 1/1 Running 0 15s 10.244.1.24 k8s-node01 <none> <none>
nginx-deployment-6cc5d9ff74-d9hmk 1/1 Running 0 17s 10.244.2.20 k8s-node02 <none> <none>
nginx-deployment-6cc5d9ff74-hbwz4 1/1 Running 0 14s 10.244.1.25 k8s-node01 <none> <none>
nginx-deployment-6cc5d9ff74-kl548 1/1 Running 0 17s 10.244.2.21 k8s-node02 <none> <none>
nginx-deployment-6cc5d9ff74-p9ndb 1/1 Running 0 17s 10.244.1.21 k8s-node01 <none> <none>
nginx-deployment-6cc5d9ff74-q7jbs 1/1 Running 0 16s 10.244.1.23 k8s-node01 <none> <none>
nginx-deployment-6cc5d9ff74-qcw2z 1/1 Running 0 17s 10.244.2.19 k8s-node02 <none> <none>
nginx-deployment-6cc5d9ff74-r92tx 1/1 Running 0 17s 10.244.1.22 k8s-node01 <none> <none>
[root@k8s-master deployment]# curl 10.244.2.23
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
# View the rollout history
[root@k8s-master deployment]# kubectl rollout history deployment/nginx-deployment
deployment.apps/nginx-deployment
REVISION CHANGE-CAUSE
2 kubectl apply --filename=deployment.yaml --record=true
3 kubectl apply --filename=deployment.yaml --record=true
[root@k8s-master deployment]# kubectl set image deployment/nginx-deployment nginx=hushensonglinux/myapp:v3
deployment.apps/nginx-deployment image updated
[root@k8s-master deployment]# kubectl rollout history deployment/nginx-deployment
deployment.apps/nginx-deployment
REVISION CHANGE-CAUSE
2 kubectl apply --filename=deployment.yaml --record=true
3 kubectl apply --filename=deployment.yaml --record=true
4 kubectl apply --filename=deployment.yaml --record=true
# Roll back to a specific revision
[root@k8s-master deployment]# kubectl rollout undo deployment/nginx-deployment --to-revision=2
deployment.apps/nginx-deployment rolled back
[root@k8s-master deployment]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-b4f989c86-6tvz5 1/1 Running 0 10s 10.244.2.30 k8s-node02 <none> <none>
nginx-deployment-b4f989c86-7khbg 1/1 Running 0 10s 10.244.1.31 k8s-node01 <none> <none>
nginx-deployment-b4f989c86-98tzh 1/1 Running 0 8s 10.244.1.34 k8s-node01 <none> <none>
nginx-deployment-b4f989c86-9qvxw 1/1 Running 0 10s 10.244.1.32 k8s-node01 <none> <none>
nginx-deployment-b4f989c86-bb47z 1/1 Running 0 10s 10.244.2.31 k8s-node02 <none> <none>
nginx-deployment-b4f989c86-kl7zr 1/1 Running 0 8s 10.244.2.33 k8s-node02 <none> <none>
nginx-deployment-b4f989c86-n6gmv 1/1 Running 0 10s 10.244.2.29 k8s-node02 <none> <none>
nginx-deployment-b4f989c86-qlngd 1/1 Running 0 8s 10.244.1.35 k8s-node01 <none> <none>
nginx-deployment-b4f989c86-vkktf 1/1 Running 0 8s 10.244.2.32 k8s-node02 <none> <none>
nginx-deployment-b4f989c86-xqzhl 1/1 Running 0 8s 10.244.1.33 k8s-node01 <none> <none>
[root@k8s-master deployment]# curl 10.244.2.30
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
[root@k8s-master deployment]# kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 10/10 10 10 11m
[root@k8s-master deployment]# kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-deployment-6cc5d9ff74 0 0 0 11m
nginx-deployment-b4f989c86 10 10 10 7m48s
nginx-deployment-fdcc8c6df 0 0 0 104s
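The Deployment use-case list above also mentions pausing and resuming, which the walkthrough does not demonstrate. A sketch of those commands against the same Deployment (outputs omitted; this requires a live cluster, and the image tag is reused from earlier purely for illustration):

```shell
# Pause the rollout: further spec changes accumulate without triggering new Pods
kubectl rollout pause deployment/nginx-deployment
# Make one or more changes while paused, e.g. update the image
kubectl set image deployment/nginx-deployment nginx=hushensonglinux/myapp:v3
# Resume: all accumulated changes roll out as a single new revision
kubectl rollout resume deployment/nginx-deployment
# Watch the rollout until it finishes
kubectl rollout status deployment/nginx-deployment
```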
(3) DaemonSet
A DaemonSet ensures that all (or some) Nodes run a copy of a Pod. When a Node joins the cluster, a Pod is added on it; when a Node is removed from the cluster, that Pod is reclaimed. Deleting a DaemonSet deletes all of the Pods it created.
Some typical uses of a DaemonSet:
- Running a cluster storage daemon on every Node, such as glusterd or ceph
- Running a log collection daemon on every Node, such as fluentd or logstash
- Running a monitoring daemon on every Node, such as Prometheus Node Exporter, collectd, the Datadog agent, the New Relic agent, or Ganglia gmond
[root@k8s-master daemonset]# vim daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemonset-example
  labels:
    app: daemonset
spec:
  selector:
    matchLabels:
      name: daemonset-example
  template:
    metadata:
      labels:
        name: daemonset-example
    spec:
      containers:
      - name: daemonset-example
        image: hushensonglinux/myapp:v3
[root@k8s-master daemonset]# kubectl create -f daemonset.yaml
daemonset.apps/daemonset-example created
# With two worker nodes, DESIRED is 2, and one Pod runs on each node
[root@k8s-master daemonset]# kubectl get daemonset
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset-example 2 2 2 2 2 <none> 11s
[root@k8s-master daemonset]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
daemonset-example-2n5sz 1/1 Running 0 34s 10.244.1.36 k8s-node01 <none> <none>
daemonset-example-gtd4b 1/1 Running 0 34s 10.244.2.34 k8s-node02 <none> <none>
When the Pod on node01 is deleted, the DaemonSet immediately recreates a Pod on node01:
[root@k8s-master daemonset]# kubectl delete pod daemonset-example-2n5sz
pod "daemonset-example-2n5sz" deleted
[root@k8s-master daemonset]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
daemonset-example-dc8fc 1/1 Running 0 3s 10.244.1.37 k8s-node01 <none> <none>
daemonset-example-gtd4b 1/1 Running 0 80s 10.244.2.34 k8s-node02 <none> <none>
(4) Job
A Job is responsible for batch tasks, i.e., tasks that run only once; it guarantees that one or more Pods of the batch task terminate successfully.
[root@k8s-master job]# vim job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    metadata:
      name: pi
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
[root@k8s-master job]# kubectl create -f job.yaml
job.batch/pi created
[root@k8s-master job]# kubectl get job
NAME COMPLETIONS DURATION AGE
pi 1/1 9s 15s
[root@k8s-master job]# kubectl get pod
NAME READY STATUS RESTARTS AGE
pi-5ztzs 0/1 Completed 0 25s
[root@k8s-master job]# kubectl logs pi-5ztzs
3.141592653589793238462643383279502884197169399375105820974944592307816406286208998628034825342117067982148086513282306647093844609550582231725359408128481117450284102701938521105559644622948954930381964428810975665933446128475648233786783165271201909145648566923460348610454326648213393607260249141273724587006606315588174881520920962829254091715364367892590360011330530548820466521384146951941511609433057270365759591953092186117381932611793105118548074462379962749567351885752724891227938183011949129833673362440656643086021394946395224737190702179860943702770539217176293176752384674818467669405132000568127145263560827785771342757789609173637178721468440901224953430146549585371050792279689258923542019956112129021960864034418159813629774771309960518707211349999998372978049951059731732816096318595024459455346908302642522308253344685035261931188171010003137838752886587533208381420617177669147303598253490428755468731159562863882353787593751957781857780532171226806613001927876611195909216420198938095257201065485863278865936153381827968230301952035301852968995773622599413891249721775283479131515574857242454150695950829533116861727855889075098381754637464939319255060400927701671139009848824012858361603563707660104710181942955596198946767837449448255379774726847104047534646208046684259069491293313677028989152104752162056966024058038150193511253382430035587640247496473263914199272604269922796782354781636009341721641219924586315030286182974555706749838505494588586926995690927210797509302955321165344987202755960236480665499119881834797753566369807426542527862551818417574672890977772793800081647060016145249192173217214772350141441973568548161361157352552133475741849468438523323907394143334547762416862518983569485562099219222184272550254256887671790494601653466804988627232791786085784383827967976681454100953883786360950680064225125205117392984896084128488626945604241965285022210661186306744278622039194945047123713786960956364371917287467764657573962413890865832645995813390478027590
1
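The Job above runs a single Pod to completion. A Job spec can also request multiple completions and parallelism; a sketch of those fields (the values here are illustrative assumptions, not from the example above):

```yaml
spec:
  completions: 5      # the Job succeeds once 5 Pods finish successfully
  parallelism: 2      # run at most 2 Pods at the same time
  backoffLimit: 4     # retry failed Pods up to 4 times before marking the Job failed
  template:
    ...               # same Pod template as in the example above
```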
(5) CronJob
A CronJob manages time-based Jobs, i.e.:
- Running once at a given point in time
- Running periodically at given points in time
Typical uses include:
- Scheduling a Job to run at a given time
- Creating periodically running Jobs, e.g., database backups or sending email
[root@k8s-master cronjob]# vim cronjob.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox:v1
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
[root@k8s-master cronjob]# kubectl apply -f cronjob.yaml
cronjob.batch/hello created
# Runs once per minute
[root@k8s-master cronjob]# kubectl get pod
NAME READY STATUS RESTARTS AGE
hello-1615906260-w9t54 0/1 Completed 0 31s
[root@k8s-master cronjob]# kubectl logs hello-1615906260-w9t54
Tue Mar 16 14:51:02 UTC 2021
Hello from the Kubernetes cluster
[root@k8s-master cronjob]# kubectl get pod
NAME READY STATUS RESTARTS AGE
hello-1615906260-w9t54 0/1 Completed 0 76s
hello-1615906320-s5pbd 0/1 Completed 0 15s
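Beyond `schedule`, a CronJob spec has a few commonly used knobs; a sketch (the values are illustrative, not from the example above):

```yaml
spec:
  schedule: "*/1 * * * *"
  concurrencyPolicy: Forbid        # Allow (default) | Forbid | Replace
  startingDeadlineSeconds: 60      # skip a run if it cannot start within 60s
  successfulJobsHistoryLimit: 3    # keep the last 3 completed Jobs
  failedJobsHistoryLimit: 1        # keep the last failed Job
  jobTemplate:
    ...                            # same Job template as in the example above
```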
(6) StatefulSet
As a controller, a StatefulSet gives each Pod a unique identity and guarantees the order of deployment and scaling.
A StatefulSet exists to solve the problem of stateful services (whereas Deployments and ReplicaSets are designed for stateless services). Its use cases include:
- Stable persistent storage: after a Pod is rescheduled it can still access the same persistent data, implemented with PVCs
- Stable network identity: after a Pod is rescheduled its PodName and HostName stay the same, implemented with a Headless Service (a Service without a Cluster IP)
- Ordered deployment and ordered scaling: Pods are ordered, and deployment or scale-up proceeds in the defined order (from 0 to N-1; all preceding Pods must be Running and Ready before the next Pod starts), implemented with init containers
- Ordered scale-down and ordered deletion (from N-1 to 0)
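The points above can be sketched as a minimal StatefulSet paired with a Headless Service; the names (`nginx`, `web`) and the reused image are illustrative assumptions, not part of the original walkthrough:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx              # Headless Service: gives each Pod a stable DNS name
spec:
  clusterIP: None          # "None" makes the Service headless (no Cluster IP)
  selector:
    app: nginx
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: nginx       # must reference the Headless Service above
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: hushensong.com/library/myapp:v1   # illustrative image
        ports:
        - containerPort: 80
```

The Pods are created in order as web-0, web-1, web-2, and each is resolvable through the Headless Service as, e.g., web-0.nginx.default.svc.cluster.local.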
(7) Horizontal Pod Autoscaling
An application's resource usage usually has peaks and troughs. How can we smooth these out, improve the cluster's overall resource utilization, and have the number of Pods behind a Service adjust automatically? This is what Horizontal Pod Autoscaling is for: as the name suggests, it scales Pods horizontally and automatically.
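This section has no example, so here is a minimal sketch of an HPA targeting the nginx-deployment created in section (2); it assumes the metrics-server add-on is installed so CPU metrics are available, and the name and thresholds are illustrative:

```yaml
apiVersion: autoscaling/v1             # v1 supports CPU-based scaling only
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa                      # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment             # the Deployment from section (2)
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80   # add Pods when average CPU exceeds 80%
```

The one-line equivalent is `kubectl autoscale deployment nginx-deployment --min=2 --max=10 --cpu-percent=80`.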
Hello everyone, I'm Cainiao HSS (菜鸟HSS). I firmly believe there are no born experts and no permanent beginners. I focus on learning and researching Linux, cloud computing, and big data. You are welcome to scan the code and follow my official account「菜鸟自学大数据Hadoop」. Everything shared there is free; I hope my small effort can help more friends while also improving my own skills, a win-win for everyone.