1.1 What Is a DaemonSet
Pods created by a Deployment are distributed across the Nodes, and each Node may run several replicas. A DaemonSet is different: each Node runs at most one replica Pod. Typical DaemonSet use cases include:
(1) Running a storage daemon on every node of the cluster, such as glusterd or ceph.
(2) Running a log-collection daemon on every node, such as fluentd or logstash.
(3) Running a monitoring daemon on every node, such as Prometheus Node Exporter or collectd.
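Structurally, a DaemonSet manifest looks much like a Deployment but has no replicas field: the number of Pods is decided by the number of eligible nodes, not by you. A minimal skeleton (the names here are placeholders, not from the examples below) looks like this:

```yaml
apiVersion: apps/v1
kind: DaemonSet        # note: no spec.replicas field, unlike a Deployment
metadata:
  name: my-daemon
spec:
  selector:
    matchLabels:
      app: my-daemon
  template:            # an ordinary Pod template; one Pod per eligible node
    metadata:
      labels:
        app: my-daemon
    spec:
      containers:
      - name: my-daemon
        image: busybox
```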
In fact, Kubernetes itself uses DaemonSets to run system components. Run the following commands:
kubectl get daemonset --namespace=kube-system
kubectl get daemonset --namespace=kube-flannel
[root@k8s-master ~]# kubectl get daemonset --namespace=kube-flannel
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-flannel-ds 3 3 3 3 3 <none> 10d
The DaemonSets kube-flannel-ds and kube-proxy run the flannel and kube-proxy components, respectively, on every node. Next, list the Pod replicas to see how they are distributed across the nodes:
kubectl get pod --namespace=kube-system -o wide
[root@k8s-master ~]# kubectl get pod --namespace=kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-7ff77c879f-ltqn4 1/1 Running 2 10d 10.244.0.8 k8s-master <none> <none>
coredns-7ff77c879f-xb6qm 1/1 Running 2 10d 10.244.0.9 k8s-master <none> <none>
etcd-k8s-master 1/1 Running 2 10d 192.168.200.128 k8s-master <none> <none>
kube-apiserver-k8s-master 1/1 Running 2 10d 192.168.200.128 k8s-master <none> <none>
kube-controller-manager-k8s-master 1/1 Running 2 10d 192.168.200.128 k8s-master <none> <none>
kube-proxy-dcprp 1/1 Running 2 10d 192.168.200.128 k8s-master <none> <none>
kube-proxy-nklr5 1/1 Running 2 10d 192.168.200.129 k8s-node1 <none> <none>
kube-proxy-slslc 1/1 Running 4 10d 192.168.200.130 k8s-node2 <none> <none>
kube-scheduler-k8s-master 1/1 Running 2 10d 192.168.200.128 k8s-master <none> <none>
metrics-server-7f6b85b597-k6b26 1/1 Running 2 10d 10.244.1.29 k8s-node1 <none> <none>
1.2 Creating and Running a DaemonSet
This section uses Prometheus Node Exporter as an example to show how to run your own DaemonSet. Prometheus is a popular system-monitoring solution, and Node Exporter is the Prometheus agent that runs as a daemon on every monitored node.
vi node-exporter.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter-daemonset
  namespace: agent
spec:
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      hostNetwork: true
      containers:
      - name: node-exporter
        image: prom/node-exporter
        imagePullPolicy: IfNotPresent
        command:
        - /bin/node_exporter
        - --path.procfs
        - /host/proc
        - --path.sysfs
        - /host/sys
        - --collector.filesystem.ignored-mount-points
        - ^/(sys|proc|dev|host|etc)($|/)
        volumeMounts:
        - name: proc
          mountPath: /host/proc
        - name: sys
          mountPath: /host/sys
        - name: root
          mountPath: /rootfs
      volumes:
      - name: proc
        hostPath:
          path: /proc
      - name: sys
        hostPath:
          path: /sys
      - name: root
        hostPath:
          path: /
First, create the namespace:
kubectl create namespace agent
Create the resource with kubectl:
kubectl apply -f node-exporter.yaml
Then check the Pod distribution with kubectl:
[root@k8s-master ~]# kubectl get pod --namespace=agent -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
node-exporter-daemonset-5rrq4 1/1 Running 1 44h 192.168.200.129 k8s-node1 <none> <none>
node-exporter-daemonset-hjkzs 1/1 Running 1 44h 192.168.200.130 k8s-node2 <none> <none>
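Note that in the listing above the DaemonSet Pods run only on k8s-node1 and k8s-node2, not on the master, because the master node carries a NoSchedule taint. If you also want the exporter on the master, a common approach is to add a toleration to the Pod template. This is a hedged sketch: the exact taint key depends on your cluster version (older clusters use node-role.kubernetes.io/master, newer ones node-role.kubernetes.io/control-plane), so verify it first with kubectl describe node k8s-master.

```yaml
# Added under spec.template.spec in node-exporter.yaml.
# Assumption: the master taint key on this cluster is
# node-role.kubernetes.io/master -- check your own cluster's taints.
tolerations:
- key: node-role.kubernetes.io/master
  operator: Exists
  effect: NoSchedule
```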
2.1 Job
Containers can be divided into two classes by how long they run: service containers and work containers.
Service containers provide an ongoing service and need to run continuously, for example an HTTP server or a daemon. Work containers are one-off tasks, such as batch programs; the container exits when the task completes.
Kubernetes Deployments, ReplicaSets, and DaemonSets all manage service containers;
for work containers, we use a Job. Let's start with a simple Job configuration file, myjob.yml:
vi myjob.yml
apiVersion: batch/v1
kind: Job
metadata:
  name: myjob
  namespace: jobs
spec:
  template:
    metadata:
      labels:
        app: myjob
    spec:
      containers:
      - name: hello-job
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ["echo", "hello k8s job!"]
      restartPolicy: Never
kubectl apply -f myjob.yml
Checking on the Job shows that once it finishes, it is done for good: as shown in the output below, the Pod moves to the Completed status.
kubectl get pod -n jobs
[root@k8s-master ~]# kubectl get pod -n jobs
NAME READY STATUS RESTARTS AGE
myjob-krgdh 0/1 Completed 0 44h
You can also inspect the logs to see what the Job left behind:
kubectl logs myjob-krgdh -n jobs #view the logs
kubectl describe pods/myjob-krgdh --namespace jobs #describe the Job's Pod
[root@k8s-master ~]# kubectl logs myjob-krgdh -n jobs
hello k8s job!
[root@k8s-master ~]# kubectl describe pods/myjob-krgdh --namespace jobs
Name: myjob-krgdh
Namespace: jobs
Priority: 0
Node: k8s-node2/192.168.200.130
Start Time: Sat, 27 Jan 2024 16:53:58 +0800
Labels: app=myjob
controller-uid=1a312a0b-b6eb-4219-958f-801edc224608
job-name=myjob
Annotations: <none>
Status: Succeeded
IP: 10.244.2.14
IPs:
IP: 10.244.2.14
Controlled By: Job/myjob
Containers:
hello-job:
Container ID: docker://0403ccea759585e45f39431bf3958a169211210ef948a0b7c1d4be1ed411e272
Image: busybox
Image ID: docker-pullable://busybox@sha256:5acba83a746c7608ed544dc1533b87c737a0b0fb730301639a0179f9344b1678
Port: <none>
Host Port: <none>
Command:
echo
hello k8s job!
State: Terminated
Reason: Completed
Exit Code: 0
Started: Sat, 27 Jan 2024 16:53:59 +0800
Finished: Sat, 27 Jan 2024 16:53:59 +0800
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-vz4wm (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-vz4wm:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-vz4wm
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>
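With restartPolicy: Never, a failed container is not restarted in place; instead the Job controller creates replacement Pods, up to a retry budget. A hedged sketch of how to cap retries and bound the Job's runtime (the values 4 and 120 here are illustrative; if backoffLimit is omitted, the default is 6):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: myjob
  namespace: jobs
spec:
  backoffLimit: 4            # mark the Job failed after 4 failed retries (default 6)
  activeDeadlineSeconds: 120 # terminate the Job if it runs longer than 2 minutes
  template:
    spec:
      containers:
      - name: hello-job
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ["echo", "hello k8s job!"]
      restartPolicy: Never
```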
2.2 Parallel Jobs
If you want to run multiple Pods in parallel to speed up a Job, Job offers a handy setting: parallelism. In the configuration below, we modify the earlier Job to set the number of Pods running in parallel to 3.
Modify myjob.yml:
apiVersion: batch/v1
kind: Job
metadata:
  name: testjob
spec:
  parallelism: 3
  template:
    metadata:
      labels:
        app: testjob
    spec:
      containers:
      - name: hello-job
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ["echo", "hello k8s job!"]
      restartPolicy: Never
kubectl get pod -o wide -n default
As the output shows, the Job started 3 Pods in total, and they all finished at the same time (the three Pods have identical AGE values).
[root@k8s-master ~]# kubectl get pod -o wide -n default
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
testjob-jmlzm 0/1 Completed 0 44h 10.244.2.18 k8s-node2 <none> <none>
testjob-r62f4 0/1 Completed 0 44h 10.244.2.17 k8s-node2 <none> <none>
testjob-vg7c6 0/1 Completed 0 44h 10.244.2.16 k8s-node2 <none> <none>
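parallelism alone only caps how many Pods run at once; combined with completions you can also require a fixed total number of successful runs. A hedged sketch (the values 6 and 2 are illustrative): the Job below runs at most 2 Pods at a time until 6 have completed successfully:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: testjob
spec:
  completions: 6   # the Job succeeds once 6 Pods have completed successfully
  parallelism: 2   # run at most 2 Pods at any one time
  template:
    spec:
      containers:
      - name: hello-job
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ["echo", "hello k8s job!"]
      restartPolicy: Never
```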
2.3 Scheduled Jobs
Linux has the cron program for running tasks on a schedule. Kubernetes provides similar functionality with CronJob, which runs a Job periodically.
vi hello-cron-job.yml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello-cron-job
  namespace: jobs
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello-cron-job
            image: busybox
            imagePullPolicy: IfNotPresent
            command: ["echo", "hello edison's k8s cron job!"]
          restartPolicy: OnFailure
The schedule and jobTemplate fields above are specific to CronJob. Pay particular attention to schedule: its format is the same as Linux cron, and "*/1 * * * *" here means run once every minute. jobTemplate defines the template for the Jobs the CronJob creates.
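A few more optional CronJob spec fields are worth knowing when runs can overlap or pile up. A hedged sketch (the limits chosen here are illustrative, not defaults for every field):

```yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello-cron-job
  namespace: jobs
spec:
  schedule: "*/1 * * * *"
  concurrencyPolicy: Forbid      # skip a run if the previous Job is still active
  successfulJobsHistoryLimit: 3  # keep only the last 3 completed Jobs
  failedJobsHistoryLimit: 1      # keep only the last failed Job
  suspend: false                 # set to true to pause scheduling without deleting
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello-cron-job
            image: busybox
            imagePullPolicy: IfNotPresent
            command: ["echo", "hello edison's k8s cron job!"]
          restartPolicy: OnFailure
```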
kubectl get cronjob -n jobs
[root@k8s-master ~]# kubectl get cronjob -n jobs
NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE
hello-cron-job */1 * * * * False 0 43h 43h
Check the Jobs:
As the output shows, a Job was started every minute over a three-minute span, as expected.
kubectl get jobs -n jobs
[root@k8s-master ~]# kubectl get jobs -n jobs
NAME COMPLETIONS DURATION AGE
hello-cron-job-1706350440 1/1 1s 43h
hello-cron-job-1706350500 1/1 1s 43h
hello-cron-job-1706350560 1/1 1s 43h