DaemonSet and Job
DaemonSet
A DaemonSet ensures that every node runs exactly one copy of a Pod. As nodes are added to the cluster, Pods are added to them; as nodes are removed from the cluster, those Pods are garbage collected. Deleting a DaemonSet cleans up the Pods it created.
Example 1: busybox
vim busybox.yml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: busybox-daemonset
spec:
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      containers:
      - name: busytest
        image: busybox
        command: ["sh","-c","while true;do echo 'hello';sleep 10;done;"]
Run:
kubectl apply -f busybox.yml
daemonset.apps/busybox-daemonset unchanged
Check:
kubectl get daemonset.apps/busybox-daemonset
NAME                DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
busybox-daemonset   2         2         2       2            2           <none>          96m
Check the Pods:
kubectl get pod
NAME                      READY   STATUS    RESTARTS   AGE
busybox-daemonset-4qhm4   1/1     Running   0          97m
busybox-daemonset-kvfgh   1/1     Running   0          97m
kubectl get pod -o wide
NAME                      READY   STATUS    RESTARTS   AGE   IP            NODE    NOMINATED NODE   READINESS GATES
busybox-daemonset-4qhm4   1/1     Running   0          97m   10.244.3.22   node2   <none>           <none>
busybox-daemonset-kvfgh   1/1     Running   0          97m   10.244.1.32   node1   <none>           <none>
As shown, each node runs exactly one Pod instance.
Delete:
kubectl delete daemonset.apps/busybox-daemonset
daemonset.apps "busybox-daemonset" deleted
Check the Pods again:
kubectl get pod -o wide
No resources found in default namespace.
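By default a DaemonSet schedules a Pod onto every node. To limit it to a subset of nodes, a nodeSelector can be added to the Pod template spec. A minimal sketch, where the node label disktype=ssd is a purely hypothetical example:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: busybox-daemonset
spec:
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      nodeSelector:
        disktype: ssd        ## only nodes carrying this (hypothetical) label run the Pod
      containers:
      - name: busytest
        image: busybox
        command: ["sh","-c","while true;do echo 'hello';sleep 10;done;"]
```

After labeling a node with `kubectl label node node1 disktype=ssd`, the DaemonSet would place a Pod only on that node.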
Example 2: node-exporter
This example adds host-directory mounts; otherwise it is essentially the same as the previous one.
vim protheus.yml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: protheus-daemonset
spec:
  selector:
    matchLabels:
      app: pro
  template:
    metadata:
      labels:
        app: pro
    spec:
      hostNetwork: true
      containers:
      - name: pro-test
        image: prom/node-exporter
        command:                ## command to run in the container
        - /bin/node_exporter
        - --path.procfs
        - /host/proc
        - --path.sysfs
        - /host/sys
        - --collector.filesystem.ignored-mount-points
        - ^/(sys|proc|dev|host|etc|rootfs/var/lib/docker/containers|rootfs/var/lib/docker/overlay2|rootfs/run/docker/netns|rootfs/var/lib/docker/devicemapper|rootfs/var/lib/docker/aufs)($$|/)
        volumeMounts:           ## where the host paths appear inside the container
        - name: proc
          mountPath: /host/proc
        - name: sys
          mountPath: /host/sys
        - name: root
          mountPath: /rootfs
      volumes:                  ## host directories to be mounted
      - name: proc
        hostPath:
          path: /proc
      - name: sys
        hostPath:
          path: /sys
      - name: root
        hostPath:
          path: /
Run:
kubectl apply -f mydaemonset/protheus.yml
daemonset.apps/protheus-daemonset created
Check the Pods:
kubectl get pod -o wide
NAME                       READY   STATUS    RESTARTS   AGE   IP             NODE    NOMINATED NODE   READINESS GATES
protheus-daemonset-jk4tv   1/1     Running   0          20s   192.168.1.30   node2   <none>           <none>
protheus-daemonset-zp66b   1/1     Running   0          20s   192.168.1.20   node1   <none>           <none>
Because hostNetwork: true is set in the Pod spec, these Pods use the node IP addresses (192.168.1.x) directly rather than Pod-network IPs.
Job
The Job controller is another important controller resource in Kubernetes. Unlike Deployment and DaemonSet, a Job runs Pods for one-off tasks that are expected to terminate.
Two restart policies are supported:
- OnFailure: on failure, the container is restarted in place inside the same Pod rather than a new Pod being created.
- Never: on failure, a new Pod is created, and the failed Pod is kept rather than removed.
Example 1: Never
vim test1-job.yml
apiVersion: batch/v1
kind: Job
metadata:
  name: test-job
spec:
  template:
    metadata:
      name: test-job
    spec:
      containers:
      - name: hello
        image: busybox
        command: ["echo","test"]
      restartPolicy: Never      ## restart policy: never restart the container
Run:
kubectl apply -f test1-job.yml
job.batch/test-job created
Check:
kubectl get pod
NAME             READY   STATUS      RESTARTS   AGE
test-job-rblzt   0/1     Completed   0          2m29s
Since this is a one-off command, the Pod stops running once it completes; the output can be found in the logs:
kubectl logs test-job-rblzt
test
Now modify the YAML file to deliberately introduce an error and see what happens.
First delete the Job:
kubectl delete job.batch/test-job
Edit the config file:
vim test1-job.yml
apiVersion: batch/v1
kind: Job
metadata:
  name: test-job
spec:
  template:
    metadata:
      name: test-job
    spec:
      containers:
      - name: hello
        image: busybox
        command: ["xxx","test"]   ## echo replaced with an invalid command
      restartPolicy: Never
Run:
kubectl apply -f test1-job.yml
job.batch/test-job created
Check:
kubectl get pod
NAME             READY   STATUS               RESTARTS   AGE
test-job-phwxb   0/1     ContainerCannotRun   0          51s
test-job-t65gc   0/1     ContainerCannotRun   0          41s
test-job-vrf4p   0/1     ContainerCannotRun   0          53s
test-job-wkpw5   0/1     ContainerCreating    0          1s
Because the policy is Never, the failed containers are not restarted; instead the Job keeps creating new Pods until one succeeds (subject to the Job's retry backoff limit).
Example 2: OnFailure
vim test2-job.yml
apiVersion: batch/v1
kind: Job
metadata:
  name: test-job
spec:
  template:
    metadata:
      name: test-job
    spec:
      containers:
      - name: hello
        image: busybox
        command: ["xxx","test"]
      restartPolicy: OnFailure    ## restart policy
Run:
kubectl apply -f test2-job.yml
job.batch/test-job created
Check:
kubectl get pod
NAME             READY   STATUS              RESTARTS   AGE
test-job-rzg9m   0/1     RunContainerError   1          41s
The restart count is 1: because the command fails, the same Pod's container is restarted repeatedly and the RESTARTS counter keeps growing.
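If endless retries are not desirable, the Job spec's backoffLimit field caps how many times the controller retries (the default is 6) before marking the Job as failed. A minimal sketch based on the failing example above:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: test-job
spec:
  backoffLimit: 3             ## mark the Job as Failed after 3 failed retries
  template:
    metadata:
      name: test-job
    spec:
      containers:
      - name: hello
        image: busybox
        command: ["xxx","test"]
      restartPolicy: OnFailure
```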
Example 2.1: setting the Pod completion count
vim test2-job.yml
apiVersion: batch/v1
kind: Job
metadata:
  name: test-job
spec:
  completions: 6       ## total number of successful completions required
  parallelism: 2       ## how many Pods run at a time
  template:
    metadata:
      name: test-job
    spec:
      containers:
      - name: hello
        image: busybox
        command: ["echo","test"]
      restartPolicy: OnFailure
Run:
kubectl apply -f test2-job.yml
job.batch/test-job created
Check:
kubectl get pod
NAME             READY   STATUS              RESTARTS   AGE
test-job-bm7jv   0/1     ContainerCreating   0          15s
test-job-lb9hm   0/1     ContainerCreating   0          15s
kubectl get pod
NAME             READY   STATUS              RESTARTS   AGE
test-job-bm7jv   0/1     Completed           0          17s
test-job-gtsdz   0/1     ContainerCreating   0          1s
test-job-j4l8n   0/1     ContainerCreating   0          1s
test-job-lb9hm   0/1     Completed           0          17s
kubectl get pod
NAME             READY   STATUS      RESTARTS   AGE
test-job-86f4f   0/1     Completed   0          26s
test-job-bm7jv   0/1     Completed   0          44s
test-job-gtsdz   0/1     Completed   0          28s
test-job-j4l8n   0/1     Completed   0          28s
test-job-lb9hm   0/1     Completed   0          44s
test-job-tr72v   0/1     Completed   0          26s
From the ages you can see that two Pods run at a time until six have completed in total.
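The six Completed Pods above remain until the Job is deleted by hand. On clusters where the TTL-after-finished feature is available (it became stable in Kubernetes 1.23), the ttlSecondsAfterFinished field can clean up a finished Job and its Pods automatically. A sketch:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: test-job
spec:
  completions: 6
  parallelism: 2
  ttlSecondsAfterFinished: 60   ## delete the Job and its Pods 60s after it finishes
  template:
    metadata:
      name: test-job
    spec:
      containers:
      - name: hello
        image: busybox
        command: ["echo","test"]
      restartPolicy: OnFailure
```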
CronJob
Unlike a Job, a CronJob creates Jobs on a time-based schedule.
Example: run every minute
vim cro-job.yml
apiVersion: batch/v1beta1   ## batch/v1 on Kubernetes 1.21 and later
kind: CronJob
metadata:
  name: test-job
spec:
  schedule: "*/1 * * * *"   ## cron fields: minute hour day-of-month month day-of-week
  jobTemplate:              ## the Job template
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            command: ["echo","test"]
          restartPolicy: OnFailure
Run:
kubectl apply -f cro-job.yml
cronjob.batch/test-job created
Check:
kubectl get cronjob.batch/test-job
NAME       SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
test-job   */1 * * * *   False     0        <none>          44s
Check the Pods:
kubectl get pod
NAME                        READY   STATUS              RESTARTS   AGE
test-job-1594958400-n7w57   0/1     Completed           0          60s
test-job-1594958460-2h6lg   0/1     ContainerCreating   0          0s
From the ages you can see that a new Pod is created every minute.
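A CronJob that fires every minute quickly accumulates finished Jobs and Pods. The successfulJobsHistoryLimit and failedJobsHistoryLimit fields (defaults 3 and 1) bound how many finished Jobs are kept around. A sketch:

```yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: test-job
spec:
  schedule: "*/1 * * * *"
  successfulJobsHistoryLimit: 2   ## keep only the 2 most recent successful Jobs
  failedJobsHistoryLimit: 1       ## and only the most recent failed Job
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            command: ["echo","test"]
          restartPolicy: OnFailure
```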