Overview
- Like a daemon process, a DaemonSet deploys one Pod on each node that matches its criteria. (This should be easy to follow: say we currently have 2 worker nodes and the ds is created against a custom label [see "Creating pods with a specified label" below]; if we later add a 3rd node and define that label on it, the pod is created on that node automatically.)
- You can also read the official DaemonSet docs, which include templates to reference.
A DaemonSet ensures that all (or some) nodes run a copy of a Pod. As nodes are added to the cluster, Pods are added to them; as nodes are removed from the cluster, those Pods are garbage collected. Deleting a DaemonSet will clean up all the Pods it created.

Some typical uses of a DaemonSet:
- running a cluster daemon on every node
- running a log collection daemon on every node
- running a monitoring daemon on every node

Application scenarios (details for the typical uses above):
- 1. Run a cluster storage daemon, e.g. glusterd or ceph, on every Node
- 2. Run a log collection daemon, e.g. fluentd or logstash, on every Node
- 3. Run a monitoring daemon, e.g. Prometheus Node Exporter, collectd, the Datadog agent, the New Relic agent, or Ganglia gmond, on every Node

A simple use is to start one DaemonSet on all nodes for each type of daemon. A slightly more complex use is to deploy multiple DaemonSets for the same kind of daemon, each with different flags and with different memory and CPU requirements for different hardware types.
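To make that last idea concrete, here is a minimal sketch of one of two such DaemonSets, splitting the same log agent by a hardware label (nodeSelector is explained in detail later in this article). Every name here (log-agent-ssd, the agent container, the disktype=ssd label) is an assumption for illustration, not something from this cluster:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent-ssd          # hypothetical: the "fast hardware" variant
spec:
  selector:
    matchLabels:
      app: log-agent-ssd
  template:
    metadata:
      labels:
        app: log-agent-ssd
    spec:
      nodeSelector:
        disktype: ssd          # only nodes labeled disktype=ssd run this variant
      containers:
      - name: agent
        image: fluentd         # one of the log daemons mentioned above
        resources:
          requests:            # a second DaemonSet (e.g. log-agent-hdd, with
            cpu: 200m          # nodeSelector disktype: hdd and smaller requests)
            memory: 200Mi      # would cover the slower tier
A second, otherwise identical DaemonSet with a different nodeSelector and smaller requests would then cover the remaining nodes.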
Environment preparation
First, you need a working cluster:
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 43d v1.21.0
node1 Ready <none> 43d v1.21.0
node2 Ready <none> 43d v1.21.0
[root@master ~]#
Next, create a directory to hold the test files and create a namespace; all the tests below run in this namespace:
[root@master ~]# mkdir ds
[root@master ~]# cd ds
[root@master ds]# kubectl create ns ds
namespace/ds created
[root@master ds]# kubens ds
Context "context" modified.
Active namespace is "ds".
[root@master ds]#
[root@master ds]# kubectl get pods
No resources found in ds namespace.
[root@master ds]#
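If you don't have kubens (it ships with the kubectx project), the same namespace switch works with plain kubectl:
# Equivalent to `kubens ds`: set the default namespace of the current context
kubectl config set-context --current --namespace=ds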
Creating and using a DaemonSet
Let's learn DaemonSet in the simplest way possible and not get bogged down in the complex usage patterns. The approach below is quite simple and is the most basic usage; get in the door first, then extend later when you want to go deeper.
Generate and edit the YAML file
- There is no command that creates a ds directly, but we can adapt the YAML generated for a deploy.
- After generating the file, the YAML needs one modification and three deletions:
  - change the kind to DaemonSet;
  - delete replicas;
  - delete strategy: {};
  - delete status: {}.
- With that, the final DaemonSet file is ready, as below (I also added a terminationGracePeriodSeconds line and an imagePullPolicy line):
[root@master ds]# kubectl create deployment ds1 --image=nginx --dry-run=client -o yaml > ds1.yaml
[root@master ds]# vim ds1.yaml
[root@master ds]# cat ds1.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  creationTimestamp: null
  labels:
    app: ds1
  name: ds1
spec:
  selector:
    matchLabels:
      app: ds1
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: ds1
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - image: nginx
        name: nginx
        imagePullPolicy: IfNotPresent
        resources: {}
[root@master ds]#
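Before applying, an optional sanity check is a server-side dry run, which validates the edited manifest against the API server without creating anything:
# On success this should print something like "daemonset.apps/ds1 created (server dry run)"
kubectl apply -f ds1.yaml --dry-run=server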
- The above is the simplest way to generate it. We can also produce a complete YAML file from an existing daemonset controller, but as a beginner don't torture yourself with this output (treat it as background; the other resources section below has more if you're interested):
[root@master ~]# kubectl get ds -n kube-system
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
calico-node 3 3 3 3 3 kubernetes.io/os=linux 62d
kube-proxy 3 3 3 3 3 kubernetes.io/os=linux 62d
[root@master ~]# kubectl get ds kube-proxy -n kube-system -o yaml > ds2.yaml
[root@master ~]# cat ds2.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  annotations:
    deprecated.daemonset.template.generation: "1"
  creationTimestamp: "2021-07-02T01:36:48Z"
  generation: 1
  labels:
    k8s-app: kube-proxy
  name: kube-proxy
  namespace: kube-system
  resourceVersion: "7840286"
  selfLink: /apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy
  uid: 38a2cf1c-07c3-41ba-8a0e-3602ac0311c9
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kube-proxy
  template:
    metadata:
      creationTimestamp: null
      labels:
        k8s-app: kube-proxy
    spec:
      containers:
      - command:
        - /usr/local/bin/kube-proxy
        - --config=/var/lib/kube-proxy/config.conf
        - --hostname-override=$(NODE_NAME)
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.nodeName
        image: registry.aliyuncs.com/google_containers/kube-proxy:v1.21.0
        imagePullPolicy: IfNotPresent
        name: kube-proxy
        resources: {}
        securityContext:
          privileged: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/lib/kube-proxy
          name: kube-proxy
        - mountPath: /run/xtables.lock
          name: xtables-lock
        - mountPath: /lib/modules
          name: lib-modules
          readOnly: true
      dnsPolicy: ClusterFirst
      hostNetwork: true
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-node-critical
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: kube-proxy
      serviceAccountName: kube-proxy
      terminationGracePeriodSeconds: 30
      tolerations:
      - key: CriticalAddonsOnly
        operator: Exists
      - operator: Exists
      volumes:
      - configMap:
          defaultMode: 420
          name: kube-proxy
        name: kube-proxy
      - hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
        name: xtables-lock
      - hostPath:
          path: /lib/modules
          type: ""
        name: lib-modules
  updateStrategy:
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: 1
    type: RollingUpdate
status:
  currentNumberScheduled: 3
  desiredNumberScheduled: 3
  numberAvailable: 3
  numberMisscheduled: 0
  numberReady: 3
  observedGeneration: 1
  updatedNumberScheduled: 3
[root@master ~]#
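Note that a YAML dumped from a live object like this carries runtime fields (resourceVersion, uid, selfLink, the status block, ...) that should be stripped before it is reused as a template. A sketch of that cleanup, assuming the yq tool (v4) is installed, which this article does not otherwise use:
# Strip the runtime fields so the dump can serve as a reusable template
kubectl get ds kube-proxy -n kube-system -o yaml \
  | yq eval 'del(.status) | del(.metadata.uid) | del(.metadata.selfLink)
             | del(.metadata.resourceVersion) | del(.metadata.generation)
             | del(.metadata.creationTimestamp)' - > ds2-clean.yaml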
Creating pods without specifying a label
- Normally, the pods we create run on all the node (worker) machines, and none is created on the master (because the master has a taint).
- In my case, though, node1 got one, node2 didn't, and one even appeared on the master (it failed to start because the master node has no nginx image). The cause: I had deleted the taints during earlier testing, which produced this situation (see the quick taint check below).
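To verify that claim before applying (hedged, since it reflects my earlier tinkering rather than a default cluster), check the master's taints:
# A master normally carries node-role.kubernetes.io/master:NoSchedule;
# on my cluster it shows <none> because I removed it earlier
kubectl describe node master | grep Taints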
[root@master ds]# kubectl apply -f ds1.yaml
daemonset.apps/ds1 created
[root@master ds]#
[root@master ds]# kubectl get ds
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
ds1 2 2 1 2 1 <none> 5s
[root@master ds]#
[root@master ds]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
ds1-jh4j5 1/1 Running 0 12s 10.244.166.136 node1 <none> <none>
ds1-qbz9m 0/1 ContainerCreating 0 12s <none> master <none> <none>
[root@master ds]#
[root@master ds]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
ds1-jh4j5 1/1 Running 0 26s 10.244.166.136 node1 <none> <none>
ds1-qbz9m 0/1 ImagePullBackOff 0 26s 10.244.219.106 master <none> <none>
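To confirm that the master pod really is failing on the image pull, you could read its events (the pod name here is taken from the output above and would differ on a re-run):
# Expect Pulling/Failed/BackOff entries for the nginx image in the Events section
kubectl describe pod ds1-qbz9m | grep -A 10 Events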
The labels currently look like this:
[root@master ds]# kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
master Ready master 52d v1.21.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master,kubernetes.io/os=linux,node-role.kubernetes.io/master=
node1 Ready <none> 52d v1.21.0 aa=7,bb=2,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node1,kubernetes.io/os=linux
node2 Ready <none> 52d v1.21.0 aa=1,bb=5,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,ccx_label=ccxhero,disktype=ssd,kubernetes.io/arch=amd64,kubernetes.io/hostname=node2,kubernetes.io/os=linux
[root@master ds]#
Creating pods with a specified label (recommended)
- Even without specifying one, nodes already carry built-in labels (the kubernetes.io/* ones visible in the --show-labels output above).
- The point of specifying a label is that the pods are then only allowed to run on nodes that carry that label.
- As said before, the advantage of a daemonset is that it automatically generates a pod on every node that satisfies the conditions. So I'll first create a label on one node and create the pod, then create the same label on another node and see whether a pod is generated there automatically.
Create the label and modify the YAML file
- First, give node1 a custom label:
[root@master ds]# kubectl label nodes node1 ccx=hero
node/node1 labeled
[root@master ds]# kubectl get nodes -l ccx=hero
NAME STATUS ROLES AGE VERSION
node1 Ready <none> 52d v1.21.0
[root@master ds]#
- Modify the configuration file:
# The syntax: add 2 lines under the pod template's spec
spec:
  nodeSelector:   # add this line
    ccx: hero     # format: label key on the left, label value on the right
[root@master ds]# cat ds1.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  creationTimestamp: null
  labels:
    app: ds1
  name: ds1
spec:
  selector:
    matchLabels:
      app: ds1
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: ds1
    spec:
      terminationGracePeriodSeconds: 0
      nodeSelector:
        ccx: hero
      containers:
      - image: nginx
        name: nginx
        imagePullPolicy: IfNotPresent
        resources: {}
[root@master ds]#
Create the pod and test
Normally, the pod should now run only on node1, the node we just labeled.
[root@master ds]# kubectl apply -f ds1.yaml
daemonset.apps/ds1 created
[root@master ds]#
[root@master ds]# kubectl get ds
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
ds1 1 1 1 1 1 ccx=hero 6s
[root@master ds]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
ds1-4bp5b 1/1 Running 0 11s 10.244.166.135 node1 <none> <none>
[root@master ds]#
- Now add the label I just created to node2, and see whether a pod gets added:
[root@master ds]# kubectl label nodes node2 ccx=hero
node/node2 labeled
[root@master ds]#
[root@master ds]# kubectl get ds
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
ds1 1 1 1 1 1 ccx=hero 94s
[root@master ds]# kubectl get ds
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
ds1 1 1 1 1 1 ccx=hero 96s
[root@master ds]# kubectl get ds
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
ds1 1 1 1 1 1 ccx=hero 102s
[root@master ds]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
ds1-4bp5b 1/1 Running 0 110s 10.244.166.135 node1 <none> <none>
[root@master ds]#
- Hmm, after I created the label on node2, nothing auto-updated no matter how long I waited, so it looked like I had misunderstood. (I hadn't misunderstood at all: my node2 was broken and wasn't being assigned pods. When I added the same label on the master, the master created the pod immediately.)
[root@master ds]# kubectl apply -f ds1.yaml
daemonset.apps/ds1 configured
[root@master ds]# kubectl get ds
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
ds1 1 1 1 1 1 ccx=hero 2m4s
[root@master ds]# kubectl label nodes master ccx=hero
node/master labeled
[root@master ds]#
[root@master ds]# kubectl get ds
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
ds1 2 2 1 2 1 ccx=hero 2m15s
[root@master ds]#
Troubleshooting and summary (must read)
My node2 above had a problem: creating against a label unique to node2 repeatedly produced nothing. So I created an ordinary pod pinned to a node2-only label, and found its status stuck at Pending:
[root@master ~]# cat pod1.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
spec:
  #nodeName: node2
  nodeSelector:
    ccx_label: ccxhero
  terminationGracePeriodSeconds: 0
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod1
    resources: {}
    env:
    - name: aa
      value: xxx
    - name: bb
      value: "888"
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@master ~]# kubectl apply -f pod1.yaml
[root@master ~]# kubectl get pods | grep pod1
pod1 0/1 Pending 0 5m57s
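The quicker route here (which I skipped) would have been to read the Pending pod's events: for an unschedulable pod the scheduler usually records a FailedScheduling event saying why, e.g. an untolerated taint:
# Expect a FailedScheduling event; the exact wording varies by version
kubectl describe pod pod1 | grep -A 5 Events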
It later turned out to be because I had set a taint on node2 earlier; the no-label case succeeded on the master only because I had previously deleted the master's taint. So much time wasted troubleshooting this:
[root@master ~]# kubectl describe nodes node1 | grep Taints
Taints: <none>
[root@master ~]# kubectl describe nodes node2 | grep Taints
Taints: xx=yy:NoSchedule
[root@master ~]# kubectl describe nodes master | grep Taints
Taints: <none>
[root@master ~]#
Now I delete this taint from node2: node2 then gets the pod from the label-based ds, and the previously Pending pod1 becomes normal as well.
[root@master ~]# kubectl get ds
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
ds1 2 2 1 2 1 ccx=hero 24m
[root@master ~]#
[root@master ~]# kubectl taint nodes node2 xx:NoSchedule-
node/node2 untainted
[root@master ~]# kubectl describe nodes node2 | grep Taints
Taints: <none>
[root@master ~]# kubectl get ds
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
ds1 3 3 2 3 2 ccx=hero 24m
[root@master ~]#
[root@master ~]# kubectl get nodes -l ccx=hero
NAME STATUS ROLES AGE VERSION
master Ready master 52d v1.21.0
node1 Ready <none> 52d v1.21.0
node2 Ready <none> 52d v1.21.0
[root@master ~]#
[root@master ~]# kubectl get pods -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
ds1-2pbnp 0/1 ImagePullBackOff 0 22m 10.244.219.107 master <none> <none>
ds1-625n8 1/1 Running 0 39s 10.244.104.47 node2 <none> <none>
ds1-dqs7f 1/1 Running 0 22m 10.244.166.137 node1 <none> <none>
pod1 1/1 Running 0 13m 10.244.104.46 node2 <none> <none>
[root@master ~]#
This problem cost me a whole morning. I tried a pile of random fixes found online and none worked; only when I re-checked node2 from scratch did I find the taint. Maddening.
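Removing the taint works, but a less invasive alternative would be to let the DaemonSet tolerate it, the way kube-proxy does in its YAML above. A sketch against my xx=yy:NoSchedule taint (this would go under spec.template.spec in ds1.yaml; I did not actually run it in this article):
# Lets ds1 pods schedule onto node2 without deleting the taint
tolerations:
- key: "xx"
  operator: "Equal"
  value: "yy"
  effect: "NoSchedule"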
Other common commands
1. Imperative update
kubectl edit ds/<daemonset-name>
kubectl patch ds/<daemonset-name> -p=<strategic-merge-patch>
2. Update the image
kubectl set image ds/<daemonset-name> <container-name>=<container-new-image> --record=true
3. Check the rollout status
kubectl rollout status ds/<daemonset-name>
4. List all revisions
kubectl rollout history daemonset <daemonset-name>
5. Roll back to a specific revision
kubectl rollout undo daemonset <daemonset-name> --to-revision=<revision>
# DaemonSet update and rollback work the same way as Deployment's, so they are not demonstrated here.
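As a concrete (hypothetical) run of these against the ds1 created above, whose container is named nginx (the nginx:1.21 tag is my assumption for illustration):
kubectl set image ds/ds1 nginx=nginx:1.21 --record=true   # trigger a rolling update
kubectl rollout status ds/ds1                             # wait for it to finish
kubectl rollout history daemonset ds1                     # revisions 1 and 2 should now exist
kubectl rollout undo daemonset ds1 --to-revision=1        # back to plain nginx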
Other resources
The above is the most basic usage, which I think covers regular needs; to go deeper, you can refer to what other experts have written:
Kubernetes DaemonSet使用详解
ReplicationController (RC) controller
This is extension material, so just a brief look below.
Editing the configuration file
[root@master ds]# cat rc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: myrc
spec:
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
[root@master ds]#
Testing
[root@master ds]# kubectl apply -f rc.yaml
replicationcontroller/myrc created
[root@master ds]# kubectl get pods
NAME READY STATUS RESTARTS AGE
myrc-lhcdn 1/1 Running 0 22s
myrc-m7c52 1/1 Running 0 22s
myrc-wq8ch 1/1 Running 0 22s
pod1 1/1 Running 0 85m
[root@master ds]# kubectl delete pod pod1
pod "pod1" deleted
[root@master ds]# kubectl get pods -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
myrc-lhcdn 1/1 Running 0 43s 10.244.104.48 node2 <none> <none>
myrc-m7c52 1/1 Running 0 43s 10.244.104.49 node2 <none> <none>
myrc-wq8ch 1/1 Running 0 43s 10.244.104.50 node2 <none> <none>
[root@master ds]# kubectl scale rc myrc --replicas=5
replicationcontroller/myrc scaled
[root@master ds]# kubectl get pods -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
myrc-kc7x4 1/1 Running 0 4s 10.244.104.51 node2 <none> <none>
myrc-lhcdn 1/1 Running 0 67s 10.244.104.48 node2 <none> <none>
myrc-m7c52 1/1 Running 0 67s 10.244.104.49 node2 <none> <none>
myrc-tddwk 1/1 Running 0 4s 10.244.104.52 node2 <none> <none>
myrc-wq8ch 1/1 Running 0 67s 10.244.104.50 node2 <none> <none>
[root@master ds]#
[root@master ds]# kubectl delete rc myrc
replicationcontroller "myrc" deleted
[root@master ds]#
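One property worth verifying (before deleting the RC) is self-healing: delete one of its pods by hand and the controller recreates it to hold the replica count. A hypothetical check, using a pod name from the output above:
kubectl delete pod myrc-lhcdn   # remove one replica manually
kubectl get pods                # a freshly named replacement should appear shortly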
ReplicaSet (rs) controller
This is extension material, so just a brief look below.
Editing the configuration file
[root@master ds]# cat rs.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myrs
  labels:
    app: guestbook
spec:
  replicas: 3
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: nginx
        imagePullPolicy: IfNotPresent
        image: nginx
[root@master ds]#
Testing
[root@master ds]# kubectl apply -f rs.yaml
replicaset.apps/myrs created
[root@master ds]# kubectl get pods
NAME READY STATUS RESTARTS AGE
myrs-4p7xp 1/1 Running 0 21s
myrs-7lb5d 1/1 Running 0 21s
myrs-lvbtt 1/1 Running 0 21s
[root@master ds]# kubectl get pods -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
myrs-4p7xp 1/1 Running 0 26s 10.244.104.54 node2 <none> <none>
myrs-7lb5d 1/1 Running 0 26s 10.244.104.55 node2 <none> <none>
myrs-lvbtt 1/1 Running 0 26s 10.244.104.53 node2 <none> <none>
[root@master ds]# kubectl scale rs myrs --replicas=5
replicaset.apps/myrs scaled
[root@master ds]# kubectl get pods -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
myrs-4p7xp 1/1 Running 0 42s 10.244.104.54 node2 <none> <none>
myrs-7lb5d 1/1 Running 0 42s 10.244.104.55 node2 <none> <none>
myrs-9t5tb 1/1 Running 0 3s 10.244.104.56 node2 <none> <none>
myrs-d2ndv 1/1 Running 0 3s 10.244.104.57 node2 <none> <none>
myrs-lvbtt 1/1 Running 0 42s 10.244.104.53 node2 <none> <none>
[root@master ds]#
[root@master ds]# kubectl delete rs myrs
replicaset.apps "myrs" deleted
[root@master ds]#
Extended notes
As mentioned earlier in the deployment chapter, a deploy actually works by calling an rs. So let's test that now: with no rs present, create a deployment and see whether an rs is generated automatically.
[root@master deploy]# cat web1.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: web1
  name: web1
spec:
  replicas: 1
  selector:
    matchLabels:
      app1: web1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app1: web1
        app2: web2
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - image: nginx
        name: nginx
        imagePullPolicy: IfNotPresent
        resources:
          requests:
            cpu: 400m
status: {}
[root@master deploy]#
[root@master deploy]# kubectl get rs
No resources found in ds namespace.
[root@master deploy]#
[root@master deploy]# kubectl apply -f web1.yaml
deployment.apps/web1 created
[root@master deploy]#
[root@master deploy]# kubectl get deployments.apps
NAME READY UP-TO-DATE AVAILABLE AGE
web1 1/1 1 1 8s
[root@master deploy]#
[root@master deploy]# kubectl get rs
NAME DESIRED CURRENT READY AGE
web1-ffffb5589 1 1 1 15s
[root@master deploy]#
[root@master deploy]#
[root@master deploy]# kubectl get pods -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
web1-ffffb5589-c97x5 1/1 Running 0 25s 10.244.104.58 node2 <none> <none>
[root@master deploy]#
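To close the loop on the deploy -> rs -> pod chain: deleting the Deployment cascades by default, removing the ReplicaSet and pod it created. A quick check (not run above):
kubectl delete deployment web1
kubectl get rs     # expect: No resources found in ds namespace.
kubectl get pods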