DaemonSet (ds) resources run one pod on every node that matches the selector. Typical workloads:
Cluster storage daemons, such as ceph and glusterd
CNI plugins, such as calico
Node monitoring, such as node-exporter
Service exposure, such as ingress-nginx
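Most clusters already ship several DaemonSets (the names vary by distribution); listing them across all namespaces is a quick way to see the pattern in practice:

```
kubectl get ds -A
```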
Create a DaemonSet resource, then view it:
[root@k8s-master01 ~]# kubectl get ds web -oyaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: web
  namespace: default
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.14.2
        imagePullPolicy: IfNotPresent
        name: nginx
        ports:
        - containerPort: 80
          name: web
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
  updateStrategy:
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: 1
    type: RollingUpdate
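Much of the YAML above is server-populated defaults (creationTimestamp, resources: {}, the terminationMessage* fields, dnsPolicy, and so on). A minimal manifest you would actually author for the same DaemonSet might look like this (a sketch; the field values match the output above):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
          name: web
```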
By labeling nodes and adding a nodeSelector, you can control which nodes the DaemonSet's pods are deployed on:
kubectl label node k8s-node01 k8s-node02 ds=true
kubectl get node --show-labels
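If a node should later be taken out of the rollout, removing the label is enough; the trailing dash on the key deletes that label:

```
kubectl label node k8s-node01 ds-
```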
Edit the YAML manifest to add a nodeSelector:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: web
  namespace: default
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      nodeSelector:
        ds: "true"
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
          name: web
kubectl replace -f ds.yml
Now the DaemonSet's pods are deployed only on nodes labeled ds=true.
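The rule behind nodeSelector is plain exact key/value matching: a node is eligible only if every pair in the selector appears, with the same value, among the node's labels. A minimal sketch of that rule (hypothetical node data, not the real scheduler code):

```python
# Sketch of nodeSelector semantics: every selector key/value pair
# must be present, with an identical value, in the node's labels.
def matches_node_selector(node_labels: dict, node_selector: dict) -> bool:
    return all(node_labels.get(k) == v for k, v in node_selector.items())

# Hypothetical nodes mirroring the example cluster above.
nodes = {
    "k8s-node01": {"ds": "true", "kubernetes.io/hostname": "k8s-node01"},
    "k8s-node02": {"ds": "true", "kubernetes.io/hostname": "k8s-node02"},
    "k8s-master01": {"kubernetes.io/hostname": "k8s-master01"},
}
selector = {"ds": "true"}

scheduled = [name for name, labels in nodes.items()
             if matches_node_selector(labels, selector)]
print(scheduled)  # ['k8s-node01', 'k8s-node02']
```

The master node is skipped because it lacks the ds=true label, exactly as with the kubectl workflow above.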
View the rollout revision history:
[root@k8s-master01 ~]# kubectl rollout history ds web
daemonset.apps/web
REVISION CHANGE-CAUSE
1 <none>
2 <none>
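With the history above, you can roll back to an earlier revision and watch the rollout converge:

```
kubectl rollout undo ds web --to-revision=1
kubectl rollout status ds web
```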
DaemonSet update and rollback strategies:
RollingUpdate: pods are deleted and replaced automatically; with maxUnavailable: 1, one pod is updated at a time, so an update cannot take down a large part of the fleet at once.
OnDelete: a pod is replaced only after you delete it manually, letting you control the update pace node by node; recommended for DaemonSets.
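To switch a DaemonSet to the OnDelete strategy, only the updateStrategy block needs to change (a fragment of the spec, not a full manifest):

```yaml
spec:
  updateStrategy:
    type: OnDelete
```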