DaemonSet Daemon Services, HPA Autoscaling, and Labels & Selectors
1. Using DaemonSets
DaemonSet: a daemon set, abbreviated ds, which runs one Pod on every node, or on every node matching a selector.
Typical DaemonSet use cases:
- Running a cluster storage daemon, such as Ceph or glusterd
- Node CNI network plugins, such as Calico
- Node log collection: fluentd or Filebeat
- Node monitoring: node exporter
- Service exposure: deploying an ingress-nginx controller
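System daemons like these often need to run on control-plane nodes as well. A common pattern (a sketch, not part of the manifest below) is to add a toleration to the Pod template so the DaemonSet is also scheduled onto tainted masters; the taint key differs by cluster version:

```yaml
# Pod template fragment: tolerate the control-plane taint so the daemon
# also runs on master nodes. Older clusters use the key
# node-role.kubernetes.io/master; newer ones use
# node-role.kubernetes.io/control-plane.
tolerations:
- key: node-role.kubernetes.io/control-plane
  operator: Exists
  effect: NoSchedule
```

In the cluster used in this walkthrough the master is untainted, which is why the nginx Pod lands on k8s-master without any toleration.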
2. Configuring a DaemonSet
[root@k8s-master ~]# cp nginx-deplo.yaml nginx-ds.yaml
[root@k8s-master ~]# vim nginx-ds.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.21.6
        imagePullPolicy: IfNotPresent
        name: nginx
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
3. Creating a DaemonSet
- Create the ds. Since no nodeSelector is configured, it starts one Pod on every node.
[root@k8s-master ~]# kubectl create -f nginx-ds.yaml
daemonset.apps/nginx created
# Check how the Pods are distributed across nodes
[root@k8s-master ~]# kubectl get node -owide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-master Ready <none> 11d v1.22.0-beta.1 172.16.55.220 <none> CentOS Linux 7 (Core) 4.19.12-1.el7.elrepo.x86_64 docker://19.3.15
k8s-node01 Ready <none> 11d v1.22.0-beta.1 172.16.55.221 <none> CentOS Linux 7 (Core) 4.19.12-1.el7.elrepo.x86_64 docker://19.3.15
k8s-node02 Ready <none> 11d v1.22.0-beta.1 172.16.55.222 <none> CentOS Linux 7 (Core) 4.19.12-1.el7.elrepo.x86_64 docker://19.3.15
k8s-node03 Ready <none> 11d v1.22.0-beta.1 172.16.55.223 <none> CentOS Linux 7 (Core) 4.19.12-1.el7.elrepo.x86_64 docker://19.3.15
[root@k8s-master ~]# kubectl get po -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
busybox 1/1 Running 25 (15m ago) 25h 192.161.125.10 k8s-node01 <none> <none>
nginx-ctgvv 1/1 Running 0 170m 192.169.214.227 k8s-node03 <none> <none>
nginx-km7q5 1/1 Running 0 170m 192.171.14.205 k8s-node02 <none> <none>
nginx-nwqf2 1/1 Running 0 170m 192.161.125.14 k8s-node01 <none> <none>
nginx-nx2ts 1/1 Running 0 170m 192.172.82.208 k8s-master <none> <none>
- Label the nodes where the Pods should run
[root@k8s-master ~]# kubectl label node k8s-node02 k8s-node01 k8s-node03 ds=true
node/k8s-node02 labeled
node/k8s-node01 labeled
node/k8s-node03 labeled
[root@k8s-master ~]# kubectl get node --show-labels
NAME STATUS ROLES AGE VERSION LABELS
k8s-master Ready <none> 11d v1.22.0-beta.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,k8s.kuboard.cn/role=etcd,kubernetes.io/arc + 89 more...
k8s-node01 Ready <none> 11d v1.22.0-beta.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,ds=true,kubernetes.io/arch=amd64,kubernete + 72 more...
k8s-node02 Ready <none> 11d v1.22.0-beta.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,ds=true,kubernetes.io/arch=amd64,kubernete + 72 more...
k8s-node03 Ready <none> 11d v1.22.0-beta.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,ds=true,kubernetes.io/arch=amd64,kubernete + 72 more...
- Edit nginx-ds.yaml to add a nodeSelector:
nodeSelector:
  ds: "true"
[root@k8s-master ~]# vim nginx-ds.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      nodeSelector:
        ds: "true"
      containers:
      - image: nginx:1.21.6
        imagePullPolicy: IfNotPresent
        name: nginx
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
- Apply the updated configuration
[root@k8s-master ~]# kubectl replace -f nginx-ds.yaml
daemonset.apps/nginx replaced
[root@k8s-master ~]# kubectl get po -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
busybox 1/1 Running 25 (56m ago) 25h 192.161.125.10 k8s-node01 <none> <none>
nginx-j8d6c 1/1 Running 0 10s 192.161.125.15 k8s-node01 <none> <none>
nginx-km7q5 0/1 Terminating 0 3h31m 192.171.14.205 k8s-node02 <none> <none>
nginx-nn6sz 1/1 Running 0 6s 192.169.214.228 k8s-node03 <none> <none>
- Checking the Pods shows that those on nodes without the matching label have been deleted.
- To extend the DaemonSet to a new node, simply label that node with ds=true.
4. DaemonSet Updates and Rollbacks
- Configuration
[root@k8s-master ~]# kubectl get ds nginx -oyaml
# RollingUpdate: updates and rollbacks behave the same as for a Deployment
updateStrategy:
  rollingUpdate:
    maxSurge: 0
    maxUnavailable: 1
  type: RollingUpdate
# Update the image version
[root@k8s-master ~]# kubectl set image ds nginx nginx=nginx:1.20.1 --record
Flag --record has been deprecated, --record will be removed in the future
daemonset.apps/nginx image updated
[root@k8s-master ~]# kubectl get po -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
busybox 1/1 Running 26 (22m ago) 26h 192.161.125.10 k8s-node01 <none> <none>
nginx-5ch2g 0/1 Terminating 0 109s 192.171.14.208 k8s-node02 <none> <none>
nginx-nn6sz 0/1 Terminating 0 26m <none> k8s-node03 <none> <none>
nginx-vgkl2 0/1 Terminating 0 104s 192.161.125.17 k8s-node01 <none> <none>
- The recommended update strategy is OnDelete
updateStrategy:
  type: OnDelete
# Change the image version
[root@k8s-master ~]# kubectl set image ds nginx nginx=nginx:1.21.6 --record
# Delete a Pod to trigger the update on its node
[root@k8s-master ~]# kubectl delete po nginx-5ch2g
# View the rollout history
[root@k8s-master ~]# kubectl rollout history ds nginx
- Because a DaemonSet may run on many nodes in the cluster, OnDelete lets you test a new version on a few nodes first: deleting a Pod triggers the update only on that node and leaves the others untouched.
5. HPA Autoscaling
Reference: https://blog.csdn.net/m0_47288926/article/details/122819880
- HPA stands for Horizontal Pod Autoscaler; it scales the number of Pods horizontally and automatically.
- It can automatically scale the number of Pods in a ReplicationController, Deployment, ReplicaSet, or StatefulSet based on CPU utilization.
- Besides CPU utilization and memory usage, autoscaling can also be driven by custom metrics exposed by the application.
- HPA does not apply to objects that cannot be scaled, such as DaemonSets.
- Horizontal Pod autoscaling is implemented as a Kubernetes API resource plus a controller; the resource defines the controller's behavior.
- The controller periodically adjusts the number of replicas in a replication controller or Deployment so that the observed average CPU utilization of the Pods matches the user-defined target.
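The controller's scaling decision follows the documented formula desiredReplicas = ceil(currentReplicas × currentMetricValue / targetMetricValue). A quick arithmetic check in shell (the 2-replica/30%/20% numbers are just an example, matching the load test later in this section):

```shell
# ceil(current * metric / target) via integer arithmetic:
# 2 replicas at 30% CPU with a 20% target scale up to 3
current=2; metric=30; target=20
echo $(( (current * metric + target - 1) / target ))   # prints 3
```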
- Confirm that metrics-server is installed
[root@k8s-master ~]# kubectl get pods -n kube-system |grep metrics-server
metrics-server-64c6c494dc-m44ck 1/1 Running 9 (30h ago) 11d
[root@k8s-master ~]# kubectl top nodes
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
k8s-master 694m 8% 1216Mi 15%
k8s-node01 319m 3% 700Mi 8%
k8s-node02 357m 4% 984Mi 12%
k8s-node03 185m 2% 685Mi 8%
- kubectl support for HPA
- Like other API resources, kubectl supports HPA in the standard way.
- Create an HPA object with kubectl create.
- List all HPA objects with kubectl get hpa.
- Show the details of an HPA object with kubectl describe hpa.
- Delete an HPA object with kubectl delete hpa.
- There is also the convenience command kubectl autoscale for creating HPA objects.
For example, kubectl autoscale deploy nginx --cpu-percent=20 --min=2 --max=5 creates an HPA object for the Deployment nginx, with a target CPU utilization of 20% and a replica count between 2 and 5.
[root@k8s-master ~]# kubectl autoscale deploy nginx --cpu-percent=20 --min=2 --max=5
horizontalpodautoscaler.autoscaling/nginx autoscaled
[root@k8s-master ~]# kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
nginx Deployment/nginx 0%/20% 2 5 2 8h
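The same HPA can also be defined declaratively instead of via kubectl autoscale. A sketch using the autoscaling/v2 API (on clusters older than v1.23, such as the v1.22 cluster here, the group is autoscaling/v2beta2):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx
spec:
  scaleTargetRef:            # the workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  minReplicas: 2
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 20   # target average CPU utilization (%)
```

Applying this manifest with kubectl apply produces the same HPA object as the autoscale command above.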
[root@k8s-master ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 11d
nginx ClusterIP 10.103.207.95 <none> 80/TCP 26m
# Generate load with a request loop
[root@k8s-node01 ~]# while true; do wget -q -O- http://10.103.207.95 > /dev/null; done
# Once CPU usage exceeds 20%
[root@k8s-master ~]# kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
nginx Deployment/nginx 30%/20% 2 5 5 9h
# Up to the maximum of 5 Pods are created automatically
[root@k8s-master ~]# kubectl get po
NAME READY STATUS RESTARTS AGE
busybox 1/1 Running 37 (51m ago) 37h
nginx-6b5dc8646d-c8x4d 1/1 Running 0 15s
nginx-6b5dc8646d-dnprw 1/1 Running 0 16m
nginx-6b5dc8646d-w59mb 1/1 Running 0 42m
nginx-6b5dc8646d-w76cd 1/1 Running 0 30s
nginx-6b5dc8646d-x2ks4 1/1 Running 0 30s
- When the requests stop, the Pod count drops back to the minimum of 2.
6. Labels & Selectors
- Label: classifies and groups Kubernetes resources by attaching a tag with a particular attribute.
- Selector: a filtering syntax for finding resources that carry matching labels.
- When Kubernetes "groups" any API object, such as a Pod or a node, it attaches Labels (key=value pairs) so the object can be selected precisely; a Selector (label selector) is the query mechanism for matching objects.
- For example, the common label tier distinguishes a container's role, such as frontend or backend, while a release_track label distinguishes its environment, such as canary or production.
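For illustration, the tier/release_track convention could look like this in a Pod manifest (the Pod name here is hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-frontend          # hypothetical Pod name
  labels:
    app: nginx
    tier: frontend            # role of the component
    release_track: canary     # deployment environment
spec:
  containers:
  - name: nginx
    image: nginx:1.21.6
```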
- Defining a Label
[root@k8s-master ~]# kubectl label node k8s-node02 region=subnet7
node/k8s-node02 labeled
# Filter nodes by label
[root@k8s-master ~]# kubectl get no -l region=subnet7
NAME STATUS ROLES AGE VERSION
k8s-node02 Ready <none> 11d v1.22.0-beta.1
- Add a label to busybox
[root@k8s-master ~]# kubectl get po --show-labels
NAME READY STATUS RESTARTS AGE LABELS
busybox 1/1 Running 39 (11m ago) 39h <none>
nginx-6b5dc8646d-dnprw 1/1 Running 0 96m app=nginx,pod-template-hash=6b5dc8646d
nginx-6b5dc8646d-w59mb 1/1 Running 0 122m app=nginx,pod-template-hash=6b5dc8646d
[root@k8s-master ~]# kubectl label po busybox app=busybox
pod/busybox labeled
[root@k8s-master ~]# kubectl get po --show-labels
NAME READY STATUS RESTARTS AGE LABELS
busybox 1/1 Running 40 (15s ago) 40h app=busybox
nginx-6b5dc8646d-dnprw 1/1 Running 0 145m app=nginx,pod-template-hash=6b5dc8646d
nginx-6b5dc8646d-w59mb 1/1 Running 0 171m app=nginx,pod-template-hash=6b5dc8646d
- Remove a label
[root@k8s-master ~]# kubectl label po busybox app-
pod/busybox labeled
[root@k8s-master ~]# kubectl get po --show-labels
NAME READY STATUS RESTARTS AGE LABELS
busybox 1/1 Running 40 (5m19s ago) 40h <none>
nginx-6b5dc8646d-dnprw 1/1 Running 0 150m app=nginx,pod-template-hash=6b5dc8646d
nginx-6b5dc8646d-w59mb 1/1 Running 0 176m app=nginx,pod-template-hash=6b5dc8646d
- Modify a label with the --overwrite flag
[root@k8s-master ~]# kubectl label po busybox app=busybox
pod/busybox labeled
[root@k8s-master ~]# kubectl get po --show-labels
NAME READY STATUS RESTARTS AGE LABELS
busybox 1/1 Running 40 (7m11s ago) 40h app=busybox
nginx-6b5dc8646d-dnprw 1/1 Running 0 152m app=nginx,pod-template-hash=6b5dc8646d
nginx-6b5dc8646d-w59mb 1/1 Running 0 178m app=nginx,pod-template-hash=6b5dc8646d
[root@k8s-master ~]# kubectl label po busybox app=busybox2 --overwrite
pod/busybox labeled
[root@k8s-master ~]# kubectl get po --show-labels
NAME READY STATUS RESTARTS AGE LABELS
busybox 1/1 Running 40 (10m ago) 40h app=busybox2
nginx-6b5dc8646d-dnprw 1/1 Running 0 155m app=nginx,pod-template-hash=6b5dc8646d
nginx-6b5dc8646d-w59mb 1/1 Running 0 3h1m app=nginx,pod-template-hash=6b5dc8646d
- Combined multi-condition queries
[root@k8s-master ~]# kubectl get po -A --show-labels
NAMESPACE NAME READY STATUS RESTARTS AGE LABELS
default busybox 1/1 Running 41 (37m ago) 41h app=busybox2
default nginx-6b5dc8646d-dnprw 1/1 Running 0 4h2m app=nginx,pod-template-hash=6b5dc8646d
default nginx-6b5dc8646d-w59mb 1/1 Running 0 4h28m app=nginx,pod-template-hash=6b5dc8646d
kube-system calico-kube-controllers-cdd5755b9-6vnrl 1/1 Running 2 (44h ago) 11d k8s-app=calico-kube-controllers,pod-template-hash=cdd5755b9
kube-system calico-node-g75v2 1/1 Running 2 (44h ago) 11d controller-revision-hash=6d457f564d,k8s-app=calico-node,pod-template-generation=1
kube-system calico-node-lbp8q 1/1 Running 2 (44h ago) 11d controller-revision-hash=6d457f564d,k8s-app=calico-node,pod-template-generation=1
kube-system calico-node-rjzg4 1/1 Running 2 (44h ago) 11d controller-revision-hash=6d457f564d,k8s-app=calico-node,pod-template-generation=1
kube-system calico-node-vwhj4 1/1 Running 2 (44h ago) 11d controller-revision-hash=6d457f564d,k8s-app=calico-node,pod-template-generation=1
kube-system coredns-fb4874468-mqvbc 1/1 Running 2 (44h ago) 11d k8s-app=kube-dns,pod-template-hash=fb4874468
kube-system metrics-server-64c6c494dc-m44ck 1/1 Running 9 (42h ago) 11d k8s-app=metrics-server,pod-template-hash=64c6c494dc
kubernetes-dashboard dashboard-metrics-scraper-7b4bbf8954-q6l2q 1/1 Running 2 (44h ago) 11d k8s-app=dashboard-metrics-scraper,pod-template-hash=7b4bbf8954
kubernetes-dashboard kubernetes-dashboard-6c65b776bd-5rjh6 1/1 Running 3 (44h ago) 11d k8s-app=kubernetes-dashboard,pod-template-hash=6c65b776bd
kuboard kuboard-agent-2-5585ff7c77-cg4hb 1/1 Running 12 (42h ago) 2d4h k8s.kuboard.cn/name=kuboard-agent-2,pod-template-hash=5585ff7c77
kuboard kuboard-agent-854f795645-ztwwh 1/1 Running 12 (42h ago) 2d4h k8s.kuboard.cn/name=kuboard-agent,pod-template-hash=854f795645
kuboard kuboard-etcd-5szk9 1/1 Running 2 (44h ago) 2d4h controller-revision-hash=588db57655,k8s.kuboard.cn/name=kuboard-etcd,pod-template-generation=1
kuboard kuboard-questdb-bdcfb4895-qdppt 1/1 Running 2 (44h ago) 2d4h k8s.kuboard.cn/name=kuboard-questdb,pod-template-hash=bdcfb4895
kuboard kuboard-v3-5fc46b5557-9qn66 1/1 Running 10 (42h ago) 2d4h k8s.kuboard.cn/name=kuboard-v3,pod-template-hash=5fc46b5557
# Combined queries; supported operators: in, notin, =, ==, !=, gt, lt
[root@k8s-master ~]# kubectl get po -A -l 'k8s-app in(metrics-server,kube-dns)'
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-fb4874468-mqvbc 1/1 Running 2 (44h ago) 11d
kube-system metrics-server-64c6c494dc-m44ck 1/1 Running 9 (42h ago) 11d
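The same set-based operators can also be used declaratively inside a manifest. For example, a selector mirroring the `in` query above might use matchExpressions (a sketch, not tied to any specific workload in this walkthrough):

```yaml
selector:
  matchExpressions:
  - key: k8s-app
    operator: In              # also available: NotIn, Exists, DoesNotExist
    values:
    - metrics-server
    - kube-dns
```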