Kubernetes Scheduling
k8s Scheduling Overview
• The scheduler uses the Kubernetes watch mechanism to discover newly created pods
that have not yet been assigned to a node, and schedules each such pod onto a
suitable node to run.
• kube-scheduler is the default scheduler for Kubernetes clusters and is part of the
cluster control plane. If you really want or need to, kube-scheduler is designed to
let you write your own scheduling component and use it in place of the original.
• Factors considered in scheduling decisions include: individual and collective
resource requests, hardware/software/policy constraints, affinity and anti-affinity
requirements, data locality, inter-workload interference, and so on.
• Default policies: https://kubernetes.io/zh/docs/concepts/scheduling/kube-scheduler/
• Scheduling framework: https://kubernetes.io/zh/docs/concepts/configuration/scheduling-framework/
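If you do run your own scheduler alongside kube-scheduler, a pod opts into it via
spec.schedulerName. A minimal sketch; "my-scheduler" is a hypothetical name, and the
field defaults to "default-scheduler" when omitted:
apiVersion: v1
kind: Pod
metadata:
  name: custom-scheduled
spec:
  schedulerName: my-scheduler   # hypothetical custom scheduler name
  containers:
  - name: nginx
    image: nginx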
nodeName node selection constraint
• nodeName is the simplest form of node selection constraint, but it is generally
not recommended. If nodeName is specified in the PodSpec, it takes precedence over
all other node selection methods.
• Some limitations of using nodeName to select nodes:
• If the named node does not exist, the pod will not run.
• If the named node does not have the resources to accommodate the pod, the pod fails.
• Node names in cloud environments are not always predictable or stable.
[root@server2 ~]# mkdir schedu
[root@server2 ~]# cd schedu/
[root@server2 schedu]# vim pod1.yml
[root@server2 schedu]# cat pod1.yml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  nodeName: server2
[root@server2 schedu]# kubectl apply -f pod1.yml
pod/nginx created
[root@server2 schedu]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 13s
[root@server2 schedu]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx 1/1 Running 0 5m38s 10.244.0.20 server2 <none> <none>
nodeSelector node selection constraint
Create the pod and check: it sits in Pending because no node carries the requested label.
[root@server2 schedu]# kubectl delete -f pod1.yml
pod "nginx" deleted
[root@server2 schedu]# vim pod2.yaml
[root@server2 schedu]# cat pod2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    disktype: ssd
[root@server2 schedu]# kubectl apply -f pod2.yaml
pod/nginx created
[root@server2 schedu]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx 0/1 Pending 0 8s
Label server3; the pod goes Running:
[root@server2 schedu]# kubectl label nodes server3 disktype=ssd
node/server3 labeled
[root@server2 schedu]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx 1/1 Running 0 7m 10.244.5.103 server3 <none> <none>
Check that server3 now carries disktype=ssd:
[root@server2 schedu]# kubectl get node --show-labels
NAME STATUS ROLES AGE VERSION LABELS
server2 Ready control-plane,master 11d v1.21.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,kubernetes.io/arch=amd64,kubernetes.io/hostname=server2,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
server3 Ready <none> 11d v1.21.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,kubernetes.io/arch=amd64,kubernetes.io/hostname=server3,kubernetes.io/os=linux
server4 Ready <none> 11d v1.21.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=server4,kubernetes.io/os=linux
Affinity and Anti-Affinity
• nodeSelector provides a very simple way to constrain pods to nodes with specific
labels. Affinity/anti-affinity greatly expands the types of constraints you can express.
• Rules can be "soft"/"preferences" rather than hard requirements, so if the
scheduler cannot satisfy them, the pod is still scheduled.
• You can constrain against labels on the pods already running on a node rather than
against the node's own labels, controlling which pods may or may not be co-located.
Node Affinity
• requiredDuringSchedulingIgnoredDuringExecution: must be satisfied
• preferredDuringSchedulingIgnoredDuringExecution: preferred, best effort
• IgnoredDuringExecution means that if a node's labels change while the pod is
running so that the affinity rule no longer holds, the pod keeps running there.
• Reference: https://kubernetes.io/zh/docs/concepts/configuration/assign-pod-node/
Node Anti-Affinity
Require that pod node-affinity not be placed on server4:
[root@server2 schedu]# kubectl delete -f pod2.yaml
pod "nginx" deleted
[root@server2 schedu]# vim pod3.yaml
[root@server2 schedu]# cat pod3.yaml
apiVersion: v1
kind: Pod
metadata:
  name: node-affinity
spec:
  containers:
  - name: nginx
    image: nginx
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: NotIn
            values:
            - server4
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd
[root@server2 schedu]# kubectl apply -f pod3.yaml
pod/node-affinity created
[root@server2 schedu]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
node-affinity 1/1 Running 0 16s 10.244.5.104 server3 <none> <none>
It is placed on server3.
Node Affinity
Now require that pod node-affinity be placed on server4:
[root@server2 schedu]# kubectl delete -f pod3.yaml
pod "node-affinity" deleted
[root@server2 schedu]# vim pod3.yaml    # change operator from NotIn to In (file line 15)
15             operator: In
[root@server2 schedu]# kubectl apply -f pod3.yaml
pod/node-affinity created
[root@server2 schedu]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
node-affinity 1/1 Running 0 3s 10.244.6.30 server4 <none> <none>
Pod Affinity and Anti-Affinity
• podAffinity decides which pods a pod may be co-located with in the same topology
domain (a topology domain is defined by node labels and can be a single node or a
group of nodes such as a cluster or zone).
• podAntiAffinity decides which pods a pod must not share a topology domain with.
Both deal with pod-to-pod relationships inside the Kubernetes cluster.
• Inter-pod affinity and anti-affinity can be even more useful with higher-level
collections such as ReplicaSets, StatefulSets, and Deployments: a set of workloads
is easily configured to live in the same defined topology (e.g. the same node); see
the sketch after this list.
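A sketch of that pattern (not part of the lab below): a Deployment whose replicas
repel each other by hostname, so each replica lands on a different node. Names and
labels are illustrative:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - web
            topologyKey: kubernetes.io/hostname   # at most one replica per node
      containers:
      - name: web
        image: nginx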
Pod Affinity
Make mysql pod-affine to nginx, so the two must run on the same node:
[root@server2 schedu]# kubectl delete -f pod3.yaml
pod "node-affinity" deleted
[root@server2 schedu]# vim pod4.yaml
[root@server2 schedu]# cat pod4.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx
---
apiVersion: v1
kind: Pod
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  containers:
  - name: mysql
    image: mysql
    env:
    - name: "MYSQL_ROOT_PASSWORD"
      value: "westos"
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - nginx
        topologyKey: kubernetes.io/hostname
[root@server2 schedu]# kubectl apply -f pod4.yaml
pod/nginx created
pod/mysql created
[root@server2 schedu]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
mysql 1/1 Running 0 26s 10.244.5.106 server3 <none> <none>
nginx 1/1 Running 0 26s 10.244.5.105 server3 <none> <none>
Pod Anti-Affinity
Make mysql anti-affine to nginx, so the two must run on different nodes:
[root@server2 schedu]# kubectl delete -f pod4.yaml
pod "nginx" deleted
pod "mysql" deleted
[root@server2 schedu]# vim pod4.yaml    # change podAffinity to podAntiAffinity (file line 26)
26     podAntiAffinity:
[root@server2 schedu]# kubectl apply -f pod4.yaml
pod/nginx created
pod/mysql created
[root@server2 schedu]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
mysql 1/1 Running 0 2s 10.244.5.108 server4 <none> <none>
nginx 1/1 Running 0 2s 10.244.5.107 server3 <none> <none>
Taints
• Node affinity is a property defined on a pod that attracts it to a set of nodes.
Taints are the opposite: a taint lets a node repel pods, or even evict them.
• A taint is a property of a node. Once set, Kubernetes will not schedule pods onto
that node. The counterpart on the pod side is Tolerations: if a pod tolerates a
node's taints, Kubernetes ignores them and may (but is not required to) schedule the
pod there.
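Taint syntax is key=value:effect, with three effects: NoSchedule (no new pods),
PreferNoSchedule (avoid if possible), and NoExecute (also evicts running pods).
A quick reference, with an illustrative key/value:
kubectl taint nodes server3 key=value:NoSchedule     # add a taint
kubectl taint nodes server3 key=value:NoSchedule-    # trailing "-" removes it
kubectl describe nodes server3 | grep Taints         # inspect a node's taints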
nodeName ignores any taint
[root@server2 schedu]# cat pod1.yml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  nodeName: server2
[root@server2 schedu]# kubectl apply -f pod1.yml
pod/nginx created
server2's taint:
[root@server2 schedu]# kubectl describe nodes server2 | grep Taints
Taints: node-role.kubernetes.io/master:NoSchedule
The pod is running on server2, confirming that nodeName masks the taint (the
scheduler, which enforces NoSchedule, is bypassed entirely).
[root@server2 schedu]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx 1/1 Running 0 41s 10.244.0.21 server2 <none> <none>
Selecting a node by label
[root@server2 schedu]# kubectl delete -f pod1.yml
pod "nginx" deleted
[root@server2 schedu]# cat pod2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
  nodeSelector:
    roles: master
[root@server2 schedu]# kubectl apply -f pod2.yaml
pod/nginx created
[root@server2 schedu]# kubectl label nodes server2 roles=master
node/server2 labeled
Verify that server2 now has the label roles=master:
[root@server2 schedu]# kubectl get node --show-labels
NAME STATUS ROLES AGE VERSION LABELS
server2 Ready control-plane,master 11d v1.21.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,kubernetes.io/arch=amd64,kubernetes.io/hostname=server2,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=,roles=master
server3 Ready <none> 11d v1.21.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,kubernetes.io/arch=amd64,kubernetes.io/hostname=server3,kubernetes.io/os=linux
server4 Ready <none> 11d v1.21.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=server4,kubernetes.io/os=linux
The pod stays Pending: the master's taint takes precedence over label selection.
[root@server2 schedu]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx 0/1 Pending 0 73s <none> <none> <none> <none>
Remove the label:
[root@server2 schedu]# kubectl label nodes server2 roles-
node/server2 not labeled
[root@server2 schedu]# kubectl get node --show-labels
NAME STATUS ROLES AGE VERSION LABELS
server2 Ready control-plane,master 11d v1.21.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,kubernetes.io/arch=amd64,kubernetes.io/hostname=server2,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
server3 Ready <none> 11d v1.21.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,kubernetes.io/arch=amd64,kubernetes.io/hostname=server3,kubernetes.io/os=linux
server4 Ready <none> 11d v1.21.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=server4,kubernetes.io/os=linux
Tolerations
The key, value, and effect defined under tolerations must match the taint set on the node:
• If operator is Exists, value can be omitted.
• If operator is Equal, the toleration's value must equal the taint's value.
• If operator is not specified, it defaults to Equal.
There are also two special cases (see the sketch after this list):
• Omitting key together with operator Exists matches every key and value, tolerating all taints.
• Omitting effect matches every effect.
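A sketch of the three common forms; key and value are illustrative:
tolerations:
- key: "key"               # Equal: key, value and effect must all match the taint
  operator: "Equal"
  value: "value"
  effect: "NoSchedule"
- key: "key"               # Exists: value omitted, any value for this key matches
  operator: "Exists"
  effect: "NoSchedule"
- operator: "Exists"       # no key, no effect: tolerates every taint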
NoSchedule
Set the taint key=value:NoSchedule on server3:
[root@server2 schedu]# kubectl taint nodes server3 key=value:NoSchedule
node/server3 tainted
[root@server2 schedu]# cat pod2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
[root@server2 schedu]# kubectl apply -f pod2.yaml
pod/nginx created
[root@server2 schedu]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx 1/1 Running 0 7s 10.244.6.32 server4 <none> <none>
server3 is tainted, so the pod lands on server4.
Now label server3 with roles=master and require that label via nodeSelector:
[root@server2 schedu]# vim pod2.yaml
[root@server2 schedu]# cat pod2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
  nodeSelector:
    roles: master
  #tolerations:
  #- operator: "Exists"
  #  effect: "NoSchedule"
[root@server2 schedu]# kubectl label nodes server3 roles=master
node/server3 labeled
The pod is Pending:
[root@server2 schedu]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx 0/1 Pending 0 37s <none> <none> <none> <none>
Uncomment the toleration in the pod spec; it goes Running:
[root@server2 schedu]# cat pod2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
  nodeSelector:
    roles: master
  tolerations:
  - operator: "Exists"
    effect: "NoSchedule"
[root@server2 schedu]# kubectl apply -f pod2.yaml
pod/nginx configured
[root@server2 schedu]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx 1/1 Running 0 57s 10.244.5.108 server3 <none> <none>
NoExecute
Remove the NoSchedule taint:
[root@server2 schedu]# kubectl taint nodes server3 key:NoSchedule-
node/server3 untainted
Create two deployments:
[root@server2 schedu]# kubectl apply -f deployment.yaml
deployment.apps/nginx-deployment created
deployment.apps/myapp-deployment created
[root@server2 schedu]# cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: myapp:v1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
  labels:
    app: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:v2
[root@server2 schedu]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
myapp-deployment-67f8c948cf-6mzm5 1/1 Running 0 42s 10.244.5.113 server3 <none> <none>
myapp-deployment-67f8c948cf-bg7fl 1/1 Running 0 42s 10.244.5.110 server3 <none> <none>
myapp-deployment-67f8c948cf-s7m5b 1/1 Running 0 42s 10.244.5.111 server3 <none> <none>
mychart-6675bd6ffd-dh8hs 1/1 Running 1 22h 10.244.5.101 server3 <none> <none>
nginx 1/1 Running 0 9m12s 10.244.5.109 server3 <none> <none>
nginx-deployment-6456d7c676-h88rx 1/1 Running 0 42s 10.244.5.114 server3 <none> <none>
nginx-deployment-6456d7c676-ktkdv 1/1 Running 0 42s 10.244.5.115 server3 <none> <none>
nginx-deployment-6456d7c676-w5gkt 1/1 Running 0 42s 10.244.5.112 server3 <none> <none>
All the pods are on server3. Now add a NoExecute taint to server3:
[root@server2 schedu]# kubectl taint node server3 key1=v1:NoExecute
node/server3 tainted
[root@server2 schedu]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
[root@server2 schedu]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
myapp-deployment-67f8c948cf-pn8g5 0/1 ContainerCreating 0 24s <none> server4 <none> <none>
myapp-deployment-67f8c948cf-vbbsr 0/1 ContainerCreating 0 20s <none> server4 <none> <none>
myapp-deployment-67f8c948cf-xkp99 0/1 ContainerCreating 0 24s <none> server4 <none> <none>
mychart-6675bd6ffd-hcs56 0/1 ContainerCreating 0 24s <none> server4 <none> <none>
nginx-deployment-6456d7c676-6fsgx 0/1 ContainerCreating 0 24s <none> server4 <none> <none>
nginx-deployment-6456d7c676-dg6fx 0/1 ContainerCreating 0 22s <none> server4 <none> <none>
nginx-deployment-6456d7c676-ppz59 0/1 ContainerCreating 0 24s <none> server4 <none> <none>
The pods were evicted from server3 and recreated on server4.
With a matching toleration, a pod can run on server3 again. Note that nodeName
bypasses the scheduler (so NoSchedule never applies to it), but NoExecute acts on
running pods, so the toleration is still required:
[root@server2 schedu]# cat pod1.yml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  nodeName: server3
  tolerations:
  - key: "key1"
    operator: "Equal"
    value: "v1"
    effect: "NoExecute"
[root@server2 schedu]# kubectl apply -f pod1.yml
pod/nginx created
[root@server2 schedu]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx 1/1 Running 0 2s 10.244.5.116 server3 <none> <none>
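A NoExecute toleration may also carry tolerationSeconds, bounding how long the pod
may stay on the tainted node before being evicted. A hedged sketch reusing the taint above:
tolerations:
- key: "key1"
  operator: "Equal"
  value: "v1"
  effect: "NoExecute"
  tolerationSeconds: 300   # evicted 300s after the taint appears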
cordon, drain, delete
• Other commands that influence pod scheduling are cordon, drain, and delete. After
any of them, newly created pods will no longer be scheduled onto the node, but the
three differ in how disruptive they are.
cordon: stop scheduling
The gentlest option: it only marks the node SchedulingDisabled. New pods are not
scheduled there, but pods already on the node are unaffected and keep serving traffic.
[root@server2 schedu]# kubectl cordon server3
node/server3 cordoned
[root@server2 schedu]# kubectl get node
NAME STATUS ROLES AGE VERSION
server2 Ready control-plane,master 11d v1.21.3
server3 Ready,SchedulingDisabled <none> 11d v1.21.3
server4 Ready <none> 11d v1.21.3
Recover:
[root@server2 schedu]# kubectl uncordon server3
node/server3 uncordoned
[root@server2 schedu]# kubectl get node
NAME STATUS ROLES AGE VERSION
server2 Ready control-plane,master 11d v1.21.3
server3 Ready <none> 11d v1.21.3
server4 Ready <none> 11d v1.21.3
drain: evict the node
First evicts the pods on the node (they are recreated on other nodes), then marks
the node SchedulingDisabled.
[root@server2 schedu]# kubectl drain server3
node/server3 cordoned
evicting pod "web-1"
evicting pod "coredns-9d85f5447-mgg2k"
pod/coredns-9d85f5447-mgg2k evicted
pod/web-1 evicted
node/server3 evicted
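On nodes running DaemonSet-managed pods (kube-proxy, CNI agents and the like) a
plain drain refuses to proceed; in practice it is usually invoked with extra flags
(on this v1.21 cluster --delete-emptydir-data is the current spelling of the older
--delete-local-data):
kubectl drain server3 --ignore-daemonsets --delete-emptydir-data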
Recover:
[root@server2 schedu]# kubectl uncordon server3
node/server3 uncordoned
delete: remove the node
The most drastic option: the pods on the node are evicted and recreated elsewhere,
then the node object is deleted from the master, which loses control of it. To bring
the node back, log in to it and restart kubelet:
kubectl delete node server3
systemctl restart kubelet    # the node re-registers itself and becomes schedulable again