1. Kubernetes Scheduling
The scheduler uses the Kubernetes watch mechanism to discover newly created pods in the cluster that have not yet been assigned to a node. For every unscheduled pod it finds, the scheduler picks a suitable node for it to run on.
kube-scheduler is the default scheduler of a Kubernetes cluster and is part of the cluster control plane. If you really want or need to, kube-scheduler is designed so that you can write your own scheduling component and use it in place of the default kube-scheduler.
Factors considered in a scheduling decision include: individual and collective resource requests, hardware/software/policy constraints, affinity and anti-affinity requirements, data locality, interference between workloads, and so on.
See the official Kubernetes scheduling documentation:
https://kubernetes.io/zh/docs/concepts/scheduling-eviction/kube-scheduler/
(1)nodeName
nodeName is the simplest method of node selection constraint, but it is generally not recommended.
If nodeName is specified in the PodSpec, it takes precedence over all other node-selection methods.
Some limitations of using nodeName to select nodes:
1. If the named node does not exist, the pod will not be run.
2. If the named node does not have enough resources to accommodate the pod, the pod fails.
3. Node names in cloud environments are not always predictable or stable.
[kubeadm@server1 scheduler]$ cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  nodeName: server3        # bind the pod to server3
The pod can only be scheduled onto server3:
(2)nodeSelector
nodeSelector is the simplest recommended form of node selection constraint.
Add a label to the node you want to select:
kubectl label nodes <node name> disktype=ssd
[kubeadm@server1 scheduler]$ cat pod2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    disktype: ssd
Label server3:
The pod now runs on server3 (because of the label).
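A minimal sketch of the commands involved, assuming the node is named server3 and the manifest above is saved as pod2.yaml:
kubectl label nodes server3 disktype=ssd     # add the label that nodeSelector matches
kubectl get nodes --show-labels              # confirm server3 now carries disktype=ssd
kubectl apply -f pod2.yaml
kubectl get pod nginx -o wide                # the NODE column should show server3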
(3)Affinity and anti-affinity
Node affinity
Put simply, node affinity uses label-based rules to constrain which nodes a pod can be scheduled onto.
requiredDuringSchedulingIgnoredDuringExecution: the rule must be satisfied.
preferredDuringSchedulingIgnoredDuringExecution: the rule is preferred but not mandatory.
IgnoredDuringExecution means that if a node's labels change while the pod is running and the affinity rule is no longer satisfied, the pod keeps running on that node.
Reference:
https://kubernetes.io/zh/docs/concepts/configuration/assign-pod-node
Required (must be satisfied):
[kubeadm@server1 scheduler]$ cat pod2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: node-affinity
spec:
  containers:
  - name: nginx
    image: nginx
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd
Because of the label, the pod is scheduled onto server3:
Now add a label to server2.
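A sketch of the labeling command, assuming the value sata that the edited manifest below matches:
kubectl label nodes server2 disktype=sata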
Then edit the manifest:
[kubeadm@server1 scheduler]$ cat pod2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: node-affinity
spec:
  containers:
  - name: nginx
    image: nginx
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - sata
The pod is now scheduled onto server2.
Preferred (soft requirement):
apiVersion: v1
kind: Pod
metadata:
  name: node-affinity
spec:
  containers:
  - name: nginx
    image: nginx
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: NotIn
            values:
            - server3
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd
server3 carries the label disktype=ssd.
The pod was previously on server2, and it is not moved to server3: disktype=ssd is only a preference (and the required rule above excludes server3 anyway).
Pod affinity and anti-affinity
Pod affinity:
Give the pod nginx a label:
[kubeadm@server1 diaodu]$ cat nginx.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: k8s/nginx
Affinity with the nginx pod's label:
apiVersion: v1
kind: Pod
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - nginx
        topologyKey: kubernetes.io/hostname
  containers:
  - name: mysql
    image: mysql:5.7
    env:
    - name: "MYSQL_ROOT_PASSWORD"
      value: "westos"
Pod anti-affinity:
apiVersion: v1
kind: Pod
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  affinity:
    podAntiAffinity:        # the only difference from the pod affinity example
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - nginx
        topologyKey: kubernetes.io/hostname
  containers:
  - name: mysql
    image: mysql:5.7
    env:
    - name: "MYSQL_ROOT_PASSWORD"
      value: "westos"
(4)Taints
A taint is set on a node to repel pods from being scheduled there, and can even evict pods that are already running.
Taints are paired with tolerations: a toleration on a pod allows it to tolerate a node's taints, so that pod can still be scheduled onto the node.
kubectl taint nodes nodename key=value:NoSchedule    # add a taint
kubectl describe nodes nodename | grep Taint         # query taints
kubectl taint nodes nodename key:NoSchedule-         # remove a taint
NoSchedule: pods are not scheduled onto the tainted node
PreferNoSchedule: a soft-policy version of NoSchedule
NoExecute: once it takes effect, running pods are evicted unless the pod has a matching toleration
server1 does not participate in scheduling because it carries a taint.
Create a Deployment controller:
[kubeadm@server2 scheduler]$ cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
server1 does not take part in scheduling.
To make server1 take part in scheduling:
Remove the taint from server1 and the pods are rescheduled.
In general, server1 is left out of scheduling because it already runs a lot of pods; it would only be made to participate if the overall number of pods became very large.
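A sketch of what removing that taint looks like, assuming server1 is a kubeadm control-plane node carrying the default master taint (verify with describe first):
kubectl describe nodes server1 | grep Taint
# typically: node-role.kubernetes.io/master:NoSchedule
kubectl taint nodes server1 node-role.kubernetes.io/master:NoSchedule-    # remove the taint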
Taint tolerations
Add a taint to server3, for example:
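A sketch of the taint command, using the hypothetical key/value pair key1=v1 that the toleration in the manifest below matches:
kubectl taint nodes server3 key1=v1:NoSchedule
kubectl describe nodes server3 | grep Taint    # confirm the taint is in place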
server3 now does not participate in scheduling.
Add a toleration to the manifest:
[kubeadm@server2 scheduler]$ cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
      tolerations:
      - key: "key1"
        operator: "Equal"
        value: "v1"
        effect: "NoSchedule"
server3 now takes part in scheduling again.
Add an eviction (NoExecute) taint to server3, and the pods running on server3 are all evicted and rescheduled onto server2.
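A sketch of the eviction taint, reusing the hypothetical key1=v1 from above:
kubectl taint nodes server3 key1=v1:NoExecute
kubectl get pod -o wide    # pods that were on server3 are recreated on server2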
Change the toleration effect in the manifest:
effect: "NoExecute"
server3 can then take part in scheduling again.
Tolerate all taints:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 6
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
      tolerations:
      - operator: "Exists"
Other commands that affect pod scheduling are cordon, drain, and delete. After any of them, newly created pods are no longer scheduled onto the node, but the three differ in how drastic they are.
cordon (stop scheduling):
Pods already on the node keep running; new pods are no longer scheduled to it.
Resume scheduling:
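A sketch of the cordon/uncordon cycle, using server3 as an example:
kubectl cordon server3
kubectl get nodes            # server3 shows Ready,SchedulingDisabled
kubectl uncordon server3     # resume scheduling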
drain (drain a node):
The pods on the node are evicted first and recreated on other nodes; the node is then marked SchedulingDisabled.
Remove the taints from server2 and server3.
Drain the node:
Resume scheduling:
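A sketch of draining and restoring server3 (DaemonSet pods cannot be evicted, hence the flag):
kubectl drain server3 --ignore-daemonsets
kubectl get nodes            # server3 is marked SchedulingDisabled
kubectl uncordon server3     # resume scheduling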
delete (delete a node):
This is the most drastic option.
The node is deleted from the master; the master loses control over that node and the pods on it. To bring the node back, restart the kubelet service on that node.
Delete:
Restart kubelet on server3, and the node automatically rejoins the cluster.
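A sketch of the full delete-and-recover cycle, using server3 as an example:
kubectl delete node server3      # run on the master
# then, on server3:
systemctl restart kubelet        # kubelet re-registers the node with the cluster
kubectl get nodes                # back on the master, server3 returns as Ready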