I. Scheduling Policy
II. Scheduling Methods
nodeName
Create the Pod config file: vim nodename.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  nodeName: k8s4    # if the named node cannot be found, the Pod stays Pending
Clean up: kubectl delete -f nodename.yaml
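To verify the placement, apply the manifest and check the NODE column (a quick check using standard kubectl commands):
kubectl apply -f nodename.yaml
kubectl get pod nginx -o wide    # NODE shows k8s4; if k8s4 did not exist, STATUS would stay Pending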
nodeSelector
Create the Pod config file: vim nodeselector.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    disktype: ssd
Label the node from the command line: kubectl label nodes k8s4 disktype=ssd
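A quick verification that the label is in place and the Pod followed it (standard kubectl commands):
kubectl get nodes --show-labels | grep disktype
kubectl get pod nginx -o wide    # NODE shows k8s4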
# Removing the label does not affect Pods that are already running
Remove the label: kubectl label nodes k8s4 disktype-
Clean up: kubectl delete -f nodeselector.yaml
III. Affinity and Anti-Affinity
Scheduling between Pods and nodes
Create the Pod config file: vim nodeaffinity.yaml
apiVersion: v1
kind: Pod
metadata:
  name: node-affinity
spec:
  containers:
  - name: nginx
    image: nginx
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:    # hard requirement
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In    # value must be in the list below
            values:
            - ssd
            - fc
      preferredDuringSchedulingIgnoredDuringExecution:   # soft preference
      - weight: 1    # weight
        preference:
          matchExpressions:
          - key: kubernetes.io/hostname
            operator: NotIn    # value must not be in the list below
            values:
            - k8s3    # avoid this node when possible
Clean up: kubectl delete -f nodeaffinity.yaml
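A quick check of where the Pod landed (this assumes k8s4 still carries the disktype=ssd label from the previous step):
kubectl apply -f nodeaffinity.yaml
kubectl get pod node-affinity -o wide    # lands on a node labeled disktype=ssd or fc, preferring nodes other than k8s3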
Scheduling between Pods
Create the Pod affinity file: vim podaffinity.yaml
apiVersion: apps/v1
kind: Deployment    # controller
metadata:
  name: nginx-deployment    # controller name
  labels:    # labels
    app: nginx
spec:
  replicas: 3    # number of replicas
  selector:
    matchLabels:    # manage Pods carrying this label
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
      affinity:
        podAffinity:    # Pod affinity
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - nginx
            topologyKey: "kubernetes.io/hostname"
With this affinity in place, all Pods end up on the same node.
Clean up: kubectl delete -f podaffinity.yaml
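A quick verification (standard kubectl command):
kubectl get pod -o wide    # all three nginx-deployment Pods share one NODE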
Create the Pod anti-affinity file: vim podantiaffinity.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - nginx
            topologyKey: "kubernetes.io/hostname"
With this anti-affinity, no two of these Pods can share a node.
Clean up: kubectl delete -f podantiaffinity.yaml
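A quick verification; note that with a required anti-affinity rule, a replica stays Pending whenever there are more replicas than schedulable nodes:
kubectl get pod -o wide    # each Pod sits on a different NODE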
IV. Taints
1. Create the Pod config file: vim taint.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: web
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - image: nginx
        name: nginx
The Pods are scheduled normally across k8s3 and k8s4, the two nodes without taints.
2. Set a NoSchedule taint on the node; Pods that are already running are not affected.
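The original notes omit the taint command itself; a minimal sketch, where the key k1 matches the removal command shown at the end of this section (kubectl taint node k8s3 k1-) and the value v1 is an assumption:
kubectl taint node k8s3 k1=v1:NoSchedule    # v1 is an assumed value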
3. Change the taint effect to NoExecute; all Pods are evicted to nodes without the taint.
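A sketch under the same assumption about the value. Taints are identified by key plus effect, so this adds a NoExecute taint alongside the NoSchedule one; the later cleanup (kubectl taint node k8s3 k1-) removes both:
kubectl taint node k8s3 k1=v1:NoExecute    # evicts running Pods that do not tolerate it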
Clean up: kubectl delete -f taint.yaml
Setting tolerations
Create and edit the file: vim taint.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: web
  name: web
spec:
  replicas: 6
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      tolerations:    # tolerations
      - operator: Exists    # match any taint key that exists
        effect: NoSchedule    # taint effect; comment this line out to tolerate taints of every effect
      containers:
      - image: nginx
        name: nginx
Now k8s2, which carries a NoSchedule taint, also takes part in scheduling; k8s3 does not, because its taint effect is NoExecute.
Comment out the effect line and every node takes part in scheduling.
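A quick way to inspect the taints and the resulting spread (standard kubectl commands):
kubectl describe node k8s3 | grep Taint
kubectl get pod -o wide    # NoSchedule-tainted nodes now receive Pods; NoExecute-tainted ones do not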
Clean up: kubectl delete -f taint.yaml
Remember to remove the taint: kubectl taint node k8s3 k1-
Example: cordon (stop scheduling)
kubectl create deployment demo --image nginx --replicas 3
kubectl cordon k8s3
kubectl get node
kubectl scale deployment demo --replicas 6
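After the cordon, k8s3 is marked SchedulingDisabled: Pods already on it keep running, while the three new replicas from the scale land elsewhere. A quick check:
kubectl get node k8s3    # STATUS shows Ready,SchedulingDisabled
kubectl get pod -o wide    # new demo Pods avoid k8s3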
Restore: kubectl uncordon k8s3
Example: drain (evict Pods from a node)
kubectl drain k8s3 --ignore-daemonsets
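drain first cordons the node and then evicts its Pods, so scheduling stays disabled until you uncordon. A quick check and recovery (standard kubectl commands):
kubectl get pod -o wide    # no demo Pods remain on k8s3
kubectl uncordon k8s3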
Example: delete (remove a node)
kubectl delete nodes k8s3
Restore: restart the kubelet service on k8s3 and the node rejoins the cluster.
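A sketch of the recovery, assuming kubelet runs under systemd on k8s3:
systemctl restart kubelet    # run on k8s3; the kubelet re-registers the node
kubectl get node    # k8s3 reappears and turns Ready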