1. nodeSelector: a hard, proactive way to choose a node. Taints aside, it directly decides which node the Pod is scheduled onto, and the decision is made by matching the node's labels.
Example
kubectl label nodes [node-name] [label-key]=[label-value]
apiVersion: v1
kind: Pod
metadata:
  name: nodetest
  labels:
    app: nodetest
spec:
  restartPolicy: Always
  nodeSelector:
    disktype: ssd
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
    resources:
      requests:
        cpu: 500m
        memory: 64Mi
      limits:
        cpu: 600m
        memory: 128Mi
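With the manifest above, a typical workflow looks like the following (the node name node1 and the manifest filename nodetest.yaml are assumptions for illustration):

```shell
# label the target node so it matches the Pod's nodeSelector
kubectl label nodes node1 disktype=ssd
# create the Pod from the manifest above
kubectl apply -f nodetest.yaml
# the NODE column confirms which node the Pod landed on
kubectl get pod nodetest -o wide
```

If no node carries the disktype=ssd label, the Pod stays Pending; that is what makes nodeSelector a hard requirement.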
2. nodeAffinity: affinity, which comes in hard and soft forms. Hard affinity behaves much like nodeSelector: the key and value must match exactly. Soft affinity matches on a best-effort basis.
Example
Hard affinity
apiVersion: v1
kind: Pod
metadata:
  name: nodetest
  labels:
    app: nodetest
spec:
  restartPolicy: Always
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
    resources:
      requests:
        cpu: 500m
        memory: 64Mi
      limits:
        cpu: 600m
        memory: 128Mi
Soft affinity
apiVersion: v1
kind: Pod
metadata:
  name: nodetest
  labels:
    app: nodetest
spec:
  restartPolicy: Always
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd
            - ssh
            - abc
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
    resources:
      requests:
        cpu: 500m
        memory: 64Mi
      limits:
        cpu: 600m
        memory: 128Mi
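When several preferred terms are listed, the scheduler adds up the weight of every term a node satisfies and favors the node with the highest total score. A sketch of the affinity fragment with two weighted preferences (the zone=east label is a hypothetical example, not from the cluster above):

```yaml
# fragment of spec.affinity.nodeAffinity
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 80                 # a node with disktype=ssd gains 80 points
  preference:
    matchExpressions:
    - key: disktype
      operator: In
      values:
      - ssd
- weight: 20                 # a node with zone=east gains 20 points
  preference:
    matchExpressions:
    - key: zone
      operator: In
      values:
      - east
```

A node matching both terms scores 100 and is preferred over a node matching only one, but a node matching neither can still be chosen; that is the "best effort" nature of soft affinity.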
3. Taints: as I understand them, a taint is a passive marker. Used together, node selection, affinity, and taints distribute Pods across nodes more efficiently and sensibly.
Set a taint
kubectl taint nodes [node-name] key=value:[effect]
effect: NoSchedule (do not schedule new Pods onto the node), PreferNoSchedule (avoid scheduling onto the node when possible), NoExecute (do not schedule, and evict Pods already running on the node that do not tolerate the taint)
View a node's taints
kubectl describe nodes | grep -i taint
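A taint is removed by repeating the same expression with a trailing "-". For example (the node name node1 is hypothetical):

```shell
# add a NoSchedule taint to node1
kubectl taint nodes node1 disktype=ssd:NoSchedule
# remove that same taint (note the trailing "-")
kubectl taint nodes node1 disktype=ssd:NoSchedule-
```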
Once a taint is set on a node, it takes effect accordingly. In practice, taints are set to make better use of node resources, and tolerations are the matching mechanism on the Pod side: if a Pod's tolerations match a node's taints, the Pod is allowed through despite them.
Example
apiVersion: v1
kind: Pod
metadata:
  name: nodetest
  labels:
    app: nodetest
spec:
  restartPolicy: Always
  tolerations:
  - key: "disktype"
    operator: "Exists"
    effect: "NoSchedule"
  #tolerations:
  #- key: "disktype"
  #  operator: "Equal"
  #  value: "ssh"
  #  effect: "NoSchedule"
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
Toleration operators come in two forms: exact match and existence. Equal requires both key and value to match (key=value), while Exists only requires the taint's key to be present.
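For NoExecute taints, a toleration can also set tolerationSeconds, which lets an already-running Pod stay on the tainted node for a bounded time before being evicted. A sketch (the one-hour value is an arbitrary example):

```yaml
tolerations:
- key: "disktype"
  operator: "Equal"
  value: "ssd"
  effect: "NoExecute"
  tolerationSeconds: 3600   # tolerate the taint for one hour, then be evicted
```

Without tolerationSeconds, a matching toleration lets the Pod stay on the node indefinitely.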