1. Affinity and Anti-affinity
Pod scheduling mechanisms:
- nodeSelector
- affinity and anti-affinity
- nodeName: pin the Pod to a named node
- Pod topology spread constraints
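Of the mechanisms above, only affinity gets a full example below; a minimal sketch of nodeSelector, nodeName, and topology spread in one Pod (the `disktype=ssd` and `app=web` labels are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scheduling-demo
  labels:
    app: web
spec:
  # nodeSelector: the node must carry this label
  nodeSelector:
    disktype: ssd
  # nodeName would bypass the scheduler entirely and pin to one node:
  # nodeName: node1
  # Topology spread: keep Pods labeled app=web evenly spread across nodes
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: kubernetes.io/hostname
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: web
  containers:
  - name: web
    image: nginx
```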
Node affinity example:
apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      # Hard requirement: must be satisfied
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: topology.kubernetes.io/zone
            # operator can be In, NotIn, Exists, or DoesNotExist
            # NotIn and DoesNotExist express anti-affinity
            operator: In
            values:
            - antarctica-east1
            - antarctica-west1
      # Soft preference: satisfied if possible
      preferredDuringSchedulingIgnoredDuringExecution:
      # weight range: 1-100
      - weight: 1
        preference:
          matchExpressions:
          - key: another-node-label-key
            operator: In
            values:
            - another-node-label-value
  containers:
  - name: with-node-affinity
    image: registry.k8s.io/pause:2.0
Inter-pod affinity and anti-affinity:
apiVersion: v1
kind: Pod
metadata:
  name: with-pod-affinity
spec:
  affinity:
    podAffinity:
      # inter-pod affinity
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: security
            operator: In
            values:
            - S1
        topologyKey: topology.kubernetes.io/zone
    podAntiAffinity:
      # inter-pod anti-affinity
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: security
              operator: In
              values:
              - S2
          topologyKey: topology.kubernetes.io/zone
  containers:
  - name: with-pod-affinity
    image: registry.k8s.io/pause:2.0
Pod anti-affinity example: each Pod repels the very label it carries, so the three replicas are forced onto different nodes:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-cache
spec:
  selector:
    matchLabels:
      app: store
  replicas: 3
  template:
    metadata:
      labels:
        app: store
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - store
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: redis-server
        image: redis:3.2-alpine
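The same technique can be combined with inter-pod affinity to co-locate a second Deployment with the cache: anti-affinity to itself spreads its replicas apart, while affinity to `app: store` puts each replica next to a redis-cache Pod. A sketch (the `web-store` name and nginx image are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server
spec:
  selector:
    matchLabels:
      app: web-store
  replicas: 3
  template:
    metadata:
      labels:
        app: web-store
    spec:
      affinity:
        podAntiAffinity:        # spread web-server replicas apart...
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - web-store
            topologyKey: "kubernetes.io/hostname"
        podAffinity:            # ...but each one next to a store Pod
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - store
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: web-app
        image: nginx:1.16-alpine
```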
2. Taints and Tolerations
Add a taint to node1:
kubectl taint nodes node1 key1=value1:NoSchedule
Remove that taint again (note the trailing "-"):
kubectl taint nodes node1 key1=value1:NoSchedule-
# A taint consists of a key, a value, and an effect (NoSchedule, PreferNoSchedule, or NoExecute)
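The same taint, as it is stored on the Node object (sketch of the relevant fragment):

```yaml
apiVersion: v1
kind: Node
metadata:
  name: node1
spec:
  taints:
  - key: key1
    value: value1
    effect: NoSchedule
```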
Toleration example:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  tolerations:
  # Exists: matches any taint with this key and effect; no value is needed
  - key: "example-key"
    operator: "Exists"
    effect: "NoSchedule"

Alternatively, with the default operator Equal, a value must be set and must equal the taint's value:

  tolerations:
  - key: "example-key"
    operator: "Equal"
    value: "value1"
    effect: "NoSchedule"
Special cases:
- If the key is empty (with operator Exists), the toleration matches every key, value, and effect, i.e. it tolerates any taint.
- If the effect is empty, the toleration matches all taints with the same key, regardless of effect.
- A Pod is scheduled onto a node only if it tolerates all of that node's taints; if even one taint is left untolerated, the Pod will not be scheduled there.
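The two special cases above can be written as tolerations like these (`example-key` is hypothetical):

```yaml
tolerations:
# Empty key + Exists: tolerates every taint on the node
- operator: "Exists"
# Empty effect: matches all taints with key "example-key", whatever the effect
- key: "example-key"
  operator: "Exists"
```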
Use cases:
- Dedicated nodes: reserve nodes for a specific service. Taint the nodes and label them with the same key/value, then give that service's Pods a nodeSelector (or node affinity) for the label plus a matching toleration.
- Nodes with special hardware: same pattern as above.
- Taint-based eviction: when a node becomes unhealthy, taints are added to it so that the Pods running there are evicted.
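A sketch of the dedicated-node pattern described above, assuming a hypothetical `dedicated=gpu` key used for both the taint and the label:

```yaml
# kubectl taint nodes node1 dedicated=gpu:NoSchedule
# kubectl label nodes node1 dedicated=gpu
apiVersion: v1
kind: Pod
metadata:
  name: gpu-workload
spec:
  nodeSelector:
    dedicated: gpu        # only land on the labeled node...
  tolerations:
  - key: "dedicated"      # ...and tolerate its taint
    operator: "Equal"
    value: "gpu"
    effect: "NoSchedule"
  containers:
  - name: app
    image: nginx
```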
Built-in taints
- node.kubernetes.io/not-ready: the node is not ready. Corresponds to the node condition Ready being "False".
- node.kubernetes.io/unreachable: the node controller cannot reach the node. Corresponds to the node condition Ready being "Unknown".
- node.kubernetes.io/memory-pressure: the node has memory pressure.
- node.kubernetes.io/disk-pressure: the node has disk pressure.
- node.kubernetes.io/pid-pressure: the node has PID pressure.
- node.kubernetes.io/network-unavailable: the node's network is unavailable.
- node.kubernetes.io/unschedulable: the node is unschedulable.
- node.cloudprovider.kubernetes.io/uninitialized: when the kubelet starts with an "external" cloud provider, it adds this taint to mark the node as unusable; after a controller in cloud-controller-manager initializes the node, the kubelet removes the taint.
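For eviction taints such as not-ready and unreachable (effect NoExecute), a Pod can bound how long it keeps running after the taint appears via tolerationSeconds; a sketch:

```yaml
tolerations:
- key: "node.kubernetes.io/unreachable"
  operator: "Exists"
  effect: "NoExecute"
  tolerationSeconds: 6000   # evicted 6000s after the taint is added
```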