Goal:
Understand how affinity and anti-affinity work.
1 Affinity and anti-affinity
2 Example
3 Summary
1 Affinity and anti-affinity
PodAffinity: pod affinity and anti-affinity (mutual-exclusion) scheduling policies
requiredDuringSchedulingIgnoredDuringExecution
Purpose:
Restricts the nodes a pod can run on, judged by the labels of pods already running on candidate nodes (rather than by the node's own labels).
Principle:
If one or more pods matching condition Y are already running on a node with label X, then this pod should (or, in the anti-affinity case, must not) be scheduled onto that node.
X refers to nodes, zones, regions, etc. in the cluster, declared via a node-label key;
the key's name is topologyKey, which expresses the topology scope a node belongs to.
Common kinds:
kubernetes.io/hostname
failure-domain.beta.kubernetes.io/zone
failure-domain.beta.kubernetes.io/region
(In newer Kubernetes versions the latter two are superseded by topology.kubernetes.io/zone and topology.kubernetes.io/region.)
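These topologyKey values come from labels that the kubelet or cloud provider sets on each Node object. As a sketch of what such labels look like on a node (the node name and the zone/region values here are hypothetical; the label keys are the ones listed above):

```yaml
apiVersion: v1
kind: Node
metadata:
  name: worker-1                                  # hypothetical node name
  labels:
    kubernetes.io/hostname: worker-1              # set by the kubelet
    failure-domain.beta.kubernetes.io/zone: zone-a      # hypothetical zone
    failure-domain.beta.kubernetes.io/region: region-1  # hypothetical region
```

With topologyKey: kubernetes.io/hostname each node is its own topology domain; with the zone key, all nodes sharing a zone value count as one domain.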
Pods are namespaced, and condition Y is expressed as a label selector.
The condition types for pod affinity and anti-affinity are likewise
requiredDuringSchedulingIgnoredDuringExecution and
preferredDuringSchedulingIgnoredDuringExecution.
Pod affinity is defined in the podAffinity sub-field of the pod spec's affinity field;
pod anti-affinity is defined in the podAntiAffinity sub-field at the same level.
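A minimal sketch of where the two sub-fields sit (the app labels, pod name, and image are hypothetical): podAffinity attracts the pod to nodes already running matching pods, while podAntiAffinity repels it from them.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  affinity:
    podAffinity:        # attract: co-locate with pods labeled app=cache
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: cache
        topologyKey: kubernetes.io/hostname
    podAntiAffinity:    # repel: avoid nodes already running app=web pods
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: web
        topologyKey: kubernetes.io/hostname
  containers:
  - name: web
    image: nginx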
Examine the anti-affinity of ceilometer-api:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    application: ceilometer
    component: api
    release_group: ceilometer
  name: ceilometer-api
......
spec:
  template:
    metadata:
      labels:
        application: ceilometer
        component: api
        release_group: ceilometer
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: release_group
                operator: In
                values:
                - ceilometer
              - key: application
                operator: In
                values:
                - ceilometer
              - key: component
                operator: In
                values:
                - api
            topologyKey: kubernetes.io/hostname
......
Explanation:
Selector:
.spec.selector is an optional field that specifies a label selector, delimiting the set of pods the Deployment manages.
If specified, .spec.selector must match .spec.template.metadata.labels, or the object will be rejected by the API.
If .spec.selector is not specified, .spec.selector.matchLabels defaults to .spec.template.metadata.labels.
requiredDuringSchedulingIgnoredDuringExecution: a hard rule; a pod is only scheduled onto a node that satisfies it.
preferredDuringSchedulingIgnoredDuringExecution: a soft constraint; the scheduler prefers it but does not guarantee it.
topologyKey: specifies the topology domain used during scheduling; to work at per-node granularity via node labels, set it to kubernetes.io/hostname.
IgnoredDuringExecution: if labels change after scheduling so that the (anti-)affinity rule is no longer satisfied, the change is ignored for running pods; pods that no longer match the rule are not evicted from their node.
Anti-affinity is typically used to spread pods across different nodes.
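For comparison, a sketch of the soft variant: preferredDuringSchedulingIgnoredDuringExecution wraps the same term in a podAffinityTerm together with a weight from 1 to 100. Nodes satisfying the term score higher, but the pod is still scheduled even if no node satisfies it (app: web is a hypothetical label; only the affinity fragment of the pod spec is shown).

```yaml
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100                   # 1-100; higher = stronger preference
      podAffinityTerm:
        labelSelector:
          matchLabels:
            app: web
        topologyKey: kubernetes.io/hostname
```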
2 Example
The contents of a deployment YAML file (a Helm template) are as follows:
{{- if .Values.manifests.statefulset_st2actionrunner }}
{{- $envAll := . }}
{{- $dependencies := .Values.dependencies.actionrunner }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dozer-st2actionrunner
spec:
  selector:
    matchLabels:
      app: dozer-st2actionrunner
  replicas: {{ .Values.pod.replicas.st2actionrunner }}
{{ tuple $envAll | include "helm-toolkit.snippets.kubernetes_upgrades_deployment" | indent 2 }}
  template:
    metadata:
      labels:
        app: dozer-st2actionrunner
      annotations:
        configmap-etc-hash: {{ tuple "configmap-etc.yaml" . | include "helm-toolkit.utils.hash" }}
        configmap-postgres-hash: {{ tuple "configmap-postgres.yaml" . | include "helm-toolkit.utils.hash" }}
        configmap-rabbitmq-hash: {{ tuple "configmap-rabbitmq.yaml" . | include "helm-toolkit.utils.hash" }}
        configmap-st2-hash: {{ tuple "configmap-st2.yaml" . | include "helm-toolkit.utils.hash" }}
    spec:
      nodeSelector:
        {{ .Values.labels.node_selector_key }}: {{ .Values.labels.node_selector_value }}
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - dozer-st2actionrunner
            topologyKey: "kubernete