Advanced Pods in Practice (Part 2)

1 Node Affinity

Node affinity is conceptually similar to nodeSelector: it lets you constrain which nodes a Pod can be scheduled onto based on node labels. There are two types of node affinity:

  • requiredDuringSchedulingIgnoredDuringExecution: the scheduler places the Pod only when the rule is satisfied. This works like nodeSelector, but with a more expressive syntax. Some node must satisfy the rule; it is a mandatory condition, i.e. hard affinity.
  • preferredDuringSchedulingIgnoredDuringExecution: the scheduler tries to find a node that satisfies the rule, but if none matches it still schedules the Pod. Matching is best-effort rather than mandatory, i.e. soft affinity.
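A minimal sketch of where the two fields sit in a Pod spec (the label key disktype and its values are hypothetical examples, not from the cluster used below):

apiVersion: v1
kind: Pod
metadata:
  name: affinity-skeleton
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:   # hard: must match
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values: ["ssd"]
      preferredDuringSchedulingIgnoredDuringExecution:  # soft: best effort
      - weight: 1
        preference:
          matchExpressions:
          - key: disktype
            operator: In
            values: ["nvme"]
  containers:
  - name: app
    image: docker.io/ikubernetes/myapp:v1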

[root@master1 ~]# kubectl explain pods.spec.affinity
KIND:     Pod
VERSION:  v1

RESOURCE: affinity <Object>

DESCRIPTION:
     If specified, the pod's scheduling constraints

     Affinity is a group of affinity scheduling rules.

FIELDS:
   nodeAffinity <Object>
     Describes node affinity scheduling rules for the pod.

   podAffinity  <Object>
     Describes pod affinity scheduling rules (e.g. co-locate this pod in the
     same node, zone, etc. as some other pod(s)).

   podAntiAffinity      <Object>
     Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod
     in the same node, zone, etc. as some other pod(s)).
[root@master1 ~]# kubectl explain pods.spec.affinity.nodeAffinity
KIND:     Pod
VERSION:  v1

RESOURCE: nodeAffinity <Object>

DESCRIPTION:
     Describes node affinity scheduling rules for the pod.

     Node affinity is a group of node affinity scheduling rules.

FIELDS:
   preferredDuringSchedulingIgnoredDuringExecution      <[]Object>
     The scheduler will prefer to schedule pods to nodes that satisfy the
     affinity expressions specified by this field, but it may choose a node that
     violates one or more of the expressions. The node that is most preferred is
     the one with the greatest sum of weights, i.e. for each node that meets all
     of the scheduling requirements (resource request, requiredDuringScheduling
     affinity expressions, etc.), compute a sum by iterating through the
     elements of this field and adding "weight" to the sum if the node matches
     the corresponding matchExpressions; the node(s) with the highest sum are
     the most preferred.

   requiredDuringSchedulingIgnoredDuringExecution       <Object>
     If the affinity requirements specified by this field are not met at
     scheduling time, the pod will not be scheduled onto the node. If the
     affinity requirements specified by this field cease to be met at some point
     during pod execution (e.g. due to an update), the system may or may not try
     to eventually evict the pod from its node.
[root@master1 ~]# kubectl explain pods.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution
KIND:     Pod
VERSION:  v1

RESOURCE: requiredDuringSchedulingIgnoredDuringExecution <Object>

DESCRIPTION:
     If the affinity requirements specified by this field are not met at
     scheduling time, the pod will not be scheduled onto the node. If the
     affinity requirements specified by this field cease to be met at some point
     during pod execution (e.g. due to an update), the system may or may not try
     to eventually evict the pod from its node.

     A node selector represents the union of the results of one or more label
     queries over a set of nodes; that is, it represents the OR of the selectors
     represented by the node selector terms.

FIELDS:
   nodeSelectorTerms    <[]Object> -required-
     Required. A list of node selector terms. The terms are ORed.

[root@master1 ~]# kubectl explain pods.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms
KIND:     Pod
VERSION:  v1

RESOURCE: nodeSelectorTerms <[]Object>

DESCRIPTION:
     Required. A list of node selector terms. The terms are ORed.

     A null or empty node selector term matches no objects. The requirements of
     them are ANDed. The TopologySelectorTerm type implements a subset of the
     NodeSelectorTerm.

FIELDS:
   matchExpressions     <[]Object>
     A list of node selector requirements by node's labels.

   matchFields  <[]Object>
     A list of node selector requirements by node's fields.
[root@master1 ~]# kubectl explain pods.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms.matchFields
KIND:     Pod
VERSION:  v1

RESOURCE: matchFields <[]Object>

DESCRIPTION:
     A list of node selector requirements by node's fields.

     A node selector requirement is a selector that contains values, a key, and
     an operator that relates the key and values.

FIELDS:
   key  <string> -required-
     The label key that the selector applies to.

   operator     <string> -required-
     Represents a key's relationship to a set of values. Valid operators are In,
     NotIn, Exists, DoesNotExist, Gt, and Lt.

     Possible enum values:
     - `"DoesNotExist"`
     - `"Exists"`
     - `"Gt"`
     - `"In"`
     - `"Lt"`
     - `"NotIn"`

   values       <[]string>
     An array of string values. If the operator is In or NotIn, the values array
     must be non-empty. If the operator is Exists or DoesNotExist, the values
     array must be empty. If the operator is Gt or Lt, the values array must
     have a single element, which will be interpreted as an integer. This array
     is replaced during a strategic merge patch.

key: the node label key to check

operator: the relation between the key and the values (equality/set membership or inequality)

values: the given value(s) to compare against
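matchFields, by contrast, selects by a node field rather than a node label. A minimal hedged sketch that pins a pod to a node by name (metadata.name is the field key the scheduler supports for this):

affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchFields:            # match a node field instead of a node label
        - key: metadata.name
          operator: In
          values: ["node1"]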

Example: hard affinity with requiredDuringSchedulingIgnoredDuringExecution

# Upload myapp-v1.tar.gz to node1
[root@node1 ~]# ctr -n=k8s.io images import myapp-v1.tar.gz 
unpacking docker.io/ikubernetes/myapp:v1 (sha256:943e72fda89669b10a29723a10f261836dca71aa9b181d362020bb3b715bc9a1)...done
# Write pod-nodeaffinity-demo.yaml
[root@master1 affinity]# vim pod-nodeaffinity-demo.yaml
[root@master1 affinity]# cat pod-nodeaffinity-demo.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: pod-node-affinity-demo
  namespace: default
  labels:
    app: myapp
    tier: frontend
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: zone
            operator: In
            values:
            - foo
            - bar
  containers:
  - name: myapp
    image: docker.io/ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
# Create the pod and see whether it can be scheduled.
# The rule demands a node whose zone label value is foo or bar; the pod can only land on such a node.
[root@master1 affinity]# kubectl apply -f pod-nodeaffinity-demo.yaml
pod/pod-node-affinity-demo created
[root@master1 affinity]# kubectl get pods -owide    
NAME                     READY   STATUS    RESTARTS   AGE    IP       NODE     NOMINATED NODE   READINESS GATES
pod-node-affinity-demo   0/1     Pending   0          2m2s   <none>   <none>   <none>           <none>
[root@master1 affinity]# kubectl describe pod pod-node-affinity-demo
Name:             pod-node-affinity-demo
Namespace:        default
Priority:         0
Service Account:  default
Node:             <none>
Labels:           app=myapp
                  tier=frontend
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Containers:
  myapp:
    Image:        docker.io/ikubernetes/myapp:v1
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-f4nxp (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  kube-api-access-f4nxp:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age    From               Message
  ----     ------            ----   ----               -------
  Warning  FailedScheduling  2m23s  default-scheduler  0/2 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 2 node(s) didn't match Pod's node affinity/selector. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
# Scheduling failed.
# The status is Pending: the pod was not scheduled, because no node carries a zone label with value foo or bar, and hard affinity requires the condition to be met before scheduling can complete.
# Label node1 with zone=foo, then check again.
[root@master1 affinity]# kubectl label node node1 zone=foo
node/node1 labeled

[root@master1 affinity]# kubectl get node --show-labels
NAME      STATUS   ROLES           AGE   VERSION   LABELS
master1   Ready    control-plane   26d   v1.25.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master1,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node.kubernetes.io/exclude-from-external-load-balancers=
node1     Ready    work            26d   v1.25.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node1,kubernetes.io/os=linux,node-role.kubernetes.io/work=work,zone=foo
[root@master1 affinity]# kubectl get pods -owide
NAME                     READY   STATUS    RESTARTS   AGE     IP               NODE    NOMINATED NODE   READINESS GATES
pod-node-affinity-demo   1/1     Running   0          5m47s   10.244.166.178   node1   <none>           <none>

Example: soft affinity with preferredDuringSchedulingIgnoredDuringExecution

[root@master1 ~]# kubectl explain pod.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution
KIND:     Pod
VERSION:  v1

RESOURCE: preferredDuringSchedulingIgnoredDuringExecution <[]Object>

DESCRIPTION:
     The scheduler will prefer to schedule pods to nodes that satisfy the
     affinity expressions specified by this field, but it may choose a node that
     violates one or more of the expressions. The node that is most preferred is
     the one with the greatest sum of weights, i.e. for each node that meets all
     of the scheduling requirements (resource request, requiredDuringScheduling
     affinity expressions, etc.), compute a sum by iterating through the
     elements of this field and adding "weight" to the sum if the node matches
     the corresponding matchExpressions; the node(s) with the highest sum are
     the most preferred.

     An empty preferred scheduling term matches all objects with implicit weight
     0 (i.e. it's a no-op). A null preferred scheduling term matches no objects
     (i.e. is also a no-op).

FIELDS:
   preference   <Object> -required-
     A node selector term, associated with the corresponding weight.

   weight       <integer> -required-
     Weight associated with matching the corresponding nodeSelectorTerm, in the
     range 1-100.
# Write pod-nodeaffinity-demo-2.yaml
[root@master1 affinity]# vim pod-nodeaffinity-demo-2.yaml 
[root@master1 affinity]# cat pod-nodeaffinity-demo-2.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: pod-node-affinity-demo-2
  namespace: default
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: docker.io/ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - preference:
          matchExpressions:
          - key: zone1
            operator: In
            values:
            - foo1
            - bar1
        weight: 10
      - preference:
          matchExpressions:
          - key: zone2
            operator: In
            values:
            - foo2
            - bar2
        weight: 20
[root@master1 affinity]# kubectl apply -f pod-nodeaffinity-demo-2.yaml 
pod/pod-node-affinity-demo-2 created
[root@master1 affinity]# kubectl get pods -owide
NAME                       READY   STATUS    RESTARTS   AGE   IP               NODE    NOMINATED NODE   READINESS GATES
pod-node-affinity-demo-2   1/1     Running   0          7s    10.244.166.179   node1   <none>           <none>

This shows that with soft affinity the pod still runs, even though the node running it defines neither the zone1 nor the zone2 label it prefers.

Testing the weight field

weight is a relative preference: the higher the weight of a matching term, the more likely the pod is scheduled onto a node matching it. With the manifest above, a node matching only the zone2 term scores 20 while a node matching only the zone1 term scores 10, so the zone2 node would be preferred.

# Label master1 with zone1=foo1 and node1 with zone2=foo2
[root@master1 affinity]# kubectl label node master1 zone1=foo1
node/master1 labeled
[root@master1 affinity]# kubectl label node node1 zone2=foo2
node/node1 labeled
[root@master1 affinity]# kubectl get nodes --show-labels
NAME      STATUS   ROLES           AGE   VERSION   LABELS
master1   Ready    control-plane   26d   v1.25.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master1,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node.kubernetes.io/exclude-from-external-load-balancers=,zone1=foo1
node1     Ready    work            26d   v1.25.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node1,kubernetes.io/os=linux,node-role.kubernetes.io/work=work,zone2=foo2

[root@master1 affinity]# kubectl get pods -owide
NAME                       READY   STATUS    RESTARTS   AGE     IP               NODE    NOMINATED NODE   READINESS GATES
pod-node-affinity-demo-2   1/1     Running   0          9m58s   10.244.166.179   node1   <none>           <none>
# The running pod stays on node1 because affinity is IgnoredDuringExecution: relabeling nodes does not move an already-scheduled pod.
# Delete the original pod, then swap the weights so the term matching master1's zone1 label is preferred.
[root@master1 affinity]# vim pod-nodeaffinity-demo-3.yaml 
[root@master1 affinity]# cat pod-nodeaffinity-demo-3.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: pod-node-affinity-demo-3
  namespace: default
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: docker.io/ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - preference:
          matchExpressions:
          - key: zone1
            operator: In
            values:
            - foo1
            - bar1
        weight: 50
      - preference:
          matchExpressions:
          - key: zone2
            operator: In
            values:
            - foo2
            - bar2
        weight: 10

Problem: in a cluster with only two hosts, one worker node and one control-plane node, weighted nodeAffinity scheduling always lands on the worker node, because the control-plane node carries a NoSchedule taint.
Resolved: configuring a toleration on the pod allows it to be scheduled onto the tainted node.
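A minimal sketch of such a toleration, matching the control-plane taint reported in the earlier FailedScheduling event (added under the pod's spec):

spec:
  tolerations:
  - key: "node-role.kubernetes.io/control-plane"  # the taint key shown on master1
    operator: "Exists"                            # tolerate the taint regardless of its value
    effect: "NoSchedule"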

Parameter reference:
pod.spec.affinity.nodeAffinity
  requiredDuringSchedulingIgnoredDuringExecution   the node must satisfy all specified rules; a hard constraint (scheduling fails if no node matches)
    nodeSelectorTerms    list of node selector terms
      matchFields        list of node selector requirements by node fields
      matchExpressions   list of node selector requirements by node labels (recommended)
        key        the label key
        values     the value(s)
        operator   the relation: Exists, DoesNotExist, In, NotIn, Gt (greater than), Lt (less than)
  preferredDuringSchedulingIgnoredDuringExecution  prefer nodes that satisfy the rules; a soft constraint (scheduling still succeeds if none match)
    preference   a node selector term associated with the corresponding weight
      matchFields        list of node selector requirements by node fields
      matchExpressions   list of node selector requirements by node labels (recommended)
        key        the label key
        values     the value(s)
        operator   the relation: In, NotIn, Exists, DoesNotExist, Gt, Lt
    weight   preference weight, in the range 1-100

Operator usage notes (a runnable sketch follows this list):

  • matchExpressions:
    • key: nodeenv # match nodes that have a label with key nodeenv; only the key matters
      operator: Exists
    • key: nodeenv # match nodes whose nodeenv label value is "xxx" or "yyy"; key and value must both match
      operator: In
      values: ["xxx","yyy"]
    • key: nodeenv # match nodes whose nodeenv label value, interpreted as an integer, is greater than "xxx"
      operator: Gt
      values: ["xxx"]
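A hedged sketch combining Exists and Gt in one term (the label keys nodeenv and cpu-count are hypothetical; expressions within a term are ANDed):

affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: nodeenv          # the node must carry a nodeenv label (any value)
          operator: Exists
        - key: cpu-count        # and its cpu-count label, read as an integer, must be > 8
          operator: Gt
          values: ["8"]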

Notes on NodeAffinity rules (a sketch of rules 2 and 3 follows the list):
    1 If both nodeSelector and nodeAffinity are defined, both conditions must be satisfied for the Pod to run on a node.
    2 If nodeAffinity specifies multiple nodeSelectorTerms, matching any one of them is enough.
    3 If one nodeSelectorTerms entry contains multiple matchExpressions, a node must satisfy all of them to match.
    4 If the labels of the node a Pod runs on change so that they no longer satisfy the Pod's node affinity, the change is ignored (affinity only applies at scheduling time; it is not re-checked afterwards).
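To make rules 2 and 3 concrete, a fragment with two ORed terms (the label values are hypothetical):

requiredDuringSchedulingIgnoredDuringExecution:
  nodeSelectorTerms:          # terms are ORed: either term alone admits a node
  - matchExpressions:         # expressions within one term are ANDed
    - key: zone
      operator: In
      values: ["foo"]
    - key: disktype
      operator: In
      values: ["ssd"]
  - matchExpressions:
    - key: zone
      operator: In
      values: ["bar"]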

2 Pod Affinity and Anti-Affinity

2.1 Pod Affinity

Pod-to-pod affinity scheduling takes two forms:

  • podAffinity: pods prefer to be placed together, in nearby locations such as the same zone or the same rack, so they can communicate more efficiently. For example, with two data centers each hosting a 1000-node cluster, we may want nginx and tomcat deployed on nodes in the same location to improve communication efficiency.

  • podAntiAffinity: pods prefer to stay apart. When deploying two independent applications, anti-affinity keeps them from interfering with each other.

The first pod is scheduled to some node, and that pod then serves as the reference for deciding whether subsequent pods may run "with" it; this is pod affinity. To use it we must first define what counts as the same position: which nodes are the same position and which are not, and by what standard. The standard is a node label chosen as the topology key. In the simplest case the node's hostname label is used, so nodes with the same hostname value are the same position and nodes with different values are different positions.
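The position standard is set by the topologyKey field. A hedged fragment contrasting the two most common choices (the app label is a hypothetical example):

affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: myapp          # follow pods carrying this label
      topologyKey: kubernetes.io/hostname          # "same position" = same node
      # topologyKey: topology.kubernetes.io/zone   # "same position" = same zone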

Help commands:

[root@master1 ~]# kubectl explain pod.spec.affinity.podAffinity
KIND:     Pod
VERSION:  v1

RESOURCE: podAffinity <Object>

DESCRIPTION:
     Describes pod affinity scheduling rules (e.g. co-locate this pod in the
     same node, zone, etc. as some other pod(s)).

     Pod affinity is a group of inter pod affinity scheduling rules.

FIELDS:
   preferredDuringSchedulingIgnoredDuringExecution      <[]Object>
     The scheduler will prefer to schedule pods to nodes that satisfy the
     affinity expressions specified by this field, but it may choose a node that
     violates one or more of the expressions. The node that is most preferred is
     the one with the greatest sum of weights, i.e. for each node that meets all
     of the scheduling requirements (resource request, requiredDuringScheduling
     affinity expressions, etc.), compute a sum by iterating through the
     elements of this field and adding "weight" to the sum if the node has pods
     which matches the corresponding podAffinityTerm; the node(s) with the
     highest sum are the most preferred.

   requiredDuringSchedulingIgnoredDuringExecution       <[]Object>
     If the affinity requirements specified by this field are not met at
     scheduling time, the pod will not be scheduled onto the node. If the
     affinity requirements specified by this field cease to be met at some point
     during pod execution (e.g. due to a pod label update), the system may or
     may not try to eventually evict the pod from its node. When there are
     multiple elements, the lists of nodes corresponding to each podAffinityTerm
     are intersected, i.e. all terms must be satisfied.
- preferredDuringSchedulingIgnoredDuringExecution      <[]Object>  soft affinity
- requiredDuringSchedulingIgnoredDuringExecution       <[]Object>  hard affinity
[root@master1 ~]# kubectl explain pod.spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution
KIND:     Pod
VERSION:  v1

RESOURCE: requiredDuringSchedulingIgnoredDuringExecution <[]Object>

DESCRIPTION:
     If the affinity requirements specified by this field are not met at
     scheduling time, the pod will not be scheduled onto the node. If the
     affinity requirements specified by this field cease to be met at some point
     during pod execution (e.g. due to a pod label update), the system may or
     may not try to eventually evict the pod from its node. When there are
     multiple elements, the lists of nodes corresponding to each podAffinityTerm
     are intersected, i.e. all terms must be satisfied.

     Defines a set of pods (namely those matching the labelSelector relative to
     the given namespace(s)) that this pod should be co-located (affinity) or
     not co-located (anti-affinity) with, where co-located is defined as running
     on a node whose value of the label with key <topologyKey> matches that of
     any node on which a pod of the set of pods is running

FIELDS:
   labelSelector        <Object>
     A label query over a set of resources, in this case pods.

   namespaceSelector    <Object>
     A label query over the set of namespaces that the term applies to. The term
     is applied to the union of the namespaces selected by this field and the
     ones listed in the namespaces field. null selector and null or empty
     namespaces list means "this pod's namespace". An empty selector ({})
     matches all namespaces.

   namespaces   <[]string>
     namespaces specifies a static list of namespace names that the term applies
     to. The term is applied to the union of the namespaces listed in this field
     and the ones selected by namespaceSelector. null or empty namespaces list
     and null namespaceSelector means "this pod's namespace".

   topologyKey  <string> -required-
     This pod should be co-located (affinity) or not co-located (anti-affinity)
     with the pods matching the labelSelector in the specified namespaces, where
     co-located is defined as running on a node whose value of the label with
     key topologyKey matches that of any node on which any of the selected pods
     is running. Empty topologyKey is not allowed.

labelSelector:

To decide which pods this pod is affine to, we need labelSelector: it selects the group of pods that serve as the affinity reference.

namespaces:

labelSelector selects a group of pods, but from which namespace? That is specified with namespaces (or namespaceSelector); if neither is given, the namespace of the pod being created is used.

topologyKey:

topologyKey names the node label that defines "the same position": this pod is co-located with (affinity) or kept away from (anti-affinity) the selected pods on any node whose value for that label matches a node where a selected pod is running.

[root@master1 ~]# kubectl explain pod.spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution.labelSelector
KIND:     Pod
VERSION:  v1

RESOURCE: labelSelector <Object>

DESCRIPTION:
     A label query over a set of resources, in this case pods.

     A label selector is a label query over a set of resources. The result of
     matchLabels and matchExpressions are ANDed. An empty label selector matches
     all objects. A null label selector matches no objects.

FIELDS:
   matchExpressions     <[]Object>
     matchExpressions is a list of label selector requirements. The requirements
     are ANDed.

   matchLabels  <map[string]string>
     matchLabels is a map of {key,value} pairs. A single {key,value} in the
     matchLabels map is equivalent to an element of matchExpressions, whose key
     field is "key", the operator is "In", and the values array contains only
     "value". The requirements are ANDed.

Example 1: the first pod serves as the reference, and the second pod follows it

# Create the first pod
[root@master1 affinity]# vim pod-required-affinity-demo-1.yaml
[root@master1 affinity]# cat pod-required-affinity-demo-1.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: pod-first
  labels:
    app2: myapp2
    tier: frontend

spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
[root@master1 affinity]# kubectl apply -f pod-required-affinity-demo-1.yaml 
pod/pod-first created
[root@master1 affinity]# kubectl get pods
NAME        READY   STATUS    RESTARTS   AGE
pod-first   1/1     Running   0          4s
[root@master1 affinity]# kubectl get pods -owide
NAME        READY   STATUS    RESTARTS   AGE   IP               NODE    NOMINATED NODE   READINESS GATES
pod-first   1/1     Running   0          10s   10.244.166.191   node1   <none>           <none>
[root@master1 affinity]# kubectl get pods --show-labels
NAME        READY   STATUS    RESTARTS   AGE   LABELS
pod-first   1/1     Running   0          28s   app2=myapp2,tier=frontend
# Create the second pod with an affinity rule
[root@master1 affinity]# vim pod-required-affinity-demo-2.yaml
[root@master1 affinity]# cat pod-required-affinity-demo-2.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: pod-second
  labels:
    app: backend
    tier: db
spec:
  containers:
  - name: busybox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command: ["sh","-c","sleep 3600"]
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app2
            operator: In
            values:
            - myapp2
        topologyKey: kubernetes.io/hostname
[root@master1 affinity]# kubectl apply -f pod-required-affinity-demo-2.yaml 
pod/pod-second created
[root@master1 affinity]# kubectl get pods
NAME         READY   STATUS    RESTARTS   AGE
pod-first    1/1     Running   0          7m22s
pod-second   1/1     Running   0          4s
[root@master1 affinity]# kubectl get pods -owide
NAME         READY   STATUS    RESTARTS   AGE     IP               NODE    NOMINATED NODE   READINESS GATES
pod-first    1/1     Running   0          7m27s   10.244.166.191   node1   <none>           <none>
pod-second   1/1     Running   0          9s      10.244.166.132   node1   <none>           <none>

This shows that wherever the first pod is scheduled, the second pod follows (here to node1, since topologyKey kubernetes.io/hostname makes each node its own position). This is pod affinity.

2.2 Pod Anti-Affinity

PodAntiAffinity uses a running Pod as the reference and keeps newly created Pods out of that Pod's topology domain.

Example: define two pods; the first serves as the reference, and the second is scheduled away from it

# Create the first pod
[root@master1 affinity]# vim pod-required-anti-affinity-demo-1.yaml
[root@master1 affinity]# cat pod-required-anti-affinity-demo-1.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: pod-first
  labels:
    app1: myapp1
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
[root@master1 affinity]# kubectl apply -f pod-required-anti-affinity-demo-1.yaml 
pod/pod-first created
[root@master1 affinity]# kubectl get pods -owide
NAME        READY   STATUS    RESTARTS   AGE   IP               NODE    NOMINATED NODE   READINESS GATES
pod-first   1/1     Running   0          9s    10.244.166.133   node1   <none>           <none>
# Create the second pod with an anti-affinity rule plus a toleration
[root@master1 affinity]# vim pod-required-anti-affinity-demo-2.yaml              
[root@master1 affinity]# cat pod-required-anti-affinity-demo-2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-second
  labels:
    app: backend
    tier: db
spec:
  containers:
  - name: busybox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command: ["sh","-c","sleep 3600"]
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app1
            operator: In
            values:
            - myapp1
        topologyKey: zone1
  tolerations:
  - effect: "NoSchedule"
    operator: "Exists"
[root@master1 affinity]# kubectl apply -f pod-required-anti-affinity-demo-2.yaml
pod/pod-second created
[root@master1 affinity]# kubectl get pods -owide
NAME         READY   STATUS    RESTARTS   AGE     IP               NODE      NOMINATED NODE   READINESS GATES
pod-first    1/1     Running   0          6m11s   10.244.166.143   node1     <none>           <none>
pod-second   1/1     Running   0          18s     10.244.137.70    master1   <none>           <none>
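pod-second lands on master1 while pod-first stays on node1: the anti-affinity rule keeps it out of any zone1 topology domain containing a pod labeled app1=myapp1, and the toleration allows it onto the tainted control-plane node. The two pods end up on different nodes, which is pod anti-affinity at work.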
