[Single-master k8s deployment] 7. Pod affinity/anti-affinity

Node affinity

A YAML file that sets node affinity (the pod below may only be scheduled onto nodes carrying the node-role.kubernetes.io/node1 label):

apiVersion: v1
kind: Pod
metadata:
  name: tomcat-test
  namespace: default
  labels:
    app: tomcat
spec:
  containers:
  - name: tomcat-java
    ports:
    - containerPort: 8080
    image: xianchao/tomcat-8.5-jre8:v1
    imagePullPolicy: IfNotPresent
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: node-role.kubernetes.io/node1
            operator: Exists
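
To try it, make sure node1 actually carries the role label, then apply the manifest and check where the pod lands (the file name is illustrative):

kubectl label node node1 node-role.kubernetes.io/node1= --overwrite
kubectl apply -f tomcat-affinity.yaml
kubectl get pod tomcat-test -o wide

The NODE column should show node1; because the rule is requiredDuringSchedulingIgnoredDuringExecution, the pod stays Pending if no node matches.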
Pod affinity

A pod-affinity manifest generally has to identify the pods it wants to be co-located with, via a label selector. Below are two manifests: the first pod is an ordinary one, and the second prefers (weight 80) to be placed with pods labeled app=first:

apiVersion: v1
kind: Pod
metadata:
  name: first
  labels:
    app: first

spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
 
apiVersion: v1
kind: Pod
metadata:
  name: second
  labels:
    app: second

spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
  affinity:
    podAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 80
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: app
              operator: In
              values: ["first"]
          topologyKey: kubernetes.io/hostname

Check whether the two pods landed on the same node; they are indeed on the same node.
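
For example (file names are illustrative; both pods should show the same NODE):

kubectl apply -f first.yaml -f second.yaml
kubectl get pods first second -o wide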

The topologyKey defines what counts as "the same place": the scheduler groups nodes by the value of that label. With kubernetes.io/hostname, every node has a distinct value (one is node1, another node2), so each node is its own topology domain:

kubectl get nodes --show-labels
NAME     STATUS   ROLES                  AGE   VERSION   LABELS
master   Ready    control-plane,master   8d    v1.23.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
node1    Ready    node1                  8d    v1.23.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disk=ssd,kubernetes.io/arch=amd64,kubernetes.io/hostname=node1,kubernetes.io/os=linux,node-role.kubernetes.io/node1=,region=us-west
node2    Ready    <none>                 8d    v1.23.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node2,kubernetes.io/os=linux
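
Since node1 above also carries region=us-west, a coarser rule could group nodes by that label instead; note that a node without the region label can never satisfy such a term (a sketch varying the manifest above):

        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: app
              operator: In
              values: ["first"]
          topologyKey: region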
Anti-affinity

To guarantee that a pod never shares a node with pods matching a selector, set the podAntiAffinity field with a required rule:

apiVersion: v1
kind: Pod
metadata:
  name: second
  labels:
    app: second

spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values: ["first"]
        topologyKey: kubernetes.io/hostname

Now, if every node is labeled a=b and the rule uses topologyKey: a, all nodes collapse into a single topology domain; the required anti-affinity rule excludes that entire domain, so the new pod has nowhere to go and stays Pending.
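
A quick way to reproduce the deadlock (label key/value are illustrative):

kubectl label nodes node1 node2 a=b
change the manifest above to topologyKey: a, re-apply it, then:
kubectl get pod second

The pod stays Pending: the pod labeled app=first already occupies the only 'a' topology domain, and the required anti-affinity rule excludes all of it.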

Tolerating taints

With anti-affinity in place, a pod can sometimes end up unschedulable on any node because of other default rules, hanging in the Pending state; the tolerations field can help here. Nodes are commonly tainted because they are heavily loaded, reserved for special purposes, or under maintenance, and different taints carry different effects (severity levels) and values.

NoSchedule only affects newly scheduled pods (they are refused placement), while NoExecute also affects pods already deployed on the node (they are evicted and cannot keep running). PreferNoSchedule is the soft variant: the scheduler avoids the node but may still use it.

kubectl taint nodes <node-name> load=high:PreferNoSchedule
heavily loaded node: avoid scheduling here if possible
kubectl taint nodes <node-name> environment=test:NoSchedule
test-only node: do not schedule regular workloads here
kubectl taint nodes <node-name> dedicated=database:NoExecute
dedicated database node: do not schedule here, and evict pods already running
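
For reference, a pod that still needs to run on the test node above would declare a matching toleration (key, value, and effect copied from the taint command):

  tolerations:
  - key: environment
    operator: Equal
    value: test
    effect: NoSchedule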

But if a matching toleration is set in the pod's YAML file, the pod can be scheduled anyway:

apiVersion: v1
kind: Pod
metadata:
  name: second
  labels:
    app: second

spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
  tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values: ["first"]
        topologyKey: kubernetes.io/hostname

This manifest adds a toleration for NoSchedule-tainted nodes whose taint key is node-role.kubernetes.io/master; with it, the new pod may be scheduled onto the master node (not recommended).
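
A related field worth knowing (not used in this manifest): for NoExecute taints, tolerationSeconds bounds how long an already-running pod may stay once the taint appears, for example:

  tolerations:
  - key: dedicated
    operator: Equal
    value: database
    effect: NoExecute
    tolerationSeconds: 60   # evicted 60 seconds after the taint is applied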

When we inspect other Kubernetes controllers, we find that some critical components tolerate everything: their toleration is a bare operator: Exists, which matches every taint. Dump kube-proxy's DaemonSet to see this:

kubectl get daemonset kube-proxy -o yaml -n kube-system > 1.yaml
[root@master yam_files]# cat 1.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  annotations:
    deprecated.daemonset.template.generation: "1"
  creationTimestamp: "2024-06-15T12:35:00Z"
  generation: 1
  labels:
    k8s-app: kube-proxy
  name: kube-proxy
  namespace: kube-system
  resourceVersion: "3287"
  uid: 7a0b2c38-272a-4ed9-aec5-6c2cd793064a
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kube-proxy
  template:
    metadata:
      creationTimestamp: null
      labels:
        k8s-app: kube-proxy
    spec:
      containers:
      - command:
        - /usr/local/bin/kube-proxy
        - --config=/var/lib/kube-proxy/config.conf
        - --hostname-override=$(NODE_NAME)
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.nodeName
        image: registry.aliyuncs.com/google_containers/kube-proxy:v1.23.1
        imagePullPolicy: IfNotPresent
        name: kube-proxy
        resources: {}
        securityContext:
          privileged: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/lib/kube-proxy
          name: kube-proxy
        - mountPath: /run/xtables.lock
          name: xtables-lock
        - mountPath: /lib/modules
          name: lib-modules
          readOnly: true
      dnsPolicy: ClusterFirst
      hostNetwork: true
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-node-critical
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: kube-proxy
      serviceAccountName: kube-proxy
      terminationGracePeriodSeconds: 30
      tolerations:
      - operator: Exists
Adding a taint to a node
[root@master yam_files]# kubectl get pods -n kube-system -owide
NAME                                       READY   STATUS    RESTARTS   AGE   IP               NODE     NOMINATED NODE   READINESS GATES
calico-kube-controllers-677cd97c8d-7s9nz   1/1     Running   0          8d    10.244.166.129   node1    <none>           <none>
calico-node-h6hzf                          1/1     Running   0          8d    100.64.252.90    master   <none>           <none>
calico-node-mvgpv                          1/1     Running   0          8d    100.64.147.209   node2    <none>           <none>
calico-node-vd7q7                          1/1     Running   0          8d    100.64.212.7     node1    <none>           <none>
coredns-6d8c4cb4d-6r5tl                    1/1     Running   0          8d    10.244.166.131   node1    <none>           <none>
coredns-6d8c4cb4d-gnwtr                    1/1     Running   0          8d    10.244.166.130   node1    <none>           <none>
etcd-master                                1/1     Running   0          8d    100.64.252.90    master   <none>           <none>
kube-apiserver-master                      1/1     Running   0          8d    100.64.252.90    master   <none>           <none>
kube-controller-manager-master             1/1     Running   0          8d    100.64.252.90    master   <none>           <none>
kube-proxy-4v78m                           1/1     Running   0          8d    100.64.212.7     node1    <none>           <none>
kube-proxy-g8c56                           1/1     Running   0          8d    100.64.252.90    master   <none>           <none>
kube-proxy-ln8gd                           1/1     Running   0          8d    100.64.147.209   node2    <none>           <none>
kube-scheduler-master                      1/1     Running   0          8d    100.64.252.90    master   <none>           <none>
[root@master yam_files]# kubectl taint nodes node1 a=b:NoExecute
node/node1 tainted
[root@master yam_files]# kubectl get pods -n kube-system -owide
NAME                                       READY   STATUS              RESTARTS   AGE   IP               NODE     NOMINATED NODE   READINESS GATES
calico-kube-controllers-677cd97c8d-jbbz5   0/1     ContainerCreating   0          4s    <none>           node2    <none>           <none>
calico-node-h6hzf                          1/1     Running             0          8d    100.64.252.90    master   <none>           <none>
calico-node-mvgpv                          1/1     Running             0          8d    100.64.147.209   node2    <none>           <none>
calico-node-vd7q7                          1/1     Running             0          8d    100.64.212.7     node1    <none>           <none>
coredns-6d8c4cb4d-6r5tl                    1/1     Terminating         0          8d    10.244.166.131   node1    <none>           <none>
coredns-6d8c4cb4d-8fxqf                    0/1     ContainerCreating   0          4s    <none>           node2    <none>           <none>
coredns-6d8c4cb4d-gnwtr                    1/1     Terminating         0          8d    10.244.166.130   node1    <none>           <none>
coredns-6d8c4cb4d-n5mrk                    1/1     Running             0          4s    10.244.219.66    master   <none>           <none>
etcd-master                                1/1     Running             0          8d    100.64.252.90    master   <none>           <none>
kube-apiserver-master                      1/1     Running             0          8d    100.64.252.90    master   <none>           <none>
kube-controller-manager-master             1/1     Running             0          8d    100.64.252.90    master   <none>           <none>
kube-proxy-4v78m                           1/1     Running             0          8d    100.64.212.7     node1    <none>           <none>
kube-proxy-g8c56                           1/1     Running             0          8d    100.64.252.90    master   <none>           <none>
kube-proxy-ln8gd                           1/1     Running             0          8d    100.64.147.209   node2    <none>           <none>
kube-scheduler-master                      1/1     Running             0          8d    100.64.252.90    master   <none>           <none>

After node1 receives the NoExecute taint, the calico-kube-controllers and coredns pods are evicted from node1 and recreated on other nodes, while their old instances go to Terminating and stop running outright.

[root@master yam_files]# kubectl get pods -n kube-system -owide
NAME                                       READY   STATUS    RESTARTS   AGE     IP               NODE     NOMINATED NODE   READINESS GATES
calico-kube-controllers-677cd97c8d-jbbz5   1/1     Running   0          2m12s   10.244.104.14    node2    <none>           <none>
calico-node-h6hzf                          1/1     Running   0          8d      100.64.252.90    master   <none>           <none>
calico-node-mvgpv                          1/1     Running   0          8d      100.64.147.209   node2    <none>           <none>
calico-node-vd7q7                          1/1     Running   0          8d      100.64.212.7     node1    <none>           <none>
coredns-6d8c4cb4d-8fxqf                    1/1     Running   0          2m12s   10.244.104.15    node2    <none>           <none>
coredns-6d8c4cb4d-n5mrk                    1/1     Running   0          2m12s   10.244.219.66    master   <none>           <none>
etcd-master                                1/1     Running   0          8d      100.64.252.90    master   <none>           <none>
kube-apiserver-master                      1/1     Running   0          8d      100.64.252.90    master   <none>           <none>
kube-controller-manager-master             1/1     Running   0          8d      100.64.252.90    master   <none>           <none>
kube-proxy-4v78m                           1/1     Running   0          8d      100.64.212.7     node1    <none>           <none>
kube-proxy-g8c56                           1/1     Running   0          8d      100.64.252.90    master   <none>           <none>
kube-proxy-ln8gd                           1/1     Running   0          8d      100.64.147.209   node2    <none>           <none>
kube-scheduler-master                      1/1     Running   0          8d      100.64.252.90    master   <none>           <none>

In the end, the calico-kube-controllers pod moved to node2 and the coredns pods moved to master and node2, but kube-proxy (and calico-node) stayed on node1; checking their manifests shows why: their toleration is operator: Exists, which tolerates any taint.
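
To confirm quickly (pod name taken from the listing above):

kubectl get pod kube-proxy-4v78m -n kube-system -o jsonpath='{.spec.tolerations}'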

Removing a node taint
kubectl describe node node1 | grep Taints
Taints:             a=b:NoExecute
[root@master yam_files]# kubectl taint node node1 a=b:NoExecute-
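
Appending '-' to the taint spec removes the taint; re-running the check should now report Taints: <none>:

kubectl describe node node1 | grep Taints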
