Resource scheduling in k8s

1. Container resource limits

// The maximum CPU and memory a container may use

  • resources.limits.cpu
  • resources.limits.memory

// The minimum CPU and memory reserved for the container (this is what the scheduler uses when placing the Pod)

  • resources.requests.cpu
  • resources.requests.memory
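Both sets of fields go under a container's spec. A minimal sketch with placeholder values (not the ones used in the walkthrough below):

    resources:
      requests:              # reserved at scheduling time
        cpu: 100m            # 100m = 0.1 CPU core
        memory: 64Mi
      limits:                # hard cap enforced at runtime
        cpu: 200m
        memory: 128Mi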
[root@master ~]# kubectl describe nodes node1.example.com  // show node details
... (N lines omitted) ...
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests   Limits
  --------           --------   ------
  cpu                100m (5%)  100m (5%)  // CPU requests/limits currently allocated on the node
  memory             50Mi (1%)  50Mi (1%)  // memory requests/limits currently allocated on the node
  ephemeral-storage  0 (0%)     0 (0%)
  hugepages-1Gi      0 (0%)     0 (0%)
  hugepages-2Mi      0 (0%)     0 (0%)


[root@master test]# cat test.yml 
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
  - name: b1
    image: busybox
    command: ["/bin/sh","-c","sleep 9000"]
    resources:
      requests:
        memory: 100Mi
        cpu: 0.01
    env:
    - name: HN
      valueFrom:
        fieldRef:
          fieldPath: status.podIPs

[root@master test]# kubectl apply -f test.yml  // create a test pod
pod/test created
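The downward-API value injected into the HN environment variable can be checked inside the running container (a sketch; output omitted here):

    kubectl exec test -- env | grep HN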

[root@master test]# kubectl get pod -o wide  // it is running on node1
NAME   READY   STATUS    RESTARTS   AGE   IP            NODE                NOMINATED NODE   READINESS GATES
test   1/1     Running   0          57s   10.244.1.60   node1.example.com   <none>           <none>

[root@master haproxy]# kubectl describe nodes node1.example.com  // check node1's allocations again
  Resource           Requests    Limits
  --------           --------    ------
  cpu                110m (5%)   100m (5%)  // CPU requests rose from 100m to 110m: our pod requests cpu: 0.01, i.e. 10m, which is added on top of the 100m already allocated on node1
  memory             150Mi (5%)  50Mi (1%)  // memory requests rose from 50Mi to 150Mi for the same reason: the pod requests 100Mi
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)


[root@master test]# cat test.yml  // this time with limits in addition to requests
---
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
  - name: b1
    image: busybox
    command: ["/bin/sh","-c","sleep 9000"]
    resources:
      requests:
        memory: 30Mi
        cpu: 0.001
      limits:
        memory: 40Mi
        cpu: 0.01
    env:
    - name: HN
      valueFrom:
        fieldRef:
          fieldPath: status.podIPs
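To double-check that both the requests and the limits were applied, one option (a sketch; output not captured here) is:

    kubectl describe pod test | grep -A 3 -E 'Requests|Limits'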

2. nodeSelector (node selector)

nodeSelector: schedules a Pod onto Nodes whose labels match; if no node carries a matching label, scheduling fails and the Pod stays Pending.
Purpose:

  • Constrain the Pod to run on particular nodes
  • The node label must match exactly
    Use cases (a hedged Deployment sketch for the SSD scenario follows this list):
  • Dedicated nodes: group Nodes by business line
  • Special hardware: some Nodes have SSD disks or GPUs
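As a rough sketch of the SSD scenario (the Deployment name and app label are made up for illustration; the walkthrough below uses a bare Pod instead):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: ssd-app              # hypothetical name
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: ssd-app
      template:
        metadata:
          labels:
            app: ssd-app
        spec:
          nodeSelector:
            disktype: ssd        # only nodes labeled disktype=ssd are eligible
          containers:
          - name: web
            image: nginx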
// list the labels on every node
[root@master test]# kubectl get nodes --show-labels
NAME                 STATUS   ROLES                  AGE    VERSION   LABELS
master.example.com   Ready    control-plane,master   6d3h   v1.20.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master.example.com,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=
node1.example.com    Ready    <none>                 6d2h   v1.20.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node1.example.com,kubernetes.io/os=linux
node2.example.com    Ready    <none>                 6d2h   v1.20.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node2.example.com,kubernetes.io/os=linux

// first run a pod without any selector and check where it is placed

[root@master test]# kubectl apply -f test.yml 
pod/test created
[root@master test]# cat test.yml 
---
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
  - name: b1
    image: busybox
    command: ["/bin/sh","-c","sleep 9000"]
    env:
    - name: HN
      valueFrom:
        fieldRef:
          fieldPath: status.podIPs


[root@master haproxy]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP            NODE                NOMINATED NODE   READINESS GATES
test                     1/1     Running   0          23m   10.244.1.61   node1.example.com   <none>           <none>  // running on node1
web01-6cd64b79f6-gfkjk   1/1     Running   0          15s   10.244.2.51   node2.example.com   <none>           <none>

[root@master test]# cat test.yml  
---
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
  - name: b1
    image: busybox
    command: ["/bin/sh","-c","sleep 9000"]
    env:
    - name: HN
      valueFrom:
        fieldRef:
          fieldPath: status.podIPs
  nodeSelector:
    disktype: ssd  # schedule only onto nodes labeled disktype=ssd

[root@master test]# kubectl label nodes node2.example.com disktype=ssd  // add this label to node2
node/node2.example.com labeled

[root@master test]# kubectl get nodes node2.example.com --show-labels  // check node2's labels
NAME                STATUS   ROLES    AGE    VERSION   LABELS
node2.example.com   Ready    <none>   6d3h   v1.20.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,kubernetes.io/arch=amd64,kubernetes.io/hostname=node2.example.com,kubernetes.io/os=linux

[root@master ~]# kubectl get pod -o wide  // only one pod at this point (the earlier test pod has been removed)
NAME                     READY   STATUS    RESTARTS   AGE   IP            NODE                NOMINATED NODE   READINESS GATES
web01-6cd64b79f6-gfkjk   1/1     Running   0          15m   10.244.2.51   node2.example.com   <none>           <none>

[root@master test]# kubectl apply -f test.yml 
pod/test created

[root@master ~]# kubectl get pod -o wide  // both pods are now running on node2
NAME                     READY   STATUS    RESTARTS   AGE   IP            NODE                NOMINATED NODE   READINESS GATES
test                     1/1     Running   0          20s   10.244.2.52   node2.example.com   <none>           <none>
web01-6cd64b79f6-gfkjk   1/1     Running   0          16m   10.244.2.51   node2.example.com   <none>           <none>

[root@master test]# kubectl label nodes node2.example.com disktype-  // how to remove a label

[root@master test]# kubectl apply -f test.yml  // create it again; with no node carrying the label, the pod stays Pending
pod/test created

[root@master test]# kubectl get nodes node2.example.com --show-labels
NAME                STATUS   ROLES    AGE    VERSION   LABELS
node2.example.com   Ready    <none>   6d3h   v1.20.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node2.example.com,kubernetes.io/os=linux

[root@master test]# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
test                     0/1     Pending   0          5s
web01-6cd64b79f6-gfkjk   1/1     Running   0          40m

[root@master test]# kubectl label nodes node2.example.com disktype=ssd  // once the label is added back, the pod is scheduled onto node2
node/node2.example.com labeled  

[root@master test]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE     IP            NODE                NOMINATED NODE   READINESS GATES
test                     1/1     Running   0          2m51s   10.244.2.54   node2.example.com   <none>           <none>
web01-6cd64b79f6-gfkjk   1/1     Running   0          43m     10.244.2.51   node2.example.com   <none>           <none>

3. nodeAffinity (node affinity)

nodeAffinity: node affinity. It serves the same purpose as nodeSelector but is more flexible and supports more kinds of conditions:

  • Matching supports richer logical combinations, not just exact string equality
  • Scheduling rules come in soft and hard flavours rather than a single hard requirement
    • Hard (required): must be satisfied
    • Soft (preferred): best effort, not guaranteed; each preferred rule carries a weight that raises a matching node's score
      Operators: In (the node's label value is contained in the given list), NotIn, Exists, DoesNotExist, Gt (greater than), Lt (less than); a hedged sketch of the less common operators follows
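The walkthrough below only uses In. A sketch of NotIn and Gt with made-up keys (cpu-cores is a hypothetical node label whose value is a number):

  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: NotIn      # exclude nodes whose disktype is hdd
            values:
            - hdd
          - key: cpu-cores       # hypothetical label
            operator: Gt         # label value, parsed as an integer, must be greater than 4
            values:
            - "4"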
[root@master test]# cat test.yml  // a hard rule: must be satisfied
---
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
  - name: b1
    image: busybox
    command: ["/bin/sh","-c","sleep 9000"]
    env:
    - name: HN
      valueFrom:
        fieldRef:
          fieldPath: status.podIPs
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values: 
            - ssd

[root@master test]# kubectl get nodes --show-labels  // none of the nodes carries the disktype=ssd label
NAME                 STATUS   ROLES                  AGE    VERSION   LABELS
master.example.com   Ready    control-plane,master   6d5h   v1.20.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master.example.com,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=
node1.example.com    Ready    <none>                 6d4h   v1.20.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node1.example.com,kubernetes.io/os=linux
node2.example.com    Ready    <none>                 6d4h   v1.20.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node2.example.com,kubernetes.io/os=linux

[root@master test]# kubectl apply -f test.yml   // so the pod cannot be scheduled onto any node
pod/test created
[root@master test]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP            NODE                NOMINATED NODE   READINESS GATES
test                     0/1     Pending   0          6s    <none>        <none>              <none>           <none>
web01-6cd64b79f6-gfkjk   1/1     Running   0          73m   10.244.2.51   node2.example.com   <none>           <none>

[root@master test]# cat test.yml  // add a soft rule on top of the hard one
---
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
  - name: b1
    image: busybox
    command: ["/bin/sh","-c","sleep 9000"]
    env:
    - name: HN
      valueFrom:
        fieldRef:
          fieldPath: status.podIPs
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values: 
            - ssd
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 3
        preference:
          matchExpressions:
          - key: gpu
            operator: In
            values: 
            - nvdia

// disktype=ssd is the hard rule; gpu=nvdia is the soft preference
[root@master test]# kubectl label nodes node1.example.com disktype=ssd gpu=nvdia  // add both labels to node1
node/node1.example.com labeled

[root@master test]# kubectl label nodes node2.example.com disktype=sshd  // add one label to node2 (note the value sshd, which does not match the hard rule)
node/node2.example.com labeled

[root@master test]# kubectl apply -f test.yml  // the pod will land on node1 (it satisfies both rules)
pod/test created

[root@master test]# kubectl label nodes node1.example.com gpu-  // remove the gpu label from node1
node/node1.example.com labeled

[root@master test]# kubectl label nodes node2.example.com gpu=nvdia  // add the gpu label to node2
node/node2.example.com labeled

[root@master test]# kubectl apply -f test.yml  // this time the pod lands on node2
pod/test created

[root@master test]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP            NODE                NOMINATED NODE   READINESS GATES
test                     1/1     Running   0          28s   10.244.2.55   node2.example.com   <none>           <none>
web01-6cd64b79f6-gfkjk   1/1     Running   0          93m   10.244.2.51   node2.example.com   <none>           <none>

4. Taints & tolerations

taint: keeps Pods away from particular Nodes
tolerations: allows a Pod to be scheduled onto a Node that carries matching taints
Use cases:

  • Dedicated nodes: group Nodes by business line; by default nothing is scheduled onto them, and only Pods carrying a matching toleration may be placed there
  • Special hardware: some Nodes have SSD disks or GPUs; by default nothing is scheduled onto them, and only Pods carrying a matching toleration may be placed there
  • Taint-based eviction

Format:  kubectl taint node [node] key=value:[effect]
Example: kubectl taint node k8s-node1 gpu=yes:NoSchedule
Verify:  kubectl describe node k8s-node1 | grep Taint

Possible values of [effect]:
NoSchedule: Pods will definitely not be scheduled onto the node
PreferNoSchedule: try not to schedule onto the node; a toleration is not strictly required
NoExecute: besides not scheduling new Pods, existing Pods on the node are evicted

Remove a taint:
kubectl taint node [node] key:[effect]-
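The walkthrough below only manipulates taints; for completeness, here is a hedged sketch of the tolerations side, reusing the gpu=yes:NoSchedule example above (the Pod name is made up):

    apiVersion: v1
    kind: Pod
    metadata:
      name: gpu-pod              # hypothetical name
    spec:
      containers:
      - name: app
        image: busybox
        command: ["/bin/sh","-c","sleep 9000"]
      tolerations:
      - key: "gpu"
        operator: "Equal"        # match the taint whose value equals "yes"
        value: "yes"
        effect: "NoSchedule"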

[root@master test]# kubectl describe node node1.example.com | grep -i taint  // check a node's taints
Taints:             <none>

[root@master test]# kubectl taint node node1.example.com node1:NoSchedule  // add a taint (key node1, effect NoSchedule) to node1
node/node1.example.com tainted

[root@master test]# kubectl describe node node1.example.com | grep -i taint 
Taints:             node1:NoSchedule

[root@master test]# kubectl get pod -o wide  // before node2 is tainted, the test pod runs on node2
NAME                     READY   STATUS    RESTARTS   AGE    IP            NODE                NOMINATED NODE   READINESS GATES
test                     1/1     Running   0          19s    10.244.2.56   node2.example.com   <none>           <none>
web01-6cd64b79f6-gfkjk   1/1     Running   0          125m   10.244.2.51   node2.example.com   <none>           <none>

[root@master test]# kubectl taint node node2.example.com node2:NoSchedule  // taint node2 as well
node/node2.example.com tainted

[root@master test]# kubectl describe node node2.example.com | grep -i taint
Taints:             node2:NoSchedule

[root@master test]# kubectl apply -f test.yml 
pod/test created

[root@master test]# kubectl get pod -o wide  // the pod now runs on node1, because node2 carries a taint
NAME                     READY   STATUS    RESTARTS   AGE    IP            NODE                NOMINATED NODE   READINESS GATES
test                     1/1     Running   0          17s    10.244.1.63   node1.example.com   <none>           <none>
web01-6cd64b79f6-gfkjk   1/1     Running   0          129m   10.244.2.51   node2.example.com   <none>           <none>

[root@master test]# kubectl taint node node2.example.com node2:NoSchedule-  // remove the taint from node2
node/node2.example.com untainted

[root@master test]# kubectl describe node node2.example.com | grep -i taint
Taints:             <none>

[root@master test]# kubectl apply -f test.yml  // create the pod again
pod/test created

[root@master test]# kubectl get pod -o wide  // the pod is back on node2
NAME                     READY   STATUS    RESTARTS   AGE    IP            NODE                NOMINATED NODE   READINESS GATES
test                     1/1     Running   0          29s    10.244.2.57   node2.example.com   <none>           <none>
web01-6cd64b79f6-gfkjk   1/1     Running   0          134m   10.244.2.51   node2.example.com   <none>           <none>


[root@master test]# kubectl taint node node2.example.com node2:PreferNoSchedule  // PreferNoSchedule: try to avoid scheduling onto node2, but node2 is still allowed
node/node2.example.com tainted

[root@master test]# kubectl apply -f test.yml 
pod/test created

[root@master test]# kubectl get pod -o wide  // this time the pod ends up on node1
NAME                     READY   STATUS    RESTARTS   AGE    IP            NODE                NOMINATED NODE   READINESS GATES
test                     1/1     Running   0          23s    10.244.1.64   node1.example.com   <none>           <none>
web01-6cd64b79f6-gfkjk   1/1     Running   0          139m   10.244.2.51   node2.example.com   <none>           <none>

[root@master test]# kubectl label nodes node2.example.com disktype=ssd  // add the disktype=ssd label back to node2
node/node2.example.com labeled

[root@master test]# kubectl get pod -o wide  // the test pod created earlier is still running on node2
NAME                     READY   STATUS    RESTARTS   AGE    IP            NODE                NOMINATED NODE   READINESS GATES
test                     1/1     Running   0          11m    10.244.2.58   node2.example.com   <none>           <none>
web01-6cd64b79f6-gfkjk   1/1     Running   0          158m   10.244.2.51   node2.example.com   <none>           <none>

[root@master test]# kubectl taint node node2.example.com node2:NoExecute  // NoExecute: new pods will not be scheduled onto node2, and existing pods are evicted from it
node/node2.example.com tainted

[root@master test]# kubectl get pod -o wide  // the test pod was a standalone (unmanaged) pod, so after eviction it is simply deleted and no longer listed; the Deployment-managed web01 pod was recreated on node1
NAME                     READY   STATUS    RESTARTS   AGE     IP            NODE                NOMINATED NODE   READINESS GATES
web01-6cd64b79f6-v6gxd   1/1     Running   0          3m30s   10.244.1.65   node1.example.com   <none>           <none>