Enterprise Project in Practice, k8s Series (13): k8s Container Resource Limits


I. k8s container resource limits

Kubernetes uses two limit types, request and limit, to allocate resources.
request (resource demand): a node must be able to satisfy a Pod's minimum resource demand before the Pod can be scheduled onto it.
limit (resource cap): while a Pod runs, its memory usage may grow; the limit is the maximum amount of the resource it is allowed to consume.

Resource types:
CPU is measured in cores; memory is measured in bytes.
A container requesting 0.5 CPU asks for half of one CPU. You can also use the suffix m, meaning one-thousandth of a core: 100m CPU, 100 millicores, and 0.1 CPU all mean the same thing.
Memory units:
K, M, G, T, P, E       # decimal units, using a factor of 1000
Ki, Mi, Gi, Ti, Pi, Ei # binary units, using a factor of 1024
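As a sketch of how these notations relate, the following fragment (field values are illustrative, following the standard Kubernetes quantity notation) shows equivalent ways to write the same quantities:

```
# Illustrative resources fragment only; note that Mi (binary) and M
# (decimal) are different sizes.
resources:
  requests:
    cpu: "0.5"       # same as 500m (500 millicores)
    memory: 128Mi    # 128 * 1024 * 1024 = 134217728 bytes
  limits:
    cpu: 500m        # identical to "0.5"
    memory: 128M     # 128 * 1000 * 1000 = 128000000 bytes
```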

II. Memory limits

  • If a container exceeds its memory limit, it is terminated. If it is restartable, the kubelet restarts it, as with any other type of runtime failure.

  • If a container exceeds its memory request, its Pod may be evicted when the node runs out of memory.

The example below sets a minimum of 50Mi and a maximum of 100Mi of memory, then deploys a pod that tries to allocate 200M:


[root@server1 limit]# cat mem.yml 
apiVersion: v1
kind: Pod
metadata:
  name: memory-demo
spec:
  containers:
  - name: memory-demo
    image: stress
    args:
    - --vm
    - "1"
    - --vm-bytes
    - 200M
    resources:
      requests:
        memory: 50Mi
      limits:
        memory: 100Mi

The pod ends up in the OOMKilled state, and the logs show the worker being refused and killed:

[root@server1 limit]# kubectl  apply  -f mem.yml 
pod/memory-demo created
[root@server1 limit]# kubectl  get pod
NAME          READY   STATUS      RESTARTS   AGE
memory-demo   0/1     OOMKilled   1          4s
[root@server1 limit]# kubectl  logs memory-demo 
stress: FAIL: [1] (416) <-- worker 6 got signal 9
stress: WARN: [1] (418) now reaping child worker processes
stress: info: [1] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
stress: dbug: [1] using backoff sleep of 3000us
stress: dbug: [1] --> hogvm worker 1 [6] forked
stress: FAIL: [1] (422) kill error: No such process
stress: FAIL: [1] (452) failed run completed in 0s
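One hedged fix, assuming the 200M stress allocation is the intended workload, is to raise the limit above what the worker actually touches (sketch only; image and sizes follow the example above):

```
apiVersion: v1
kind: Pod
metadata:
  name: memory-demo
spec:
  containers:
  - name: memory-demo
    image: stress
    args: ["--vm", "1", "--vm-bytes", "200M"]
    resources:
      requests:
        memory: 50Mi
      limits:
        memory: 300Mi   # above the 200M working set, so the OOM killer is not triggered
```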

III. CPU limits

  • Scheduling fails when the requested CPU exceeds what any node in the cluster can provide.
  • A container with high CPU usage is throttled, not killed.

The example requests 4 CPUs with a limit of 10, and runs a workload that uses 2 CPUs:

[root@server1 limit]# cat cpu.yml 
apiVersion: v1
kind: Pod
metadata:
  name: cpu-demo
spec:
  containers:
  - name: cpu-demo
    image: stress
    resources:
      limits:
        cpu: "10"
      requests:
        cpu: "4"
    args:
    - -c
    - "2"

The pod stays in the Pending state; deployment fails:

[root@server1 limit]# kubectl  apply  -f cpu.yml 
pod/cpu-demo created
[root@server1 limit]# kubectl  get pod 
NAME       READY   STATUS    RESTARTS   AGE
cpu-demo   0/1     Pending   0          10s
[root@server1 limit]# kubectl  describe  pod cpu-demo 
Name:         cpu-demo
Namespace:    default
Priority:     0
Node:         <none>
Labels:       <none>
Annotations:  <none>
Status:       Pending
IP:           
IPs:          <none>
Containers:
  cpu-demo:
    Image:      stress
    Port:       <none>
    Host Port:  <none>
    Args:
      -c
      2
    Limits:
      cpu:  10
    Requests:
      cpu:        4
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bjc5h (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  kube-api-access-bjc5h:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  16s   default-scheduler  0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.
  Warning  FailedScheduling  15s   default-scheduler  0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.
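A sketch of a spec that should schedule on such a cluster, assuming each worker node has at least 1 allocatable CPU (the values here are illustrative, not from the original run):

```
apiVersion: v1
kind: Pod
metadata:
  name: cpu-demo
spec:
  containers:
  - name: cpu-demo
    image: stress
    args: ["-c", "2"]
    resources:
      requests:
        cpu: 500m   # small enough to fit a typical worker node
      limits:
        cpu: "1"    # the two stress workers are throttled to 1 CPU in total
```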

IV. Namespace limits

1. Setting resource limits for a namespace

  • The minimum and maximum memory constraints that a LimitRange imposes on a namespace are only applied when Pods are created or updated. Changing the LimitRange has no effect on Pods created earlier.
apiVersion: v1
kind: LimitRange
metadata:
  name: limitrange-memory
spec:
  limits:
  - default:
      cpu: 0.5
      memory: 512Mi
    defaultRequest:
      cpu: 0.1
      memory: 256Mi
    max:
      cpu: 1
      memory: 1Gi
    min:
      cpu: 0.1
      memory: 100Mi
    type: Container

View the limits:

[root@server1 limit]# kubectl  apply  -f name.yml 
limitrange/limitrange-memory created
[root@server1 limit]# kubectl  describe  limitranges 
Name:       limitrange-memory
Namespace:  default
Type        Resource  Min    Max  Default Request  Default Limit  Max Limit/Request Ratio
----        --------  ---    ---  ---------------  -------------  -----------------------
Container   memory    100Mi  1Gi  256Mi            512Mi          -
Container   cpu       100m   1    100m             500m           -
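To see the defaults being injected, one can create a pod that declares no resources section at all; with the LimitRange above in place, the admission controller fills in the defaultRequest and default values. A minimal sketch (pod name and image are illustrative):

```
# No resources section: with the LimitRange above, this container should
# receive requests of cpu 100m / memory 256Mi and limits of
# cpu 500m / memory 512Mi on admission.
apiVersion: v1
kind: Pod
metadata:
  name: defaults-demo
spec:
  containers:
  - name: defaults-demo
    image: nginx
```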

2. Setting a resource quota for a namespace

The ResourceQuota object created below adds the following constraints to the default namespace:
Every container must set a memory request, a memory limit, a CPU request, and a CPU limit.

  • The total memory requested by all containers must not exceed 1 GiB.
  • The total memory limit across all containers must not exceed 2 GiB.
  • The total CPU requested by all containers must not exceed 1 CPU.
  • The total CPU limit across all containers must not exceed 2 CPUs.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: mem-cpu-demo
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi

First create a pod that uses 500M of memory; it is created successfully. Then check the quota usage:

[root@server1 limit]# kubectl  apply  -f mem.yml 
pod/memory-demo created
[root@server1 limit]# kubectl  get pod
NAME          READY   STATUS    RESTARTS   AGE
memory-demo   1/1     Running   0          4s
myapp         1/1     Running   0          5h21m
[root@server1 limit]# kubectl  describe  resourcequotas 
Name:            mem-cpu-demo
Namespace:       default
Resource         Used    Hard
--------         ----    ----
limits.cpu       500m    2
limits.memory    1000Mi  2Gi
requests.cpu     100m    1
requests.memory  500Mi   1Gi

Deploying another pod that uses 600M of memory fails; the error shows that the quota would be exceeded:

[root@server1 limit]# vim name.yml 
[root@server1 limit]# vim mem.yml 
[root@server1 limit]# kubectl  apply  -f mem.yml 
Error from server (Forbidden): error when creating "mem.yml": pods "memory-demo-2" is forbidden: exceeded quota: mem-cpu-demo, requested: requests.memory=600Mi, used: requests.memory=500Mi, limited: requests.memory=1Gi

3. Configuring a Pod quota for a namespace

Set a Pod quota to cap the number of Pods that can run in the namespace; here the limit is 2 pods:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: pod-demo
spec:
  hard:
    pods: "2"

View the quota:

[root@server1 limit]# kubectl  describe  resourcequotas 
Name:            mem-cpu-demo
Namespace:       default
Resource         Used  Hard
--------         ----  ----
limits.cpu       0     2
limits.memory    0     2Gi
requests.cpu     0     1
requests.memory  0     1Gi


Name:       pod-demo
Namespace:  default
Resource    Used  Hard
--------    ----  ----
pods        1     2
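With one pod already counted against the quota, one more pod still fits, but a third create request should be rejected by the quota admission check. A hypothetical extra pod (name and image are illustrative):

```
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo-3   # once two pods exist, this create exceeds pods: "2"
spec:
  containers:
  - name: pod-demo-3
    image: nginx
```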