1. Pod Resource Quotas
1.1 Overview of Resource Quota Configuration
A program running in a container inevitably consumes resources such as CPU and memory. If a container's resource usage is not constrained, it can eat up a large share of the node's resources and prevent other containers from running. For this situation, Kubernetes provides a mechanism to set CPU and memory quotas on containers. The mechanism is implemented mainly through the resources option, which has two sub-options:
- limits: caps the maximum resources a running container may use; a container that exceeds its memory limit is terminated and restarted (CPU usage beyond the limit is throttled rather than killed)
- requests: declares the minimum resources the container needs; if no node in the environment can provide them, the container cannot be scheduled and will not start
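For reference, CPU values can also be written in millicores ("500m" means half a core), and memory accepts both decimal (M, G) and binary (Mi, Gi) suffixes. The fragment below is a minimal sketch of these units; the values are purely illustrative:
resources:
  requests:
    cpu: "250m"      # 0.25 of a core
    memory: "64Mi"   # 64 mebibytes (2^20 bytes); "64M" would mean 64 * 10^6 bytes
  limits:
    cpu: "500m"      # usage above this is throttled
    memory: "128Mi"  # exceeding this gets the container killed and restarted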
1.2 Configuring Resource Quotas
Edit the pod_resources.yaml file as follows, setting upper and lower resource bounds for the nginx container:
apiVersion: v1
kind: Namespace
metadata:
  name: dev
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-resources
  namespace: dev
  labels:
    user: redrose2100
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
    resources:
      requests:
        cpu: "1"
        memory: "100M"
      limits:
        cpu: "2"
        memory: "512M"
Create the pod with the following command:
[root@master pod]# kubectl apply -f pod_resources.yaml
namespace/dev created
pod/pod-resources created
[root@master pod]#
Query the pod with the following command. With this quota configuration the environment satisfies the requests, so the pod starts normally:
[root@master pod]# kubectl get pod -n dev
NAME            READY   STATUS    RESTARTS   AGE
pod-resources   1/1     Running   0          7s
[root@master pod]#
Delete the resources with the following command:
[root@master pod]# kubectl delete -f pod_resources.yaml
namespace "dev" deleted
pod "pod-resources" deleted
[root@master pod]#
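Before choosing request values, it can help to check how much CPU and memory a node can actually offer to pods. Assuming a worker node named node1 (the node name here is an assumption; substitute one from kubectl get nodes), the Allocatable section of the describe output shows the schedulable capacity:
[root@master pod]# kubectl describe node node1 | grep -A 6 Allocatable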
1.3 Testing an Over-Quota Configuration
As an experiment, change the CPU request to 10 and the limit to 20, then try again. The VM used here has only 4 cores, so a request of 10 clearly cannot be satisfied:
apiVersion: v1
kind: Namespace
metadata:
  name: dev
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-resources
  namespace: dev
  labels:
    user: redrose2100
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
    resources:
      requests:
        cpu: "10"
        memory: "100M"
      limits:
        cpu: "20"
        memory: "512M"
Then create it with the following command:
[root@master pod]# kubectl apply -f pod_resources.yaml
namespace/dev created
pod/pod-resources created
[root@master pod]#
After recreating, the following commands show that the pod stays Pending because there is not enough CPU available:
[root@master pod]# kubectl get pod -n dev
NAME            READY   STATUS    RESTARTS   AGE
pod-resources   0/1     Pending   0          17m
[root@master pod]# kubectl describe pod pod-resources -n dev
Name:         pod-resources
Namespace:    dev
Priority:     0
Node:         <none>
Labels:       user=redrose2100
Annotations:  <none>
Status:       Pending
IP:
IPs:          <none>
Containers:
  nginx:
    Image:      nginx:1.17.1
    Port:       <none>
    Host Port:  <none>
    Limits:
      cpu:     20
      memory:  512M
    Requests:
      cpu:        10
      memory:     100M
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8kvvb (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  kube-api-access-8kvvb:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  38s (x19 over 18m)  default-scheduler  0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.
[root@master pod]#
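Incidentally, the describe output above shows QoS Class: Burstable: Kubernetes assigns that class when requests are set lower than limits. If every container's requests exactly equal its limits, the pod is classed as Guaranteed instead and is evicted last under node pressure. A minimal sketch with illustrative values:
resources:
  requests:
    cpu: "1"
    memory: "512M"
  limits:
    cpu: "1"       # equal to the request
    memory: "512M" # equal to the request -> QoS class Guaranteed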
Delete the resources with the following command:
[root@master pod]# kubectl delete -f pod_resources.yaml
namespace "dev" deleted
pod "pod-resources" deleted
[root@master pod]#