Kubernetes 1.20.5 Lab Notes – Resource Requests and Limits
1.1 Memory Limits
1. Create the Pod:
File resources-memory.yaml
apiVersion: v1
kind: Pod
metadata:
  name: resources-memory
spec:
  containers:
  - name: resources-memory
    image: polinux/stress
    resources:
      requests:
        memory: 50Mi
      limits:
        memory: 100Mi
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "250M", "--vm-hang", "1"]
kubectl apply -f resources-memory.yaml
2. Check the Pod status:
kubectl get pod -o wide
The Pod's memory usage exceeds its 100Mi limit, so it is OOMKilled and automatically restarted.
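The arithmetic behind the kill can be checked directly. Below is a minimal sketch, not Kubernetes code; parse_quantity is a hypothetical helper, and it assumes stress interprets its "M" suffix as MiB (1024-based), as its man page states:

```python
# Sketch: convert Kubernetes-style resource quantities to bytes and check
# whether the stress worker's allocation can exceed the container limit.
# Note: Kubernetes binary suffixes (Ki/Mi/Gi) are powers of 1024, while
# decimal suffixes (K/M/G) are powers of 1000.
UNITS = {"Ki": 2**10, "Mi": 2**20, "Gi": 2**30,
         "K": 10**3, "M": 10**6, "G": 10**9}

def parse_quantity(q: str) -> int:
    """Parse a quantity like '100Mi' into bytes."""
    for suffix, factor in UNITS.items():
        if q.endswith(suffix):
            return int(q[:-len(suffix)]) * factor
    return int(q)

limit = parse_quantity("100Mi")   # container memory limit in bytes
allocation = 250 * 2**20          # stress --vm-bytes 250M (assumed MiB)
print(allocation > limit)         # True: the allocation must trip the limit
```

Since 250 MiB is well above the 100Mi limit, the OOM kill observed above is expected regardless of how the suffix is interpreted.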
3. Delete the Pod:
kubectl delete -f resources-memory.yaml
1.2 CPU Limits
1. Create the Pod:
File resources-cpu.yaml
apiVersion: v1
kind: Pod
metadata:
  name: resources-cpu
spec:
  containers:
  - name: resources-cpu
    image: polinux/stress
    resources:
      requests:
        cpu: 0.5
      limits:
        cpu: 1
    command: ["stress"]
    args: ["--cpu", "2"]
kubectl apply -f resources-cpu.yaml
2. Check the Pod status:
kubectl get pod -o wide
3. Check the Pod's resource usage:
kubectl top pod resources-cpu
The Pod's CPU usage is throttled to at most 1000m (1 core), even though stress starts 2 CPU workers.
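This throttling comes from the Linux CFS bandwidth controller: the CPU limit is translated into a cgroup cfs_quota_us per cfs_period_us (100 ms by default in Kubernetes). A rough sketch of that conversion (milli_cpu_to_quota is a hypothetical helper, not kubelet code):

```python
# Sketch of how a CPU limit maps to CFS bandwidth settings.
# Kubernetes uses a default CFS period of 100000 microseconds (100 ms).
CFS_PERIOD_US = 100_000

def milli_cpu_to_quota(milli_cpu: int, period_us: int = CFS_PERIOD_US) -> int:
    """Convert millicores (1000m = 1 core) to a cfs_quota_us value."""
    return milli_cpu * period_us // 1000

print(milli_cpu_to_quota(1000))  # 100000: one full period per period = 1 core
print(milli_cpu_to_quota(500))   # 50000: half a core
```

With a limit of 1 core, the two stress workers together get only one period's worth of CPU time per period, which is why kubectl top reports at most ~1000m.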
4. Delete the Pod:
kubectl delete -f resources-cpu.yaml
1.3 Quality of Service (QoS)
Kubernetes provides quality-of-service management: based on each container's resource configuration, every Pod is assigned one of three QoS classes: Guaranteed, Burstable, or BestEffort. When resources are tight, this class drives scheduling and eviction decisions. The three classes are:
(1) Guaranteed: every container in the Pod sets both limits and requests, and they are equal (if a limit is set without a request, the request automatically defaults to the limit value).
(2) Burstable: the Pod does not qualify as Guaranteed, but at least one container sets a request or limit; for example, some container has no limit, or its limit and request differ. Pods of this class can overcommit a node's resources after scheduling.
(3) BestEffort: no container in the Pod sets any request or limit.
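The classification rules above can be expressed as a small function. This is a simplified sketch (qos_class is a hypothetical helper; the real API server also handles per-resource request defaulting and init containers):

```python
# Sketch of QoS classification. Each container is a dict with optional
# "requests" and "limits" dicts keyed by "cpu" / "memory".
def qos_class(containers) -> str:
    requests = [c.get("requests", {}) for c in containers]
    limits = [c.get("limits", {}) for c in containers]
    # BestEffort: no container sets any request or limit.
    if not any(requests) and not any(limits):
        return "BestEffort"
    # Guaranteed: every container has cpu and memory limits, and requests
    # (defaulted to limits when absent) equal to those limits.
    guaranteed = all(
        c.get("limits")
        and set(c["limits"]) == {"cpu", "memory"}
        and c.get("requests", c["limits"]) == c["limits"]
        for c in containers
    )
    return "Guaranteed" if guaranteed else "Burstable"

print(qos_class([{}]))                                   # BestEffort
print(qos_class([{"requests": {"cpu": "1"}}]))           # Burstable
print(qos_class([{"requests": {"cpu": "1", "memory": "200Mi"},
                  "limits":   {"cpu": "1", "memory": "200Mi"}}]))  # Guaranteed
```

The three Pods created in step 3 below land in exactly these three classes, which you can confirm with kubectl get pod -o jsonpath='{.status.qosClass}'.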
1. View node resource information:
kubectl describe node worker1 | grep -A 6 Capacity
kubectl describe node worker2 | grep -A 6 Capacity
2. OOM score calculation
Kubernetes sets the kernel OOM score adjustment parameter oom_score_adj according to the Pod's QoS class. When memory runs out, the kernel's oom_killer computes an oom_score from each process's memory usage, combines it with oom_score_adj, and kills the process with the highest resulting score first.
QoS        | oom_score_adj
-----------|------------------------------------------------------------------------------
Guaranteed | -997
BestEffort | 1000
Burstable  | min(max(2, 1000 - (1000 * memoryRequestBytes) / machineMemoryCapacityBytes), 999)
When a node runs out of memory, Guaranteed Pods are killed last and BestEffort Pods first, with Burstable Pods in between. Per the formula above, a Burstable Pod's oom_score_adj ranges from 2 to 999: the larger its memory request, the lower its oom_score_adj and the stronger its OOM protection.
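The Burstable formula from the table is simple enough to evaluate by hand. A direct transcription (the node size of 4Gi is an illustrative assumption):

```python
# The Burstable oom_score_adj formula, as executable arithmetic.
def burstable_oom_score_adj(memory_request_bytes: int,
                            machine_memory_capacity_bytes: int) -> int:
    raw = 1000 - (1000 * memory_request_bytes) // machine_memory_capacity_bytes
    return min(max(2, raw), 999)

# Example: a 200Mi request on an assumed node with 4Gi of memory.
print(burstable_oom_score_adj(200 * 2**20, 4 * 2**30))  # 952
```

Note the clamping: a Pod requesting nearly all node memory still gets at least 2 (so it never outranks Guaranteed's -997), and a tiny request is capped at 999 (so it stays below BestEffort's 1000).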
3. Create the Pods:
File resources-qos.yaml
apiVersion: v1
kind: Pod
metadata:
  name: resources-qos-besteffort
spec:
  containers:
  - name: resources-qos-besteffort
    image: polinux/stress
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "50M", "--vm-hang", "1"]
---
apiVersion: v1
kind: Pod
metadata:
  name: resources-qos-burstable
spec:
  containers:
  - name: resources-qos-burstable
    image: polinux/stress
    resources:
      requests:
        cpu: 1
        memory: 200Mi
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "50M", "--vm-hang", "1"]
---
apiVersion: v1
kind: Pod
metadata:
  name: resources-qos-guaranteed
spec:
  containers:
  - name: resources-qos-guaranteed
    image: polinux/stress
    resources:
      requests:
        cpu: 1
        memory: 200Mi
      limits:
        cpu: 1
        memory: 200Mi
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "50M", "--vm-hang", "1"]
kubectl apply -f resources-qos.yaml
4. Check the Pod status:
kubectl get pod -o wide
5. Check each Pod's oom_score_adj:
kubectl exec resources-qos-besteffort -- cat /proc/1/oom_score_adj
kubectl exec resources-qos-burstable -- cat /proc/1/oom_score_adj
kubectl exec resources-qos-guaranteed -- cat /proc/1/oom_score_adj
6. Delete the Pods:
kubectl delete -f resources-qos.yaml
When a node runs short of memory or CPU and begins evicting Pods, QoS also determines eviction priority, in the following order:
1. kubelet first evicts BestEffort Pods and Burstable Pods whose actual usage exceeds their requests.
2. Next it evicts Burstable Pods whose actual usage is below their requests.
3. Guaranteed Pods are evicted last; kubelet ensures they are not evicted because of other Pods' resource consumption.
4. Within the same QoS class, kubelet ranks eviction candidates by Pod Priority.
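The ordering above can be encoded as a sort key. This is a deliberately simplified sketch of only the rules listed here (eviction_key and the pod dicts are hypothetical; the real kubelet ranking also weighs how far usage exceeds the request):

```python
# Sketch: rank Pods for eviction. Smaller key = evicted earlier.
QOS_RANK = {"BestEffort": 0, "Burstable": 1, "Guaranteed": 2}

def eviction_key(pod):
    # Pods using more than they requested go first, then lower QoS,
    # then lower Priority within the same class.
    over = pod["usage"] > pod["request"]
    return (0 if over else 1, QOS_RANK[pod["qos"]], pod["priority"])

pods = [
    {"name": "guaranteed",     "qos": "Guaranteed", "usage": 100, "request": 200, "priority": 0},
    {"name": "besteffort",     "qos": "BestEffort", "usage": 100, "request": 0,   "priority": 0},
    {"name": "burstable-over", "qos": "Burstable",  "usage": 300, "request": 200, "priority": 0},
]
print([p["name"] for p in sorted(pods, key=eviction_key)])
# ['besteffort', 'burstable-over', 'guaranteed']
```

A BestEffort Pod has a request of 0, so any usage at all counts as "over its request", which is why it always sorts ahead of well-behaved Burstable and Guaranteed Pods.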
1.4 ResourceQuota
Kubernetes provides the ResourceQuota object to cap, per namespace, both the number of Kubernetes objects of each type and the total compute resources they may request.
1. Create the ResourceQuota:
File resourcequota.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: resourcequota
spec:
  hard:
    requests.cpu: 3
    requests.memory: 1Gi
    limits.cpu: 5
    limits.memory: 2Gi
    pods: 5
kubectl apply -f resourcequota.yaml
2. Check the ResourceQuota status:
kubectl get resourcequota
3. View ResourceQuota details:
kubectl describe resourcequota resourcequota
4. Delete the ResourceQuota:
kubectl delete -f resourcequota.yaml
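The check the quota enforces at admission time amounts to simple accounting: a new Pod is admitted only if current usage plus its requests stays within every "hard" value. A sketch (admits is a hypothetical helper, not API server code; quantities are plain numbers, cores and counts, for simplicity):

```python
# Sketch: ResourceQuota-style admission accounting.
def admits(hard: dict, used: dict, pod: dict) -> bool:
    """Return True if the pod fits under every hard ceiling."""
    for key, ceiling in hard.items():
        if used.get(key, 0) + pod.get(key, 0) > ceiling:
            return False
    return True

hard = {"requests.cpu": 3, "pods": 5}          # from the quota above
used = {"requests.cpu": 2.5, "pods": 4}        # namespace's current usage
print(admits(hard, used, {"requests.cpu": 0.6, "pods": 1}))  # False: cpu over
print(admits(hard, used, {"requests.cpu": 0.5, "pods": 1}))  # True: exactly at the ceiling
```

Note that with requests.cpu or requests.memory in a quota, every new Pod in the namespace must declare a request for that resource, or it is rejected outright.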
1.5 LimitRange
A LimitRange sets default resource requests and limits for containers in a namespace, as well as the allowed minimum/maximum values and the maximum limit-to-request ratio.
1. Create the LimitRange:
File limitrange.yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: limitrange
spec:
  limits:
  - default:
      memory: 512Mi
      cpu: 2
    defaultRequest:
      memory: 256Mi
      cpu: 0.5
    max:
      memory: 800Mi
      cpu: 3
    min:
      memory: 100Mi
      cpu: 0.3
    maxLimitRequestRatio:
      memory: 2
      cpu: 2
    type: Container
kubectl apply -f limitrange.yaml
2. Check the LimitRange status:
kubectl get limitrange
3. View LimitRange details:
kubectl describe limitrange limitrange
4. Delete the LimitRange:
kubectl delete -f limitrange.yaml
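What this LimitRange does at admission time for a container's memory can be sketched as: apply defaults, then validate min/max and the limit-to-request ratio. This is a simplified illustration (admit_memory is a hypothetical helper; the real admission plugin also handles CPU and the case where only a limit is given, in which the request defaults to the limit rather than to defaultRequest):

```python
# Sketch: LimitRange defaulting and validation for memory, values in MiB,
# using the numbers from the manifest above.
DEFAULT_LIMIT, DEFAULT_REQUEST = 512, 256
MIN_MEM, MAX_MEM, MAX_RATIO = 100, 800, 2

def admit_memory(request=None, limit=None):
    # Step 1: fill in defaults for anything the container omitted.
    limit = limit if limit is not None else DEFAULT_LIMIT
    request = request if request is not None else DEFAULT_REQUEST
    # Step 2: validate against min/max and maxLimitRequestRatio.
    if not (MIN_MEM <= request and limit <= MAX_MEM):
        raise ValueError("request/limit outside min/max")
    if limit / request > MAX_RATIO:
        raise ValueError("limit/request ratio exceeds maxLimitRequestRatio")
    return request, limit

print(admit_memory())          # (256, 512): defaults applied, ratio exactly 2
print(admit_memory(200, 400))  # (200, 400): explicit values, ratio 2, allowed
```

A container asking for 100Mi request with a 300Mi limit would be rejected here: the values are inside min/max, but the 3x ratio exceeds maxLimitRequestRatio of 2.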