11.1 Why Production Must Use ResourceQuota
Create a namespace for each project team, then apply resource limits at the namespace level.
11.2 ResourceQuota Configuration Explained
PS: the explanation in the figure above is wrong; the correction is as follows:
requests.cpu: 0.5 limits the total CPU requests of everything in this namespace; it is not a per-pod limit. To constrain individual pods, configure resources inside the pod spec, or use a LimitRange to give pods default values.
11.3 Getting Started with ResourceQuota
[root@k8s-master01 10st]# cat ResourcesQuota.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: resource-test
  labels:
    app: resourcequota
spec:
  hard:
    pods: 2
    # requests.cpu: 0.5
    # requests.memory: 512Mi
    # limits.cpu: 5
    # limits.memory: 16Gi
    configmaps: 2
    # requests.storage: 40Gi
    # persistentvolumeclaims: 20
    # replicationcontrollers: 20
    # secrets: 20
    # services: 50
    # services.loadbalancers: "2"
    # services.nodeports: "10"
[root@k8s-master01 10st]#
Modify the quota to configmaps: 3 and apply it again.
Check ResourceQuota usage:
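Usage can be read with the standard kubectl verbs (the namespace rq-test is assumed here, matching the kubectl create commands later in these notes):

```shell
# List quotas in the namespace with their used/hard totals
kubectl get resourcequota -n rq-test

# Detailed per-resource usage (pods, configmaps, ...)
kubectl describe resourcequota resource-test -n rq-test
```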
Test pod creation against the quota.
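A quick way to exercise the pods: 2 limit is a Deployment with more replicas than the quota allows; the manifest below is a minimal sketch, and the name and image are arbitrary:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: quota-test
spec:
  replicas: 3            # one more than the quota's pods: 2
  selector:
    matchLabels:
      app: quota-test
  template:
    metadata:
      labels:
        app: quota-test
    spec:
      containers:
      - name: nginx
        image: nginx
```

After creating it in rq-test, only two pods start; the ReplicaSet's events show the third being rejected by the quota.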
11.4 LimitRange: a Production Must-Have
ResourceQuota limits resources for the namespace as a whole. Concretely: if the quota limits only CPU and memory (say 16 CPU / 64Gi) but not the pod count, and the pods themselves configure no resources (so their CPU and memory requests default to 0), the totals counted against the quota stay at 0 and pods can be created without bound. This is where LimitRange comes in: it can set per-pod defaults, maximum values, and so on.
requests and limits are resource constraints applied to individual pods;
1 CPU core = 1000m, so 200m = 0.2 cores (0.2c).
11.5 Configuring Default requests and limits
[root@k8s-master01 10st]# cat limitrange.yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-mem-limit-range
spec:
  limits:
  - default:
      cpu: 1
      memory: 512Mi
    defaultRequest:
      cpu: 0.5
      memory: 256Mi
    type: Container
[root@k8s-master01 10st]#
kubectl create -f limitrange.yaml -n rq-test
Test that the defaults are injected:
Delete one of the Deployment's pods; once it has been recreated, check the new pod's YAML to see whether its resources are set to the values from the LimitRange.
kubectl get po test-1-595f7df994-kh8sl -n rq-test -oyaml
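If the defaults were injected, the recreated pod's spec should contain a resources block along these lines (values taken from the limitrange.yaml above; the exact rendering of the quantities may differ slightly):

```yaml
resources:
  limits:
    cpu: "1"
    memory: 512Mi
  requests:
    cpu: 500m
    memory: 256Mi
```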
11.6 Restricting the Range of requests and limits (max and min)
requests and limits are bounded to stop someone from setting the LimitRange's defaultRequest or default (or a pod's own requests) so low that the ResourceQuota ceiling can never be reached, which would again allow unlimited pod creation. max and min keep defaultRequest and default within a reasonable range, neither too large nor too small.
Look at a pod's YAML and imagine that everything else uses the LimitRange defaults but requests.cpu is set to just 1m: that falls far short of the quota, so pods could be created almost without limit. Hence max and min.
A LimitRange acts on pods, not on Deployments;
[root@k8s-master01 10st]# cat limitrange.yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-mem-limit-range
spec:
  limits:
  - default:
      cpu: 1
      memory: 512Mi
    defaultRequest:
      cpu: 0.5
      memory: 256Mi
    max:
      cpu: "2"
      memory: 1Gi
    min:
      cpu: "10m"
      memory: 128Mi
    type: Container
[root@k8s-master01 10st]#
kubectl replace -f limitrange.yaml -n rq-test
Modify the pod resources in the Deployment as follows and test whether max and min take effect.
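For example, setting the container's limits above the LimitRange max should make new pods fail admission (the cpu: "3" value here is just an illustration):

```yaml
resources:
  limits:
    cpu: "3"       # exceeds the LimitRange max of 2
    memory: 512Mi
  requests:
    cpu: 500m
    memory: 256Mi
```

The Deployment itself accepts the change, but its ReplicaSet cannot create pods; describing the ReplicaSet shows a Forbidden error about the maximum cpu usage per Container.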
describe the Deployment, then inspect the pods through the newly created ReplicaSet.
The result:
One more point to note: a LimitRange acts on our pods, not on the Deployment, so the Deployment spec stays exactly as configured and is never rewritten.
11.7 Limiting Storage Size
In YAML, the keys under one "-" belong to the same list entry and are siblings, so their order can be swapped; the type field in the manifest below, for example, is just one key of a list item.
[root@k8s-master01 10st]# cat limitrange.yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-mem-limit-range
spec:
  limits:
  - default:
      cpu: 1
      memory: 512Mi
    defaultRequest:
      cpu: 0.5
      memory: 256Mi
    max:
      cpu: "2"
      memory: 1Gi
    min:
      cpu: "10m"
      memory: 128Mi
    type: Container
  - type: PersistentVolumeClaim
    max:
      storage: 2Gi
    min:
      storage: 1Gi
[root@k8s-master01 10st]#
kubectl replace -f limitrange.yaml -n rq-test
First create a PV:
[root@k8s-master01 10st]# cat pv-nfs-1217.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-new
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs-slow
  nfs:
    path: /data/k8s
    server: 192.168.1.110
[root@k8s-master01 10st]#
kubectl create -f pv-nfs-1217.yaml -n rq-test (PersistentVolumes are cluster-scoped, so the -n flag has no effect here)
Bind a PVC to the PV.
Test with storage set to 5Gi:
[root@k8s-master01 10st]# cat pvc-nfs-1217.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs-pvc-claim
spec:
  storageClassName: nfs-slow
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
[root@k8s-master01 10st]#
kubectl create -f pvc-nfs-1217.yaml -n rq-test
The LimitRange's 2Gi storage maximum takes effect and the 5Gi claim is rejected;
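The rejection shows up directly on create; the exact wording varies by Kubernetes version, but it is along these lines:

```shell
kubectl create -f pvc-nfs-1217.yaml -n rq-test
# Error from server (Forbidden): ... maximum storage usage per
# PersistentVolumeClaim is 2Gi, but request is 5Gi
```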
Change the request to 1Mi and test again (this falls below the 1Gi minimum, so it is also rejected).
11.8 QoS: Service-Quality Guarantees for Production Availability
11.9 Creating a Pod with QoS Class Guaranteed
[root@k8s-master01 10st]# cat QoS-guaranteed.yaml
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo
  namespace: qos-example
spec:
  containers:
  - name: qos-demo-ctr
    image: nginx
    resources:
      limits:
        memory: "200Mi"
        cpu: "700m"
      requests:
        memory: "200Mi"
        cpu: "700m"
[root@k8s-master01 10st]#
kubectl create -f QoS-guaranteed.yaml
If limits or requests are not fully specified and a LimitRange is configured, the missing values are filled in from the LimitRange defaults; the QoS class then becomes Burstable, because the default limit is larger than the default request (Guaranteed requires limits and requests to be equal).
11.10 QoS Classes BestEffort and Burstable
Create a pod with QoS class Burstable:
[root@k8s-master01 10st]# cat QoS-burstable.yaml
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo-2
  namespace: qos-example
spec:
  containers:
  - name: qos-demo-2-ctr
    image: nginx
    resources:
      limits:
        memory: "200Mi"
      requests:
        memory: "100Mi"
[root@k8s-master01 10st]#
kubectl create -f QoS-burstable.yaml
Create a pod with QoS class BestEffort:
[root@k8s-master01 10st]# cat QoS-besteffort.yaml
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo-3
  namespace: qos-example
spec:
  containers:
  - name: qos-demo-3-ctr
    image: nginx
[root@k8s-master01 10st]# kubectl create -f QoS-besteffort.yaml
pod/qos-demo-3 created
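The class Kubernetes assigned to each demo pod can be read from the pod status (the status.qosClass field):

```shell
kubectl get pod qos-demo qos-demo-2 qos-demo-3 -n qos-example \
  -o custom-columns=NAME:.metadata.name,QOS:.status.qosClass
```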
--------------- Source: study notes from Du Kuan's Kubernetes course on 51cto ---------------