Container Resource Requests, Resource Limits, and Heapster

For a container, resources are controlled along two dimensions:

  • requests: the demand — the guaranteed minimum
  • limits: the cap — a hard upper bound

requests means a node must have at least that much uncommitted resource for the pod to be scheduled onto it; a node's committed resources are the sum of the requests of all containers placed there. limits is the hard maximum a container may consume. Normally requests is less than or equal to limits.
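
The requests accounting described above can be sketched as follows. This is an illustrative toy, not real kube-scheduler code; the node capacity and pod numbers are invented:

```python
# Minimal sketch of requests-based scheduling: a pod fits on a node only if
# the node's allocatable resources, minus the sum of requests already placed
# there, can still cover the pod's own requests. CPU is in millicores,
# memory in MiB. (Illustrative only; names and numbers are made up.)
def fits(node_allocatable, placed_requests, pod_requests):
    for resource, needed in pod_requests.items():
        used = sum(r.get(resource, 0) for r in placed_requests)
        if used + needed > node_allocatable.get(resource, 0):
            return False
    return True

node = {"cpu": 2000, "memory": 4096}        # 2 cores, 4 GiB allocatable
placed = [{"cpu": 1500, "memory": 1024}]    # requests already committed
print(fits(node, placed, {"cpu": 200, "memory": 128}))  # True
print(fits(node, placed, {"cpu": 600, "memory": 128}))  # False: cpu would exceed 2000m
```

Note that the check uses requests, not actual usage: a node can be "full" by this accounting even while its real CPU sits idle.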

Resource units

CPU:

  • 1 logical CPU = 1000 millicores (so `500m` is half a core)

Memory:

  • binary suffixes: Ki, Mi, Gi, Ti, Pi, Ei
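
The unit conventions above can be made concrete with a small converter. This is a simplified subset of the real Kubernetes quantity grammar (it ignores decimal suffixes like M/G and exponent forms), written only to illustrate the arithmetic:

```python
# Simplified converters for Kubernetes resource quantities (illustrative;
# the real quantity grammar also accepts decimal suffixes and exponents).
def parse_cpu(q):
    """Return CPU in millicores: "1" -> 1000, "500m" -> 500."""
    if q.endswith("m"):
        return int(q[:-1])
    return int(float(q) * 1000)

def parse_memory(q):
    """Return memory in bytes for binary suffixes: "128Mi" -> 134217728."""
    suffixes = {"Ki": 1, "Mi": 2, "Gi": 3, "Ti": 4, "Pi": 5, "Ei": 6}
    for suffix, power in suffixes.items():
        if q.endswith(suffix):
            return int(q[:-2]) * 1024 ** power
    return int(q)  # bare number: plain bytes

print(parse_cpu("500m"))      # 500
print(parse_cpu("0.5"))       # 500
print(parse_memory("128Mi"))  # 134217728
```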

Defining resource requests and limits

requests expresses the minimum resources a container needs to run; limits caps the resources the container may consume. They are two independent dimensions of control.

[root@master-0 ~]# kubectl explain pod.spec.containers.resources
KIND:     Pod
VERSION:  v1

RESOURCE: resources <Object>

DESCRIPTION:
     Compute Resources required by this container. Cannot be updated. More info:
     https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/

     ResourceRequirements describes the compute resource requirements.

FIELDS:
   limits <map[string]string>
     Limits describes the maximum amount of compute resources allowed. More
     info:
     https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/

   requests <map[string]string>
     Requests describes the minimum amount of compute resources required. If
     Requests is omitted for a container, it defaults to Limits if that is
     explicitly specified, otherwise to an implementation-defined value. More
     info:
     https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/

Each container in a Pod can specify one or more of the following values:

spec.containers[].resources.limits.cpu
spec.containers[].resources.limits.memory
spec.containers[].resources.limits.hugepages-<size>
spec.containers[].resources.requests.cpu
spec.containers[].resources.requests.memory
spec.containers[].resources.requests.hugepages-<size>

Testing the effect with a stress container

[root@master-0 ~]# cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
spec:
  containers:
  - name: myapp
    image: ikubernetes/stress-ng
    command: ["/usr/bin/stress-ng", "-m", "1", "-c", "1", "--metrics-brief"]
    resources:
      requests:
        cpu: "200m"
        memory: "128Mi"
      limits:
        cpu: "500m"
        memory: "256Mi"
[root@master-0 ~]# kubectl apply -f pod.yaml
pod/pod-demo created
  • Inside the container, the resources you see are those of the whole node. Programs that size themselves as a percentage of visible resources at startup (some Java runtimes, for example) can therefore grab far more than the share you intended.
  • The Resource figures shown by kubectl describe nodes are the amounts already committed (allocated requests), not live usage.
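
The first point above has a practical workaround: even though /proc reflects the host, the container's real CPU cap is visible in its cgroup files (on cgroup v1, cpu.cfs_quota_us and cpu.cfs_period_us). A hypothetical helper for the conversion, with the quota/period values the kubelet would write for a `cpu: "500m"` limit:

```python
# Compute a container's effective CPU limit from its cgroup v1 CFS values.
# The kubelet translates a limit of cpu: "500m" into quota=50000, period=100000
# (microseconds); quota == -1 means unlimited. Pure function so it can be fed
# values read from /sys/fs/cgroup/cpu/ inside the container.
def effective_millicores(quota_us, period_us):
    """Return the CPU limit in millicores, or None if unlimited."""
    if quota_us < 0:
        return None
    return quota_us * 1000 // period_us

print(effective_millicores(50000, 100000))  # 500
print(effective_millicores(-1, 100000))     # None (no limit set)
```

Sizing worker pools from this value, rather than from the node's CPU count, avoids the mismatch described above.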

QoS Class

From the node's point of view, pods run at one of three priority levels:

  • Guaranteed (highest priority): every container sets both CPU and memory requests and limits, with cpu.limits = cpu.requests and memory.limits = memory.requests
  • Burstable (medium priority): at least one container sets a requests value for CPU or memory
  • BestEffort (lowest priority): no container sets any requests or limits

When memory runs short, BestEffort containers are killed first to free resources and keep the other two classes running normally. If there are no BestEffort pods, Burstable pods are killed next; since containers from many pods may fall into the Burstable class, the selection rule is roughly: terminate first the containers that are consuming the largest proportion of their requests.
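
The three classification rules above can be sketched as a small function. This is an illustrative simplification, not kubelet code (the real rules also handle requests defaulting to limits and per-resource mixes):

```python
# Sketch of the QoS-class rules: Guaranteed if every container pins
# cpu and memory with requests == limits; BestEffort if nothing is set
# anywhere; Burstable otherwise. Containers are dicts like
# {"requests": {...}, "limits": {...}}. (Simplified illustration only.)
def qos_class(containers):
    if not any(c.get("requests") or c.get("limits") for c in containers):
        return "BestEffort"

    def guaranteed(c):
        req, lim = c.get("requests", {}), c.get("limits", {})
        return all(
            res in req and res in lim and req[res] == lim[res]
            for res in ("cpu", "memory")
        )

    if all(guaranteed(c) for c in containers):
        return "Guaranteed"
    return "Burstable"

print(qos_class([{"requests": {"cpu": "100m", "memory": "128Mi"},
                  "limits":   {"cpu": "100m", "memory": "128Mi"}}]))  # Guaranteed
print(qos_class([{"requests": {"cpu": "100m"}}]))                     # Burstable
print(qos_class([{}]))                                                # BestEffort
```

You can check the class Kubernetes actually assigned with `kubectl describe pod`, which reports it in the `QoS Class` field.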

Heapster

Heapster is an add-on that collects and stores resource-usage data for the nodes and pods of a Kubernetes cluster, and it is what makes the kubectl top command work. It was the cluster's default metrics collector, but it is really only an aggregator: the agent that actually gathers the data is cAdvisor, which reports per-node and per-pod resource usage. In earlier versions cAdvisor could be queried directly on each node, listening on port 4194, but newer Kubernetes releases merged cAdvisor into the kubelet and closed that port; cAdvisor now pushes its data to Heapster. Heapster keeps the data only in memory for kubectl top to consume; for persistent storage you need a time-series database such as InfluxDB, with Grafana for visualization.

(Figure: Heapster/cAdvisor metrics pipeline — https://raw.githubusercontent.com/ma823956028/image/master/picgo/20201005202801.png)

Note that starting with Kubernetes 1.11 Heapster was gradually deprecated, and from 1.12 it was retired entirely.

Deployment

  1. Deploy the InfluxDB time-series database. Note that InfluxDB should be backed by persistent storage; this walkthrough does not show that and uses emptyDir instead.

    [root@master-0 ~]# cat inf.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: monitoring-influxdb
      namespace: kube-system
    spec:
      replicas: 1
      selector:
        matchLabels:
          task: monitoring
          k8s-app: influxdb
      template:
        metadata:
          labels:
            task: monitoring
            k8s-app: influxdb
        spec:
          containers:
          - name: influxdb
            image: k8s.gcr.io/heapster-influxdb-amd64:v1.5.2
            volumeMounts:
            - mountPath: /data
              name: influxdb-storage
          volumes:
          - name: influxdb-storage
            emptyDir: {}
    ---
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        task: monitoring
        kubernetes.io/cluster-service: 'true'
        kubernetes.io/name: monitoring-influxdb
      name: monitoring-influxdb
      namespace: kube-system
    spec:
      ports:
      - port: 8086
        targetPort: 8086
      selector:
        k8s-app: influxdb
    [root@master-0 ~]# kubectl apply -f inf.yaml
    deployment.apps/monitoring-influxdb created
    service/monitoring-influxdb created
    
  2. Set up RBAC for Heapster

    [root@master-0 ~]# cat rbac.yaml
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1beta1
    metadata:
      name: heapster
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:heapster
    subjects:
    - kind: ServiceAccount
      name: heapster
      namespace: kube-system
    [root@master-0 ~]# kubectl apply -f rbac.yaml
    clusterrolebinding.rbac.authorization.k8s.io/heapster created
    
  3. Deploy Heapster

    [root@master-0 ~]# cat heapster.yaml
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: heapster
      namespace: kube-system
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: heapster
      namespace: kube-system
    spec:
      replicas: 1
      selector:
        matchLabels:
          task: monitoring
          k8s-app: heapster
      template:
        metadata:
          labels:
            task: monitoring
            k8s-app: heapster
        spec:
          serviceAccountName: heapster
          containers:
          - name: heapster
            image: k8s.gcr.io/heapster-amd64:v1.5.4
            imagePullPolicy: IfNotPresent
            command:
            - /heapster
            - --source=kubernetes:https://kubernetes.default
            - --sink=influxdb:http://monitoring-influxdb.kube-system.svc:8086
            - --poll_duration=2m
            - --stats_resolution=1m
            volumeMounts:
            - mountPath: /etc/ssl/certs
              name: ssl-certs
              readOnly: true
          volumes:
          - name: ssl-certs
            hostPath:
              path: /etc/ssl/certs
    ---
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        task: monitoring
        kubernetes.io/cluster-service: 'true'
        kubernetes.io/name: Heapster
      name: heapster
      namespace: kube-system
    spec:
      ports:
      - port: 80
        targetPort: 8082
      selector:
        k8s-app: heapster
    [root@master-0 ~]# kubectl apply -f heapster.yaml
    serviceaccount/heapster created
    deployment.apps/heapster created
    service/heapster created
    
  4. Define Grafana

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: monitoring-grafana
      namespace: kube-system
    spec:
      replicas: 1
      selector:
        matchLabels:
          task: monitoring
          k8s-app: grafana
      template:
        metadata:
          labels:
            task: monitoring
            k8s-app: grafana
        spec:
          containers:
          - name: grafana
            image: k8s.gcr.io/heapster-grafana-amd64:v5.0.4
            ports:
            - containerPort: 3000
              protocol: TCP
            volumeMounts:
        - mountPath: /etc/ssl/certs           # change this if real production SSL certificates are needed
              name: ca-certificates
              readOnly: true
            - mountPath: /var
              name: grafana-storage
            env:
            - name: INFLUXDB_HOST
              value: monitoring-influxdb
            - name: GF_SERVER_HTTP_PORT
              value: "3000"
            - name: GF_AUTH_BASIC_ENABLED
              value: "false"
            - name: GF_AUTH_ANONYMOUS_ENABLED
              value: "true"
            - name: GF_AUTH_ANONYMOUS_ORG_ROLE
              value: Admin
            - name: GF_SERVER_ROOT_URL
              value: /
          volumes:
          - name: ca-certificates
            hostPath:
              path: /etc/ssl/certs
          - name: grafana-storage
            emptyDir: {}
    ---
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        kubernetes.io/cluster-service: 'true'
        kubernetes.io/name: monitoring-grafana
      name: monitoring-grafana
      namespace: kube-system
    spec:
      ports:
      - port: 80
        targetPort: 3000
      selector:
        k8s-app: grafana
      type: NodePort                            # expose the service for external access
    [root@master-0 ~]# kubectl apply -f grafana.yaml
    deployment.apps/monitoring-grafana created
    service/monitoring-grafana created
    