Configuration Validation for Kubernetes: Polaris Usage Guide

Polaris is a health-check tool for Kubernetes clusters that offers three ways to evaluate and improve workload configuration: a Dashboard, a Webhook, and a CLI. This document covers installing and using Polaris, including custom check rules and the exemption mechanism, with the goal of keeping the cluster secure, efficient, and reliable.

Introduction

Best Practices for Kubernetes Workload Configuration.

— Polaris by Fairwinds

Polaris is a health-check component that analyzes deployment configuration to uncover problems in your cluster. Its goal is not just to find problems, though: it also provides guidance for avoiding them, helping keep the cluster in a healthy state. Polaris can run in three different ways:

  • Dashboard - a visual overview of the current state of your Kubernetes workloads and where they can be improved.
  • Webhook - blocks applications that do not meet the standard from being installed into the cluster.
  • CLI - checks local YAML files; can be integrated with CI/CD.

Dashboard

The Dashboard is Polaris's visualization tool. It shows an overview of the state of your Kubernetes workloads along with where they can be improved, and results can also be viewed by category, namespace, and workload.

Installation

Polaris supports three installation methods: kubectl, Helm, and a local binary. This guide uses the kubectl method.

Official installation docs: https://polaris.docs.fairwinds.com/dashboard/#installation

Step 1: Prepare the image
[root@k8s-master ~]# docker pull quay.io/fairwinds/polaris:4.0
4.0: Pulling from fairwinds/polaris
540db60ca938: Pull complete 
09c1a43ef494: Pull complete 
15a6f35230e5: Pull complete 
62a16ff79a3e: Pull complete 
6f8c08425b62: Pull complete 
Digest: sha256:3e1e28742bb56c521f58db8ef4bfd056387aad095bb46ce20d1803ddd457db3a
Status: Downloaded newer image for quay.io/fairwinds/polaris:4.0
quay.io/fairwinds/polaris:4.0

[root@k8s-master ~]# docker tag 272dc6061cf0 harbor.liboer.top/library/polaris:v4.0
[root@k8s-master ~]# docker push harbor.liboer.top/library/polaris:v4.0
Step 2: YAML manifest

Remember to change the image reference to point at your own private registry:

image: 'quay.io/fairwinds/polaris:4.0'

dashboard.yaml

---
# Source: polaris/templates/0-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: polaris
---
# Source: polaris/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: polaris
  namespace: polaris
  labels:
    app: polaris
---
# Source: polaris/templates/rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: polaris
  labels:
    app: polaris
rules:
  # required by controller-runtime code doing a cluster wide lookup
  # when it seems namespace would suffice
  - apiGroups:
      - ''
    resources:
      - 'nodes'
    verbs:
      - 'get'
      - 'list'
  - apiGroups: 
      - 'monitoring.coreos.com'
    resources: 
      - 'prometheuses'
      - 'alertmanagers'
    verbs: 
      - 'get'
      - 'list'
---
# Source: polaris/templates/rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: polaris-view
  labels:
    app: polaris
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
  - kind: ServiceAccount
    name: polaris
    namespace: polaris
---
# Source: polaris/templates/rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: polaris
  labels:
    app: polaris
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: polaris
subjects:
  - kind: ServiceAccount
    name: polaris
    namespace: polaris
---
# Source: polaris/templates/dashboard.service.yaml
apiVersion: v1
kind: Service
metadata:
  name: polaris-dashboard
  namespace: polaris
  labels:
    app: polaris
  annotations:
spec:
  type: NodePort
  ports:
  - name: http-dashboard
    port: 80
    protocol: TCP
    targetPort: 8080
    nodePort: 32765
  selector:
    app: polaris
    component: dashboard
---
# Source: polaris/templates/dashboard.deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: polaris-dashboard
  namespace: polaris
  labels:
    app: polaris
    component: dashboard
spec:
  replicas: 1
  selector:
    matchLabels:
      app: polaris
      component: dashboard
  template:
    metadata:
      labels:
        app: polaris
        component: dashboard
    spec:
      containers:
      - command:
        - polaris
        - dashboard
        - --port
        - "8080"
        - --config
        - "/etc/polaris-dashboard/custom-config.yaml"
        image: 'quay.io/fairwinds/polaris:4.0'
        imagePullPolicy: 'Always'
        name: dashboard
        ports:
        - containerPort: 8080
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 20
        readinessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 20
        resources:
          limits:
            cpu: 150m
            memory: 512Mi
          requests:
            cpu: 100m
            memory: 128Mi
        securityContext:
          allowPrivilegeEscalation: false
          privileged: false
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          capabilities:
            drop:
              - ALL
        volumeMounts:
        - name: custom-config
          mountPath: /etc/polaris-dashboard
      serviceAccountName: polaris
      nodeSelector:
      tolerations:
      volumes:
      - name: custom-config
        hostPath:
           path: /opt/polaris-cli
Step 3: Apply the manifest
[root@k8s-master ~]# kubectl apply -f http://mirrors.liboer.top/polaris/dashboard.yaml
namespace/polaris created
serviceaccount/polaris created
clusterrole.rbac.authorization.k8s.io/polaris created
clusterrolebinding.rbac.authorization.k8s.io/polaris-view created
clusterrolebinding.rbac.authorization.k8s.io/polaris created
service/polaris-dashboard created
deployment.apps/polaris-dashboard created
[root@k8s-master ~]# kubectl get pod,svc -n polaris
NAME                                   READY   STATUS    RESTARTS   AGE
pod/polaris-dashboard-5cd95648-9hdjf   1/1     Running   0          17m

NAME                        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/polaris-dashboard   ClusterIP   10.1.155.101   <none>        80/TCP    17m

Exposing the service

At this point the dashboard is only reachable from inside the cluster. To access it externally, the port needs to be exposed:

[root@k8s-master ~]# kubectl get svc,pod -n polaris -o wide
NAME                        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE     SELECTOR
service/polaris-dashboard   ClusterIP   10.1.155.101   <none>        80/TCP    4h26m   app=polaris,component=dashboard

NAME                                   READY   STATUS    RESTARTS   AGE     IP           NODE         NOMINATED NODE   READINESS GATES
pod/polaris-dashboard-5cd95648-9hdjf   1/1     Running   0          4h26m   172.12.1.5   k8s-node02   <none>           <none>
[root@k8s-master ~]# kubectl edit svc polaris-dashboard -n polaris
# type: NodePort
# nodePort=32765
service/polaris-dashboard edited
[root@k8s-master ~]# kubectl get svc,pod -n polaris -o wide
NAME                        TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE     SELECTOR
service/polaris-dashboard   NodePort   10.1.155.101   <none>        80:32765/TCP   4h30m   app=polaris,component=dashboard

NAME                                   READY   STATUS    RESTARTS   AGE     IP           NODE         NOMINATED NODE   READINESS GATES
pod/polaris-dashboard-5cd95648-9hdjf   1/1     Running   0          4h30m   172.12.1.5   k8s-node02   <none>           <none>
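The interactive `kubectl edit` step above can also be done non-interactively with `kubectl patch`, which is easier to script (a sketch using the service name, namespace, and node port from this guide; note a merge patch replaces the whole ports list, so the full port entry is repeated):

```shell
# Switch the dashboard Service to NodePort and pin the node port
kubectl patch svc polaris-dashboard -n polaris --type merge -p '
spec:
  type: NodePort
  ports:
  - name: http-dashboard
    port: 80
    protocol: TCP
    targetPort: 8080
    nodePort: 32765
'
```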

Now open http://k8s-master:32765/ in a browser.


Usage

Validation rules are customized by maintaining a config.yaml file; this section only gives a brief tour of the dashboard UI. The details are covered later, in the section on customizing Polaris validation rules.

The Polaris dashboard is a simple way to get a visual overview of the current state of your Kubernetes workloads, along with a roadmap for what can be improved. It provides a cluster-wide summary as well as results broken down by category, namespace, and workload.

Health overview

Polaris check severities are danger, warning, and ignore; Polaris does not run checks set to ignore.


Viewing check results by namespace

You can view the health of resources in every namespace.


Viewing the details of a single resource in a namespace

Each setting in the resource's YAML manifest is listed along with its risk level.

Check mark: pass; exclamation mark: warning; cross: danger


Score breakdown

Efficiency, reliability, and security scores


**Our default standards in Polaris are quite high, so don't be surprised if your score is lower than you expected.** A key goal for Polaris is to set a high bar and promote excellent configuration by default. If the defaults we include are too strict, it's easy to adjust the configuration as part of your deployment setup to better fit your workloads.

Webhook

Polaris can run as an admission controller, acting as a validating webhook. It accepts the same configuration as the dashboard and can run the same validations. The webhook rejects any workload that triggers a validation error. This reflects Polaris's larger goal: not just to encourage better configuration through dashboard visibility, but to actually enforce it with this webhook. Note that Polaris does not fix workloads; it only blocks them.

  • Uses the same configuration as the dashboard
  • Blocks any application whose configuration fails validation from being installed into the cluster
  • Goes beyond surfacing the cluster's existing defects to preventing new ones

Installation

Once the Webhook component is installed in the cluster, it blocks applications that do not meet the standard from being deployed.

Official docs: https://polaris.docs.fairwinds.com/admission-controller/#installation

Prerequisites

The Polaris validating webhook requires a valid TLS certificate. If cert-manager is installed in your cluster, the installation method below will work.

If you are not using cert-manager, you need to:

  • Supply a CA bundle via webhook.caBundle
  • Create a TLS secret in the cluster with a valid certificate from that CA
  • Pass the name of that secret via the webhook.secretName parameter
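Without cert-manager, the second step could look roughly like this (a sketch: tls.crt and tls.key are hypothetical file names for a certificate signed by your CA, and the secret name polaris matches the secretName the manifest below expects):

```shell
# Create a TLS secret from an existing cert/key pair signed by your CA
kubectl create secret tls polaris -n polaris \
  --cert=tls.crt \
  --key=tls.key
```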

Install cert-manager, which automatically issues TLS certificates for Kubernetes Services. Think of it as a dependency of the Webhook.

kubectl create namespace cert-manager
kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.13.0/cert-manager.yaml
Step 1: Prepare the image

This is the same image as the dashboard; if you already pulled it above, you can skip this step.

[root@k8s-master ~]# docker pull quay.io/fairwinds/polaris:4.0
4.0: Pulling from fairwinds/polaris
540db60ca938: Pull complete 
09c1a43ef494: Pull complete 
15a6f35230e5: Pull complete 
62a16ff79a3e: Pull complete 
6f8c08425b62: Pull complete 
Digest: sha256:3e1e28742bb56c521f58db8ef4bfd056387aad095bb46ce20d1803ddd457db3a
Status: Downloaded newer image for quay.io/fairwinds/polaris:4.0
quay.io/fairwinds/polaris:4.0

[root@k8s-master ~]# docker tag 272dc6061cf0 harbor.liboer.top/library/polaris:v4.0
[root@k8s-master ~]# docker push harbor.liboer.top/library/polaris:v4.0
Step 2: YAML manifest

If you use a private registry, change the image reference accordingly.

webhook.yaml

---
# Source: polaris/templates/0-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: polaris
---
# Source: polaris/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: polaris
  namespace: polaris
  labels:
    app: polaris
---
# Source: polaris/templates/rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: polaris
  labels:
    app: polaris
rules:
  # required by controller-runtime code doing a cluster wide lookup
  # when it seems namespace would suffice
  - apiGroups:
      - ''
    resources:
      - 'nodes'
    verbs:
      - 'get'
      - 'list'
  - apiGroups: 
      - 'monitoring.coreos.com'
    resources: 
      - 'prometheuses'
      - 'alertmanagers'
    verbs: 
      - 'get'
      - 'list'
---
# Source: polaris/templates/rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: polaris-view
  labels:
    app: polaris
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
  - kind: ServiceAccount
    name: polaris
    namespace: polaris
---
# Source: polaris/templates/rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: polaris
  labels:
    app: polaris
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: polaris
subjects:
  - kind: ServiceAccount
    name: polaris
    namespace: polaris
---
# Source: polaris/templates/webhook.service.yaml
apiVersion: v1
kind: Service
metadata:
  name: polaris-webhook
  namespace: polaris
  labels:
    app: polaris
spec:
  ports:
  - name: webhook
    port: 443
    protocol: TCP
    targetPort: 9876
  selector:
    app: polaris
    component: webhook
  type: ClusterIP
---
# Source: polaris/templates/webhook.deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: polaris-webhook
  namespace: polaris
  labels:
    app: polaris
    component: webhook
spec:
  replicas: 1
  selector:
    matchLabels:
      app: polaris
      component: webhook
  template:
    metadata:
      labels:
        app: polaris
        component: webhook
    spec:
      containers:
        - name: webhook
          command:
            - polaris
            - webhook
          image: 'quay.io/fairwinds/polaris:4.0'
          imagePullPolicy: 'Always'
          ports:
            - containerPort: 9876
          # These are fairly useless readiness/liveness probes for now
          # Follow this issue for potential improvements:
          # https://github.com/kubernetes-sigs/controller-runtime/issues/356
          livenessProbe:
            exec:
              command:
                - sh
                - -c
                - ps -ef | grep polaris
            initialDelaySeconds: 5
            periodSeconds: 5
          readinessProbe:
            exec:
              command:
                - sh
                - -c
                - ps -ef | grep polaris
            initialDelaySeconds: 5
            periodSeconds: 5
          resources:
            limits:
              cpu: 100m
              memory: 128Mi
            requests:
              cpu: 100m
              memory: 128Mi
          securityContext:
            allowPrivilegeEscalation: false
            privileged: false
            readOnlyRootFilesystem: true
            runAsNonRoot: true
            capabilities:
              drop:
                - ALL
          volumeMounts:
            - name: secret
              mountPath: /opt/cert/
              readOnly: true
            - name: cr-logs
              mountPath: /tmp/
              readOnly: false
      serviceAccountName:  polaris
      nodeSelector:
      tolerations:
      volumes:
        - name: secret
          secret:
            secretName: polaris
        - name: cr-logs
          emptyDir: {}
---
# Source: polaris/templates/webhook.cert.yaml
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: polaris-cert
  namespace: polaris
  labels:
    app: polaris
spec:
  commonName: polaris-webhook.polaris.svc
  dnsNames:
  - polaris-webhook.polaris.svc
  - polaris-webhook.polaris
  - polaris-webhook
  - polaris-webhook.polaris.svc.
  issuerRef:
    kind: Issuer
    name: polaris-selfsigned
  secretName: polaris
---
# Source: polaris/templates/webhook.cert.yaml
apiVersion: cert-manager.io/v1alpha2
kind: Issuer
metadata:
  name: polaris-selfsigned
  namespace: polaris
spec:
  selfSigned: {}
---
# Source: polaris/templates/webhook.configuration.yaml
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
  name: polaris-webhook
  annotations:
    cert-manager.io/inject-ca-from: polaris/polaris-cert
webhooks:
- admissionReviewVersions:
  - v1beta1
  clientConfig:
    service:
      name: polaris-webhook
      namespace: polaris
      path: /validate
      port: 443
  failurePolicy: Fail
  matchPolicy: Exact
  name: polaris.fairwinds.com
  namespaceSelector:
    matchExpressions:
    - key: control-plane
      operator: DoesNotExist
  objectSelector:
    {}
  rules:
  - apiGroups:
    - apps
    apiVersions:
    - v1
    - v1beta1
    - v1beta2
    operations:
    - CREATE
    - UPDATE
    resources:
    - daemonsets
    - deployments
    - statefulsets
    scope: Namespaced
  - apiGroups:
    - batch
    apiVersions:
    - v1
    - v1beta1
    operations:
    - CREATE
    - UPDATE
    resources:
    - jobs
    - cronjobs
    scope: Namespaced
  - apiGroups:
    - ""
    apiVersions:
    - v1
    operations:
    - CREATE
    - UPDATE
    resources:
    - pods
    - replicationcontrollers
    scope: Namespaced
  sideEffects: None
  timeoutSeconds: 10
Step 3: Apply the manifest
[root@k8s-master ~]# kubectl apply -f http://mirrors.liboer.top/polaris/webhook.yaml
namespace/polaris unchanged
serviceaccount/polaris unchanged
clusterrole.rbac.authorization.k8s.io/polaris unchanged
clusterrolebinding.rbac.authorization.k8s.io/polaris-view unchanged
clusterrolebinding.rbac.authorization.k8s.io/polaris unchanged
service/polaris-webhook created
deployment.apps/polaris-webhook created
certificate.cert-manager.io/polaris-cert created
issuer.cert-manager.io/polaris-selfsigned created
validatingwebhookconfiguration.admissionregistration.k8s.io/polaris-webhook created


[root@k8s-master ~]# kubectl get all -n polaris
NAME                                   READY   STATUS    RESTARTS   AGE
pod/polaris-dashboard-5cd95648-vwcs5   1/1     Running   0          43m
pod/polaris-webhook-696479d6fd-86v6g   1/1     Running   0          3m15s


NAME                        TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
service/polaris-dashboard   NodePort    10.1.102.81   <none>        80:32765/TCP   42m
service/polaris-webhook     ClusterIP   10.1.204.40   <none>        443/TCP        3m16s


NAME                                READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/polaris-dashboard   1/1     1            1           43m
deployment.apps/polaris-webhook     1/1     1            1           3m16s

NAME                                         DESIRED   CURRENT   READY   AGE
replicaset.apps/polaris-dashboard-5cd95648   1         1         1       43m
replicaset.apps/polaris-webhook-696479d6fd   1         1         1       3m16s

Usage

The webhook has built-in support for the known controller types, such as Deployments, Jobs, and DaemonSets. To add new controller types, set webhook.rules in the Helm chart. Usage is much the same as the dashboard; enabling, disabling, and customizing checks through the config.yaml file is covered in one place below.
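To verify the webhook is enforcing, try applying a workload that trips a danger-level check. With the default configuration, a container that never sets allowPrivilegeEscalation: false fails privilegeEscalationAllowed, so a bare-bones Deployment like this should be rejected (a sketch; the name and image are placeholders):

```shell
# No securityContext at all, so the danger-level privilegeEscalationAllowed
# check fails and the admission webhook should deny the request
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webhook-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webhook-test
  template:
    metadata:
      labels:
        app: webhook-test
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
EOF
```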

CLI

Polaris can be used on the command line to audit local Kubernetes manifests stored in YAML files. This is particularly helpful for running Polaris against your infrastructure-as-code as part of a CI/CD pipeline. Use the available command-line flags to fail CI/CD if your Polaris score drops below a threshold, or if any danger-level issues appear.

  • Audits local files or a running cluster
  • Integrates with CI/CD: the pipeline can be failed outright when configuration validation does not pass
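The exit-code flags shown in the audit help text below make the CI/CD gate straightforward. A sketch of a pipeline step (the manifest path and score threshold are placeholders):

```shell
#!/bin/sh
set -e
# Exit code 4 when the score drops below 80, exit code 3 on any
# danger-level issue; either one fails this step because of set -e.
polaris audit \
  --audit-path ./manifests \
  --set-exit-code-below-score 80 \
  --set-exit-code-on-danger \
  --only-show-failed-tests \
  --format=pretty
```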

The CLI can also launch the dashboard, so the dashboard and webhook installations above are not strictly required.

Installation

Binary package

[root@k8s-master src]# cd /opt/src
[root@k8s-master src]# wget https://github.com/FairwindsOps/polaris/releases/download/4.0.4/polaris_4.0.4_linux_amd64.tar.gz
[root@k8s-master src]# tar xf polaris_4.0.4_linux_amd64.tar.gz -C /opt
[root@k8s-master etc]# cd /opt/polaris-cli
[root@k8s-master opt]# ll
total 37540
-rw-r--r-- 1 3434 3434    11346 Jun 26 06:06 LICENSE
-rwxr-xr-x 1 3434 3434 38424576 Jun 26 06:07 polaris
-rw-r--r-- 1 3434 3434     3476 Jun 26 06:06 README.md
[root@k8s-master polaris-cli]# ./polaris 
ERRO[0000] You must specify a sub-command.              
Validation of best practices in your Kubernetes clusters.

Usage:
  polaris [flags]
  polaris [command]

Available Commands:
  audit       Runs a one-time audit.
  dashboard   Runs the webserver for Polaris dashboard.
  help        Help about any command
  version     Prints the current version.
  webhook     Runs the webhook webserver.

Flags:
  -c, --config string         Location of Polaris configuration file.
      --disallow-exemptions   Disallow any exemptions from configuration file.
  -h, --help                  help for polaris
      --kubeconfig string     Paths to a kubeconfig. Only required if out-of-cluster.
      --log-level string      Logrus log level. (default "info")

Use "polaris [command] --help" for more information about a command.

Usage

audit

Audits the health of a local YAML file and prints the results as text.

# audit usage
[root@k8s-master polaris-cli]# ./polaris audit --help
Runs a one-time audit.

Usage:
  polaris audit [flags]

Flags:
      --audit-path string               If specified, audits one or more YAML files instead of a cluster.
      --color                           Whether to use color in pretty format. (default true)
      --display-name string             An optional identifier for the audit.
  -f, --format string                   Output format for results - json, yaml, pretty, or score. (default "json")
      --helm-chart string               Will fill out Helm template
      --helm-values string              Optional flag to add helm values
  -h, --help                            help for audit
      --only-show-failed-tests          If specified, audit output will only show failed tests.
      --output-file string              Destination file for audit results.
      --output-url string               Destination URL to send audit results.
      --resource string                 Audit a specific resource, in the format namespace/kind/version/name, e.g. nginx-ingress/Deployment.apps/v1/default-backend.
      --set-exit-code-below-score int   Set an exit code of 4 when the score is below this threshold (1-100).
      --set-exit-code-on-danger         Set an exit code of 3 when the audit contains danger-level issues.

Global Flags:
  -c, --config string         Location of Polaris configuration file.
      --disallow-exemptions   Disallow any exemptions from configuration file.
      --kubeconfig string     Paths to a kubeconfig. Only required if out-of-cluster.
      --log-level string      Logrus log level. (default "info")

# With no arguments, audit uses the default config.yaml and checks the entire cluster
# [root@k8s-master polaris-cli]# ./polaris audit

# Create a YAML file to audit
[root@k8s-master polaris-cli]# vi nginx-ds.yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata: 
    name: nginx-ds
spec:
    template:
      metadata:
        labels:
          app: nginx-ds
      spec:
        containers:
        - name: my-nginx
          image: harbor.od.com/public/nginx:v1.7.9
          ports: 
          - containerPort: 80

# Audit the local YAML file
[root@k8s-master polaris-cli]# ./polaris audit --audit-path /opt/polaris-cli/nginx-ds.yaml --format=pretty


Polaris audited Path /opt/polaris-cli/nginx-ds.yaml at 2021-07-13T13:38:47+08:00
    Nodes: 0 | Namespaces: 0 | Controllers: 1
    Final score: 53

DaemonSet nginx-ds in namespace 
    hostIPCSet                           🎉 Success
        Security - Host IPC is not configured
    hostNetworkSet                       🎉 Success
        Security - Host network is not configured
    hostPIDSet                           🎉 Success
        Security - Host PID is not configured
  Container my-nginx
    cpuRequestsMissing                   😬 Warning
        Efficiency - CPU requests should be set
    runAsRootAllowed                     😬 Warning
        Security - Should not be allowed to run as root
    insecureCapabilities                 😬 Warning
        Security - Container should not have insecure capabilities
    pullPolicyNotAlways                  😬 Warning
        Reliability - Image pull policy should be "Always"
    privilegeEscalationAllowed           ❌ Danger
        Security - Privilege escalation should not be allowed
    readinessProbeMissing                😬 Warning
        Reliability - Readiness probe should be configured
    cpuLimitsMissing                     😬 Warning
        Efficiency - CPU limits should be set
    notReadOnlyRootFilesystem            😬 Warning
        Security - Filesystem should be read only
    livenessProbeMissing                 😬 Warning
        Reliability - Liveness probe should be configured
    memoryLimitsMissing                  😬 Warning
        Efficiency - Memory limits should be set
    memoryRequestsMissing                😬 Warning
        Efficiency - Memory requests should be set
    runAsPrivileged                      🎉 Success
        Security - Not running as privileged
    tagNotSpecified                      🎉 Success
        Reliability - Image tag is specified
    dangerousCapabilities                🎉 Success
        Security - Container does not have any dangerous capabilities
    hostPortSet                          🎉 Success
        Security - Host port is not configured

The output starts with some file information, followed by the results. Each setting has a Severity, one of pass, warning, or danger, plus a category (Security, Efficiency, or Reliability), and so on. At the end is an overall score. This is the same data the dashboard shows, just not visualized.

Some useful audit flags; see --help for the full list:

--audit-path string          path of the YAML file (or directory) to audit
--only-show-failed-tests     only print warning and danger results
--color                      colorize the output (default true)
--output-file string         write results to the given file path
...
...
# Only print failures (danger and warning). The summary reports how many danger items were
# found, but not which ones they are. To audit multiple YAML files, pass a directory to
# --audit-path; you cannot pass multiple paths or repeat --audit-path. Doing so does not
# error, but only the last value takes effect.
[root@k8s-master polaris-cli]# ./polaris audit --only-show-failed-tests --set-exit-code-on-danger --audit-path /etc/kubernetes/manifests
...
...
    privilegeEscalationAllowed           ❌ Danger
        Security - Privilege escalation should not be allowed
    pullPolicyNotAlways                  😬 Warning
        Reliability - Image pull policy should be "Always"

INFO[0000] 4 danger items found in audit  


# Require score >= 60. If the score falls short, the audit output still prints, just with an
# extra closing line: INFO[0000] Audit score of 57 is less than the provided minimum of 60
# If the score meets the threshold, that line is absent.
[root@k8s-master polaris-cli]# ./polaris audit --only-show-failed-tests --set-exit-code-below-score 60 --audit-path /etc/kubernetes/manifests --format=pretty
...
...
        Security - Container should not have insecure capabilities
    privilegeEscalationAllowed           ❌ Danger
        Security - Privilege escalation should not be allowed


INFO[0000] Audit score of 57 is less than the provided minimum of 60 

# A YAML file mangled like the one below still audits normally, with no syntax error reported
“iVersion: extensions/v1beta1
kind: DaemonSet
metadata:
    name: nginx-ds
spec:
    template:
      metadatasss":
        labels:
          app: nginx-ds
      spec:
        containers:
        - name: my-nginx
          image: harbor.od.com/public/nginx:v1.7.9
          ports:
          - containerPort: 80
[root@k8s-master polaris-cli]# ./polaris audit --audit-path /opt/polaris-cli/nginx-ds.yaml --format=pretty --only-show-failed-tests


Polaris audited Path /opt/polaris-cli/nginx-ds.yaml at 2021-07-13T13:44:15+08:00
    Nodes: 0 | Namespaces: 0 | Controllers: 1
    Final score: 53

DaemonSet nginx-ds in namespace 
  Container my-nginx
    livenessProbeMissing                 😬 Warning
        Reliability - Liveness probe should be configured
    cpuLimitsMissing                     😬 Warning
        Efficiency - CPU limits should be set
    pullPolicyNotAlways                  😬 Warning
        Reliability - Image pull policy should be "Always"
    cpuRequestsMissing                   😬 Warning
        Efficiency - CPU requests should be set
    insecureCapabilities                 😬 Warning
        Security - Container should not have insecure capabilities
    memoryLimitsMissing                  😬 Warning
        Efficiency - Memory limits should be set
    notReadOnlyRootFilesystem            😬 Warning
        Security - Filesystem should be read only
    memoryRequestsMissing                😬 Warning
        Efficiency - Memory requests should be set
    runAsRootAllowed                     😬 Warning
        Security - Should not be allowed to run as root
    privilegeEscalationAllowed           ❌ Danger
        Security - Privilege escalation should not be allowed
    readinessProbeMissing                😬 Warning
        Reliability - Readiness probe should be configured

  
# Omitting a colon does raise an error, but it is not a true YAML syntax check: Polaris first
# converts the YAML to JSON and validates the JSON, and this error surfaces during that
# YAML-to-JSON conversion
aiVersion: extensions/v1beta1
kind: DaemonSet
metadata: 
    name: nginx-ds
spec:
    template:
      metadatasss:
        labels:
          app: nginx-ds
      spec:
        containers:
        - name: my-nginx
          image: harbor.od.com/public/nginx:v1.7.9
          ports
          - containerPort: 80 
[root@k8s-master polaris-cli]# ./polaris audit --only-show-failed-tests --set-exit-code-below-score 50 --audit-path /opt/polaris-cli/nginx-ds.yaml
ERRO[0000] Invalid YAML: aiVersion: extensions/v1beta1
kind: DaemonSet
metadata: 
    name: nginx-ds
spec:
    template:
      metadatasss:
        labels:
          app: nginx-ds
      spec:
        containers:
        - name: my-nginx
          image: harbor.od.com/public/nginx:v1.7.9
          ports
          - containerPort: 80 
ERRO[0000] Error parsing YAML: (error converting YAML to JSON: yaml: line 15: could not find expected ':') 
ERRO[0000] Error fetching Kubernetes resources error converting YAML to JSON: yaml: line 15: could not find expected ':' 
dashboard

Audits the health of a local YAML file and presents the results visually.

[root@k8s-master polaris-cli]# ./polaris dashboard --help
Runs the webserver for Polaris dashboard.

Usage:
  polaris dashboard [flags]

Flags:
      --audit-path string          If specified, audits one or more YAML files instead of a cluster.
      --base-path string           Path on which the dashboard is served. (default "/")
      --display-name string        An optional identifier for the audit.
  -h, --help                       help for dashboard
      --listening-address string   Listening Address for the dashboard webserver.
      --load-audit-file string     Runs the dashboard with data saved from a past audit.
  -p, --port int                   Port for the dashboard webserver. (default 8080)

Global Flags:
  -c, --config string         Location of Polaris configuration file.
      --disallow-exemptions   Disallow any exemptions from configuration file.
      --kubeconfig string     Paths to a kubeconfig. Only required if out-of-cluster.
      --log-level string      Logrus log level. (default "info")
# Use --port to specify the port
[root@k8s-master polaris-cli]# ./polaris dashboard --audit-path /opt/polaris-cli/nginx-ds.yaml --port 65213
INFO[0000] Starting Polaris dashboard server on port 65213 

Open http://k8s-master:65213/ in a browser.


The other flags are described in detail in --help.

Other commands

The remaining sub-commands work the same way; just run them:

# Show help
help
      Prints help; if you give it a command, it prints help for that command. Same as -h

# Show the version
version
      Prints the version of Polaris

# Run the webhook webserver
webhook
      Runs the webhook webserver

Official flag reference: https://polaris.docs.fairwinds.com/cli/

Customizing Polaris validation rules (Customization)

Polaris check severities are danger, warning, and ignore; Polaris does not run checks set to ignore. The categories of checks Polaris supports are Health Checks, Images, Networking, Resources, and Security.

The configuration file

You can customize the configuration to:

  • Turn checks on and off
  • Change the severity level of a check
  • Add new custom checks
  • Add exemptions for specific workloads or namespaces
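For example, a minimal config.yaml that turns one check off, raises another to danger, and exempts a single workload might look like this (the check names come from the default configuration shown below; the namespace and controller name are placeholders):

```yaml
checks:
  pullPolicyNotAlways: ignore      # turn this check off entirely
  readinessProbeMissing: danger    # raise the severity from warning to danger

exemptions:
  - namespace: monitoring          # placeholder namespace
    controllerNames:
      - node-exporter              # placeholder controller name
    rules:
      - hostPortSet                # skip this check for the controller above
```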

To pass in a custom configuration, follow the instructions for your environment:

  • CLI - point the --config flag at your config.yaml
  • Helm - set the config variable in your values file
  • kubectl - create a ConfigMap from config.yaml, mount it as a volume, and pass the --config flag in the Deployment
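For the kubectl route, creating the ConfigMap might look like this (a sketch; it assumes the dashboard Deployment from this guide, which already passes --config and mounts the /etc/polaris-dashboard directory, so you would also swap that hostPath volume for the ConfigMap):

```shell
# Package a local config.yaml under the file name the Deployment expects
kubectl create configmap polaris-config -n polaris \
  --from-file=custom-config.yaml=config.yaml
```

In dashboard.yaml, the custom-config volume would then use configMap: {name: polaris-config} instead of a hostPath.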

The default config.yaml

The reliability, efficiency, security, and exemptions parameters are explained in detail in the final section.
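To make the exemption semantics concrete, here is a tiny Python sketch of the matching idea (illustrative only, not Polaris's actual implementation): an exemption suppresses its listed rules for the controllers it names, optionally scoped to a namespace.

```python
def is_exempt(exemptions, namespace, controller, rule):
    """Return True if `rule` is exempted for this controller.

    An exemption applies when the controller name is listed and,
    if the exemption names a namespace, the namespace matches too.
    """
    for ex in exemptions:
        if "namespace" in ex and ex["namespace"] != namespace:
            continue
        if controller not in ex.get("controllerNames", []):
            continue
        if rule in ex.get("rules", []):
            return True
    return False


exemptions = [
    {"namespace": "kube-system",
     "controllerNames": ["kube-proxy"],
     "rules": ["hostNetworkSet"]},
]

print(is_exempt(exemptions, "kube-system", "kube-proxy", "hostNetworkSet"))  # True
print(is_exempt(exemptions, "default", "kube-proxy", "hostNetworkSet"))      # False
```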

checks:
  # reliability
  multipleReplicasForDeployment: ignore
  priorityClassNotSet: ignore
  tagNotSpecified: danger
  pullPolicyNotAlways: warning  # warn if imagePullPolicy is not Always
  readinessProbeMissing: warning  # is the readiness probe missing
  livenessProbeMissing: warning  # is the liveness probe missing
  metadataAndNameMismatched: ignore
  pdbDisruptionsIsZero: warning
  missingPodDisruptionBudget: ignore
 
  # efficiency
  cpuRequestsMissing: warning
  cpuLimitsMissing: warning
  memoryRequestsMissing: warning
  memoryLimitsMissing: warning
  
  # security
  hostIPCSet: danger
  hostPIDSet: danger
  notReadOnlyRootFilesystem: warning
  privilegeEscalationAllowed: danger
  runAsRootAllowed: warning
  runAsPrivileged: danger
  dangerousCapabilities: danger
  insecureCapabilities: warning
  hostNetworkSet: warning
  hostPortSet: warning
  tlsSettingsMissing: warning
  
  #custom
  customImage: warning

# Exemptions
exemptions:
  - namespace: kube-system  # in this namespace
    controllerNames:  # these controllers
      - kube-apiserver
      - kube-proxy
      - kube-scheduler
      - etcd-manager-events
      - kube-controller-manager
      - kube-dns
      - etcd-manager-main
    rules:  # are exempt from the following rules
      - hostPortSet
      - hostNetworkSet
      - readinessProbeMissing
      - livenessProbeMissing
      - cpuRequestsMissing
      - cpuLimitsMissing
      - memoryRequestsMissing
      - memoryLimitsMissing
      - runAsRootAllowed
      - runAsPrivileged
      - notReadOnlyRootFilesystem
      - hostPIDSet

  - controllerNames:
      - kube-flannel-ds
    rules:
      - notReadOnlyRootFilesystem
      - runAsRootAllowed
      - notReadOnlyRootFilesystem
      - readinessProbeMissing
      - livenessProbeMissing
      - cpuLimitsMissing

  - controllerNames:
      - cert-manager
    rules:
      - notReadOnlyRootFilesystem
      - runAsRootAllowed
      - readinessProbeMissing
      - livenessProbeMissing

  - controllerNames:
      - cluster-autoscaler
    rules:
      - notReadOnlyRootFilesystem
      - runAsRootAllowed
      - readinessProbeMissing

  - controllerNames:
      - vpa
    rules:
      - runAsRootAllowed
      - readinessProbeMissing
      - livenessProbeMissing
      - notReadOnlyRootFilesystem

  - controllerNames:
      - datadog
    rules:
      - runAsRootAllowed
      - readinessProbeMissing
      - livenessProbeMissing
      - notReadOnlyRootFilesystem

  - controllerNames:
      - nginx-ingress-controller
    rules:
      - privilegeEscalationAllowed
      - insecureCapabilities
      - runAsRootAllowed

  - controllerNames:
      - dns-controller
      - datadog-datadog
      - kube-flannel-ds
      - kube2iam
      - aws-iam-authenticator
      - datadog
      - kube2iam
    rules:
      - hostNetworkSet

  - controllerNames:
      - aws-iam-authenticator
      - aws-cluster-autoscaler
      - kube-state-metrics
      - dns-controller
      - external-dns
      - dnsmasq
      - autoscaler
      - kubernetes-dashboard
      - install-cni
      - kube2iam
    rules:
      - readinessProbeMissing
      - livenessProbeMissing

  - controllerNames:
      - aws-iam-authenticator
      - nginx-ingress-default-backend
      - aws-cluster-autoscaler
      - kube-state-metrics
      - dns-controller
      - external-dns
      - kubedns
      - dnsmasq
      - autoscaler
      - tiller
      - kube2iam
    rules:
      - runAsRootAllowed

  - controllerNames:
      - aws-iam-authenticator
      - nginx-ingress-controller
      - nginx-ingress-default-backend
      - aws-cluster-autoscaler
      - kube-state-metrics
      - dns-controller
      - external-dns
      - kubedns
      - dnsmasq
      - autoscaler
      - tiller
      - kube2iam
    rules:
      - notReadOnlyRootFilesystem

  - controllerNames:
      - cert-manager
      - dns-controller
      - kubedns
      - dnsmasq
      - autoscaler
      - insights-agent-goldilocks-vpa-install
      - datadog
    rules:
      - cpuRequestsMissing
      - cpuLimitsMissing
      - memoryRequestsMissing
      - memoryLimitsMissing

  - controllerNames:
      - kube2iam
      - kube-flannel-ds
    rules:
      - runAsPrivileged

  - controllerNames:
      - kube-hunter
    rules:
      - hostPIDSet

  - controllerNames:
      - polaris
      - kube-hunter
      - goldilocks
      - insights-agent-goldilocks-vpa-install
    rules:
      - notReadOnlyRootFilesystem

  - controllerNames:
      - insights-agent-goldilocks-controller
    rules:
      - livenessProbeMissing
      - readinessProbeMissing

  - controllerNames:
      - insights-agent-goldilocks-vpa-install
      - kube-hunter
    rules:
      - runAsRootAllowed

# Custom checks; none are defined by default, you can add your own here
customChecks:
  customImage:
    successMessage: Image comes from allowed registries
    failureMessage: Image should not be from disallowed registry
    category: Security
    target: Container
    schema:
      '$schema': http://json-schema.org/draft-07/schema
      type: object
      properties:
        image:
          type: string
          not:
            pattern: ^quay.io

Loading a custom config.yaml via the CLI

Checking the image registry

We can define a custom rule that checks where images come from, raising a warning when an image comes from quay.io. For example:

custom-config.yaml

checks:
  imageRegistry: warning  # check ID; the name is arbitrary

customChecks:
  imageRegistry:  # check ID; must match the entry under checks above
    # message shown when the check passes
    successMessage: Image comes from allowed registries
    # message shown when the check fails
    failureMessage: Image should not be from disallowed registry
    category: Security  # one of `Security`, `Efficiency`, or `Reliability`
    target: Container  # the type of resource to check
    schema:
      '$schema': http://json-schema.org/draft-07/schema
      type: object  # a JSON object (or boolean)
      properties:
        # constraints on the image property
        image:
          type: string  # the type of the image value
          not:
            # the image must NOT match this pattern
            pattern: ^quay.io  # regular expression

All custom checks live under the customChecks field of your Polaris configuration, keyed by check ID. Note that you must also set a severity for each check in the checks section of the configuration.

  • successMessage - the message to show when the check succeeds
  • failureMessage - the message to show when the check fails
  • category - one of Security, Efficiency, or Reliability
  • target - specifies the type of resource to check. This can be:
    • a group and kind, e.g. apps/Deployment or networking.k8s.io/Ingress
    • Controller, to check any resource that contains a pod spec (e.g. Deployments, CronJobs, StatefulSets), as well as naked Pods
    • Pod, same as Controller, but the schema applies to the Pod spec rather than the top-level controller
    • Container same as Controller, but the schema applies to all Container specs rather than the top-level controller
  • controllers - if target is Controller, Pod or Container, you can use this to change which types of controllers are checked
  • controllers.include - only check these controllers
  • controllers.exclude - check all controllers except these
  • containers - if target is Container, you can use this to decide if initContainers, containers, or both should be checked
  • containers.exclude - can be set to a list including initContainer or container
  • schema - the JSON Schema to check against, as a YAML object
  • schemaString - the JSON Schema to check against, as a YAML or JSON string. See Templating below
    • Note: only one of schema and schemaString can be specified.
  • additionalSchemas - see Multi-Resource Checks below
  • additionalSchemaStrings - see Multi-Resource Checks below
    • Note: only one of additionalSchemas and additionalSchemaStrings can be specified.
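The target field is not limited to Container: a check can target a group/kind directly, in which case the schema is applied to the whole resource manifest. A minimal sketch, assuming a hypothetical multipleReplicas check (the check ID and the threshold of 2 are our own choices, not part of the default config):

```yaml
checks:
  multipleReplicas: warning  # hypothetical check ID, chosen for this example

customChecks:
  multipleReplicas:
    successMessage: Deployment has more than one replica
    failureMessage: Deployments should run at least two replicas
    category: Reliability
    target: apps/Deployment  # a group/kind target instead of Container
    schema:
      '$schema': http://json-schema.org/draft-07/schema
      type: object
      properties:
        spec:
          type: object
          properties:
            replicas:
              minimum: 2  # JSON Schema numeric constraint on spec.replicas
```

Because the target is a group/kind, the schema validates the top-level Deployment object rather than each container spec.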

Run the check

# With the dashboard, the results can be viewed in a browser
# [root@k8s-master polaris-cli]# ./polaris dashboard --audit-path /opt/polaris-cli/nginx-ds.yaml --config /opt/polaris-cli/custom-config.yaml

# Print the audit results as text
[root@k8s-master polaris-cli]# ./polaris audit --audit-path /opt/polaris-cli/nginx-ds.yaml --config /opt/polaris-cli/custom-config.yaml --format=pretty --only-show-failed-tests


Polaris audited Path /opt/polaris-cli/nginx-ds.yaml at 2021-07-13T13:52:14+08:00
    Nodes: 0 | Namespaces: 0 | Controllers: 1
    Final score: 57

DaemonSet nginx-ds in namespace 
  Container my-nginx
    insecureCapabilities                 😬 Warning
        Security - Container should not have insecure capabilities
    notReadOnlyRootFilesystem            😬 Warning
        Security - Filesystem should be read only
    cpuRequestsMissing                   😬 Warning
        Efficiency - CPU requests should be set
    memoryLimitsMissing                  😬 Warning
        Efficiency - Memory limits should be set
    cpuLimitsMissing                     😬 Warning
        Efficiency - CPU limits should be set
    livenessProbeMissing                 😬 Warning
        Reliability - Liveness probe should be configured
    privilegeEscalationAllowed           ❌ Danger
        Security - Privilege escalation should not be allowed
    memoryRequestsMissing                😬 Warning
        Efficiency - Memory requests should be set
    pullPolicyNotAlways                  😬 Warning
        Reliability - Image pull policy should be "Always"
    runAsRootAllowed                     😬 Warning
        Security - Should not be allowed to run as root
    readinessProbeMissing                😬 Warning
        Reliability - Readiness probe should be configured


# nginx-ds.yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata: 
    name: nginx-ds
spec:
    template:
      metadata:
        labels:
          app: nginx-ds
      spec:
        containers:
        - name: my-nginx
          image: harbor.od.com/public/nginx:v1.7.9
          ports: 
          - containerPort: 80

Our YAML file scores 57. The image we used, harbor.od.com/public/nginx:v1.7.9, passes the custom registry check (no registry warning appears in the report).
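For reference, here is a sketch of how the warnings flagged above could be addressed with probes, resource requests/limits, an explicit pull policy, and a locked-down securityContext. The probe paths and resource values are illustrative assumptions, and stock nginx may need extra tweaks (a writable cache directory, a non-root listen port) to actually run under these settings:

```yaml
# Hypothetical hardened version of nginx-ds.yaml; values are illustrative
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ds
spec:
  selector:
    matchLabels:
      app: nginx-ds
  template:
    metadata:
      labels:
        app: nginx-ds
    spec:
      containers:
      - name: my-nginx
        image: harbor.od.com/public/nginx:v1.7.9
        imagePullPolicy: Always            # fixes pullPolicyNotAlways
        ports:
        - containerPort: 80
        resources:                         # fixes cpu/memory requests & limits checks
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 500m
            memory: 256Mi
        livenessProbe:                     # fixes livenessProbeMissing
          httpGet:
            path: /
            port: 80
        readinessProbe:                    # fixes readinessProbeMissing
          httpGet:
            path: /
            port: 80
        securityContext:
          readOnlyRootFilesystem: true     # fixes notReadOnlyRootFilesystem
          runAsNonRoot: true               # fixes runAsRootAllowed
          allowPrivilegeEscalation: false  # fixes privilegeEscalationAllowed
          capabilities:
            drop: ["ALL"]                  # fixes insecureCapabilities
```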

Now switch to an image that comes from quay.io:

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata: 
    name: nginx-ds
spec:
    template:
      metadata:
        labels:
          app: nginx-ds
      spec:
        containers:
        - name: my-nginx
          image: quay.io/public/nginx:v1.7.9
          ports: 
          - containerPort: 80
          
          
[root@k8s-master polaris-cli]# ./polaris audit --audit-path /opt/polaris-cli/quay-io.yaml --config /opt/polaris-cli/custom-config.yaml --format=pretty --only-show-failed-tests


Polaris audited Path /opt/polaris-cli/quay-io.yaml at 2021-07-13T13:53:21+08:00
    Nodes: 0 | Namespaces: 0 | Controllers: 1
    Final score: 51

DaemonSet nginx-ds in namespace 
  Container my-nginx
    readinessProbeMissing                😬 Warning
        Reliability - Readiness probe should be configured
    pullPolicyNotAlways                  😬 Warning
        Reliability - Image pull policy should be "Always"
    cpuLimitsMissing                     😬 Warning
        Efficiency - CPU limits should be set
    cpuRequestsMissing                   😬 Warning
        Efficiency - CPU requests should be set
    memoryLimitsMissing                  😬 Warning
        Efficiency - Memory limits should be set
    notReadOnlyRootFilesystem            😬 Warning
        Security - Filesystem should be read only
    customImage                          😬 Warning
        Security - Image should not be from disallowed registry
    livenessProbeMissing                 😬 Warning
        Reliability - Liveness probe should be configured
    privilegeEscalationAllowed           ❌ Danger
        Security - Privilege escalation should not be allowed
    runAsRootAllowed                     😬 Warning
        Security - Should not be allowed to run as root
    insecureCapabilities                 😬 Warning
        Security - Container should not have insecure capabilities
    memoryRequestsMissing                😬 Warning
        Efficiency - Memory requests should be set

This time the audit reports "Image should not be from disallowed registry", and the score drops to 51.

The same result shown via the dashboard:

(screenshot: dashboard view of the audit results)

Checking memory, CPU, and other resource settings

Polaris extends JSON Schema with resourceMinimum and resourceMaximum fields to help compare memory and CPU resource strings such as 1000m and 1G. Below is an example that checks whether memory and CPU fall within a given range.

customChecks:
  resourceLimits:
    containers:
      exclude:
      - initContainer
    successMessage: Resource limits are within the required range
    failureMessage: Resource limits should be within the required range
    category: Efficiency
    target: Container
    schema:
      '$schema': http://json-schema.org/draft-07/schema
      type: object
      required:
      - resources
      properties:
        resources:
          type: object
          required:
          - limits
          properties:
            limits:
              type: object
              required:
              - memory
              - cpu
              properties:
                memory:
                  type: string
                  resourceMinimum: 100M
                  resourceMaximum: 6G
                cpu:
                  type: string
                  resourceMinimum: 100m
                  resourceMaximum: "2"
Configuring a whitelist of cluster checks

You can also whitelist checks within the cluster, for example skipping the check on whether dns-controller sets hostNetwork:

exemptions:
  - controllerNames:
      - dns-controller
    rules:
      - hostNetworkSet 
Additional check files

If you want to create your own checks, you can use JSON Schema. This is also how the built-in Polaris checks are defined: you can see examples of all the built-in checks in the checks folder.

If you write a check that might be useful to others, feel free to open a PR (pull request) to add it!

Exemptions

Sometimes a workload genuinely needs to do something Polaris considers insecure. In those cases, we can add an exemption that allows the workload to pass the Polaris checks.

Exemptions can be added in a few different ways:

  • Namespace: By editing the Polaris config.
  • Controller: By annotating a controller, or editing the Polaris config.
  • Container: By editing the Polaris config.

Annotations

To exempt a controller from all checks via annotations, use the annotation polaris.fairwinds.com/exempt=true:

kubectl annotate deployment my-deployment polaris.fairwinds.com/exempt=true

To exempt a controller from a particular check via annotations, use an annotation of the form polaris.fairwinds.com/<check>-exempt=true:

kubectl annotate deployment my-deployment polaris.fairwinds.com/cpuRequestsMissing-exempt=true

Config

To add an exemption via the config, you must specify at least one of the following:

  • a namespace
  • a list of controller names
  • a list of container names

You can also specify a list of particular rules. If no rules are specified, every rule is exempted. Controller names and container names are matched as prefixes, so an empty string will match every controller or container, respectively.

例如:

exemptions:

  # this exemption applies to all rules, on all containers, in all controllers, in the default namespace
  - namespace: default

  # this exemption applies to the hostNetworkSet rule, on all containers, in dns-controller, in the kube-system namespace
  - namespace: kube-system
    controllerNames:  # any resource that contains a pod spec (e.g. Deployments, CronJobs, StatefulSets), as well as naked Pods
      - dns-controller
    rules:
      - hostNetworkSet

  # this exemption applies to the hostNetworkSet rule, on all containers, in dns-controller, in all namespaces
  - controllerNames:
      - dns-controller
    rules:
      - hostNetworkSet

  # this exemption applies to the hostNetworkSet rule, on the coredns container, in all controllers, in the kube-system namespace
  - namespace: kube-system
    containerNames:
      - coredns
    rules:
      - hostNetworkSet

The three check categories and their parameters

Examples of the relevant configuration appear in the first half of the default config.yaml.
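The default severities listed in the tables below can be overridden, or checks silenced entirely, via the checks section of the config. A small sketch (the two checks chosen here are arbitrary examples):

```yaml
checks:
  # promote a warning-level check to danger
  notReadOnlyRootFilesystem: danger
  # disable a check entirely
  pullPolicyNotAlways: ignore
```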

Security

官方解析:https://polaris.docs.fairwinds.com/checks/security/

These checks relate to security concerns. Workloads that fail them may leave the cluster more vulnerable, often by introducing privilege-escalation paths.

| key                        | default | description                                                                          |
|----------------------------|---------|--------------------------------------------------------------------------------------|
| hostIPCSet                 | danger  | Fails when hostIPC attribute is configured.                                          |
| hostPIDSet                 | danger  | Fails when hostPID attribute is configured.                                          |
| notReadOnlyRootFilesystem  | warning | Fails when securityContext.readOnlyRootFilesystem is not true.                       |
| privilegeEscalationAllowed | danger  | Fails when securityContext.allowPrivilegeEscalation is true.                         |
| runAsRootAllowed           | warning | Fails when securityContext.runAsNonRoot is not true.                                 |
| runAsPrivileged            | danger  | Fails when securityContext.privileged is true.                                       |
| insecureCapabilities       | warning | Fails when securityContext.capabilities includes one of the capabilities listed here |
| dangerousCapabilities      | danger  | Fails when securityContext.capabilities includes one of the capabilities listed here |
| hostNetworkSet             | warning | Fails when hostNetwork attribute is configured.                                      |
| hostPortSet                | warning | Fails when hostPort attribute is configured.                                         |
| tlsSettingsMissing         | warning | Fails when an Ingress lacks TLS settings.                                            |

Efficiency

官方解析:https://polaris.docs.fairwinds.com/checks/efficiency/

These checks ensure that CPU and memory settings are configured so that Kubernetes can schedule workloads efficiently.

| key                   | default | description                                                       |
|-----------------------|---------|-------------------------------------------------------------------|
| cpuRequestsMissing    | warning | Fails when resources.requests.cpu attribute is not configured.    |
| memoryRequestsMissing | warning | Fails when resources.requests.memory attribute is not configured. |
| cpuLimitsMissing      | warning | Fails when resources.limits.cpu attribute is not configured.      |
| memoryLimitsMissing   | warning | Fails when resources.limits.memory attribute is not configured.   |

Reliability

官方解析:https://polaris.docs.fairwinds.com/checks/reliability/

These checks help ensure that your workloads are always available and are running the correct images.

| key                           | default | description                                                          |
|-------------------------------|---------|----------------------------------------------------------------------|
| readinessProbeMissing         | warning | Fails when a readiness probe is not configured for a pod.            |
| livenessProbeMissing          | warning | Fails when a liveness probe is not configured for a pod.             |
| tagNotSpecified               | danger  | Fails when an image tag is either not specified or latest.           |
| pullPolicyNotAlways           | warning | Fails when an image pull policy is not always.                       |
| priorityClassNotSet           | ignore  | Fails when a priorityClassName is not set for a pod.                 |
| multipleReplicasForDeployment | ignore  | Fails when there is only one replica for a deployment.               |
| missingPodDisruptionBudget    | ignore  | Fails when a PodDisruptionBudget is not configured for a deployment. |