Automating Kubernetes Best Practices

A first peek at using the open-source Polaris tool to automate some Kubernetes best practices.

Organizing Your Thinking

A search for “Kubernetes best practices” turns up a daunting list of articles with guidance and checklists. For example, the Kubernetes website alone provides two: Best Practices and Configuration Best Practices. To make things even more complicated, searching for “Kubernetes security” adds to this list of articles, many of which are also from the Kubernetes website.

One pattern, however, does emerge: you can organize your thinking about best practices (including security) in layers, the 4C’s: Cloud, Clusters, Containers, and Code.

[Figure: Overview of Cloud Native Security]

Polaris

The Polaris tool is principally focused on best practices (including security) of the Container layer.

Fairwinds’ Polaris keeps your clusters sailing smoothly. It runs a variety of checks to ensure that Kubernetes pods and controllers are configured using best practices, helping you avoid problems in the future.

— Fairwinds — Polaris

The tool has three modes of operation:

Dashboard: The Polaris dashboard is a way to get a simple visual overview of the current state of your Kubernetes workloads as well as a roadmap for what can be improved.

Admission Controller: Polaris can be run as an admission controller that acts as a validating webhook. This accepts the same configuration as the dashboard, and can run the same validations. This webhook will reject any workloads that trigger a danger-level check. This is indicative of the greater goal of Polaris, not just to encourage better configuration through dashboard visibility, but to actually enforce it with this webhook. Polaris will not fix your workloads, only block them.

Command Line Interface (CLI): Polaris can also be used on the command line, either to audit local files or a running cluster. This is particularly helpful for running Polaris against your infrastructure-as-code as part of a CI/CD pipeline.

— Fairwinds — Polaris

While the admission controller and/or CLI modes are what we would use to operationalize best practices, the dashboard mode is useful in performing an initial assessment of the Container layer of a Kubernetes solution.

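For example, a CI step might audit a directory of manifests and fail the build on any danger-level finding. A sketch (the ./k8s-manifests path is hypothetical; the flags are per the Polaris documentation at the time of writing):

$ polaris audit --audit-path ./k8s-manifests --set-exit-code-on-danger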

The Dashboard Quickstart is fairly non-intrusive. Installing it principally creates a deployment, service, and service account in a Polaris namespace. Using it involves temporarily forwarding a local (one’s workstation) port to the service; one then uses a browser to access the dashboard.

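For reference, the quickstart amounts to something like the following (command forms per the Polaris documentation at the time of writing; verify against the current docs):

$ kubectl apply -f https://github.com/fairwindsops/polaris/releases/latest/download/dashboard.yaml
$ kubectl port-forward --namespace polaris svc/polaris-dashboard 8080:80

The dashboard is then available at http://localhost:8080.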

The dashboard performs a series of real-time checks on the cluster’s Pods (including those managed by higher-order resources, e.g., deployments); checks are reported as passing, warning, or dangerous. For example:

[Screenshot: Polaris dashboard check results for a pod]

Each check includes a help link to a screen that provides a summary of the check, e.g.:

Related to that, relying on cached versions of a Docker image can become a security vulnerability. By default, an image will be pulled if it isn’t already cached on the node attempting to run it. This can result in variations in images that are running per node, or potentially provide a way to gain access to an image without having direct access to the ImagePullSecret. With that in mind, it’s often better to ensure that a pod has pullPolicy: Always specified, so images are always pulled directly from their source.

— Polaris — Images (from Dashboard)

Below the summary, there are links to supporting articles, e.g., Kubernetes’ AlwaysPullImages Admission Control — the Importance, Implementation, and Security Vulnerability in its Absence.

To illustrate Polaris in action, we walk through addressing failing (warning or dangerous) checks of an example pod.

First Attempt

Our first attempt at running our sample Node.js application on Kubernetes begins with creating an Image; we follow the Node.js documentation: Dockerizing a Node.js Web App.

The application itself is simple; not much to talk about.

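For context, it is essentially the Express “Hello world” server from that guide (a sketch; the actual source is not reproduced here):

// server.js: minimal Express "Hello world" server (sketch, after the Node.js guide)
const express = require('express');

const PORT = 8080;
const HOST = '0.0.0.0';

const app = express();
app.get('/', (req, res) => {
  res.send('Hello World');
});

app.listen(PORT, HOST);
console.log(`Running on http://${HOST}:${PORT}`);

The Dockerfile is equally simple.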

FROM node:12.18.2
WORKDIR /usr/src/app
COPY app/package*.json ./
RUN npm install
COPY app .
EXPOSE 8080
CMD [ "npm", "start" ]

Point to observe:

  • The non-obvious instructions are those that first copy only the package*.json files and then run npm install. This follows Docker’s best practice of leveraging the build cache.

We build the Image and push it to a Repository; for example sckmkny/hello-polaris. Here the Image is tagged with 0.1.0 and latest.

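The build and push steps are the standard ones, e.g. (a sketch; substitute your own repository):

$ docker build -t sckmkny/hello-polaris:0.1.0 -t sckmkny/hello-polaris:latest .
$ docker push sckmkny/hello-polaris:0.1.0
$ docker push sckmkny/hello-polaris:latest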

With the Image created, we can run our application on Kubernetes using the following configuration file:

apiVersion: v1
kind: Service
metadata:
  name: hello-polaris
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    run: hello-polaris
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: hello-polaris
  name: hello-polaris
spec:
  containers:
  - image: sckmkny/hello-polaris
    name: hello-polaris
    ports:
    - containerPort: 8080

Note: This configuration file is purposefully simple and was scaffolded using a dry run of the kubectl run command, e.g.:

$ kubectl run hello-polaris --image=sckmkny/hello-polaris --port=8080 --expose=true --dry-run=client -o yaml > hello-polaris.yaml

We observe that our pod has a number of failing checks as reported by the Polaris dashboard.

Note: One passing check, Image pull policy is “Always”, is confusing here as we did not specify an image pull policy. It turns out that Kubernetes’ image pull policy defaults are more complicated than I originally thought; I had assumed it always defaulted to IfNotPresent. By not providing an image tag (we will address this in the next section) we inadvertently passed this check.

Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise.

— Kubernetes — API Overview (1.18)

Image Tag

We begin by addressing the failing (dangerous) check: Image tag should be specified.

Docker’s latest tag is applied by default to images where a tag hasn’t been specified. Not specifying a specific version of an image can lead to a wide variety of problems. The underlying image could include unexpected breaking changes that break your application whenever the latest image is pulled. Reusing the same tag for multiple versions of an image can lead to different nodes in the same cluster having different versions of an image, even if the tag is identical.

— Polaris — Images (from Dashboard)

The fix is to append the image’s specific tag, :0.1.0, to the container’s image property:

apiVersion: v1
kind: Service
metadata:
  name: hello-polaris
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    run: hello-polaris
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: hello-polaris
  name: hello-polaris
spec:
  containers:
  - image: sckmkny/hello-polaris:0.1.0
    name: hello-polaris
    ports:
    - containerPort: 8080

Replacing the pod, the Polaris dashboard now reports 0 dangerous checks; in particular, the relevant check is now passing.

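Note: Since most pod fields are immutable once the pod is created, “replacing” here (and below) means deleting and re-creating the pod, e.g.:

$ kubectl replace --force -f hello-polaris.yaml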

But now that we have specified an image tag that is not :latest, we trigger another failing (warning) check.

Image Pull Policy

The check that we triggered is:

Related to that, relying on cached versions of a Docker image can become a security vulnerability. By default, an image will be pulled if it isn’t already cached on the node attempting to run it. This can result in variations in images that are running per node, or potentially provide a way to gain access to an image without having direct access to the ImagePullSecret. With that in mind, it’s often better to ensure that a pod has pullPolicy: Always specified, so images are always pulled directly from their source.

— Polaris — Images (from Dashboard)

Note: The documentation suggests that this particular check is ignored by default; clearly it is not. This has been reported as an issue.

While we are using a public Image, out of an abundance of caution, and to accommodate Polaris, we explicitly set the container’s imagePullPolicy to Always:

apiVersion: v1
kind: Service
metadata:
  name: hello-polaris
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    run: hello-polaris
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: hello-polaris
  name: hello-polaris
spec:
  containers:
  - image: sckmkny/hello-polaris:0.1.0
    name: hello-polaris
    ports:
    - containerPort: 8080
    imagePullPolicy: Always

Replacing the pod, the Polaris dashboard now reports this check as passing.

Probes

We now address the warning checks: Liveness probe should be configured and Readiness probe should be configured.

Readiness and liveness probes can help maintain the health of applications running inside Kubernetes. By default, Kubernetes only knows whether or not a process is running, not if it’s healthy. Properly configured readiness and liveness probes will also be able to ensure the health of an application.

— Polaris — Health Checks (from Dashboard)

We remind ourselves what these probes do:

Many applications running for long periods of time eventually transition to broken states, and cannot recover except by being restarted. Kubernetes provides liveness probes to detect and remedy such situations.

— Kubernetes — Configure Liveness, Readiness and Startup Probes

Sometimes, applications are temporarily unable to serve traffic. For example, an application might need to load large data or configuration files during startup, or depend on external services after startup. In such cases, you don’t want to kill the application, but you don’t want to send it requests either. Kubernetes provides readiness probes to detect and mitigate these situations.

— Kubernetes — Configure Liveness, Readiness and Startup Probes

Here we add livenessProbe and readinessProbe properties to the Container:

apiVersion: v1
kind: Service
metadata:
  name: hello-polaris
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    run: hello-polaris
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: hello-polaris
  name: hello-polaris
spec:
  containers:
  - image: sckmkny/hello-polaris:0.1.0
    name: hello-polaris
    ports:
    - containerPort: 8080
    imagePullPolicy: Always
    livenessProbe:
      httpGet:
        port: 8080
        path: /
    readinessProbe:
      httpGet:
        port: 8080
        path: /

Note: While not necessary to pass the checks, it would have been better to use a named port in the probes (less opportunity for typographic errors). I did not think of this until I was proofreading the article, and I was too “lazy” to go back and fix all the examples.

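For reference, the named-port variant would look something like this (a sketch only; the examples below keep the numeric port):

    ports:
    - name: http
      containerPort: 8080
    livenessProbe:
      httpGet:
        port: http
        path: /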

Points to observe:

  • Because this application has no external dependencies, the readinessProbe in this example is not necessary

  • To satisfy Polaris, however, we supply a readinessProbe (same as livenessProbe)

Replacing the pod, the Polaris dashboard now reports both checks as passing.

Resources

We now address the various CPU / memory requests and limits warnings:

Configuring resource requests and limits for containers running in Kubernetes is an important best practice to follow. Setting appropriate resource requests will ensure that all your applications have sufficient compute resources. Setting appropriate resource limits will ensure that your applications do not consume too many resources.

— Polaris — Resources (from Dashboard)

A container’s (and thus a pod’s) resources configuration plays a role in scheduling Pods.

When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum capacity for each of the resource types: the amount of CPU and memory it can provide for Pods. The scheduler ensures that, for each resource type, the sum of the resource requests of the scheduled Containers is less than the capacity of the node.

— Kubernetes — Managing Resources for Containers

It is also used in running them.

If a Container exceeds its memory limit, it might be terminated. If it is restartable, the kubelet will restart it, as with any other type of runtime failure.

If a Container exceeds its memory request, it is likely that its Pod will be evicted whenever the node runs out of memory.

A Container might or might not be allowed to exceed its CPU limit for extended periods of time. However, it will not be killed for excessive CPU usage.

— Kubernetes — Managing Resources for Containers

To get a sense of the CPU and memory actually used by our pod, we can use the following command (assuming one has the Kubernetes Metrics Server Addon installed):

$ kubectl top pod
NAME            CPU(cores)   MEMORY(bytes)
hello-polaris   0m           31Mi

With our application using 31Mi of memory, we can give it some room to grow by setting the memory request and limit to 128Mi. Without an actual load, however, it is impossible to determine the appropriate CPU request and limit, so we blindly set them to 100m to start.

apiVersion: v1
kind: Service
metadata:
  name: hello-polaris
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    run: hello-polaris
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: hello-polaris
  name: hello-polaris
spec:
  containers:
  - image: sckmkny/hello-polaris:0.1.0
    name: hello-polaris
    ports:
    - containerPort: 8080
    imagePullPolicy: Always
    livenessProbe:
      httpGet:
        port: 8080
        path: /
    readinessProbe:
      httpGet:
        port: 8080
        path: /
    resources:
      limits:
        cpu: 100m
        memory: 128Mi
      requests:
        cpu: 100m
        memory: 128Mi

Replacing the pod, the Polaris dashboard now reports all the resource checks for the pod as passing.

Security

We now address the warning checks: Filesystem should be read only and Should not be allowed to run as root.

Securing workloads in Kubernetes is an important part of overall cluster security. The overall goal should be to ensure that containers are running with as minimal privileges as possible. This includes avoiding privilege escalation, not running containers with a root user, and using read only file systems wherever possible.

Much of this configuration can be found in the securityContext attribute for both Kubernetes pods and containers. Where configuration is available at both a pod and container level, Polaris validates both.

— Polaris — Security (from Dashboard)

Here we set the appropriate securityContext for the Container.

apiVersion: v1
kind: Service
metadata:
  name: hello-polaris
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    run: hello-polaris
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: hello-polaris
  name: hello-polaris
spec:
  containers:
  - image: sckmkny/hello-polaris:0.1.0
    name: hello-polaris
    ports:
    - containerPort: 8080
    imagePullPolicy: Always
    livenessProbe:
      httpGet:
        port: 8080
        path: /
    readinessProbe:
      httpGet:
        port: 8080
        path: /
    resources:
      limits:
        cpu: 100m
        memory: 128Mi
      requests:
        cpu: 100m
        memory: 128Mi
    securityContext:
      runAsUser: 1000
      runAsGroup: 1000
      readOnlyRootFilesystem: true

A point to observe:

  • Here we explicitly run with a non-privileged user: a user with UID and GID of 1000. We picked this number as it is the first non-system UID / GID

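One quick way to verify the effective user inside the running container (output will vary with the base image; in the node base image, UID 1000 maps to the node user):

$ kubectl exec hello-polaris -- id
uid=1000(node) gid=1000(node) groups=1000(node)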

Replacing the pod, the Polaris dashboard now reports all checks for the pod as passing.

Note: While we did not explicitly set the allowPrivilegeEscalation field (another securityContext field) to false, the Privilege escalation is not allowed check passes. After a bit of digging, I found out that this configuration defaults to false under most conditions.

AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN

— Kubernetes — Kubernetes API Reference

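That said, it costs nothing to be explicit; the field can be added to the container’s securityContext alongside the other settings:

    securityContext:
      runAsUser: 1000
      runAsGroup: 1000
      readOnlyRootFilesystem: true
      allowPrivilegeEscalation: false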

Security Tweak

Looking back at our original Dockerfile, we did not specify a non-root user (and group) to run the container with. With this image, we addressed the failing Polaris check by explicitly setting the user and group in the pod’s container configuration.

This approach, however, requires tight coupling of the image and the container configuration, e.g., the container (and thus pod) will fail if the image does not support running with the specified user and group.

The solution, decoupling the image and container configuration, is to first specify the user and group in the image itself.

If a service can run without privileges, use USER to change to a non-root user.

— Docker — Best practices for writing Dockerfiles

The updated Dockerfile that is used to build and push a new Image tag:

FROM node:12.18.2
WORKDIR /usr/src/app
COPY app/package*.json ./
RUN npm install
COPY app .
EXPOSE 8080
USER 1000:1000
CMD [ "npm", "start" ]

A point to observe:

  • Here we use the UID / GID values instead of the user and group names, i.e., node:node. This is because the related securityContext field (below) only works with numeric IDs; with a non-numeric USER, the kubelet cannot verify that the container is not running as root

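The new image is built and pushed with a bumped tag, mirroring the earlier commands (a sketch):

$ docker build -t sckmkny/hello-polaris:0.1.1 -t sckmkny/hello-polaris:latest .
$ docker push sckmkny/hello-polaris:0.1.1
$ docker push sckmkny/hello-polaris:latest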

With this new image, we update the container’s image to the new tag and replace the runAsUser and runAsGroup fields with the runAsNonRoot field in the securityContext.

apiVersion: v1
kind: Service
metadata:
  name: hello-polaris
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    run: hello-polaris
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: hello-polaris
  name: hello-polaris
spec:
  containers:
  - image: sckmkny/hello-polaris:0.1.1
    name: hello-polaris
    ports:
    - containerPort: 8080
    imagePullPolicy: Always
    livenessProbe:
      httpGet:
        port: 8080
        path: /
    readinessProbe:
      httpGet:
        port: 8080
        path: /
    resources:
      limits:
        cpu: 100m
        memory: 128Mi
      requests:
        cpu: 100m
        memory: 128Mi
    securityContext:
      runAsNonRoot: true
      readOnlyRootFilesystem: true

Replacing the pod, the Polaris dashboard continues to report all checks for the pod as passing.

Conclusion

Having read through much of the Kubernetes’ documentation (and other best practice documentation), it does appear that Polaris does a solid job of ensuring that one abides by best practices in the container layer of a Kubernetes solution. Also, given that Polaris is open-source, one can always propose improvements.

Source: https://codeburst.io/automating-kubernetes-best-practices-7a8276ff7b08
