Kubernetes Dynamic Admission Control by Example
A walk-through of creating a webhook for Kubernetes dynamic admission control.
The code illustrated in this article is available for download.
What?
First, what is a Kubernetes admission controller?
In a nutshell, Kubernetes admission controllers are plugins that govern and enforce how the cluster is used. They can be thought of as a gatekeeper that intercept (authenticated) API requests and may change the request object or deny the request altogether.
— Kubernetes — A Guide to Kubernetes Admission Controllers
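For background (an illustration of mine, not a step in this walk-through), the built-in admission controllers, including the two webhook-based ones discussed next, are enabled via a flag on the API server, for example:

$ kube-apiserver --enable-admission-plugins=NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook ...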
So, what about dynamic admission control?
Among the more than 30 admission controllers shipped with Kubernetes, two take a special role because of their nearly limitless flexibility — ValidatingAdmissionWebhooks and MutatingAdmissionWebhooks…they do not implement any policy decision logic themselves. Instead, the respective action is obtained from a REST endpoint (a webhook) of a service running inside the cluster. This approach decouples the admission controller logic from the Kubernetes API server, thus allowing users to implement custom logic to be executed whenever resources are created, updated, or deleted in a Kubernetes cluster.
— Kubernetes — A Guide to Kubernetes Admission Controllers
Why?
I recently wrote an article, Automating Kubernetes Best Practices, about the open source Polaris application.
Fairwinds’ Polaris keeps your clusters sailing smoothly. It runs a variety of checks to ensure that Kubernetes pods and controllers are configured using best practices, helping you avoid problems in the future.
— Fairwinds — Polaris
Through the process of using Polaris, I was struck that while Kubernetes has a robust authorization mechanism, RBAC, it is fairly coarse-grained. For example, a user with create access on the pod resource can create pods however they wish. At the same time, we know that we need to constrain users from creating pods that pose a security risk.
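To make the coarseness concrete, here is a sketch of the kind of RBAC rule in question (the Role name is illustrative); it grants pod creation wholesale, with no way to express constraints on what the created pod may contain:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-creator
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["create"]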
To illustrate how admission controller webhooks can be leveraged to establish custom security policies, let’s consider an example that addresses one of the shortcomings of Kubernetes: a lot of its defaults are optimized for ease of use and reducing friction, sometimes at the expense of security. One of these settings is that containers are by default allowed to run as root (and, without further configuration and no USER directive in the Dockerfile, will also do so). Even though containers are isolated from the underlying host to a certain extent, running containers as root does increase the risk profile of your deployment — and should be avoided as one of many security best practices. The recently exposed runC vulnerability (CVE-2019–5736), for example, could be exploited only if the container ran as root.
— Kubernetes — A Guide to Kubernetes Admission Controllers
While Polaris delivers a webhook with a robust set of security checks and even allows one to create custom checks, it is inherently limited to validating pods and containers. What about constraining other resources?
This got me thinking that I needed to learn how to create my own dynamic admission control webhook.
Prerequisites
If you wish to follow along, you will need:
- A Kubernetes cluster
- kubectl installed locally, configured for the cluster
- Docker installed locally
- Node.js installed locally
- openssl installed locally; often pre-installed by the OS
note: While I tried using both MicroK8s and Minikube local clusters, I ended up writing this article using an Amazon Elastic Kubernetes Service (EKS) cluster because I could not reliably get Telepresence (next section) to work with either of them.
Sidebar into Telepresence
Because we want to iterate quickly, we install and use Telepresence to develop locally.
Kubernetes applications usually consist of multiple, separate services, each running in its own container. Developing and debugging these services on a remote Kubernetes cluster can be cumbersome, requiring you to get a shell on a running container and running your tools inside the remote shell.
telepresence is a tool to ease the process of developing and debugging services locally, while proxying the service to a remote Kubernetes cluster. Using telepresence allows you to use custom tools, such as a debugger and IDE, for a local service and provides the service full access to ConfigMap, secrets, and the services running on the remote cluster.
— Kubernetes — Developing and debugging services locally
Having installed Telepresence, we begin by creating the Express “Hello World” application locally; see the sketch below.
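A minimal initial app.js (a sketch reconstructed from the HTTPS version shown later in this article) might look like:

const express = require('express');

const app = express();
const port = 3000;

app.get('/', (req, res) => {
  res.send('Hello World!');
});

app.listen(port, () => {
  console.log(`Server running on port ${port}/`);
});

We create a pod and service to deliver our application with: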
$ telepresence --new-deployment hdac --expose 3000
Indeed, we observe the new pod and service.
$ kubectl get all
NAME       READY   STATUS    RESTARTS   AGE
pod/hdac   1/1     Running   0          29s

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
service/hdac         ClusterIP   10.100.38.46   <none>        3000/TCP   29s
service/kubernetes   ClusterIP   10.100.0.1     <none>        443/TCP    39m
In the shell provided by the telepresence command, we run our application locally and thus deliver it into the cluster.
$ node app.js
In another terminal we can run a temporary pod to access the service within the cluster; available on the DNS name hdac.default.svc:
$ kubectl run tmp --image=busybox --restart=Never -it --rm -- wget -O - -q -T 3 http://hdac.default.svc:3000
Hello World!
pod "tmp" deleted
note: That is a lot of command line options (WOW).
TLS Certificates
We next need to serve our application over HTTPS.
Since a webhook must be served via HTTPS, we need proper certificates for the server. These certificates can be self-signed (rather: signed by a self-signed CA), but we need Kubernetes to be instructed about the respective CA certificate when talking to the webhook server. In addition, the common name (CN) of the certificate must match the server name used by the Kubernetes API server, which for internal services is <service-name>.<namespace>.svc, i.e., webhook-server.webhook-demo.svc in our case.
— Kubernetes — A Guide to Kubernetes Admission Controllers
The one bit of trickiness is that creating a self-signed TLS certificate is different from creating a TLS certificate signed by a self-signed CA; the latter is what we need to do. I wrote a separate article, Mutual TLS Authentication (mTLS) De-Mystified, that describes the concepts behind the following steps.
Within the application (app) folder, we first create the CA key and certificate:
$ openssl req \
  -new \
  -x509 \
  -nodes \
  -days 365 \
  -subj '/CN=my-ca' \
  -keyout ca.key \
  -out ca.crt
We next create the server key:
$ openssl genrsa \
  -out server.key 2048
We create a certificate signing request:
$ openssl req \
  -new \
  -key server.key \
  -subj '/CN=hdac.default.svc' \
  -out server.csr
We create the server certificate:
$ openssl x509 \
  -req \
  -in server.csr \
  -CA ca.crt \
  -CAkey ca.key \
  -CAcreateserial \
  -days 365 \
  -out server.crt
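Optionally, we can sanity-check that the server certificate chains to our CA, which should report server.crt: OK:

$ openssl verify -CAfile ca.crt server.crt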
The updated application code, serving HTTPS, at this point is:
const express = require('express');
const fs = require('fs');
const https = require('https');

const app = express();
const port = 3000;

const options = {
  ca: fs.readFileSync('ca.crt'),
  cert: fs.readFileSync('server.crt'),
  key: fs.readFileSync('server.key'),
};

app.get('/', (req, res) => {
  res.send('Hello World!');
});

const server = https.createServer(options, app);

server.listen(port, () => {
  console.log(`Server running on port ${port}/`);
});
With these changes we deliver our application into the cluster:
$ telepresence --new-deployment hdac --expose 3000
$ node app.js
As before, we can run a temporary pod to access the service available within the cluster on the DNS name hdac.default.svc:
$ kubectl run tmp --image=busybox --restart=Never -it --rm -- wget -O - -q -T 3 https://hdac.default.svc:3000
wget: note: TLS certificate validation not implemented
Hello World!
pod "tmp" deleted
note: If one wants to do TLS certificate validation at this point, one can create a separate pod to run the curl command with the cacert option (passing in the ca.crt file we created earlier).
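A sketch of that approach (the ConfigMap name and pod manifest here are illustrative, not from the original walk-through): first publish the CA certificate into the cluster,

$ kubectl create configmap hdac-ca --from-file=ca.crt

then run a pod that mounts it and performs a verified request:

apiVersion: v1
kind: Pod
metadata:
  name: tmp
spec:
  restartPolicy: Never
  volumes:
    - name: ca
      configMap:
        name: hdac-ca
  containers:
    - name: curl
      image: curlimages/curl
      command: ['curl', '-s', '--cacert', '/ca/ca.crt', 'https://hdac.default.svc:3000']
      volumeMounts:
        - name: ca
          mountPath: /ca

After applying this manifest, kubectl logs tmp should show the Hello World! response.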
Dynamic Admission Control Webhook Application
Here we adapt our application to be a dynamic admission control webhook; i.e., a simple one that accepts the request and always allows the operation.
Webhooks are sent a POST request, with Content-Type: application/json, with an AdmissionReview API object in the admission.k8s.io API group serialized to JSON as the body.
Webhooks respond with a 200 HTTP status code, Content-Type: application/json, and a body containing an AdmissionReview object (in the same version they were sent), with the response stanza populated, serialized to JSON.
— Kubernetes — Dynamic Admission Control
Per the aforementioned documentation, the request JSON includes the following structure:
{
  "apiVersion": "admission.k8s.io/v1",
  "kind": "AdmissionReview",
  "request": {
    "uid": "<unique identifier>",
    ...
  }
  ...
}
The response JSON minimally has the following structure; the value of the allowed property dictates whether the operation is allowed.
{
  "apiVersion": "admission.k8s.io/v1",
  "kind": "AdmissionReview",
  "response": {
    "uid": "<value from request.uid>",
    "allowed": true
  }
}
One last detail, which I missed, cost me many hours of head-scratching.
Kubernetes does not allow specifying a port in the webhook configuration; it always assumes the HTTPS port 443.
— Kubernetes — A Guide to Kubernetes Admission Controllers
With all this in mind, we update the application code:
const bodyParser = require('body-parser');
const express = require('express');
const fs = require('fs');
const https = require('https');

const app = express();
app.use(bodyParser.json());
const port = 443;

const options = {
  ca: fs.readFileSync('ca.crt'),
  cert: fs.readFileSync('server.crt'),
  key: fs.readFileSync('server.key'),
};

app.post('/', (req, res) => {
  if (
    req.body.request === undefined ||
    req.body.request.uid === undefined
  ) {
    res.status(400).send();
    return;
  }
  console.log(req.body); // DEBUGGING
  const { request: { uid } } = req.body;
  res.send({
    apiVersion: 'admission.k8s.io/v1',
    kind: 'AdmissionReview',
    response: {
      uid,
      allowed: true,
    },
  });
});

const server = https.createServer(options, app);

server.listen(port, () => {
  console.log(`Server running on port ${port}/`);
});
Points to observe:
- Here we use the body-parser library (it has to be installed) to extract the JSON POST data
- We change the application to run on port 443
- We change the method from get to post
- In the post handler, we do basic validation of the request JSON and return a response that allows the operation
We deliver our updated application into the cluster (using sudo because of the privileged port):
$ telepresence --new-deployment hdac --expose 443
$ sudo node app.js
We can run a temporary pod to validate the application’s behavior:
$ kubectl run tmp \
  --image=curlimages/curl \
  --restart=Never \
  -it \
  --rm \
  -- \
  curl \
  --insecure \
  -H 'Content-Type: application/json' \
  --request POST \
  --data '{"request": { "uid": "sample" } }' \
  https://hdac.default.svc
{"apiVersion":"admission.k8s.io/v1","kind":"AdmissionReview","response":{"uid":"sample","allowed":true}}pod "tmp" deleted
note: Now we are getting serious about the number of command line options (SMILE).
Points to observe:
- We switch to using curl; I find it easier to use for complex requests
- To avoid TLS certificate errors, we use the --insecure option
ValidatingWebhookConfiguration
To enable the webhook application, we need to create a ValidatingWebhookConfiguration resource. But first, we need to create a single-line, base64-encoded version of the ca.crt file.
$ cat ca.crt | base64 --wrap=0
[OMITTED]
note: A bit of confusion here is that the ca.crt file already contains a block of base64-encoded data; here we are encoding the entire file (including the already encoded data).
We create a configuration file for the ValidatingWebhookConfiguration:
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: "pod-policy.example.com"
webhooks:
  - name: "pod-policy.example.com"
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
        scope: "Namespaced"
    clientConfig:
      service:
        namespace: "default"
        name: "hdac"
      caBundle: "[OMITTED]"
    admissionReviewVersions: ["v1"]
    sideEffects: None
    timeoutSeconds: 5
Points to observe:
- The name, here pod-policy.example.com, needs to be globally unique and expressed as a DNS name
- This configuration only operates on pod creation
- The caBundle property is the encoded data from above
- To keep things simple, our webhook application only supports v1, set in the admissionReviewVersions property
We create the ValidatingWebhookConfiguration:
$ kubectl apply -f admission.yaml
validatingwebhookconfiguration.admissionregistration.k8s.io/pod-policy.example.com created
It is important to observe that we left the application running from the previous section.
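Optionally, we can confirm that the webhook registration exists:

$ kubectl get validatingwebhookconfigurations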
We then create an example pod with the configuration file:
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
  labels:
    name: hello-pod
spec:
  containers:
    - name: ubuntu
      image: ubuntu:18.04
      command: ['tail', '-f', '/dev/null']
and apply it:
$ kubectl apply -f pod.yaml
pod/hello-pod created
Looking back at our application’s console, we see the full structure of the request JSON (or at least several levels of it) because the code has a debugging console.log in it.
{ kind: 'AdmissionReview',
  apiVersion: 'admission.k8s.io/v1',
  request:
   { uid: 'eae5fb07-baa5-4ddc-876f-1b3f399a5833',
     kind: { group: '', version: 'v1', kind: 'Pod' },
     resource: { group: '', version: 'v1', resource: 'pods' },
     requestKind: { group: '', version: 'v1', kind: 'Pod' },
     requestResource: { group: '', version: 'v1', resource: 'pods' },
     name: 'hello-pod',
     namespace: 'default',
     operation: 'CREATE',
     userInfo:
      { username: 'kubernetes-admin',
        uid: 'heptio-authenticator-aws:143287522423:AIDASCXE2CR33MJQ4BOAQ',
        groups: [Array],
        extra: [Object] },
     object:
      { kind: 'Pod',
        apiVersion: 'v1',
        metadata: [Object],
        spec: [Object],
        status: [Object] },
     oldObject: null,
     dryRun: false,
     options: { kind: 'CreateOptions', apiVersion: 'meta.k8s.io/v1' } } }
While it is outside the scope of this article, validating the operation “simply” amounts to validating the supplied request JSON.
note: On other projects, I found the Ajv: Another JSON Schema Validator library to be a particularly powerful JSON validation solution.
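To make that concrete, here is a sketch of my own (not part of the original walk-through) that replaces the always-allow handler with one denying pods whose containers do not set runAsNonRoot, the root-user concern raised earlier:

app.post('/', (req, res) => {
  if (
    req.body.request === undefined ||
    req.body.request.uid === undefined
  ) {
    res.status(400).send();
    return;
  }
  const { request } = req.body;
  // The pod being created is supplied in request.object.
  const containers =
    (request.object && request.object.spec && request.object.spec.containers) || [];
  // Allow only if every container explicitly runs as non-root.
  const allowed = containers.every(
    (c) => c.securityContext && c.securityContext.runAsNonRoot === true
  );
  res.send({
    apiVersion: 'admission.k8s.io/v1',
    kind: 'AdmissionReview',
    response: {
      uid: request.uid,
      allowed,
      // status.message surfaces in the kubectl error when the request is
      // denied; the undefined value is dropped during JSON serialization.
      status: allowed
        ? undefined
        : { message: 'all containers must set securityContext.runAsNonRoot: true' },
    },
  });
});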
Containerization
Now that our application is operational, we can go ahead and containerize it. First we make some small changes to the application code:
const bodyParser = require('body-parser');
const express = require('express');
const fs = require('fs');
const https = require('https');

const app = express();
app.use(bodyParser.json());
const port = 8443;

const options = {
  ca: fs.readFileSync('ca.crt'),
  cert: fs.readFileSync('server.crt'),
  key: fs.readFileSync('server.key'),
};

app.get('/hc', (req, res) => {
  res.send('ok');
});

app.post('/', (req, res) => {
  if (
    req.body.request === undefined ||
    req.body.request.uid === undefined
  ) {
    res.status(400).send();
    return;
  }
  console.log(req.body); // DEBUGGING
  const { request: { uid } } = req.body;
  res.send({
    apiVersion: 'admission.k8s.io/v1',
    kind: 'AdmissionReview',
    response: {
      uid,
      allowed: true,
    },
  });
});

const server = https.createServer(options, app);

server.listen(port, () => {
  console.log(`Server running on port ${port}/`);
});
Points to observe:
- We change the application to run on an unprivileged port, 8443
- We add a health check endpoint, hc
- We also need to change the file permissions of the key files that we generated earlier (the container will run as an unprivileged user) to be world readable, e.g.:
$ chmod 644 *.key
We create a Dockerfile:
FROM node:12.18.2
WORKDIR /usr/src/app
COPY app/package*.json ./
RUN npm install
COPY app .
EXPOSE 8443
USER 1000:1000
CMD [ "npm", "start" ]
We then need to build and push an image to a repository; in my case I used Amazon Elastic Container Registry (ECR). Another obvious choice would be Docker Hub.
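The exact commands depend on the registry; a sketch, with [REGISTRY] standing in for your repository URI (for ECR with the AWS CLI v2, you would first authenticate Docker as shown):

$ aws ecr get-login-password | docker login --username AWS --password-stdin [REGISTRY]
$ docker build -t [REGISTRY]/hdac:0.1.0 .
$ docker push [REGISTRY]/hdac:0.1.0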
Next we need to create the service and pod definitions for our application:
apiVersion: v1
kind: Service
metadata:
  name: hdac
spec:
  ports:
    - port: 443
      protocol: TCP
      targetPort: 8443
  selector:
    run: hdac
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: hdac
  name: hdac
spec:
  containers:
    - image: [OMITTED]/hdac:0.1.0
      name: hdac
      ports:
        - containerPort: 8443
      imagePullPolicy: Always
      livenessProbe:
        httpGet:
          port: 8443
          path: /hc
          scheme: HTTPS
      readinessProbe:
        httpGet:
          port: 8443
          path: /hc
          scheme: HTTPS
      resources:
        limits:
          cpu: 100m
          memory: 128Mi
        requests:
          cpu: 100m
          memory: 128Mi
      securityContext:
        runAsNonRoot: true
        readOnlyRootFilesystem: true
Points to observe:
- While this configuration is fairly straightforward, the details can be found in another article that I wrote, Automating Kubernetes Best Practices
- Here we simply deployed a pod instead of the preferred Deployment; this was done to keep this article simple
Finally, we deploy the updated application into the cluster:
$ kubectl apply -f controller.yaml
service/hdac created
pod/hdac created
note: In order to apply this configuration, we have to first delete the ValidatingWebhookConfiguration and then re-apply it; sort of a chicken-and-egg problem.
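In other words, the update sequence amounts to something like:

$ kubectl delete -f admission.yaml
$ kubectl apply -f controller.yaml
$ kubectl apply -f admission.yaml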
Conclusion
While this was a somewhat lengthy process, the actual code is fairly simple and serves as a starting point for creating your own webhook for Kubernetes dynamic admission control.
Original article: https://medium.com/@johntucker_48673/kubernetes-dynamic-admission-control-by-example-d8cc2912027c