To Mesh or Not to Mesh

Service Mesh

I was told that a Service Mesh such as Linkerd, Consul or Istio adds a lot of overhead to my cluster. With this in mind, a Service Mesh is not suitable for a small deployment; instead, you should consider a Service Mesh only when your client is big enough to warrant it.

But how big must a client be to warrant a Service Mesh?

And, more importantly, how much overhead does a Service Mesh add to my cluster?

The answer is: I don’t know.

Because of this, I'm starting this POC to answer that question.

Resources

Files here: https://gitlab.com/post-repos/to-mesh-or-not-to-mesh

Requirements

To run this test you will need:

  • a k8s cluster (we will use GCP)
  • kubectl
  • locust
  • docker (or any container engine)
  • git

We will use Linkerd, so you will need to download the CLI.

BFF

A simple Python app that exposes a simple API and hits the BACKEND. Once it gets the BACKEND's response, it enriches it and sends it to the client.
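
The actual sources live in the repo under source; purely for orientation, here is a minimal sketch of what the BFF does, assuming a Flask app and a BACKEND_URL environment variable (both the framework and the variable name are assumptions, not taken from the repo):

import os

import requests
from flask import Flask, jsonify

app = Flask(__name__)

# Where the BACKEND service is reachable inside the cluster (assumed name)
BACKEND_URL = os.environ.get("BACKEND_URL", "http://backend:8080")

@app.route("/kungfutest/<test_id>")
def enrich(test_id):
    # Hit the BACKEND, which answers with its version number...
    backend_reply = requests.get(BACKEND_URL).text
    # ...then enrich that response and send it to the client
    return jsonify({"id": test_id, "response": backend_reply})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)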

BACKEND

The BACKEND just answers the request with its version number.

Service Mesh

I will use Linkerd for this test.

What will we measure?

We will measure two items:

  • WEB response times with Locust
  • k8s resource usage

Then we'll compare these metrics in two scenarios:

  • using the Service Mesh
  • using the raw k8s cluster.

Set up the environment

Cluster

This terraform template can be used: https://gitlab.com/templates14/terraform-templates/-/tree/master/gke

Then log in to your cluster, e.g. in this case:

gcloud container clusters get-credentials kungfoo-test --region us-central1 --project speedy-league-274700

The tests

We’ll have two tests: one with and one without the service mesh.

No Service Mesh

Set env

Create the namespace to deploy the app into:

kubectl create ns kungfootest

Set the app and run tests

Go to Set up the app and then to Run the tests. After that, come back here.

Clean up

Delete deployments:

kubectl delete -n kungfootest -f deploy-backend.yaml -f deploy-bff.yaml -f ingress.yaml

Delete the namespace so we’re clean:

kubectl delete ns kungfootest

Service Mesh

Linkerd

First, the CLI must be installed on your system (more here).

Download the binary from here and add it to your PATH.

Since we will be using GKE, we need to run these extra steps: https://linkerd.io/2/reference/cluster-configuration/#private-clusters

Check that the cluster is ready for Linkerd:

linkerd check --pre

I got:

pre-kubernetes-capability
-------------------------
!! has NET_ADMIN capability
    found 1 PodSecurityPolicies, but none provide NET_ADMIN, proxy injection will fail if the PSP admission controller is running
    see https://linkerd.io/checks/#pre-k8s-cluster-net-admin for hints
!! has NET_RAW capability
    found 1 PodSecurityPolicies, but none provide NET_RAW, proxy injection will fail if the PSP admission controller is running
    see https://linkerd.io/checks/#pre-k8s-cluster-net-raw for hints

The cluster lacks these capabilities, but they will probably be set up when Linkerd is installed (https://github.com/linkerd/linkerd2/issues/3494).

Then install it:

linkerd install | kubectl apply -f -

…and wait until it’s installed:

linkerd check

Set env

Create the namespace to deploy the app into; this time we'll need an annotation for Linkerd:

kubectl create ns kungfootest
kubectl edit ns kungfootest

and then add the annotation:

  annotations:
    linkerd.io/inject: enabled

This will allow Linkerd to automagically inject the proxy into the namespace's pods.

Set the app and run tests

Now, go to Set up the app and then to Run the tests. After that, come back here.

Note that this time the pods will have two containers, since Linkerd is injecting the proxy.

Compare the test results

For my tests:

No Mesh

Total average requests: 33% CPU, 8% memory.

Total average usage: 12% CPU, 26% memory.

Avg response time: 204ms

Mesh

Total average requests: 35% CPU, 9% memory.

Total average usage: 25% CPU, 38% memory.

Avg response time: 206ms

Conclusion

The mesh configuration we used is very basic, but it adds interesting services to our deployment with no need to modify code (e.g. secure internal connections, metrics…).

From the client's point of view, the response time was only 1% higher in the meshed version.

On the server side, we're using about 100% more CPU and 46% more memory.
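
For reference, those percentages follow directly from the averages measured above; a quick Python check reproduces them:

no_mesh = {"cpu_pct": 12, "mem_pct": 26, "latency_ms": 204}
mesh = {"cpu_pct": 25, "mem_pct": 38, "latency_ms": 206}

for metric in no_mesh:
    increase = (mesh[metric] - no_mesh[metric]) / no_mesh[metric] * 100
    print(f"{metric}: +{increase:.0f}%")
# cpu_pct: +108%, mem_pct: +46%, latency_ms: +1%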

Is it worth it?

As usual, it depends. Can you afford the increase in CPU and memory usage? Then you can have all the service mesh benefits at almost no cost on the client side. In any case, it deserves more testing if you are considering it.

But I'd like to read your opinions on this, so drop a message here.

Set up the app

Under the source directory there are two subdirs: one for the BFF and one for the BACKEND (inside the latter you will find two more dirs, versions 1 and 2… a.k.a. stable and canary; for now we will use just the stable version).

Build the app

Backend

In both cases you must proceed the same way, varying only the version number.

In the source/backend directory you will see the Dockerfile and the two version directories.

CD into your source/backend directory and run:

cd source/backend/1.0 && \
GOOS=linux GOARCH=amd64 go build -tags netgo -o app && \
docker build -t backendapp:1.0 . && \
cd ..

…and:

cd source/backend/2.0 && \
GOOS=linux GOARCH=amd64 go build -tags netgo -o app && \
docker build -t backendapp:2.0 . && \
cd ..

Bff

CD into the source/bff directory and run:

docker build -t bffapp .

Push them all

OK, now you have the images… push them all to a repository of your choice and keep their names so we can set them in the k8s manifests.

Or use these already built images:

  • docker.io/juanmatias/canary-app:1.0
  • docker.io/juanmatias/canary-app:2.0
  • docker.io/juanmatias/canary-app:bff-1.0

Deploy the app

We will deploy all the elements into the kungfootest namespace.

CD into the root project directory and then:

cd manifests

Deploy the backend apps:

kubectl apply -f deploy-backend.yaml -n kungfootest

Deploy the bff:

kubectl apply -f deploy-bff.yaml -n kungfootest

Deploy the ingress:

kubectl apply -f ingress.yaml -n kungfootest

Test the app

Get the public IP:

kubectl get ing -n kungfootest

You can test your app with this command:

curl http://$PUBLICIP/kungfutest/mytest

You should get an output like this one:

{"id": "mytest", "response": "Congratulations! Version 1.0 of your application is running on Kubernetes."}

Run the tests

We'll run two measurements: Locust to get the response times, and the resources script to get the resource usage.

Locust

From the project root dir:

cd locust

If this is the first time, create a virtual environment and install Locust:

pip install locust

Now, run the locust server:

locust -f kungfootest.py

This will start the Locust server listening on localhost:8089… open it in your browser.

There, you must set the host (e.g. http://$PUBLICIP), the max number of users and the user spawn rate. Then begin your tests.

I'll test it with 100 users and a spawn rate of 10, and let the test run for 2 minutes.
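
The repo ships its own kungfootest.py; for reference, a minimal locustfile along these lines would exercise the same endpoint (the wait times and class name here are just an example):

from locust import HttpUser, between, task

class KungFooUser(HttpUser):
    # Pause 1-2 s between requests; tune to taste
    wait_time = between(1, 2)

    @task
    def mytest(self):
        # Same endpoint used by the curl smoke test
        self.client.get("/kungfutest/mytest")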

Resources

While the Locust test is running, run the script resources.sh. When it finishes, just press CTRL+C and it will show the average memory and CPU.

NOTE: keep in mind that this script gets the resources requested for the nodes, and the real usage only under the kungfootest namespace.
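
resources.sh lives in the repo; purely as an illustration (not the actual script), the per-namespace usage part can be approximated in Python by sampling kubectl top until CTRL+C, assuming metrics-server is available and pods report memory in Mi:

import subprocess
import time

NAMESPACE = "kungfootest"
cpu_samples, mem_samples = [], []

try:
    while True:
        out = subprocess.run(
            ["kubectl", "top", "pods", "-n", NAMESPACE, "--no-headers"],
            capture_output=True, text=True, check=True,
        ).stdout
        cpu = mem = 0
        for line in out.splitlines():
            # e.g. "bff-5d9c7-abcde 5m 21Mi"
            _, cpu_str, mem_str = line.split()
            cpu += int(cpu_str.rstrip("m"))
            mem += int(mem_str.rstrip("Mi"))
        cpu_samples.append(cpu)
        mem_samples.append(mem)
        time.sleep(5)
except KeyboardInterrupt:
    if cpu_samples:
        print(f"avg cpu: {sum(cpu_samples) / len(cpu_samples):.1f}m")
        print(f"avg mem: {sum(mem_samples) / len(mem_samples):.1f}Mi")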

Translated from: https://medium.com/@juanmatias_25470/to-mesh-or-not-to-mesh-dc969394baae
