Creating a Scalable and Resilient Varnish Cluster Using Kubernetes

Recently I have been working on a project that needs a scalable and resilient Varnish cluster. The interesting thing about this project is the use of Kubernetes as the platform for our application stack. While working on it, I learned a lot, including how to use Go in a Kubernetes controller and how Varnish works.

What is Varnish?

Varnish is a layer of HTTP cache that caches responses, mostly for anonymous users, before requests hit the application layer. Typically the Varnish cache is stored in RAM, which helps achieve higher performance. If all available memory is used for the cache, the least recently used cache items are evicted.

The basic Varnish distribution is free and open source. The HTTP cache works as depicted in the image below.

Traffic from logged-in users, or requests for dynamic content, is not supposed to be cached; it therefore bypasses the Varnish caching layer and goes straight to the application service.

However, if the content is supposed to be cached, Varnish checks whether the corresponding item exists in the cache and returns it; otherwise the request is forwarded to the app service, and its response is cached and returned to the user.

Such an architecture raises several questions:

  1. Can we eliminate the Kubernetes service and let Varnish talk to app pods (backends) directly?
  2. Do we scale Varnish horizontally or vertically?
  3. How do we scale Varnish pods (frontends)?
  4. How do we shard the cache if we have multiple Varnish pods?
  5. How do we flush the cache?

Keep reading and you will find answers to these questions.

Kube-httpcache Controller

While I was trying to figure out answers to the questions above, I found an exciting open source project, kube-httpcache, which is a Varnish controller for Kubernetes. Since it is an open source project, I was able to evolve it significantly, so that it handles all the features I needed and covers all the questions I had.

At that time, out of the box, kube-httpcache made it possible to eliminate the Kubernetes service in front of the application, so Varnish could talk to backend pods directly, as shown in the image below.

[Image: Varnish routing traffic directly to backend pods]

In this case, Varnish is aware of all the running backends and routes traffic to them according to the algorithm set in VCL, the Varnish Configuration Language.

Every time a new backend pod is added, the Varnish controller becomes aware of it and updates the Varnish configuration on the fly.

For this purpose, the Go templating language is used to process the VCL template file. In the example below, a round-robin algorithm is used to select the next backend pod.

sub vcl_init {
  new lb = directors.round_robin();
  {{ range .Backends -}}
  lb.add_backend(be-{{ .Name }});
  {{ end }}
}

Varnish scaling

There is a way to scale Varnish vertically if your Kubernetes cluster supports this feature. However, this approach is discouraged because it creates a single point of failure: if your Varnish pod goes down, there is no other pod to handle traffic right away. For this reason, horizontal scaling is preferred. It is also easier to manage.

Horizontal scaling does not need to be automatic; it can be manual instead. There are a couple of problems with fully automatic scaling, as you will see later.

Horizontal scaling

I have been working on a feature that allows the kube-httpcache controller not only to monitor backend pods but also to monitor frontend pods, i.e., to be aware of the Varnish instances running as part of its own cluster.

Since we have multiple Varnish instances, we can also shard cache across them. This article describes how to build a self-routing Varnish Cluster.

[Image: cache sharding across Varnish frontends]

In this example, a user requests a resource that is supposed to be cached by Varnish Frontend 1. However, the traffic randomly goes to Varnish Frontend 2. The latter frontend (2) determines, with the help of a hashing algorithm, that this resource is supposed to be cached by the former Varnish instance (1).

If the resource is in the cache of Varnish Frontend 1, the cached result is returned to the user; otherwise, Varnish sends the request to one of the app backends, caches the response, and returns it.

The configuration for that looks like the following.

sub vcl_init {
  new cluster = directors.hash();
  {{ range .Frontends -}}
  cluster.add_backend({{ .Name }}, 1);
  {{ end }}

  new lb = directors.round_robin();
  {{ range .Backends -}}
  lb.add_backend(be-{{ .Name }});
  {{ end }}
}

sub vcl_recv {
  # Set backend hint for non-cacheable objects.
  set req.backend_hint = lb.backend();

  # ...

  # Routing logic.
  # Pass a request to the appropriate Varnish node.
  unset req.http.x-cache;
  set req.backend_hint = cluster.backend(req.url);
  set req.http.x-shard = req.backend_hint;
  if (req.http.x-shard != server.identity) {
    return(pass);
  }
  set req.backend_hint = lb.backend();

  # ...

  return(hash);
}

The only downside of this approach is that when we change the number of Varnish pods, the old hashes no longer map to the same Varnish nodes. This is why autoscaling affects performance significantly. Fortunately, there is a solution for that.

Consistent hashing

From the Varnish documentation, I figured out that there is a shard director that behaves similarly to the hash director, except that it uses a consistent hashing algorithm. The benefit of this algorithm is that when a new Varnish frontend is added, most of the old hashes still map to their original Varnish frontends, while only a few are reassigned to different ones.

Consistent hashing is based on mapping each resource to a point on a ring. The shard director maps each available Varnish frontend to many pseudo-randomly distributed points on the same ring. To find the Varnish frontend holding a cached resource, the shard director finds the location of that resource's key on the ring, then walks around the ring until it reaches the first Varnish frontend point it encounters.

In this case, the configuration looks like the following.

sub vcl_init {
  new cluster = directors.shard();
  {{ range .Frontends -}}
  cluster.add_backend({{ .Name }});
  {{ end }}
  cluster.set_warmup(180);
  cluster.reconfigure();

  new lb = directors.round_robin();
  {{ range .Backends -}}
  lb.add_backend(be-{{ .Name }});
  {{ end }}
}

sub vcl_recv {
  # Set backend hint for non-cacheable objects.
  set req.backend_hint = lb.backend();

  # ...

  # Routing logic.
  # Pass a request to the appropriate Varnish node.
  unset req.http.x-cache;
  set req.backend_hint = cluster.backend(by=URL);
  set req.http.x-shard = req.backend_hint;
  if (req.http.x-shard != server.identity) {
    return(pass);
  }
  set req.backend_hint = lb.backend();

  # ...

  return(hash);
}

Flushing cache

Both the hash director and the shard director can be used to flush a single resource, but what if we want to flush multiple resources that share the same cache tag?

In this case, we need to pass a flush signal to all of the Varnish frontends. For this purpose, kube-httpcache has a built-in Varnish signaller component. Once you send a request to the signaller, it broadcasts that request to all of the Varnish frontends.

Conclusion

As we saw, creating a scalable and resilient Varnish cluster requires knowledge of several different aspects of Varnish; fortunately, kube-httpcache handles most of the work. Feel free to try this project and let me know what you think.

Translated from: https://medium.com/@dealancer/creating-a-scalable-and-resilient-varnish-cluster-using-kubernetes-853f03ec9731
