OpenResty: A Swiss Army Proxy for Serverless; WAL, Slack, Zapier and Auth — Futurice

A while ago I started writing an identity aware proxy (IAP) to secure a binary with authentication. However, what started as a minimal auth layer has steadily grown features. What I have come to appreciate is that the reverse proxy is a great layer for a variety of cross-cutting concerns such as auth, at-least-once delivery and payload adaptation. Furthermore, I have found OpenResty provides amazing performance and flexibility, AND it fits the serverless paradigm almost perfectly.


Concretely, I have been working on extending the IAP to ingest and reshape signals from Slack and Zapier, tunnel them through a Write Ahead Log (WAL), and verify their authenticity, all before they hit our application binary. It turns out there are huge technical advantages to doing these integrations at the proxy layer.


The first win for the proxy is as a general-purpose adapter. You often need to change the shape of the JSON being exchanged between independently developed services. Given the utility of wrapping services with a common auth layer anyway, it makes sense that this is also a convenient point to do the domain mapping. With OpenResty you get to do this in a performant binary.


The second win was using the proxy to minimize response latency. Slack insists bots reply within 3 seconds. If the upstream is a serverless JVM process, you can easily time out while the upstream is cold. We solved this in the proxy layer by buffering incoming requests into a managed queue, somewhat like a Write Ahead Log (WAL). This meant we could lower latency by replying to Slack’s webhook as soon as the queue acknowledged the write. As OpenResty is C + Lua, it starts up so fast that we can do the best possible in a serverless environment.


With the WAL, we get at-least-once delivery semantics. Putting a WAL at the proxy layer can paper over a ton of upstream reliability issues: as long as the upstream is idempotent, you don’t need retry logic upstream. This simplifies application development and widens the stack choice upstream. Specifically for us, it meant a slow-starting JVM binary did not need to be rewritten in order to be deployed on serverless.


Finally, we could verify the authenticity of incoming messages quickly, so that potential attacks are stopped before they consume more expensive resources upstream. Again, OpenResty is likely to be faster (and therefore cheaper) than an application server at this rote task. We found it relatively painless to store secrets in Secret Manager and retrieve them in the proxy.


Overall we found that OpenResty is almost the perfect technology for serverless. It gives you C performance (response latency as low as 5ms), fast startup times (400ms cold starts on Cloud Run) and the production readiness of Nginx, all whilst giving you the flexibility to customize and add features at your infrastructure boundary with Lua scripting.


It’s worth noting that Cloud Run scales to zero (unlike Fargate) and supports concurrent requests (unlike Lambda and Google Cloud Functions). OpenResty + Cloud Run will allow you to serve a ton of concurrent traffic on a single instance, so I expect it to be the most cost-efficient of the options. While its cold start is higher than, say, Lambda’s (we get 400ms vs 200ms), because it needs fewer scaling events I expect cold starts to be less frequent for most deployments.


Having the proxy handle more use cases (e.g. retry logic) moves cost out of application binaries and into the slickest part of the infra. You don’t need a Kubernetes cluster to reap these benefits, but you could deploy it in a cluster if you wish. We have managed to package all our functionality into a single small serverless service, deployed by Terraform and MIT licensed.


Now I will go into the specifics of how our development went and how we solved various tactical development issues. This will probably be too detailed for most readers, but I have tried to order the topics by generalizability.


Local development with Terraform and Docker Compose

The slowness of Cloud Run deployments prevented intricate proxy feature development, so the first issue to solve was local development. As Cloud Run ultimately deploys Docker images, we used docker-compose to bring up our binary along with a GCE Metadata Emulator. Our Dockerfile is generated from Terraform templates, but you can ask Terraform to generate just that local file, without deploying the whole project, with the -target flag. Thus we can create a semi-decent development loop by sequencing Terraform artifact generation and docker-compose in a loop, with no need to rewrite the Terraform recipe to support local development!

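The shell script itself is not reproduced in this copy of the post; a sketch of what such a dev loop looks like follows (the Terraform target name and file paths are assumptions, placeholders for whatever your project uses):

```shell
#!/bin/bash
# test/dev.sh -- local dev loop (sketch)
while true; do
  # Regenerate only the locally needed artifacts (e.g. the templated
  # Dockerfile) without deploying the whole project.
  terraform apply -auto-approve -target=local_file.dockerfile
  # Run the proxy plus a GCE metadata emulator; CTRL+C drops out of
  # `up`, after which the loop rebuilds and reruns.
  docker-compose up --build
  docker-compose down
done
```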

With the above shell script, when you press CTRL + C in the shell the binary will update and rerun. The tricky bit is exiting the loop! If you invoke it with “/bin/bash test/dev.sh”, the process will be named bash, so you can exit with “killall bash”. The OAuth 2.0 redirect_uri will not work with localhost, so you will need to copy prod tokens from the prod deployment using /login?token=true.


Adding a Write Ahead Log with Pub/Sub

To be able to confidently respond to incoming requests fast, a general-purpose internal location /wal/... was added to the proxy. Any request forwarded to /wal/<PATH> was assumed to be destined for UPSTREAM/<PATH>, but would travel via a Pub/Sub topic and subscription. This offloads the persistent storage of the buffer to a dedicated service with great latency and durability guarantees.


Each WAL message essentially encapsulates an HTTP request. So the headers, uri, method and body were placed in an envelope and sent to Pub/Sub. The request body was mapped to the base64-encoded Pub/Sub data field, and we used Pub/Sub attributes to store the rest.


Calling Pub/Sub is a simple POST request to a topic provisioned by Terraform.

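The code shown in the original post’s screenshot did not survive extraction; a minimal sketch of the /wal/ publisher follows, assuming the lua-resty-http client and a hypothetical get_access_token() helper that fetches a service-account token from the metadata server (PROJECT and TOPIC are placeholders):

```lua
-- nginx.conf location (sketch): envelope the request and publish it.
location /wal/ {
  content_by_lua_block {
    local cjson = require "cjson.safe"
    local http  = require "resty.http"

    ngx.req.read_body()
    local body = ngx.req.get_body_data() or ""

    -- Envelope: body -> base64 Pub/Sub data, routing metadata -> attributes.
    local envelope = {
      data = ngx.encode_base64(body),
      attributes = {
        uri     = ngx.var.uri:gsub("^/wal", ""),
        method  = ngx.req.get_method(),
        headers = cjson.encode(ngx.req.get_headers()),
      },
    }

    local httpc = http.new()
    local res, err = httpc:request_uri(
      "https://pubsub.googleapis.com/v1/projects/PROJECT/topics/TOPIC:publish", {
        method = "POST",
        body = cjson.encode({ messages = { envelope } }),
        headers = {
          ["Content-Type"]  = "application/json",
          -- get_access_token() is a hypothetical metadata-server helper.
          ["Authorization"] = "Bearer " .. get_access_token(),
        },
      })

    -- Acknowledge as soon as Pub/Sub has persisted the write.
    if res and res.status == 200 then
      ngx.exit(ngx.HTTP_OK)
    else
      ngx.log(ngx.ERR, "publish failed: ", err or res.status)
      ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
    end
  }
}
```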

A Pub/Sub subscription was provisioned that pushes the envelopes back to the proxy at location /wal-playback. By specifying an oidc_token, Pub/Sub will add an ID token that can be verified in the proxy.

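A sketch of that subscription in Terraform (the resource names and the proxy_url / service-account references are assumptions):

```hcl
resource "google_pubsub_subscription" "wal" {
  name  = "wal"
  topic = google_pubsub_topic.wal.name

  push_config {
    # Deliver envelopes back into the proxy for playback.
    push_endpoint = "${var.proxy_url}/wal-playback"

    # Pub/Sub attaches a Google-signed ID token the proxy can verify.
    oidc_token {
      service_account_email = google_service_account.pubsub_pusher.email
    }
  }
}
```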

In the OpenResty config we expose /wal-playback to the internet, but we verify the incoming token before unpacking the envelope and sending upstream.


For our use case, our upstream was hosted on Cloud Run too. If the upstream response was status code 429 (Too Many Requests), this means the container is scaling up and the request should be retried. Similarly, a status code 500 means upstream broke and the request should be retried. For these response codes the proxy returns status 500 to Pub/Sub, which triggers its retry behaviour, leading to at-least-once delivery semantics.

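Putting these pieces together, a sketch of the playback handler, with token checking elided into a hypothetical verify_google_id_token() helper and UPSTREAM as a placeholder host:

```lua
location /wal-playback {
  content_by_lua_block {
    local cjson = require "cjson.safe"
    local http  = require "resty.http"

    -- Reject anything without a valid Pub/Sub-issued ID token.
    if not verify_google_id_token(ngx.req.get_headers()["Authorization"]) then
      ngx.exit(ngx.HTTP_FORBIDDEN)
    end

    ngx.req.read_body()
    local push = cjson.decode(ngx.req.get_body_data())
    local msg  = push.message

    -- Unpack the envelope and replay the request against upstream.
    local httpc = http.new()
    local res = httpc:request_uri("https://UPSTREAM" .. msg.attributes.uri, {
      method  = msg.attributes.method,
      body    = ngx.decode_base64(msg.data),
      headers = cjson.decode(msg.attributes.headers),
    })

    -- 429 (scaling up) and 500 (upstream broke) must be retried:
    -- returning 500 to Pub/Sub triggers redelivery, which is what
    -- gives the at-least-once semantics.
    if not res or res.status == 429 or res.status == 500 then
      ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
    end
    ngx.exit(ngx.HTTP_OK)
  }
}
```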

In our deployment the Zapier and Slack integrations used the /wal/ proxy.


Integrating Slack

We wanted to boost internal productivity by adding custom “slash commands” to our internal business workflow engine. Creating an internal bot and registering a new command is very easy; you just need to supply a public endpoint.


Slack sends outbound x-www-form-urlencoded webhooks. Of course, our upstream speaks JSON, but it is trivial to convert using the resty-reqargs package.

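A sketch of the conversion, assuming the lua-resty-reqargs package and a placeholder internal forwarding path:

```lua
location /slack-command {
  access_by_lua_block {
    local cjson = require "cjson.safe"
    -- reqargs transparently parses the x-www-form-urlencoded body.
    local get, post = require "resty.reqargs"()

    -- Re-shape the payload into the JSON upstream expects.
    ngx.req.set_header("Content-Type", "application/json")
    ngx.req.set_body_data(cjson.encode(post))
  }
  # Hand off via the WAL location for a fast acknowledgement
  # (the path here is a placeholder).
  proxy_pass http://127.0.0.1:8080/wal/slack/command;
}
```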

As this is a public endpoint, we need to ensure the authenticity of the request. Slack uses a shared symmetric signing key. As we don’t want secrets anywhere near Terraform, we manually copy the key into Google Secret Manager.


Then we only need to store the resource id of the key in Terraform. By the way, Secret Manager is excellent! You can reference the latest version, so you can rotate secrets without bothering Terraform.

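In Terraform that reference can be as small as the following (the project and secret names are placeholders):

```hcl
# The secret value itself is created by hand in Secret Manager;
# Terraform only carries a reference to it. Pointing at "latest"
# means rotation never requires a Terraform change.
locals {
  slack_signing_secret_id = "projects/PROJECT/secrets/slack-signing-key/versions/latest"
}
```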

In the OpenResty config we fetch the secret with an authenticated GET and base64-decode it. We store the secret in a global variable for use across requests.

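A sketch of that fetch, using Secret Manager’s REST :access method (get_access_token() is again a hypothetical metadata-server helper, and SECRET_RESOURCE_ID a placeholder):

```lua
-- Fetch the signing secret once and cache it in a global (sketch).
local cjson = require "cjson.safe"
local http  = require "resty.http"

local function fetch_secret(resource_id)
  local httpc = http.new()
  local res, err = httpc:request_uri(
    "https://secretmanager.googleapis.com/v1/" .. resource_id .. ":access", {
      headers = { ["Authorization"] = "Bearer " .. get_access_token() },
    })
  if not res or res.status ~= 200 then
    ngx.log(ngx.ERR, "secret fetch failed: ", err or res.status)
    return nil
  end
  -- The payload data field is base64 encoded.
  return ngx.decode_base64(cjson.decode(res.body).payload.data)
end

slack_signing_secret = slack_signing_secret or fetch_secret(SECRET_RESOURCE_ID)
```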

The Slack documentation is pretty good at explaining how to verify a request. Using the resty.hmac package it was only a few lines of Lua.

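The check follows Slack’s v0 signing scheme; a sketch, assuming the jkeys089 lua-resty-hmac package and the slack_signing_secret global described above:

```lua
local hmac = require "resty.hmac"

local function slack_request_is_authentic()
  ngx.req.read_body()
  local body      = ngx.req.get_body_data() or ""
  local timestamp = ngx.req.get_headers()["X-Slack-Request-Timestamp"]
  local given     = ngx.req.get_headers()["X-Slack-Signature"]
  if not timestamp or not given then return false end

  -- Basestring is "v0:<timestamp>:<raw body>", HMAC-SHA256'd under
  -- the shared signing secret, hex encoded, prefixed with "v0=".
  local basestring = "v0:" .. timestamp .. ":" .. body
  local h = hmac:new(slack_signing_secret, hmac.ALGOS.SHA256)
  local expected = "v0=" .. h:final(basestring, true)  -- true -> hex output
  return expected == given
end

if not slack_request_is_authentic() then
  ngx.exit(ngx.HTTP_FORBIDDEN)
end
```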

Of course, the real difficulty with Slack is the 3-second timeout requirement, so inbound slash commands were forwarded to the WAL for a quick response with at-least-once delivery semantics.


Integrating Zapier

Zapier is another great bang-for-buck integration. Once you have an Identity Aware Proxy, it’s simple to create an internal app that can call your APIs.


After creating a Zapier App, you need to add Zapier as an authorized redirect URL.


For the authorization to work indefinitely with Google Auth, you need to add parameters to the token request to enable offline access.

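Concretely, for Google’s OAuth endpoint this means adding access_type=offline (so a refresh token is issued) and typically prompt=consent (so Google re-issues it on re-authorization). The authorization URL configured in Zapier ends up looking something like this, with CLIENT_ID and REDIRECT_URI as placeholders:

```
https://accounts.google.com/o/oauth2/v2/auth
    ?client_id=CLIENT_ID
    &redirect_uri=REDIRECT_URI
    &response_type=code
    &scope=email
    &access_type=offline
    &prompt=consent
```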

You only need the email scope.


And for the refresh token endpoint to work correctly, you need to add the client id and secret.


To send a signal from Zapier to our internal software, we created an Action called signal, which had a name key plus a string-to-string dictionary of variables. This seemed like a minimal but flexible schema.

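For illustration, a hypothetical signal payload under that schema might look like:

```json
{
  "name": "invoice_approved",
  "variables": {
    "invoice_id": "INV-123",
    "approver": "tom@example.com"
  }
}
```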

It’s very nice that Zapier works with OAuth 2.0, and it helped verify the correctness of our own identity implementation.


Learn more

Our internal workflow engine is being developed completely in the open and is MIT licensed. Read more about our vision of OpenAPI-based digital process automation.


Follow development on LinkedIn or Medium.


Originally published at https://futurice.com.


Translated from: https://medium.com/@tom.larkworthy/openresty-a-swiss-army-proxy-for-serverless-wal-slack-zapier-and-auth-futurice-2aeb11629c15
