Google Cloud Run vs. AWS Lambda

By Joao Cardoso, IOD Expert

When Amazon introduced Lambda in 2014, it was an absolute game changer. While it wasn’t the first serverless compute service or even the first Function as a Service (FaaS) product, Lambda quickly and undoubtedly became the most well-known product in these areas. And for good reasons: It made running code possible without thinking about servers, and its pricing model was the cherry on top. It took roughly a year and a half from the introduction of Lambda for Microsoft and Google to start offering their own FaaS products, which, in the end, didn’t bring much–if anything–new to the table.

Fast forward to 2019, when Google introduced Cloud Run, the first serverless product since Lambda to catch my interest. Could this be the first real innovation in serverless computing since Lambda? After all, Lambda does have certain limitations that would be nice to see addressed–either by AWS or its competitors.

In this three-part series, I’ll be comparing Cloud Run to Lambda to find out where it comes out ahead (or falls behind), and which use cases may be better suited to each product. This comparison may start out on paper, but we’ll then put the price and performance of each product to the test with publicly reviewable benchmarks (to be as transparent as possible).

Personal opinions are marked as such. Let’s begin!

Cloud Run

Cloud Run runs containers and seamlessly handles scaling from zero to thousands of containers, all while charging only for the time used to run the code. Sound familiar? At a high-level, Cloud Run is to containers what Lambda is to functions. In fact, the pricing model is even the same: compute time is rounded up to the nearest 100 ms, with an additional fixed fee per request.

So is Cloud Run similar to AWS Fargate? Both run containers in a serverless fashion, but Fargate still requires a container orchestrator like Amazon ECS or EKS, while Cloud Run is a standalone solution. And although Fargate does have per-second billing, there’s a one-minute minimum, making it more suitable for longer-running tasks.

But before we get ahead of ourselves, you should know where Cloud Run came from to better understand the product. And that story starts with Knative.

Knative

Back in 2018, Google, in partnership with companies including Pivotal Software (now VMware) and Red Hat, released Knative, yet another open-source serverless platform for Kubernetes. There were already several other projects in this space, including OpenFaas, OpenWhisk, and Kubeless. Knative’s differentiator was its aim to provide the building blocks needed to make serverless applications a reality on Kubernetes and not necessarily be a full-blown solution by itself.

Epsagon released a good comparison between Knative and other serverless frameworks for those who wish to learn more. For developers, Knative brings a set of high-level APIs that make it easy to run applications on Kubernetes using industry best practices.

As of now, Knative has two main components:

  • Knative Serving handles deploying, running, and scaling containers, plus the routing of HTTP requests to the containers.
  • Knative Eventing provides a declarative way to manage channels and subscriptions of events, with pluggable sources from different software systems. Events are delivered via HTTP, for example, to a Knative Serving service.

But how does all of this relate to Cloud Run?

Knative and Cloud Run

While Knative itself requires Kubernetes, Cloud Run was born from the Knative Serving API, which it implements almost fully. But Cloud Run is a fully managed service on its own, without Kubernetes. To learn more about the differences and the unsupported parts of the API, I recommend reading “Is Google Cloud Run really Knative?”

In practice, the same applications and manifests can be ported between Cloud Run and Knative on Kubernetes with minimal to no changes.

Lambda vs. Cloud Run: On Paper

To me, Cloud Run took some of the best parts of Lambda while also attempting to improve on others. Let’s start by seeing what they have in common.

Note: the term “call” is used from here on out to refer to either a Lambda invocation or a Cloud Run request, seeing as each service uses a different term for what is essentially the same thing.

Scale-to-Zero

Both solutions support scaling to zero, with billing in increments of 100 ms, which was always one of my favorite features of Lambda. It doesn’t matter if you have one service deployed or thousands; you only pay for the compute time resulting from a call, not for idle time.
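
To make the rounding concrete, here is a quick sketch (my own illustration, not an official calculator): both services round each call’s compute time up to the next 100 ms before billing it.

```python
import math

def billed_duration_ms(actual_ms: float, increment_ms: int = 100) -> int:
    """Round compute time up to the billing increment (100 ms on both services)."""
    return math.ceil(actual_ms / increment_ms) * increment_ms

# A 95 ms call bills as 100 ms, a 130 ms call as 200 ms, and so on.
for duration in (95, 130, 480):
    print(f"{duration} ms used -> {billed_duration_ms(duration)} ms billed")
```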

Statelessness

Due to the highly ephemeral nature of the containers and functions in these runtimes, along with the horizontal scalability they provide, all code must be stateless. Any state should always be stored elsewhere, and both cloud providers offer excellent storage services that code running on either platform can access.
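
As a quick illustration (my own sketch, not code from this article), the right pattern is to keep nothing in process memory between calls and push any durable state to a managed store; the DynamoDB table name below is purely hypothetical.

```python
import boto3

# Hypothetical table used only for illustration.
table = boto3.resource("dynamodb").Table("visit-counters")

def handler(event, context):
    # A module-level counter would vanish whenever this instance is recycled,
    # and every concurrent instance would hold its own copy. Instead, the
    # state lives in an external service that any instance can reach.
    result = table.update_item(
        Key={"page": event.get("page", "home")},
        UpdateExpression="ADD visits :one",
        ExpressionAttributeValues={":one": 1},
        ReturnValues="UPDATED_NEW",
    )
    return {"statusCode": 200, "body": str(result["Attributes"]["visits"])}
```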

Concurrency Model

Here is where things start to differ. The concurrency level for any running Lambda function is always 1, meaning that each Lambda invocation can handle a single request at a time. During a spike of concurrent invocations, additional Lambda functions are needed, which may add to latency if they incur a cold start.

I see the Lambda concurrency model mostly as an issue for latency-critical applications, especially when traffic is unpredictable and cold starts are frequent.

Cloud Run, on the other hand, allows for a concurrency limit to be set per container, from 1 to 80. In theory, for I/O-bound code, a single container should be able to handle multiple requests without the need to incur additional cold starts. And because pricing is tied to the compute time of a container, concurrent requests on the same container can be handled for little to no extra cost.
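
A back-of-envelope sketch of my own makes the difference easy to see: the number of instances a burst forces each platform to run is just the burst size divided by the per-instance concurrency.

```python
import math

def instances_needed(concurrent_requests: int, per_instance_concurrency: int) -> int:
    # Each instance can serve only this many requests at once.
    return math.ceil(concurrent_requests / per_instance_concurrency)

burst = 400  # simultaneous requests
print("Lambda (concurrency 1):    ", instances_needed(burst, 1))   # 400 instances
print("Cloud Run (concurrency 80):", instances_needed(burst, 80))  # 5 containers
```

Fewer instances means fewer potential cold starts, which is exactly where the two models diverge for I/O-bound workloads.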

It would be good to see Lambda unlock this concurrency limitation. Azure Functions, for example, does allow multiple concurrent executions on the same instance, as seen in this AWS Lambda vs. Azure Functions comparison.

CPU and Memory

The CPU and memory options for Cloud Run and Lambda differ quite a bit.

Lambda lets you pick the memory amount that’s available for a function, currently between 128 MB and 3,008 MB, in fixed increments of 64 MB. In Cloud Run, you can define any amount of memory between 128 MB and 2,048 MB (a lower limit than Lambda), and you can also pick either 1 or 2 vCPUs.

Figure 1: Price vs. allocated memory and CPU for Cloud Run and Lambda

At first glance, it seems that Cloud Run is almost always more expensive. However, it’s vital to understand that CPU performance in Lambda is directly tied to the allocated memory. A Lambda function with 1,024 MB of memory has 8x the CPU performance of one with 128 MB, but at 8x the cost.

For CPU-bound code, it’s impossible to compare the price between Lambda and Cloud Run without benchmarking the two. Likewise, for I/O-bound code, it depends on whether a Cloud Run container is handling multiple concurrent requests, which dramatically affects the price (theoretically, it could result in an 80-fold decrease in price).
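
To show why, here is a rough sketch of the arithmetic. The per-unit rates are illustrative placeholders (check the current Lambda and Cloud Run pricing pages before relying on them), the fixed per-request fee is left out, and the Cloud Run figures optimistically assume every in-flight request shares its container’s cost evenly.

```python
# Illustrative rates in USD; placeholders, not a substitute for the pricing pages.
LAMBDA_GB_SECOND = 0.0000166667     # per GB-second of allocated memory
CLOUD_RUN_VCPU_SECOND = 0.0000240   # per vCPU-second
CLOUD_RUN_GIB_SECOND = 0.0000025    # per GiB-second of allocated memory

def lambda_cost(memory_mb: int, billed_seconds: float) -> float:
    return (memory_mb / 1024) * billed_seconds * LAMBDA_GB_SECOND

def cloud_run_cost(vcpus: int, memory_mb: int, billed_seconds: float,
                   concurrent_requests: int = 1) -> float:
    per_container = (vcpus * CLOUD_RUN_VCPU_SECOND
                     + (memory_mb / 1024) * CLOUD_RUN_GIB_SECOND) * billed_seconds
    # Requests served concurrently by one container split that container's cost.
    return per_container / concurrent_requests

seconds = 0.2  # 200 ms of billed compute per call
print(f"Lambda, 1,024 MB:              ${lambda_cost(1024, seconds):.8f} per call")
print(f"Cloud Run, 1 vCPU / 1 GiB:     ${cloud_run_cost(1, 1024, seconds):.8f} per call")
print(f"Cloud Run, 80 shared requests: ${cloud_run_cost(1, 1024, seconds, 80):.8f} per call")
```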

In my opinion, Lambda’s pricing is much easier to estimate due to its concurrency model. With Cloud Run, I feel like the best we can do is calculate the worst-case scenario (concurrency = 1), throw our hands in the air, and conclude, “Well, it might be cheaper than that!”

HTTP Native

Cloud Run supports HTTP/2 natively. Every call to a Cloud Run container is an HTTP request, and it is the developer’s responsibility to start an HTTP server and handle the request. Meanwhile, AWS’ runtimes and SDKs for Lambda do all the heavy lifting so that developers only need to implement the event handlers.
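
For example, a minimal Cloud Run-compatible service in Python just needs to serve HTTP on the port Cloud Run passes in through the PORT environment variable (this sketch uses only the standard library; any web framework works equally well):

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Every Cloud Run call arrives as an ordinary HTTP request we handle ourselves.
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"Hello from Cloud Run\n")

if __name__ == "__main__":
    # Cloud Run injects the listening port via $PORT (8080 by default).
    port = int(os.environ.get("PORT", "8080"))
    HTTPServer(("0.0.0.0", port), Handler).serve_forever()
```

The Lambda equivalent is just an event handler, e.g. def handler(event, context): return {"statusCode": 200, "body": "Hello"}; the runtime and API Gateway own the HTTP layer.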

Cloud Run does require a bit of extra work here, since most languages require a library/framework to help with the HTTP side of things. But exposing a service using HTTP does have a benefit: A container can be mapped directly to a domain (using HTTPS by default, at no extra cost), without necessarily requiring an API Gateway or load balancer, as is the case with Lambda.

Unfortunately, HTTP streaming is not supported by Cloud Run, which means that both WebSockets and streaming with gRPC are a no-go (although unary gRPC is supported). Lambda, on the other hand, does support WebSockets via the Amazon API Gateway. Since Knative supports HTTP streaming, this is something that I would personally like to see Cloud Run offer in the future.

It is also worth noting here that Lambda has a maximum request and response size of 6 MB each. For binary (non-text) HTTP responses from a Lambda function, the usable size is even smaller, since the response must be returned Base64-encoded (roughly 4.5 MB of binary data fits within the 6 MB limit). Cloud Run has a much higher limit of 32 MB for each, which can be useful when dealing with larger payloads.

Openness

Both Lambda and Cloud Run are proprietary systems, but that doesn’t mean they’re equally so.

Lambda has supported runtimes for multiple languages such as JavaScript (Node.js), Java, Python, and Go and supports custom runtimes as well, whereas Cloud Run uses standard OCI images (commonly known as Docker images). Existing applications that are stateless and containerized should be deployable to Cloud Run as is.

The only proprietary part of Cloud Run is the infrastructure running the containers, as the runtime, API, and image format are all defined by Knative. Cloud Run can also alleviate the concerns of vendor lock-in, since it can be replaced by Knative running on any Kubernetes cluster.

Provisioned Concurrency

Since the beginning of Lambda, there have been various hacks to keep functions “warm,” i.e., to prevent dreaded cold starts. Recently, AWS released the provisioned concurrency feature, finally allowing a configurable amount of Lambda functions to run continuously and be ready to work. It’s an easy solution to the common cold-start problem, and it supports auto-scaling that can automatically modify the provisioned concurrency over time based on real usage.
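
For reference, provisioned concurrency is configured per published version or alias; a quick sketch with boto3 might look like this (the function name and alias below are hypothetical):

```python
import boto3

lambda_client = boto3.client("lambda")

# Keep five execution environments initialized and warm for the "prod" alias.
lambda_client.put_provisioned_concurrency_config(
    FunctionName="my-function",   # hypothetical function name
    Qualifier="prod",             # provisioned concurrency requires a version or alias
    ProvisionedConcurrentExecutions=5,
)
```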

Similarly, Knative supports a minimum number of container instances, which is shown in the Cloud Run console, but is currently disabled with a message stating that it’s not yet supported.

What’s Next?

Now that we’ve compared the two products on paper, it’s time to put them to the test. Part 2 will cover benchmarks and feature an analysis to better understand the performance and price/performance ratio of both products. Based on the results, conclusions can be drawn as to which service is better suited to different use cases. Stay tuned!

This post was originally featured at IOD, a blog that features weekly expert-based tech content for tech people.

Translated from: https://medium.com/@IODCloudTech/google-cloud-run-vs-aws-lambda-85ee00f06828
