I’m afraid you’re thinking about AWS Lambda cold starts all wrong

by Yan Cui

When I discuss AWS Lambda cold starts with folks in the context of API Gateway, I often get responses along the lines of:

Meh, it’s only the first request, right? So what if one request is slow, the next million requests would be fast.

Unfortunately, that is an oversimplification of what happens.

Cold start happens once for each concurrent execution of your function.

API Gateway reuses concurrent executions of your function if possible. Based on my observations, it might even queue up requests for a short time in the hope that one of the concurrent executions would finish and become reusable.

If user requests happen one after another, then you will only experience one cold start in the process. You can simulate this using Charles proxy by repeating a captured request with a concurrency setting of 1.

As you can see in the timeline below, only the first request experienced a cold start. The response for this request was much slower than the rest.

1 out of 100 — that’s bearable. Hell, it won’t even show up in my 99th percentile latency metric.

What if the user requests came in droves instead? After all, user behaviours are unpredictable and unlikely to follow the nice sequential pattern we see above. So let’s simulate what happens when we receive 100 requests with a concurrency of 10.
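If you don’t have Charles proxy handy, a small script gives you roughly the same load pattern. Here’s a minimal sketch (the endpoint URL is a placeholder for your own API Gateway stage); set max_workers to 1 to reproduce the sequential case above, or 10 for the bursty case:

```python
# Minimal load sketch: 100 requests, at most 10 in flight at a time.
# The endpoint URL is a placeholder - point it at your own API Gateway stage.
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

API_URL = "https://example.execute-api.us-east-1.amazonaws.com/dev/hello"  # hypothetical

def timed_request(i):
    start = time.time()
    with urlopen(API_URL) as resp:
        resp.read()
    return i, (time.time() - start) * 1000  # latency in milliseconds

with ThreadPoolExecutor(max_workers=10) as pool:  # concurrency of 10
    for i, latency_ms in pool.map(timed_request, range(100)):
        print(f"request {i:3d}: {latency_ms:.0f} ms")
```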

Now things don’t look quite as rosy — the first 10 requests were all cold starts! This is problematic if your traffic pattern is bursty around specific times of the day or specific events, for example:

  • Food ordering services (like JustEat and Deliveroo) have bursts of traffic around meal times
  • e-commerce sites have highly concentrated bursts of traffic around popular shopping days of the year — like Cyber Monday and Black Friday
  • Betting services have bursts of traffic around sporting events
  • Social networks have bursts of traffic around notable events happening around the world

For these services, the sudden bursts of traffic mean API Gateway would add more concurrent executions of your Lambda function. That equates to bursts of cold starts, and that’s bad news for you.

These are also the most crucial periods for your business, when you want your service to be on its best behavior.

If the spikes are predictable, then you can mitigate the effect of cold starts by pre-warming your API.

For example, in the case of a food ordering service, you know there will be a burst of traffic at noon. You can schedule a cron job using a CloudWatch scheduled event at 11:58am to trigger a Lambda function. This function would generate a burst of concurrent requests to force API Gateway to spawn the desired number of concurrent executions ahead of time.
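Here’s a rough sketch of what such a pre-warming function might look like. The endpoint URL, the warm-up concurrency, and the X-Warm-Up header are placeholders for illustration, not something prescribed by API Gateway or Lambda:

```python
# Pre-warming sketch: fire N concurrent requests at the API so API Gateway
# spins up N concurrent executions of the backing function ahead of the spike.
# API_URL, WARM_CONCURRENCY and the X-Warm-Up header are hypothetical values.
from concurrent.futures import ThreadPoolExecutor
from urllib.request import Request, urlopen

API_URL = "https://example.execute-api.us-east-1.amazonaws.com/prod/orders"  # hypothetical
WARM_CONCURRENCY = 10  # how many concurrent executions you want ready

def ping(_):
    # Tag the request so the handling function can recognise it and short-circuit.
    req = Request(API_URL, headers={"X-Warm-Up": "true"})
    with urlopen(req) as resp:
        return resp.status

def handler(event, context):
    # Triggered by a CloudWatch scheduled event shortly before the expected spike.
    with ThreadPoolExecutor(max_workers=WARM_CONCURRENCY) as pool:
        statuses = list(pool.map(ping, range(WARM_CONCURRENCY)))
    return {"warmed": len(statuses)}
```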

You can use HTTP headers to tag these requests. The handling function can then distinguish them from normal user requests and short-circuit.
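On the receiving end, the handler only has to check for that tag and return early. A minimal sketch, assuming a Lambda proxy integration and the same hypothetical X-Warm-Up header:

```python
import json

def handler(event, context):
    # API Gateway (Lambda proxy integration) passes HTTP headers through in the event.
    headers = {k.lower(): v for k, v in (event.get("headers") or {}).items()}

    if headers.get("x-warm-up") == "true":
        # Warm-up ping: skip the real work and return immediately.
        return {"statusCode": 200, "body": json.dumps({"warmed": True})}

    # ... normal request handling goes here ...
    return {"statusCode": 200, "body": json.dumps({"message": "hello"})}
```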

Does it not betray the ethos of serverless computing that you shouldn’t have to worry about scaling?

Yes, it does, but making users happy trumps everything else. Your users are not happy to wait for your function to cold start so they can order their food. The cost of switching to a competitor is so low nowadays, what’s stopping them from leaving you?

You could also consider reducing the impact of cold starts instead, by reducing the duration of cold starts:

  • Author your Lambda functions in a language that doesn’t incur a high cold start time — that is, Node.js, Python, or Go
  • Use a higher memory setting for functions on the critical path, including intermediate APIs
  • Optimize your function’s dependencies and package size (see the sketch after this list)
  • Stay as far away from VPCs as you can! Lambda creates ENIs (elastic network interfaces) to the target VPC, which can add up to 10s (yeah, you’re reading it right) to your cold start
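On the dependencies point, one small trick in Python is to defer heavy imports to the code paths that actually need them, so most cold starts don’t pay their import cost. A sketch, with pandas standing in for whatever heavy dependency you might have:

```python
# Module-level imports run during the cold start, so keep them lean.
import json

def handler(event, context):
    if event.get("wants_report"):
        # Heavy dependency imported lazily, only on the code path that needs it,
        # so most cold starts don't pay its import cost.
        import pandas as pd  # pandas is just an example of a heavy dependency
        df = pd.DataFrame(event.get("rows", []))
        return {"statusCode": 200, "body": df.to_json(orient="records")}

    return {"statusCode": 200, "body": json.dumps({"message": "ok"})}
```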

There are also two other factors to consider:

What about APIs that are seldom used? In that case, every invocation might be a cold start if too much time passes between invocations. To your users, these APIs are always slow, so they’re used less, and it becomes a vicious cycle.

For these, you can use a cron job (as in, a CloudWatch scheduled event with a Lambda function as target) to keep them warm. The cron job would run every 5–10 mins and ping the API with a special request. By keeping these APIs warm, your users would not have to endure cold starts.
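Most people wire this up with a framework like Serverless or SAM, but underneath it’s just a scheduled rule, a permission, and a target. A hedged sketch using boto3; the function ARN, rule name, and warm_up marker are all hypothetical:

```python
import boto3

events = boto3.client("events")
lambda_client = boto3.client("lambda")

FUNCTION_ARN = "arn:aws:lambda:us-east-1:123456789012:function:keep-warm"  # hypothetical
RULE_NAME = "keep-seldom-used-api-warm"

# A CloudWatch Events (EventBridge) rule that fires every 5 minutes.
rule = events.put_rule(Name=RULE_NAME, ScheduleExpression="rate(5 minutes)", State="ENABLED")

# Allow the rule to invoke the warmer function.
lambda_client.add_permission(
    FunctionName=FUNCTION_ARN,
    StatementId="allow-eventbridge-keep-warm",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule["RuleArn"],
)

# Point the rule at the warmer function, passing a marker the handler can recognise.
events.put_targets(
    Rule=RULE_NAME,
    Targets=[{
        "Id": "keep-warm-target",
        "Arn": FUNCTION_ARN,
        "Input": '{"warm_up": true}',
    }],
)
```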

This approach is less effective for busy functions with lots of concurrent executions. The ping message would only reach one of the concurrent executions, and there is no way to direct it to specific executions. In fact, there is no reliable way to know the exact number of concurrent executions for a function at all.

Also, if the number of concurrent user requests drops, then it’s in your best interest to let idle executions be garbage collected. After all, you wouldn’t want to pay for resources you don’t need.

This post is not intended to be your one-stop guide to AWS Lambda cold starts. It’s intended to illustrate that talking about cold starts is a more nuanced discussion than “the first request”.

Cold starts are a characteristic of the platform that we have to live with. And we love the AWS Lambda platform and want to use it, as it delivers on so many fronts. Nonetheless, it’s important to not let our own preference blind us to what’s important. Keeping our users happy and building a product that they love is always the most important goal.

To that end, you do need to know the platform you’re building on. With the cost of experimentation being so low, there’s no reason not to experiment with AWS Lambda yourself. Try to learn more about how it behaves and how you can make the most of it.

Originally published at https://www.freecodecamp.org/news/im-afraid-you-re-thinking-about-aws-lambda-cold-starts-all-wrong-45078231fe7c/
