
by Yan Cui


How to implement log aggregation for AWS Lambda

During the execution of a Lambda function, whatever you write to stdout (for example, using console.log in Node.js) will be captured by Lambda and sent to CloudWatch Logs asynchronously in the background. And it does this without adding any overhead to your function execution time.

You can find all the logs for your Lambda functions in CloudWatch Logs. There is a unique log group for each function. Each log group then consists of many log streams, one for each concurrently executing instance of the function.
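These per-function log groups follow a fixed naming convention, so you can derive one from the other. A tiny helper as a sketch (the function name `checkout-api` is a made-up example):

```javascript
// Lambda creates one log group per function, named by convention:
//   /aws/lambda/<function-name>
const logGroupFor = (functionName) => `/aws/lambda/${functionName}`;

console.log(logGroupFor('checkout-api')); // "/aws/lambda/checkout-api"
```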

You can send logs to CloudWatch Logs yourself via the PutLogEvents operation. Or you can send them to your preferred log aggregation service such as Splunk or Elasticsearch.
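As a sketch of what that operation takes, here is the shape of a PutLogEvents request; the actual AWS SDK call (and the sequence-token handling older API versions need) is elided:

```javascript
// Sketch only: builds the parameters for a CloudWatch Logs PutLogEvents call.
// Each log event needs a millisecond timestamp and a message string.
function buildPutLogEventsParams(logGroupName, logStreamName, messages) {
  return {
    logGroupName,
    logStreamName,
    logEvents: messages.map((message) => ({
      timestamp: Date.now(),
      message,
    })),
  };
}
```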

But, remember that everything has to be done during a function's invocation. If you make additional network calls during the invocation, then you'll pay for that additional execution time. Your users would also have to wait longer for the API to respond.

These extra network calls might only add 10–20ms per invocation. But you have microservices, and a single user action can involve several API calls. Those 10–20ms per API call can compound and add over 100ms to your user-facing latency, which is enough to reduce sales by 1% according to Amazon.
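To make the compounding concrete, here is the back-of-the-envelope arithmetic (the per-call overhead and fan-out numbers are illustrative, not measured):

```javascript
// Hypothetical numbers: 15 ms of extra log shipping per invocation,
// and a single user action that fans out to 7 API calls.
const extraPerCallMs = 15;
const apiCallsPerAction = 7;
const addedLatencyMs = extraPerCallMs * apiCallsPerAction;
console.log(addedLatencyMs); // 105 ms added to the user-facing latency
```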

So, don’t do that!

所以,不要那样做!

Instead, process the logs from CloudWatch Logs after the fact.

In the CloudWatch Logs console, you can select a log group and choose to stream the data directly to Amazon's hosted Elasticsearch service.

This is very useful if you're using the hosted Elasticsearch service already. But if you're still evaluating your options, then give this post a read before you decide on the AWS-hosted Elasticsearch.

You can also stream the logs to a Lambda function instead. There are even a number of Lambda function blueprints for pushing CloudWatch Logs to other log aggregation services already.

Clearly this is something a lot of AWS's customers have asked for.

You can use these blueprints to help you write a Lambda function that'll ship CloudWatch Logs to your preferred log aggregation service. But here are a few more things to keep in mind.

Whenever you create a new Lambda function, it'll create a new log group in CloudWatch Logs. You want to avoid a manual process for subscribing log groups to your log shipping function.

Instead, enable CloudTrail, and then set up an event pattern in CloudWatch Events to invoke another Lambda function whenever a log group is created.
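The event pattern matches the CreateLogGroup API call that CloudTrail records, along these lines:

```json
{
  "source": ["aws.logs"],
  "detail-type": ["AWS API Call via CloudTrail"],
  "detail": {
    "eventSource": ["logs.amazonaws.com"],
    "eventName": ["CreateLogGroup"]
  }
}
```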

You can do this one-off setup in the CloudWatch console.

If you’re work­ing with mul­ti­ple AWS accounts, then you should avoid mak­ing the set­up a man­u­al process. With the Server­less frame­work, you can set­up the event source for this subscribe-log-group func­tion in the serverless.yml.

如果您使用多个AWS账户,则应避免手动进行设置。 使用Serverless框架,您可以在serverless.yml中为该serverless.yml subscribe-log-group函数设置事件源。
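A sketch of what that serverless.yml entry could look like; the function name and handler path are assumptions for illustration:

```yaml
functions:
  subscribe-log-group:
    handler: functions/subscribe-log-group.handler
    events:
      - cloudwatchEvent:
          event:
            source:
              - aws.logs
            detail-type:
              - AWS API Call via CloudTrail
            detail:
              eventSource:
                - logs.amazonaws.com
              eventName:
                - CreateLogGroup
```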

Another thing to keep in mind is that you need to avoid subscribing the log group for the ship-logs function to itself. It'll create an infinite invocation loop, and that's a painful lesson that you want to avoid.
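A simple guard in the subscribe-log-group function is enough to break the loop; a sketch, where the function name `ship-logs` is an assumption:

```javascript
// Skip the ship-logs function's own log group so it never feeds
// its own logs back to itself (which would loop forever).
const SHIP_LOGS_LOG_GROUP = '/aws/lambda/ship-logs';

function shouldSubscribe(logGroupName) {
  return logGroupName !== SHIP_LOGS_LOG_GROUP;
}
```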

One more thing.

By default, when Lambda creates a new log group for your function, the retention policy is set to Never Expire. This is overkill, as the data storage cost can add up over time. It's also unnecessary if you're shipping the logs elsewhere already!

We can apply the same technique above and add another Lambda function to automatically update the retention policy to something more reasonable.
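As a sketch, such a function can react to the same CreateLogGroup event and prepare a PutRetentionPolicy request; the actual SDK call is elided, and 30 days is just an example (retentionInDays only accepts specific values such as 1, 3, 5, 7, 14, 30, 60, 90, ...):

```javascript
// CloudTrail-sourced events carry the API call's parameters under
// event.detail.requestParameters, including the new logGroupName.
function retentionParamsFromEvent(event, retentionInDays = 30) {
  const { logGroupName } = event.detail.requestParameters;
  return { logGroupName, retentionInDays };
}
```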

If you already have lots of existing log groups, then consider writing a one-off script to update them all. You can do this by recursing through all log groups with the DescribeLogGroups API call.
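DescribeLogGroups returns results in batches plus a nextToken while more remain, so the script just keeps paging. A sketch, assuming `client` exposes a promise-returning `describeLogGroups(params)` as a thin wrapper over the AWS SDK would:

```javascript
// Pages through every log group; pass each result to your update logic
// (e.g. a PutRetentionPolicy call per group).
async function listAllLogGroups(client) {
  const logGroups = [];
  let nextToken;
  do {
    const resp = await client.describeLogGroups({ nextToken });
    logGroups.push(...resp.logGroups);
    nextToken = resp.nextToken;
  } while (nextToken);
  return logGroups;
}
```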

If you’re interested in applying these techniques yourself, I have put together a simple demo project for you. If you follow the instructions in the README and deploy the functions, then all the logs for your Lambda functions would be delivered to Logz.io.

如果您有兴趣亲自应用这些技术,那么我为您准备了一个简单的演示项目 。 如果您按照自述文件中的说明进行操作并部署这些功能,则Lambda函数的所有日志都将传递到Logz.io。

Translated from: https://www.freecodecamp.org/news/how-to-implement-log-aggregation-for-aws-lambda-ca714bf02f48/
