Using AWS Lambda
Despite having a runtime limit of 15 minutes, AWS Lambda can still be used to process large files. File formats that can be read iteratively or line by line, such as CSV or newline-delimited JSON, can be processed with this method.
Lambda is a good option if you want a serverless architecture and have files that are large but still within reasonable limits. We will show how to write a Lambda function that processes a large CSV file in the following manner, handling data sizes that exceed both its memory and runtime limits.
The main approach is as follows:
- Read and process the CSV file row by row until nearing the timeout.
- Asynchronously trigger a new Lambda invocation that picks up from where the previous one stopped processing, as sketched below.
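To make the hand-off concrete, here is a minimal sketch of such a handler in Python. It is an illustration under stated assumptions rather than a definitive implementation: process_row and the TIMEOUT_BUFFER_MS value are placeholders, the byte accounting assumes "\n" line endings, and quoted CSV fields containing embedded newlines are not handled. The event fields it reads (bucket_name, object_key, offset, fieldnames) are described below.

```python
import csv
import json

import boto3

s3 = boto3.client("s3")
lambda_client = boto3.client("lambda")

# Runtime (in ms) to keep in reserve for handing off to the next
# invocation; the exact buffer is a tuning assumption.
TIMEOUT_BUFFER_MS = 30_000


def process_row(row):
    """Placeholder for the actual per-row business logic."""
    print(row)


def handler(event, context):
    bucket_name = event["bucket_name"]
    object_key = event["object_key"]
    offset = event.get("offset", 0)
    fieldnames = event.get("fieldnames")

    # Resume from the byte offset where the previous invocation
    # stopped; ranged reads are a standard S3 feature.
    response = s3.get_object(
        Bucket=bucket_name, Key=object_key, Range=f"bytes={offset}-"
    )

    for raw_line in response["Body"].iter_lines():
        offset += len(raw_line) + 1  # +1 assumes "\n" line endings

        values = next(csv.reader([raw_line.decode("utf-8")]))
        if fieldnames is None:
            # First line of the file: treat it as the CSV header.
            fieldnames = values
            continue
        process_row(dict(zip(fieldnames, values)))

        # Near the timeout: hand the remaining rows to a fresh
        # invocation of this same function, then exit.
        if context.get_remaining_time_in_millis() < TIMEOUT_BUFFER_MS:
            lambda_client.invoke(
                FunctionName=context.function_name,
                InvocationType="Event",  # asynchronous invocation
                Payload=json.dumps({
                    "bucket_name": bucket_name,
                    "object_key": object_key,
                    "offset": offset,
                    "fieldnames": fieldnames,
                }),
            )
            return
```

Note that InvocationType="Event" makes the hand-off asynchronous, so the current invocation can return immediately, and the function's execution role needs lambda:InvokeFunction permission on itself for the re-invocation to succeed.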
We will define the following event, which will be used to trigger the Lambda function. The bucket_name and object_key fields are necessary to identify the S3 object that will be processed, while offset and fieldnames allow a new invocation to resume processing from where the previous one stopped.
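A minimal initial event for the first invocation might look like the following sketch; the bucket and key values are hypothetical placeholders.

```python
# Hypothetical initial event; bucket and key are illustrative only.
initial_event = {
    "bucket_name": "my-data-bucket",
    "object_key": "uploads/large-file.csv",
    "offset": 0,         # start reading from the beginning of the object
    "fieldnames": None,  # None: take the header from the file's first line
}
```

Subsequent invocations receive the same structure with an advanced offset and the fieldnames captured from the header row.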