Uploading front-end files to AWS S3
Expedia Group Technology — Software
In a single operation, you can upload up to 5GB into an AWS S3 object. An S3 object can range from a minimum of 0 bytes to a maximum of 5 terabytes, so if you want to upload an object larger than 5GB, you need to either use multipart upload or split the file into logical chunks of up to 5GB and upload them manually as regular uploads. I will explore both options.
Multipart upload
Performing a multipart upload involves splitting the file into smaller files, uploading them using the CLI, and verifying the result. The file manipulations below are demonstrated on a UNIX-like system.
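One wrinkle in the verification step: for a multipart object, the ETag S3 reports is not the MD5 of the whole file. The commonly observed scheme (widely relied upon, though not an official AWS guarantee) is the hex MD5 of the concatenated binary MD5 digests of each part, suffixed with `-<part count>`. A minimal local sketch, using hypothetical part files:

```shell
# Reproduce the multipart-style ETag locally: MD5 each part in binary
# form, concatenate the raw digests, MD5 that concatenation, and append
# "-<number of parts>".
workdir="$(mktemp -d)"
cd "$workdir"

printf 'first part'  > part1   # stand-ins for the real 4GB chunks
printf 'second part' > part2

openssl md5 -binary part1  > digests
openssl md5 -binary part2 >> digests

etag="$(openssl md5 -hex digests | awk '{print $NF}')-2"
echo "$etag"   # 32 hex characters, then the part count
```

Computing this value locally lets you compare it against the ETag S3 returns after `complete-multipart-upload`, which is what makes the verification step possible without re-downloading the object.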
1. Before you upload a file using the multipart upload process, calculate its base64 MD5 checksum value:
a3VKS0RazAmJUCO8ST90pQ==
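A value like the one above can be produced with `openssl`. Here is a runnable sketch on a throwaway sample file (the filename and contents are illustrative):

```shell
# Create a small sample file and compute its MD5 digest in binary form,
# then base64-encode it. This is the checksum format used for multipart
# upload verification, not the usual hex string.
printf 'hello world' > /tmp/sample.txt
openssl md5 -binary /tmp/sample.txt | base64
# → XrY7u+Ae7tCTyyK7j1rNww==
```

The `-binary` flag matters: without it `openssl md5` prints a hex digest, and base64-encoding that hex text yields a different (and wrong) value.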
2. Split the file into small files using the split command:
Syntax
split [-b byte_count[k|m]] [-l line_count] [file [name]]

Options
-b  Create smaller files byte_count bytes in length.
    `k' = kilobyte pieces
    `m' = megabyte pieces
-l  Create smaller files line_count lines in length.
Splitting the file into 4GB blocks:
$ split -b 4096m <file>
$ ls -l
-rw-r--r--@ 1 user1 staff 7827069512 Aug 26 16:20
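The splitting step can be exercised end to end on a tiny sample (sizes shrunk from 4GB to 4 bytes so the demo runs instantly); reassembling the chunks should reproduce the original file's checksum exactly:

```shell
# Split a sample file into fixed-size chunks, reassemble them, and
# confirm the round trip preserves the base64 MD5 checksum.
workdir="$(mktemp -d)"
cd "$workdir"

printf 'abcdefghij' > original.bin   # 10-byte stand-in for the large file
split -b 4 original.bin chunk_       # produces chunk_aa, chunk_ab, chunk_ac

cat chunk_* > reassembled.bin
openssl md5 -binary original.bin    | base64
openssl md5 -binary reassembled.bin | base64   # must match the line above
```

Passing an explicit prefix (`chunk_` here) keeps the pieces easy to glob in order; `split` names them with sorted alphabetic suffixes, so a plain `cat chunk_*` concatenates them in the right sequence.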