MinIO: too many prefixes cause upload errors — code = SlowDown, message = Please reduce your request

Error message

error occurred
ErrorResponse(code = SlowDown, message = Please reduce your request, bucketName = public, objectName = null, resource = /public, requestId = 168104118916453B, hostId = 87cfbf66-292b-43a6-89c0-6e174727177d)
request={method=GET, url=http://minio.xxx.cloud/public?list-type=2&prefix=&max-keys=1000&encoding-type=url&delimiter=%2F, headers=Host: minio.xxx.cloud
Accept-Encoding: identity
User-Agent: MinIO (Windows 10; amd64) minio-java/8.0.3
Content-MD5: 1B2M2Y8AsgTpgAmY7PhCfg==
x-amz-content-sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
x-amz-date: 20210521T072333Z
Authorization: AWS4-HMAC-SHA256 Credential=*REDACTED*/20210521/us-east-1/s3/aws4_request, SignedHeaders=content-md5;host;x-amz-content-sha256;x-amz-date, Signature=*REDACTED*
}
response={code=503, headers=Accept-Ranges: bytes
Content-Length: 271
Content-Security-Policy: block-all-mixed-content
Content-Type: application/xml
Date: Fri, 21 May 2021 07:23:34 GMT
Retry-After: 120
Server: MinIO
Vary: Origin
X-Amz-Request-Id: 168104118916453B
X-Xss-Protection: 1; mode=block
}
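Note the `Retry-After: 120` header in the 503 response above: the server is telling the client how many seconds to wait before retrying. A minimal sketch of honoring it (the `retry_after_seconds` helper and its 60-second fallback are illustrative, not part of any MinIO SDK):

```python
def retry_after_seconds(headers, default=60):
    """Parse the Retry-After header (delta-seconds form) from a
    response, falling back to a default when missing or malformed."""
    value = headers.get("Retry-After")
    try:
        return max(0, int(value))
    except (TypeError, ValueError):
        return default

# The 503 above carried "Retry-After: 120":
headers = {"Retry-After": "120", "Server": "MinIO"}
wait = retry_after_seconds(headers)
# a client would time.sleep(wait) before re-issuing the request
```

Retry-After may also be an HTTP date; this sketch only handles the seconds form that MinIO sends here.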

Stack Overflow answer

  • https://stackoverflow.com/questions/58433594/aws-s3-slowdown-please-reduce-your-request-rate

Ok, after a month with the AWS support team, with assistance from the S3 engineering team, the short answer is: randomize prefixes the old-fashioned way. The long answer: they did indeed improve the performance of S3 as stated in the link in the original question; however, you can always bring S3 to its knees. The point is that internally they partition all objects stored in a bucket. The partitioning works on the bucket prefixes and organizes them in lexicographical order, so no matter what, when you put a lot of files in different "folders" it still puts pressure on the outer part of the prefix; S3 then tries to partition that outer part, and this is the moment you get the "SlowDown". Well, you can back off exponentially with retries, but in my case a 5-minute backoff didn't do the trick, so the last resort is to prepend the prefix with some random token which is, ideally, evenly distributed. That's it. In less aggressive cases, the S3 engineering team can check your usage and manually partition your bucket (done at the bucket level). That didn't work in our case.
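The "prepend the prefix with some random token" fix from the answer can be sketched as follows. Deriving the token from a hash of the key keeps it deterministic (so the object can be found again) while still spreading keys evenly across partitions; the `randomized_key` name and the 4-character token width are illustrative assumptions, not anything from the answer:

```python
import hashlib

def randomized_key(key, width=4):
    """Prepend a short, evenly distributed hash token to an object key
    so writes spread across S3/MinIO's lexicographic prefix partitions."""
    token = hashlib.md5(key.encode("utf-8")).hexdigest()[:width]
    return f"{token}/{key}"

# e.g. "reports/2021/05/21/data.csv" -> "<4 hex chars>/reports/2021/05/21/data.csv"
print(randomized_key("reports/2021/05/21/data.csv"))
```

A purely random (non-derived) token distributes just as well, but then the full randomized key must be stored somewhere to retrieve the object later.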

  • In short: the storage service partitions the objects in a bucket by prefix. When there are too many prefixes (many files spread across different directory levels), the pressure falls on the outermost prefix partition, and that is when the "SlowDown" error is raised.
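Before resorting to randomized prefixes, the answer's interim measure — exponential backoff with retries — can be sketched like this. This is a generic pattern, not the minio-java client's actual retry API; `RuntimeError` stands in for the SDK's SlowDown exception, and the parameter defaults are illustrative:

```python
import random
import time

def with_backoff(op, max_attempts=5, base=1.0, cap=120.0):
    """Retry op() on SlowDown-style errors with exponential backoff
    plus full jitter; re-raise once attempts are exhausted."""
    for attempt in range(max_attempts):
        try:
            return op()
        except RuntimeError:               # stand-in for a SlowDown error
            if attempt == max_attempts - 1:
                raise
            # full jitter: sleep a random time in [0, min(cap, base * 2^attempt)]
            delay = random.uniform(0, min(cap, base * 2 ** attempt))
            time.sleep(delay)
```

As the answer notes, backoff alone may not be enough: if every writer hammers the same hot prefix, the retries arrive at the same overloaded partition.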