Error message
error occurred
ErrorResponse(code = SlowDown, message = Please reduce your request, bucketName = public, objectName = null, resource = /public, requestId = 168104118916453B, hostId = 87cfbf66-292b-43a6-89c0-6e174727177d)
request={method=GET, url=http://minio.xxx.cloud/public?list-type=2&prefix=&max-keys=1000&encoding-type=url&delimiter=%2F, headers=Host: minio.xxx.cloud
Accept-Encoding: identity
User-Agent: MinIO (Windows 10; amd64) minio-java/8.0.3
Content-MD5: 1B2M2Y8AsgTpgAmY7PhCfg==
x-amz-content-sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
x-amz-date: 20210521T072333Z
Authorization: AWS4-HMAC-SHA256 Credential=*REDACTED*/20210521/us-east-1/s3/aws4_request, SignedHeaders=content-md5;host;x-amz-content-sha256;x-amz-date, Signature=*REDACTED*
}
response={code=503, headers=Accept-Ranges: bytes
Content-Length: 271
Content-Security-Policy: block-all-mixed-content
Content-Type: application/xml
Date: Fri, 21 May 2021 07:23:34 GMT
Retry-After: 120
Server: MinIO
Vary: Origin
X-Amz-Request-Id: 168104118916453B
X-Xss-Protection: 1; mode=block
}
Stack Overflow answer
- https://stackoverflow.com/questions/58433594/aws-s3-slowdown-please-reduce-your-request-rate
- Ok, after a month with the AWS support team, with assistance from the S3 engineering team, the short answer is: randomize prefixes the old-fashioned way. The long answer: they have indeed improved the performance of S3 as stated in the link in the original question; however, you can always bring S3 to its knees. The point is that internally they partition all objects stored in a bucket. The partitioning works on the bucket prefixes and organizes them in lexicographical order, so no matter what, when you put a lot of files in different "folders", it still puts pressure on the outer part of the prefix, and then it tries to partition that outer part; this is the moment you will get the "SlowDown". Well, you can back off exponentially with retries, but in my case a 5-minute backoff didn't do the trick, so the last resort is to prepend the prefix with some random token, ideally evenly distributed. That's it. In less aggressive cases, the S3 engineering team can check your usage and manually partition your bucket (done at the bucket level). That didn't work in our case.
- In short: the storage service partitions the objects in a bucket by prefix. If there are too many prefixes (many files placed under different directory levels), the partition for the outermost prefix comes under performance pressure, and that is the moment the "SlowDown" error is produced.
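The two mitigations from the answer above (prepending an evenly distributed random token to the object key, and retrying with exponential backoff) can be sketched as follows. This is a minimal illustration, not tied to any specific SDK: `SlowDown`, `randomized_key`, and `with_backoff` are hypothetical names introduced here for the sketch, and in real code you would catch the SDK's own 503/SlowDown error type instead.

```python
import random
import string
import time


class SlowDown(Exception):
    """Stand-in for the S3/MinIO 503 'SlowDown' error."""


def randomized_key(original_key: str, token_len: int = 4) -> str:
    # Prepend a random hex token so keys spread evenly across the
    # storage service's internal, lexicographically ordered prefix
    # partitions instead of piling onto one outer prefix.
    token = "".join(random.choices("0123456789abcdef", k=token_len))
    return f"{token}/{original_key}"


def with_backoff(op, max_retries=5, base_delay=1.0, sleep=time.sleep):
    # Retry `op` with exponential backoff plus jitter when the
    # service asks us to slow down; re-raise after the last attempt.
    for attempt in range(max_retries):
        try:
            return op()
        except SlowDown:
            if attempt == max_retries - 1:
                raise
            delay = base_delay * (2 ** attempt) + random.uniform(0, 1)
            sleep(delay)
```

Note that, per the answer, backoff alone may not be enough under sustained load; the key randomization is what actually relieves the hot partition, at the cost of losing a listable directory-like layout.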