
S3 SLOW_DOWN Error and Partitioning Questions


We've been experiencing S3 SLOW_DOWN errors since a recent feature release, and we were wondering if we could get more information on how S3 partitions for performance. My current understanding is that when we start uploading new objects under novel prefixes, they'll go into an existing partition. Once those partitions are overloaded, S3 will create new partitions, but that process can take a significant amount of time. Is there a way we can pre-partition S3 to manage the load better? Our issues specifically come from multipart uploads. Do CreateMultipartUpload, UploadPart, and CompleteMultipartUpload count as three separate operations?

Thanks

P.S. This link is the main source I've had for my information about S3 partitions: https://repost.aws/questions/QU0PaslnBmSt2topWnuyiZQg/s3-partitioned-prefixes

Asked 1 year ago · 372 views
1 Answer

If you're doing multipart uploads, you're probably issuing a much larger number of UploadPart operations for each object you upload; if you weren't, the multipart mechanism would just slow you down by requiring three API calls instead of a single PutObject. The number of UploadPart calls should be the object size divided by the chunk (part) size, rounded up to the next integer.
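
To make that arithmetic concrete, here's a minimal sketch in Python (the 5 GiB object size is just an assumed example):

```python
import math

object_size = 5 * 1024**3   # hypothetical 5 GiB object
part_size = 8 * 1024**2     # AWS CLI default part size: 8 MiB

# One UploadPart call per part, rounded up to a whole part
upload_part_calls = math.ceil(object_size / part_size)   # 640

# Plus one CreateMultipartUpload and one CompleteMultipartUpload
total_api_calls = 1 + upload_part_calls + 1              # 642
print(upload_part_calls, total_api_calls)
```

So yes, the three multipart operations are separate API requests, but UploadPart dominates the request rate by far.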

You can reduce the number of API calls for multipart uploads by increasing the chunk size. For example, the AWS CLI uses a default chunk size of 8 MiB, though it automatically adjusts that upward when needed to avoid exceeding the 10,000-part maximum for a multipart upload. For moderately sized objects, simply increasing the chunk size from 8 MiB to, say, 64 or 128 MiB would decrease the number of API calls proportionally. Since you're seeing the issue in practice, you might want to give that a go.
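
With boto3, for example, the part size is controlled via TransferConfig (a minimal sketch; the bucket, key, and file names are placeholders):

```python
import boto3
from boto3.s3.transfer import TransferConfig

# Raise the part size from the 8 MiB default to 64 MiB,
# cutting the number of UploadPart calls per object roughly 8x.
config = TransferConfig(multipart_chunksize=64 * 1024 * 1024)

s3 = boto3.client("s3")
s3.upload_file(
    "large-file.bin",           # local file (placeholder)
    "example-bucket",           # bucket name (placeholder)
    "uploads/large-file.bin",   # object key (placeholder)
    Config=config,
)
```

The AWS CLI equivalent is a one-time config change: `aws configure set default.s3.multipart_chunksize 64MB`.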

Expert
Answered 1 year ago
