
S3 SLOW_DOWN Error and Partitioning Questions


We've been experiencing S3 SLOW_DOWN errors following a recent feature release, and we were wondering if we could get more information on how S3 partitions for performance. My current understanding is that when we start uploading new objects under novel prefixes, they go into an existing partition. Once that partition is overloaded, S3 creates new partitions, but that process can take a significant amount of time. Is there a way we can pre-partition S3 to manage load better? Our issues come specifically from multipart uploads. Do CreateMultipartUpload, UploadPart, and CompleteMultipartUpload count as three separate operations?
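
For reference, this is the three-call sequence in question, sketched with boto3 (the bucket, key, and body are placeholders, and error handling is omitted):

```python
import boto3

s3 = boto3.client("s3")

# 1) CreateMultipartUpload -- one request
mpu = s3.create_multipart_upload(Bucket="my-bucket", Key="new-prefix/object")

# 2) UploadPart -- one request per part
part = s3.upload_part(
    Bucket="my-bucket",
    Key="new-prefix/object",
    UploadId=mpu["UploadId"],
    PartNumber=1,
    Body=b"...",
)

# 3) CompleteMultipartUpload -- one request
s3.complete_multipart_upload(
    Bucket="my-bucket",
    Key="new-prefix/object",
    UploadId=mpu["UploadId"],
    MultipartUpload={"Parts": [{"PartNumber": 1, "ETag": part["ETag"]}]},
)
```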

Thanks

P.S. This link is the main source I've had for my information about S3 partitions: https://repost.aws/questions/QU0PaslnBmSt2topWnuyiZQg/s3-partitioned-prefixes

Asked a year ago · 372 views
1 Answer

If you're doing multipart uploads, you're probably making a much larger number of UploadPart calls for each object you upload -- if you weren't, the multipart mechanism would only slow you down by requiring three API calls instead of a single PutObject. The number of UploadPart calls is the object size divided by the chunk (part) size, rounded up to the next integer.
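
To make that arithmetic concrete, a quick sketch (the 1 GB object size here is hypothetical; 8 MiB is the AWS CLI's default part size):

```python
import math

object_size = 1_000_000_000   # hypothetical 1 GB object
part_size = 8 * 1024 * 1024   # 8 MiB, the AWS CLI default

upload_part_calls = math.ceil(object_size / part_size)  # 120
total_requests = upload_part_calls + 2                  # + Create/Complete = 122
```

So against your request budget, each such object costs 122 requests, not 3.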

You can reduce the number of API calls for multipart uploads by increasing the chunk size. For example, the AWS CLI uses a default chunk size of 8 MiB, though it automatically adjusts it upward when needed to avoid exceeding the 10,000-part maximum for a multipart upload. For moderately sized objects, simply increasing the chunk size from 8 MiB to, say, 64 or 128 MiB decreases the number of API calls proportionately. Since you're seeing the issue in practice, that's worth trying first.
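
If you're uploading through an SDK rather than the CLI, the same knob exists there; in boto3, for instance, it's TransferConfig's multipart_chunksize. A minimal sketch, with placeholder file, bucket, and key names:

```python
import boto3
from boto3.s3.transfer import TransferConfig

# Raise the part size from the 8 MiB default to 64 MiB,
# cutting UploadPart calls per object by roughly 8x.
config = TransferConfig(multipart_chunksize=64 * 1024 * 1024)

s3 = boto3.client("s3")
s3.upload_file("big_file.bin", "my-bucket", "new-prefix/big_file.bin", Config=config)
```

With 64 MiB parts, the hypothetical 1 GB object above drops from 120 UploadPart calls to 15.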

Expert
Answered a year ago

