S3 SLOW_DOWN Error and Partitioning Questions

We've been experiencing S3 SLOW_DOWN issues due to a recent feature release, and we were wondering if we could get more information on how S3 partitions for performance. My current understanding is that when we start uploading new objects to novel prefixes, they'll go into an existing partition. Once those are overloaded, S3 will create new partitions, but that process can take a significant amount of time. Is there a way we can pre-partition S3 to manage load better? Our issues specifically come from multipart uploads. Do CreateMultipartUpload, UploadPart, and CompleteMultipartUpload count as three separate operations?

Thanks

P.S. This link is the main source I've had for my information about S3 partitions: https://repost.aws/questions/QU0PaslnBmSt2topWnuyiZQg/s3-partitioned-prefixes

Asked a year ago · 372 views
1 Answer
If you're doing multipart uploads, you're probably doing a much larger number of UploadPart operations for each object you upload -- if you weren't, using the multipart mechanism would just slow you down by requiring three API calls instead of just one PutObject. The number of UploadPart calls should be the file size divided by the chunk (part) size, rounded up to the next integer.
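The arithmetic above can be sketched quickly; the function and sizes below are illustrative, not from the original post:

```python
import math

def multipart_api_calls(file_size_bytes: int, part_size_bytes: int) -> int:
    """Estimate the S3 API calls for one multipart upload:
    one CreateMultipartUpload, N UploadPart calls (N = size / part size,
    rounded up), and one CompleteMultipartUpload."""
    parts = math.ceil(file_size_bytes / part_size_bytes)
    return parts + 2

# Example: a 1 GiB object with the AWS CLI's default 8 MiB part size
# yields 128 UploadPart calls plus the create/complete calls.
print(multipart_api_calls(1024**3, 8 * 1024**2))   # 130
print(multipart_api_calls(1024**3, 64 * 1024**2))  # 18
```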

You can reduce the number of API calls for multipart uploads by increasing the chunk size. For example, the AWS CLI uses a default chunk size of 8 MiB, although it automatically adjusts this to a higher value when needed to avoid exceeding the 10,000-part maximum for a multipart upload. For moderately sized objects, simply increasing the chunk size from 8 MiB to, say, 64 or 128 MiB would decrease the number of API calls proportionately. Since you're seeing the issue in practice, you might want to give that a go.
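If you upload with the AWS CLI, the part size can be raised via the `multipart_chunksize` setting in `~/.aws/config`; the 64 MiB value below is just an example, not a recommendation from the original answer:

```ini
# ~/.aws/config -- raise the multipart part size from the 8 MiB default
[default]
s3 =
    multipart_chunksize = 64MB
```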

EXPERT
answered a year ago
