S3 performance problems: How can I achieve an S3 request rate above the limit of 3,500 PUT requests per second with multiple prefixes?


Following the official documentation, I tried to scale write operations by writing to multiple prefixes, but I saw no improvement, only the same limits (3,500 PUT/COPY/POST/DELETE or 5,500 GET/HEAD requests per second). The official documentation seems misleading or incorrect.

Q: How can I achieve an S3 request rate above the limit of 3,500 PUT requests per second? Or, how can I get the prefixes partitioned?

PS: I found a solution in the link, but how do I "gradually scale up" as described in it?

asked 2 years ago · 394 views
1 Answer

Throughput on S3 is based on the prefix of the object. The prefix is the initial part of the object name, so for an object with the name upload/data/incoming/file.txt, the prefix is upload/data/incoming/. Each prefix can sustain up to the limits you list above, so to increase the overall bucket throughput, ensure you spread requests across multiple distinct prefixes.
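As a minimal sketch of what that fan-out can look like, here is a Python/boto3 example that shards uploads across several prefixes so each prefix can scale toward its own PUT limit. The bucket name, the number of shards, the upload/shard-NN/ key layout, and the use of MD5 for sharding are all illustrative assumptions, not anything from the original question.

```python
# Sketch: spread PUTs over several prefixes so each prefix gets its own request-rate budget.
# Assumptions (not from the original post): bucket name, shard count, key layout.
import hashlib
from concurrent.futures import ThreadPoolExecutor

import boto3

BUCKET = "my-example-bucket"   # assumed bucket name
NUM_PREFIXES = 8               # assumed fan-out; more prefixes -> more aggregate throughput

s3 = boto3.client("s3")

def key_for(filename: str) -> str:
    # Derive a stable shard from the file name so uploads spread
    # evenly across NUM_PREFIXES distinct prefixes.
    shard = int(hashlib.md5(filename.encode()).hexdigest(), 16) % NUM_PREFIXES
    return f"upload/shard-{shard:02d}/{filename}"

def put_file(filename: str, body: bytes) -> None:
    s3.put_object(Bucket=BUCKET, Key=key_for(filename), Body=body)

# Concurrent uploads land under upload/shard-00/ ... upload/shard-07/,
# so S3 can partition each prefix independently as traffic ramps up.
with ThreadPoolExecutor(max_workers=32) as pool:
    for i in range(1000):
        pool.submit(put_file, f"file-{i}.txt", b"example payload")
```

Note that S3 partitions prefixes gradually in response to sustained traffic, so ramping request rates up over time, rather than bursting immediately to the target rate, is what "gradually scale up" refers to.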

One thing to remember is that whilst we often think of the components in the prefix as folders, they're not really, so upload/ is a different prefix from upload/data/.

answered a year ago
