S3 performance problems: How can I achieve an S3 request rate above the limit of 3,500 PUTs per second with multiple prefixes?


Following the official documentation, I tried to scale write operations by writing to multiple prefixes, but I got nothing beyond the same limits (3,500 PUT/COPY/POST/DELETE or 5,500 GET/HEAD requests per second). The official documentation seems misleading or incorrect.

Q: How can I achieve an S3 request rate above the limit of 3,500 PUTs per second? Or, how can I get the prefixes partitioned?

PS: I found a solution in the link, but how do I "gradually scale up" as described in it?

asked 2 years ago · 333 views
1 Answer

Throughput on S3 is scaled per prefix of the object key. The prefix is the initial part of the object name, so for an object named upload/data/incoming/file.txt, the prefix is upload/data/incoming/. Each prefix can receive up to the limits you list above, so to increase the bucket's overall throughput, spread your writes across unique prefixes.

One thing to remember is that while we often think of the components in the prefix as folders, they're not really folders, so upload is a different prefix from upload/data.
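One common way to spread writes across many prefixes is to prepend a short hash-derived partition to each key, so objects fan out evenly regardless of their original names. A minimal sketch (the helper name, the 16-way split, and the MD5 choice are all illustrative assumptions, not an AWS-prescribed scheme):

```python
import hashlib

def partitioned_key(key: str, num_prefixes: int = 16) -> str:
    """Hypothetical helper: prepend a hash-derived prefix so that writes
    spread roughly evenly across `num_prefixes` distinct S3 prefixes.

    e.g. 'upload/data/incoming/file.txt' -> '0a/upload/data/incoming/file.txt'
    (the exact two-hex-digit value depends on the key's hash).
    """
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    partition = int(digest, 16) % num_prefixes
    # Two hex digits keeps the prefix short and the keyspace readable.
    return f"{partition:02x}/{key}"
```

You would then upload with the partitioned key (e.g. `s3.put_object(Bucket=..., Key=partitioned_key(name), Body=...)`) and apply the same function on reads. Note the trade-off: hashed prefixes maximize throughput but make listing objects by their logical path harder, so this fits write-heavy workloads where keys are looked up individually.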

answered 10 months ago
