How can I scale my request rate to Amazon S3 to improve performance?


I expect my Amazon Simple Storage Service (Amazon S3) bucket to get high request rates. What object key naming pattern should I use to get better performance?

Resolution

Amazon S3 automatically scales by dynamically optimizing performance in response to sustained high request rates. Your application can achieve 3,500 PUT/COPY/POST/DELETE or 5,500 GET/HEAD requests per second per prefix in a bucket. There are no limits to the number of prefixes in a bucket, so you can increase your read or write performance by parallelizing requests across multiple prefixes. While Amazon S3 is optimizing for a new request rate, you might temporarily receive HTTP 503 (Slow Down) responses until the optimization completes. Because Amazon S3 optimizes its prefixes for request rates, key naming patterns that add randomness (such as hashed prefixes) are no longer a best practice.
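The following is a minimal sketch of this pattern using the AWS SDK for Python (boto3). The bucket name, prefixes, and worker count are hypothetical placeholders; substitute your own. It fans GET requests out across multiple prefixes with a thread pool, and it turns on botocore's "adaptive" retry mode so that temporary HTTP 503 (Slow Down) responses are retried with backoff while S3 optimizes for the new request rate.

import concurrent.futures

import boto3
from botocore.config import Config

# Hypothetical bucket and prefixes for illustration only.
BUCKET = "example-bucket"
PREFIXES = ["logs/2024/", "logs/2025/"]

# "adaptive" retry mode backs off automatically on transient errors,
# including HTTP 503 (Slow Down) responses from S3.
s3 = boto3.client(
    "s3",
    config=Config(retries={"max_attempts": 10, "mode": "adaptive"}),
)

def keys_under(prefix):
    """List every object key under one prefix."""
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=BUCKET, Prefix=prefix):
        for obj in page.get("Contents", []):
            yield obj["Key"]

def fetch(key):
    """GET one object; botocore retries transient 503s for us."""
    return s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()

# Each prefix supports its own request rate, so spreading parallel
# reads across prefixes raises aggregate throughput. boto3 clients
# are thread-safe, so one client can be shared across workers.
all_keys = [key for prefix in PREFIXES for key in keys_under(prefix)]
with concurrent.futures.ThreadPoolExecutor(max_workers=32) as pool:
    for key, body in zip(all_keys, pool.map(fetch, all_keys)):
        print(key, len(body))

The same approach applies to writes: distribute PUT traffic across several prefixes rather than funneling all objects under a single one.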

For more information about Amazon S3 performance optimization, see Performance guidelines for Amazon S3 and Performance design patterns for Amazon S3.


AWS OFFICIAL
Updated a year ago
2 Comments

The information presented on this page requires an update. It currently states, "Your application can achieve 3,500 PUT/COPY/POST/DELETE or 5,500 GET/HEAD requests per second per prefix in a bucket," which might imply that each prefix can attain 3,500/5,500 TPS. The more accurate statement would be, "Your application can achieve at least 3,500 PUT/COPY/POST/DELETE or 5,500 GET/HEAD requests per second per partitioned Amazon S3 prefix." The crucial point here is the "per partitioned" aspect. Additionally, it's important to note that auto-partitioning happens behind the scenes, is handled by S3 service monitors that run automatically, and can take from 30 to 60 minutes. Should customers choose to do so, they can request pre-partitioning through AWS Support.

AWS
replied a month ago

Thank you for your comment. We'll review and update the Knowledge Center article as needed.

AWS
MODERATOR
replied a month ago