Your application can achieve at least 3,500 PUT/COPY/POST/DELETE or 5,500 GET/HEAD requests per second per prefix in a bucket. You can work with AWS Support to pre-partition the bucket based on your key/folder naming structure. The last time we did this, we had to create a new bucket, work with Support to pre-partition it, and copy the data from the old bucket to the new one.
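Because the limit is per prefix, pre-partitioning only helps if your key naming actually spreads traffic across those prefixes. A minimal sketch of one common approach, prepending a short hash shard to each key (the shard count and key layout here are just an illustration, not something Support prescribes):

```python
import hashlib

def sharded_key(original_key: str, shard_count: int = 16) -> str:
    """Prepend a hash-based shard prefix so writes spread across
    multiple S3 prefixes, each of which scales independently."""
    digest = hashlib.md5(original_key.encode("utf-8")).hexdigest()
    shard = int(digest, 16) % shard_count
    return f"{shard:02x}/{original_key}"

# e.g. "logs/2024/01/01/app.log" -> "<shard>/logs/2024/01/01/app.log",
# where <shard> depends on the hash of the key
print(sharded_key("logs/2024/01/01/app.log"))
```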
Hi, have a look at https://aws.amazon.com/premiumsupport/knowledge-center/http-5xx-errors-s3/ to troubleshoot HTTP 500 or 503 errors from Amazon S3.
Also have a look at:
https://aws.amazon.com/premiumsupport/knowledge-center/s3-503-within-request-rate-prefix/
If there is a fast spike in the request rate for objects in a prefix, Amazon S3 might return 503 Slow Down errors while it scales in the background to handle the increased request rate. To avoid these errors, you can configure your application to gradually increase the request rate and retry failed requests using an exponential backoff algorithm [1].
[1] https://docs.aws.amazon.com/general/latest/gr/api-retries.html
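As a rough sketch of that retry pattern with boto3 (the bucket, key, and attempt count are placeholders), this retries only throttling responses and doubles the maximum wait, with jitter, on each attempt:

```python
import random
import time

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def get_object_with_backoff(bucket: str, key: str, max_attempts: int = 8):
    """Retry S3 GETs that return 503 Slow Down, backing off exponentially
    with jitter so S3 has time to scale the prefix in the background."""
    for attempt in range(max_attempts):
        try:
            return s3.get_object(Bucket=bucket, Key=key)
        except ClientError as err:
            code = err.response["Error"]["Code"]
            if code not in ("SlowDown", "ServiceUnavailable", "503"):
                raise  # not a throttling error, don't retry
            # exponential backoff with full jitter: up to 1s, 2s, 4s, ...
            time.sleep(random.uniform(0, 2 ** attempt))
    raise RuntimeError(f"Still throttled after {max_attempts} attempts")
```

boto3 can also handle much of this for you if you pass a botocore `Config` with `retries={'mode': 'adaptive'}` (or `'standard'`) when creating the client.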
Thanks, I wasn't aware that this is something customer support could do!
Our bucket is hundreds of TBs in size, though, so I doubt transferring the data to a new bucket will be feasible.
One more very important thing: before you go down the rabbit hole, find out which operation is causing the 503s. Most of the time I find the issue is List operations. If that's the case, I would suggest configuring an S3 Inventory report on the bucket, delivering it to another bucket, and using that report instead of listing objects.
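A rough sketch of setting that up with boto3 (the bucket names and report Id here are made up; the destination bucket also needs a bucket policy that lets S3 deliver the reports):

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical names -- replace with your own buckets.
SOURCE_BUCKET = "my-huge-bucket"
REPORT_BUCKET_ARN = "arn:aws:s3:::my-inventory-reports"

# Deliver a daily CSV listing of all current objects to the report bucket.
s3.put_bucket_inventory_configuration(
    Bucket=SOURCE_BUCKET,
    Id="daily-full-inventory",
    InventoryConfiguration={
        "Id": "daily-full-inventory",
        "IsEnabled": True,
        "IncludedObjectVersions": "Current",
        "Schedule": {"Frequency": "Daily"},
        "OptionalFields": ["Size", "LastModifiedDate", "StorageClass"],
        "Destination": {
            "S3BucketDestination": {
                "Bucket": REPORT_BUCKET_ARN,
                "Format": "CSV",
                "Prefix": "inventory",
            }
        },
    },
)
```

Your batch jobs can then read the delivered inventory files instead of hammering the live bucket with ListObjectsV2 calls.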