Receiving S3 503 slow down responses


We have an application that stores a large volume of data in one S3 bucket. Lately, we have started receiving 503 Slow Down error messages. Reading the docs, it seems like the error is related to the prefixes and partitions that S3 creates internally based on the folder structure.

Our current structure is /uploads/receipt_image_v2/<UUID>/<FILENAME> ... I wonder if we should do something different? We have millions of UUIDs in the S3 bucket, with 1-2 images per UUID.

I can't find any way to see how many partitions there are for our S3 bucket.

asked a year ago · 6,026 views
2 Answers
Accepted Answer

Your application can achieve at least 3,500 PUT/COPY/POST/DELETE or 5,500 GET/HEAD requests per second per prefix in a bucket. You can work with customer support to pre-partition the bucket based on your folder naming structure. The last time we did this, we had to create a new bucket, work with support to pre-partition it, and copy the data from the old bucket to the new one.
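
For illustration only, one naming structure that fans objects out across many distinct prefixes (which support can then partition along) could look like the sketch below; the two-character shard, the helper function, and the example values are hypothetical, not something S3 requires:

```python
import hashlib

def build_key(uuid: str, filename: str) -> str:
    """Sketch: derive a short, stable shard from the UUID so that objects
    spread across a bounded set of prefixes (00/ .. ff/), giving S3 clear
    boundaries to partition on. The shard length is illustrative."""
    shard = hashlib.md5(uuid.encode()).hexdigest()[:2]  # 256 possible prefixes
    return f"uploads/receipt_image_v2/{shard}/{uuid}/{filename}"

# build_key("9f1b1c2e-uuid", "receipt.jpg")
# -> "uploads/receipt_image_v2/<2-char shard>/9f1b1c2e-uuid/receipt.jpg"
```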

AWS
venky81
answered a year ago
AWS
EXPERT
Chris_G
reviewed a year ago
  • Thanks, I wasn't aware that this is something that customer support could do!

    Our bucket is hundreds of TBs in size, though, so I doubt transferring the data to a new bucket will be feasible.

  • Also, another very important thing: before you go down the rabbit hole, find out which operation is causing the 503s. Most of the time I find the issue is List operations. If that is the case, I would suggest configuring an inventory report on the bucket, delivering it to another bucket, and using that report instead of listing objects (a minimal boto3 sketch follows these comments).
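
For reference, a minimal boto3 sketch of setting up such an inventory report; the bucket names, configuration Id, schedule, and optional fields are placeholders, so adjust them to your setup:

```python
import boto3

s3 = boto3.client("s3")

# Sketch: deliver a daily CSV inventory of "my-source-bucket" to
# "my-inventory-bucket". Bucket names and the Id are placeholders.
s3.put_bucket_inventory_configuration(
    Bucket="my-source-bucket",
    Id="daily-inventory",
    InventoryConfiguration={
        "Id": "daily-inventory",
        "IsEnabled": True,
        "IncludedObjectVersions": "Current",
        "Destination": {
            "S3BucketDestination": {
                "Bucket": "arn:aws:s3:::my-inventory-bucket",
                "Format": "CSV",
                "Prefix": "inventory",
            }
        },
        "Schedule": {"Frequency": "Daily"},
        "OptionalFields": ["Size", "LastModifiedDate"],
    },
)
```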


Hi, have a look at https://aws.amazon.com/premiumsupport/knowledge-center/http-5xx-errors-s3/ to troubleshoot HTTP 500 or 503 errors from Amazon S3.

Also have a look at:

https://aws.amazon.com/premiumsupport/knowledge-center/s3-503-within-request-rate-prefix/

If there is a fast spike in the request rate for objects in a prefix, Amazon S3 might return 503 Slow Down errors while it scales in the background to handle the increased request rate. To avoid these errors, you can configure your application to gradually increase the request rate and retry failed requests using an exponential backoff algorithm [1].

[1] https://docs.aws.amazon.com/general/latest/gr/api-retries.html
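
As a rough sketch of the retry side, boto3/botocore expose exponential backoff through the client retry configuration; the retry mode, attempt count, bucket, and key below are illustrative values rather than recommendations from the linked articles:

```python
import boto3
from botocore.config import Config

# Sketch: "adaptive" retry mode retries throttling responses such as
# 503 Slow Down with exponential backoff and also adjusts the client-side
# request rate; max_attempts is an illustrative value.
config = Config(retries={"max_attempts": 10, "mode": "adaptive"})
s3 = boto3.client("s3", config=config)

# Requests made with this client are retried automatically when S3
# returns 503 Slow Down (placeholder bucket and key).
obj = s3.get_object(
    Bucket="my-bucket",
    Key="uploads/receipt_image_v2/example-uuid/receipt.jpg",
)
```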

AWS
Nuno_Q
answered a year ago
