Your application can achieve at least 3,500 PUT/COPY/POST/DELETE or 5,500 GET/HEAD requests per second per prefix in a bucket. If you need higher sustained rates, you can work with AWS Support to pre-partition the bucket based on your prefix naming structure. The last time we did this, we had to create a new bucket, work with Support to pre-partition it, and then copy the data from the old bucket to the new one.
Hi, have a look at https://aws.amazon.com/premiumsupport/knowledge-center/http-5xx-errors-s3/ to troubleshoot HTTP 500 or 503 errors from Amazon S3.
Also have a look at:
If there is a fast spike in the request rate for objects in a prefix, Amazon S3 might return 503 Slow Down errors while it scales in the background to handle the increased request rate. To avoid these errors, you can configure your application to gradually increase the request rate and retry failed requests using an exponential backoff algorithm.
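To illustrate the retry advice above, here is a minimal sketch of exponential backoff with full jitter. It is not tied to any particular SDK: `request_fn` is a hypothetical callable standing in for your S3 call, and the check for "SlowDown"/"503" in the exception message is an assumption about how your client surfaces throttling errors (the AWS SDKs expose structured error codes you should prefer in real code).

```python
import random
import time


def retry_with_backoff(request_fn, max_attempts=5, base_delay=0.5, max_delay=20.0):
    """Call request_fn, retrying throttling errors with full-jitter backoff.

    request_fn is assumed to raise an exception whose message contains
    'SlowDown' or '503' when S3 asks the client to back off.
    """
    for attempt in range(max_attempts):
        try:
            return request_fn()
        except Exception as err:
            msg = str(err)
            if "SlowDown" not in msg and "503" not in msg:
                raise  # not a throttling error; do not retry
            if attempt == max_attempts - 1:
                raise  # retries exhausted; surface the error
            # Full jitter: sleep a random amount up to the capped
            # exponential delay (base * 2^attempt, capped at max_delay).
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(random.uniform(0, delay))
```

Note that the AWS SDKs already implement retries with backoff internally; a helper like this is mainly useful for raw HTTP clients or for tuning retry behaviour beyond the SDK defaults.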