Your application can achieve at least 3,500 PUT/COPY/POST/DELETE or 5,500 GET/HEAD requests per second per prefix in a bucket. You can also work with AWS Support to pre-partition a bucket based on your folder naming structure. The last time we did this, we had to create a new bucket, work with Support to pre-partition it, and then copy the data from the old bucket to the new one.
Hi, have a look at https://aws.amazon.com/premiumsupport/knowledge-center/http-5xx-errors-s3/ to troubleshoot HTTP 500 or 503 errors from Amazon S3.
Also have a look at:
If there is a fast spike in the request rate for objects in a prefix, Amazon S3 might return 503 Slow Down errors while it scales in the background to handle the increased request rate. To avoid these errors, you can configure your application to gradually increase the request rate and retry failed requests using an exponential backoff algorithm.
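As a rough illustration of that retry pattern, here is a minimal Python sketch of retrying an S3 GET on 503 Slow Down with exponential backoff and jitter. The bucket and key names are placeholders, and the retry cap and sleep values are assumptions you would tune for your workload:

```python
import random
import time

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")


def get_with_backoff(bucket, key, max_retries=8):
    """Fetch an S3 object, retrying 503 Slow Down with exponential backoff."""
    for attempt in range(max_retries):
        try:
            return s3.get_object(Bucket=bucket, Key=key)
        except ClientError as err:
            code = err.response["Error"]["Code"]
            # Only retry throttling-style errors; re-raise everything else.
            if code not in ("SlowDown", "ServiceUnavailable"):
                raise
            # Sleep 2^attempt * 100 ms plus jitter, capped at 20 seconds.
            time.sleep(min(20, (2 ** attempt) * 0.1) + random.uniform(0, 0.1))
    raise RuntimeError(f"Exceeded {max_retries} retries for s3://{bucket}/{key}")


response = get_with_backoff("my-example-bucket", "my-prefix/my-object-key")
```

Note that boto3 can also handle this for you: passing `botocore.config.Config(retries={"max_attempts": 10, "mode": "adaptive"})` when creating the client applies built-in retries with backoff to throttling errors, so a hand-rolled loop is often unnecessary.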