How do I troubleshoot an HTTP 500 or 503 error from Amazon S3?


When I make a request to Amazon Simple Storage Service (Amazon S3), Amazon S3 returns a 5xx status error. How do I troubleshoot these errors?

Short description

Amazon S3 can return one of the following 5xx status errors:

  • AmazonS3Exception: Internal Error (Service: Amazon S3; Status Code: 500; Error Code: 500 Internal Error; Request ID: A4DBBEXAMPLE2C4D)
  • AmazonS3Exception: Slow Down (Service: Amazon S3; Status Code: 503; Error Code: 503 Slow Down; Request ID: A4DBBEXAMPLE2C4D)

The error code 500 Internal Error indicates that Amazon S3 can't handle the request at that time. The error code 503 Slow Down typically indicates that the request rate to your S3 bucket is higher than the bucket can currently sustain. Each prefix in an S3 bucket can sustain at least 3,500 PUT/COPY/POST/DELETE or 5,500 GET/HEAD requests per second. However, in some cases, Amazon S3 can return a 503 Slow Down response if your requests exceed the amount of bandwidth available for cross-Region copying.
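For example, with the AWS SDK for Python (Boto3), these errors surface as a ClientError with the error code InternalError or SlowDown. The following sketch uses placeholder bucket and key names:

import boto3
import botocore.exceptions

s3 = boto3.client("s3")

try:
    # "amzn-s3-demo-bucket" and "images/example.jpg" are placeholder names
    s3.get_object(Bucket="amzn-s3-demo-bucket", Key="images/example.jpg")
except botocore.exceptions.ClientError as error:
    code = error.response["Error"]["Code"]  # "InternalError" or "SlowDown"
    status = error.response["ResponseMetadata"]["HTTPStatusCode"]  # 500 or 503
    request_id = error.response["ResponseMetadata"]["RequestId"]
    if status in (500, 503):
        # Boto3 retries 5xx errors automatically, so this code runs only
        # after the SDK's built-in retries are exhausted.
        print(f"Retryable S3 error {code} (HTTP {status}), request ID {request_id}")
    else:
        raise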

Because Amazon S3 is a distributed service, a very small percentage of 5xx errors is expected during normal use of the service. All requests that return 5xx errors from Amazon S3 can be retried, so it's a best practice to build a fault-tolerance mechanism or retry logic into any application that makes requests to Amazon S3. With retries in place, your application can recover from these transient errors.

To resolve or avoid 5xx status errors, consider the following approaches:

  • Use a retry mechanism in the application making requests.
  • Configure your application to increase request rates gradually.
  • Distribute objects across multiple prefixes.
  • Monitor the number of 5xx error responses.

Note: Amazon S3 doesn't assign additional resources for each new prefix. It automatically scales based on call patterns. As the request rate increases, Amazon S3 optimizes dynamically for the new request rate.

Resolution

Use a retry mechanism in the application making requests

Because of the distributed nature of Amazon S3, requests that return 500 or 503 errors can be retried. It's a best practice to build retry logic into applications that make requests to Amazon S3.

All AWS SDKs have a built-in retry mechanism with an algorithm that uses exponential backoff. This algorithm implements increasingly longer wait times between retries for consecutive error responses. Most exponential backoff algorithms use jitter (randomized delay) to prevent successive collisions. For more information, see Error retries and exponential backoff in AWS.
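For example, with the AWS SDK for Python (Boto3), you can control the built-in retry behavior through a botocore Config object. The max_attempts and mode values below are illustrative; tune them for your workload:

import boto3
from botocore.config import Config

# The "standard" and "adaptive" retry modes use exponential backoff with jitter.
retry_config = Config(retries={"max_attempts": 10, "mode": "adaptive"})

s3 = boto3.client("s3", config=retry_config)

# Requests made with this client are retried automatically on 500 and 503 errors.
s3.put_object(Bucket="amzn-s3-demo-bucket", Key="images/example.jpg", Body=b"...")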

Configure your application to gradually increase request rates

To avoid the 503 Slow Down error, configure your application to start with a lower request rate (transactions per second). Then, increase the application's request rate exponentially. Amazon S3 automatically scales to handle a higher request rate.
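The following is a minimal sketch of one way to ramp up a request rate in Python. The upload_one callable, the starting rate, and the ramp interval are assumptions to adapt to your application:

import time

def ramp_up(keys, upload_one, start_tps=100, max_tps=3500, step_seconds=60, factor=2):
    """Pace requests at start_tps per second, then multiply the rate by
    factor every step_seconds until max_tps is reached."""
    tps = start_tps
    window_start = time.monotonic()
    for key in keys:
        upload_one(key)                       # for example, s3.put_object(...)
        time.sleep(1.0 / tps)                 # pace requests at the current rate
        if tps < max_tps and time.monotonic() - window_start >= step_seconds:
            tps = min(tps * factor, max_tps)  # step the rate up exponentially
            window_start = time.monotonic()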

Distribute objects across multiple prefixes

The request rates described in performance guidelines and design patterns apply per prefix in an S3 bucket. To set up your bucket to handle overall higher request rates and to avoid 503 Slow Down errors, you can distribute objects across multiple prefixes. For example, if you're using your S3 bucket to store images and videos, you can distribute the files into two prefixes similar to the following:

  • mybucket/images
  • mybucket/videos

If the request rate on the prefixes increases gradually, Amazon S3 scales up to handle requests for each of the two prefixes separately. Amazon S3 scales up to handle 3,500 PUT/COPY/POST/DELETE or 5,500 GET/HEAD requests per second for each prefix. As a result, the overall request rate that the bucket handles doubles.
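As a sketch, an application can spread load by choosing the prefix when it builds each object key. The file-type rule below mirrors the images and videos example; hashing the object name across a fixed set of prefixes is another common approach:

def key_for(filename):
    """Choose a prefix by file type so that images and videos each get
    their own per-prefix request-rate allowance."""
    video_extensions = (".mp4", ".mov", ".avi")
    prefix = "videos/" if filename.lower().endswith(video_extensions) else "images/"
    return prefix + filename

# Example: key_for("cat.jpg") returns "images/cat.jpg"
# s3.put_object(Bucket="amzn-s3-demo-bucket", Key=key_for("cat.jpg"), Body=data)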

Monitor the number of 5xx status error responses

To monitor the number of 5xx status error responses that you're getting, you can use one of these options:

  • Amazon CloudWatch request metrics for Amazon S3, which include the 5xxErrors metric. You must first turn on request metrics for the bucket.
  • Amazon S3 server access logging, which records the HTTP status code of each request so that you can count 5xx responses.
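For example, after you turn on request metrics for a bucket, you can retrieve the 5xxErrors metric with the AWS SDK for Python (Boto3). The bucket name and the EntireBucket filter ID below are placeholders:

import datetime

import boto3

cloudwatch = boto3.client("cloudwatch")

response = cloudwatch.get_metric_statistics(
    Namespace="AWS/S3",
    MetricName="5xxErrors",
    Dimensions=[
        {"Name": "BucketName", "Value": "amzn-s3-demo-bucket"},
        {"Name": "FilterId", "Value": "EntireBucket"},
    ],
    StartTime=datetime.datetime.utcnow() - datetime.timedelta(hours=1),
    EndTime=datetime.datetime.utcnow(),
    Period=300,  # 5-minute windows
    Statistics=["Sum"],
)

# Print the number of 5xx responses in each 5-minute window.
for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], int(point["Sum"]))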

Additional troubleshooting

If you continue to see a high rate of 5xx status errors, contact AWS Support. Include the Amazon S3 request ID pairs for the requests that failed with a 5xx status error code.


Related information

Troubleshooting Amazon S3
