How do I resolve the "HIVE_CANNOT_OPEN_SPLIT: Error opening Hive split s3://awsdoc-example-bucket/: Slow Down (Service: Amazon S3; Status Code: 503; Error Code: 503 Slow Down;" error in Athena?

My Amazon Athena query fails with one of the following errors: "HIVE_CANNOT_OPEN_SPLIT: Error opening Hive split s3://awsdoc-example-bucket/date=2020-05-29/ingest_date=2020-04-25/part-00000.snappy.parquet (offset=0, length=18614): Slow Down (Service: Amazon S3; Status Code: 503; Error Code: 503 Slow Down;" -or- "Unknown Failure (status code = 1003, java.sql.SQLException: [Simba]AthenaJDBC An error has been thrown from the AWS Athena client.HIVE_CANNOT_OPEN_SPLIT: Error opening Hive split s3://awsdoc-example-bucket/date=2020-05-29/ingest_date=2020-04-25/part-00000.snappy.parquet (offset=0, length=18614): Slow Down (Service: Amazon S3; Status Code: 503; Error Code: 503 Slow Down;"

Short description

Amazon Simple Storage Service (Amazon S3) can handle 3,500 PUT/COPY/POST/DELETE or 5,500 GET/HEAD requests per second per prefix in a bucket. These errors occur when that request threshold is exceeded. The threshold is a combined limit across all users and services for an account.

By default, Amazon S3 scales automatically to support very high request rates. As your request rate increases, your S3 bucket is automatically partitioned to sustain the higher rate. However, while the request threshold is exceeded, you receive 5xx errors that ask you to slow down or retry later.

For example, the prefix s3://my-athena-bucket/month=jan/ can support only 3,500 PUT/COPY/POST/DELETE requests per second or 5,500 GET/HEAD requests per second. If you have 10,000 files inside this prefix and you run an Athena query on it, then you get the 503 Slow Down error. This is because Athena tries to read all 10,000 files in the prefix at the same time using GET/HEAD requests, but the prefix can support only up to 5,500 GET/HEAD requests per second. This can cause your S3 requests to get throttled and result in the 503 Slow Down error.

Resolution

Use one or more of the following methods to prevent request throttling:

Distribute S3 objects and requests among multiple prefixes

Partitioning your data helps distribute the objects and requests among multiple prefixes. Avoid storing many files under a single S3 prefix; instead, spread the S3 objects across several prefixes. Partitioning also reduces the amount of data scanned by each query. For more information, see Partitioning data.

For example, instead of storing all the files under s3://my-athena-bucket/my-athena-data-files, partition the data and store them under the following individual prefixes:

s3://my-athena-bucket/jan

s3://my-athena-bucket/feb

s3://my-athena-bucket/mar

The data in these files can be further partitioned to increase the distribution of objects (Example: s3://my-athena-bucket/jan/01).

For more information on deciding your Athena partition folder structure, see Amazon S3 performance tips & tricks.
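For example, the following PySpark sketch writes data into Hive-style month partitions so that objects and requests are spread across multiple prefixes. It assumes the data has a month column; the bucket names and paths are placeholders:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("partition-athena-data").getOrCreate()

# Read the unpartitioned data. The bucket, paths, and the "month" column
# are placeholder assumptions for this sketch.
df = spark.read.parquet("s3://my-athena-bucket/my-athena-data-files/")

# Write Hive-style partitions such as month=jan/, month=feb/, ...
df.write.partitionBy("month").mode("overwrite").parquet("s3://my-athena-bucket/partitioned/")

A query that filters on month then reads only the objects under the matching prefix, so the GET/HEAD requests are distributed across prefixes instead of hitting one prefix at once.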

Reduce the number of files in each prefix

You might get this error when you query an S3 bucket with a large number of small objects. For example, if there is one 100 MB file in an S3 bucket, then Athena must make 1 GET request to read the file. However, if there are 1,000 files that are each 100 KB, then Athena must make 1,000 GET requests to read the same 100 MB of data. This results in the requests exceeding the S3 request limits.

To reduce the number of Amazon S3 requests, reduce the number of files. For example, use the S3DistCp tool to merge a large number of small files (less than 128 MB) into a smaller number of large files. For more information, see Top 10 performance tuning tips for Amazon Athena, and review the 4. Optimize file sizes section.

Example:

s3-dist-cp --src=s3://my_athena_bucket_source/smallfiles/ --dest=s3://my_athena_bucket_target/largefiles/ --groupBy='.*(.csv)'

Be sure to replace the following in the above command:

  • my_athena_bucket_source with the source S3 bucket where the small files exist.
  • my_athena_bucket_target with the destination S3 bucket where the output will be stored.

You can use the groupBy option to aggregate small files into fewer large files of a size that you choose. This can help you optimize both query performance and cost.

Note: S3DistCp doesn't support concatenation for Parquet files. Use PySpark instead. For more information, see How can I concatenate Parquet files in Amazon EMR?
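For example, the following PySpark sketch merges many small Parquet files into a smaller number of larger files. The bucket paths and the target of 10 output files are placeholder assumptions:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("merge-small-parquet-files").getOrCreate()

# Read the small Parquet files. Paths and the target of 10 output files
# are placeholder assumptions for this sketch.
df = spark.read.parquet("s3://my_athena_bucket_source/small-parquet-files/")

# coalesce(10) reduces the output to roughly 10 larger files.
df.coalesce(10).write.mode("overwrite").parquet("s3://my_athena_bucket_target/large-parquet-files/")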

Check if versioning is enabled for your S3 bucket

When you delete objects from a version-enabled bucket, Amazon S3 inserts a delete marker instead of removing the object permanently. If you have many files in your S3 bucket with delete markers, then you might get this error. When you run a query on a version-enabled bucket, Athena must check the different versions of each object. Then, Athena decides whether to include a particular object during query processing.

To resolve this error, consider removing the delete markers from your S3 bucket. You can remove them by creating an S3 lifecycle configuration rule that expires noncurrent versions and removes expired object delete markers, or by deleting the delete markers directly through the Amazon S3 console or API.
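For example, the following sketch uses the AWS SDK for Python (Boto3) to add such a lifecycle rule. The bucket name, prefix, rule ID, and 7-day noncurrent-version retention are placeholder assumptions, and the call replaces any existing lifecycle configuration on the bucket:

import boto3

s3 = boto3.client("s3")

# Placeholder bucket, prefix, and retention period for this sketch.
# Note: this call overwrites the bucket's existing lifecycle configuration.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-athena-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "clean-up-delete-markers",
                "Filter": {"Prefix": "my-athena-data-files/"},
                "Status": "Enabled",
                # Permanently remove noncurrent object versions after 7 days...
                "NoncurrentVersionExpiration": {"NoncurrentDays": 7},
                # ...and remove delete markers that have no noncurrent versions left.
                "Expiration": {"ExpiredObjectDeleteMarker": True},
            }
        ]
    },
)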

Check if other applications are using the same S3 prefix

Use the Amazon CloudWatch 5xxErrors metric and S3 server access logs to check if other applications, such as Hive on EMR, Spark, or AWS Glue, were using the same S3 prefix when you ran the Athena query. Multiple applications trying to read the data from the same S3 prefix can result in the requests getting throttled and queries failing with the Slow Down error. Avoid scheduling applications that access the same prefix at the same time. Also, use different S3 prefixes for the Athena data source and application data source.

You can create an S3 request metrics configuration for all objects in your bucket, or for a specific prefix, and use the resulting CloudWatch metrics to monitor the API call rate for that prefix at a given point in time. Use this information together with the S3 server access logs to find which application was making the calls against the prefix.
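For example, the following sketch uses the AWS SDK for Python (Boto3) to turn on S3 request metrics for a prefix and then read the 5xxErrors metric for that prefix from CloudWatch. The bucket name, prefix, and "athena-prefix" configuration ID are placeholder assumptions:

import boto3
from datetime import datetime, timedelta, timezone

s3 = boto3.client("s3")
cloudwatch = boto3.client("cloudwatch")

# Enable S3 request metrics for one prefix only (placeholder bucket and prefix).
s3.put_bucket_metrics_configuration(
    Bucket="my-athena-bucket",
    Id="athena-prefix",
    MetricsConfiguration={
        "Id": "athena-prefix",
        "Filter": {"Prefix": "month=jan/"},
    },
)

# Request metrics are published to CloudWatch under the AWS/S3 namespace,
# with the metrics configuration ID as the FilterId dimension.
response = cloudwatch.get_metric_statistics(
    Namespace="AWS/S3",
    MetricName="5xxErrors",
    Dimensions=[
        {"Name": "BucketName", "Value": "my-athena-bucket"},
        {"Name": "FilterId", "Value": "athena-prefix"},
    ],
    StartTime=datetime.now(timezone.utc) - timedelta(hours=1),
    EndTime=datetime.now(timezone.utc),
    Period=60,
    Statistics=["Sum"],
)
print(response["Datapoints"])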


Related information

How can I increase Amazon S3 request limits to avoid throttling on my Amazon S3 bucket?

Troubleshooting in Athena

Performance tuning in Athena
