How can I resolve the "HIVE_METASTORE_ERROR" error when I query a table in Amazon Athena?
I get the error "HIVE_METASTORE_ERROR" when I query my Amazon Athena table.
Short description
See the following types of "HIVE_METASTORE_ERROR" errors and their causes:
- HIVE_METASTORE_ERROR: com.facebook.presto.spi.PrestoException: Error: : expected at the position 1234 of struct<test_column> but '/' is found. (Service: null; Status Code: 0; Error Code: null; Request ID: null): A column name in the queried table or in the partition schema includes a special character. Athena doesn't support special characters other than the underscore (_). For more information, see Names for tables, databases, and columns.
- HIVE_METASTORE_ERROR: com.facebook.presto.spi.PrestoException: Required Table Storage Descriptor is not populated: The StorageDescriptor parameter contains information about the physical storage of the table. You get this error if one or more partitions on the table don't have the partition location set because of corrupt partitions.
- HIVE_METASTORE_ERROR: com.facebook.presto.spi.PrestoException: java.io.IOException: Response payload size (11112222 bytes) exceeded maximum allowed payload size (6291556 bytes): You use an AWS Lambda function to run Athena queries against a cross-account AWS Glue Data Catalog or an external Hive metastore. However, Lambda has an invocation payload limit of 6 MB. You get this error when the size of the object that Lambda returns is more than 6 MB. The Lambda payload limit is a hard limit and can't be increased. For more information, see Lambda quotas.
Resolution
HIVE_METASTORE_ERROR: com.facebook.presto.spi.PrestoException: Error: : expected at the position 1234 of struct<test_column> but '/' is found. (Service: null; Status Code: 0; Error Code: null; Request ID: null)
To resolve this error, complete the following steps:
- To replace the special character in the column name with an underscore, run the following custom script on your data:
import re

# Read the data file, replace every '/' with an underscore, and write
# the result to a new file.
string = open('a.txt').read()
new_str = re.sub('/', '_', string)
open('b.txt', 'w').write(new_str)
- Edit the existing schema of the table from the AWS Glue console, and then replace '/' with any other character that Athena supports. To script the schema change instead, see the sketch that follows these steps.
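If you prefer to script the schema change rather than use the console, the following is a minimal sketch that uses boto3 and the AWS Glue get_table and update_table APIs. The database and table names are hypothetical placeholders.

import boto3

glue = boto3.client('glue')

# Hypothetical names: replace with your own database and table.
database_name = 'doc_example_database'
table_name = 'doc_example_table'

# Fetch the current table definition from the Data Catalog.
table = glue.get_table(DatabaseName=database_name, Name=table_name)['Table']

# Replace '/' with an underscore in every column name.
for column in table['StorageDescriptor']['Columns']:
    column['Name'] = column['Name'].replace('/', '_')

# update_table expects a TableInput, so copy over only the updatable fields.
updatable_keys = ['Name', 'Description', 'Owner', 'Retention', 'StorageDescriptor',
                  'PartitionKeys', 'TableType', 'Parameters']
table_input = {key: table[key] for key in updatable_keys if key in table}

glue.update_table(DatabaseName=database_name, TableInput=table_input)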
HIVE_METASTORE_ERROR: com.facebook.presto.spi.PrestoException: Required Table Storage Descriptor is not populated
To resolve this error, choose one or more of the following solutions (a sketch that finds the partitions with a missing storage location follows this list):
- If your table is already partitioned, and the data is loaded in Amazon Simple Storage Service (Amazon S3) Hive partition format, then load the partitions. Run a command that's similar to the following example:
Note: Be sure to replace doc_example_table with the name of your table.
MSCK REPAIR TABLE doc_example_table
- If the MSCK REPAIR TABLE command doesn’t resolve the issue, then drop the table and create a new table with the same definition. Then, run the MSCK REPAIR TABLE command on the new table.
- Create a separate folder in the Amazon S3 bucket, and then move the data files that you want to query into that folder. Create an AWS Glue crawler that points to that folder instead of the bucket, as in the crawler sketch after this list.
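Before you choose one of the preceding solutions, you can confirm which partitions have no storage location set. The following is a minimal sketch that uses boto3, with hypothetical database and table names:

import boto3

glue = boto3.client('glue')

# Hypothetical names: replace with your own database and table.
database_name = 'doc_example_database'
table_name = 'doc_example_table'

# Page through all partitions and report any without a storage location.
paginator = glue.get_paginator('get_partitions')
for page in paginator.paginate(DatabaseName=database_name, TableName=table_name):
    for partition in page['Partitions']:
        location = partition.get('StorageDescriptor', {}).get('Location')
        if not location:
            print('Partition with no location:', partition['Values'])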
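To create and start the crawler programmatically instead of from the console, you can use a sketch similar to the following. The crawler name, IAM role, database, bucket, and folder are hypothetical placeholders:

import boto3

glue = boto3.client('glue')

# Hypothetical values: replace with your own crawler name, IAM role,
# database, and Amazon S3 path.
glue.create_crawler(
    Name='doc_example_crawler',
    Role='arn:aws:iam::111122223333:role/doc-example-glue-role',
    DatabaseName='doc_example_database',
    Targets={'S3Targets': [{'Path': 's3://doc-example-bucket/doc-example-folder/'}]},
)
glue.start_crawler(Name='doc_example_crawler')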
HIVE_METASTORE_ERROR: com.facebook.presto.spi.PrestoException: java.io.IOException: Response payload size (11112222 bytes) exceeded maximum allowed payload size (6291556 bytes)
To resolve this error, choose one or more of the following solutions:
- Upload the Lambda function's response payload as an object to an Amazon S3 bucket, and then return a reference to that object in the Lambda function response, as in the first sketch after this list. For information about how to generate a presigned URL for your object, see Sharing objects using presigned URLs.
- If your table is partitioned, and your use case permits, query only the specific partition.
- When you create the Lambda function, specify the spill location in Amazon S3. Responses that are larger than the threshold spill into the specified Amazon S3 location. For a configuration sketch, see the second example after this list.
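The following is a minimal sketch of the first solution: the function writes the oversized result to Amazon S3 and returns a small response that contains a presigned URL instead. It uses boto3, and the bucket and key names are hypothetical placeholders.

import json
import boto3

s3 = boto3.client('s3')

# Hypothetical values: replace with your own bucket and key.
bucket = 'doc-example-bucket'
key = 'athena-responses/large-response.json'

def return_large_response(large_payload):
    # Store the oversized response in Amazon S3 instead of returning it directly.
    s3.put_object(Bucket=bucket, Key=key, Body=json.dumps(large_payload))

    # Return a small response that points to the object. The presigned URL
    # lets the caller fetch the full payload for up to one hour.
    url = s3.generate_presigned_url(
        'get_object',
        Params={'Bucket': bucket, 'Key': key},
        ExpiresIn=3600,
    )
    return {'payload_location': url}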
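The following sketch sets the spill location on an existing connector function with boto3. The function name, bucket, and prefix are hypothetical, and the environment variable names (spill_bucket and spill_prefix) are the ones that Athena Query Federation connectors commonly read; check your connector's documentation for the exact names that it expects.

import boto3

lambda_client = boto3.client('lambda')

# Hypothetical values: replace with your own function name, bucket, and prefix.
# Note: this call replaces the function's entire set of environment variables,
# so merge in any existing variables that you need to keep.
lambda_client.update_function_configuration(
    FunctionName='doc-example-athena-connector',
    Environment={
        'Variables': {
            'spill_bucket': 'doc-example-bucket',
            'spill_prefix': 'athena-spill',
        }
    },
)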