Schema incorrectly shows data type of array in Glue Catalog when using a Delta Lake table


I have a Delta Lake table saved in S3. I am running the following command:

spark.sql(""" CREATE EXTERNAL TABLE db.my_table USING DELTA LOCATION 's3://path/to/delta/table """)

Everything seems to work fine, except that when I look at the schema in the Glue Catalog it shows a single field with column name "col" and data type "array". It should have two fields, first_name and last_name, both strings.

The schema populates correctly when I use a Glue crawler, but I have been asked to provide an alternative solution. How can this be done?

temp999
Asked 4 months ago · 346 views
2 Answers
Accepted Answer

When the table is created using Spark SQL, the Glue catalog entry may not correctly reflect the table schema; however, SQL queries on the table should still work, because the schema is read from the Delta metadata stored in the table's S3 location rather than from the Glue catalog.
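For example, a quick way to confirm this is to query the table from Spark. A minimal sketch, reusing the db.my_table name from the question and assuming the Delta Lake extensions are already configured in the Spark session (they must be, since the table was created there):

# Spark resolves the schema from the Delta transaction log in S3, not from the
# Glue entry, so the real columns come back even though Glue shows only
# "col array<string>".
spark.sql("SELECT first_name, last_name FROM db.my_table").show()

# DESCRIBE also reports the actual schema (first_name, last_name as strings).
spark.sql("DESCRIBE TABLE db.my_table").show()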

If you would like the table schema to be populated in the Glue catalog table, you may consider creating the Delta Lake table using an Athena query. Athena infers the Delta Lake table metadata from the Delta Lake transaction log and synchronizes it with the Glue catalog. Please see the following document on how to create Delta Lake tables using Athena: https://docs.aws.amazon.com/athena/latest/ug/delta-lake-tables.html#delta-lake-tables-getting-started
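For reference, the Athena DDL described in that document only needs the S3 location and a table_type property; Athena infers the column list from the Delta transaction log. A sketch, reusing the table name and placeholder path from the question (the database db must already exist in the Glue catalog):

CREATE EXTERNAL TABLE db.my_table
LOCATION 's3://path/to/delta/table'
TBLPROPERTIES ('table_type' = 'DELTA');

After this runs, the inferred columns (first_name and last_name as strings) should appear on the Glue catalog table.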

Please note that there are no charges for DDL queries in Athena.

AWS
Support Engineer
Answered 4 months ago

This is a known limitation of the Delta Lake library: https://github.com/delta-io/delta/issues/1679
As Davlish points out, there are alternatives, so it shouldn't be a blocker.

AWS
Expert
Answered 4 months ago
