Can we create native Delta Lake tables only through a crawler?


Hello,

AWS recently announced that Glue crawlers can now create native Delta Lake tables (last December: https://aws.amazon.com/blogs/big-data/introducing-native-delta-lake-table-support-with-aws-glue-crawlers/). We tested it and it works fine. However, we would like to avoid using crawlers. Is a crawler currently the only way to create a native Delta Lake table? Is there a plan to allow this through the Glue console's table-creation screen?

As a side note, it looks like Terraform is still missing a "CreateNativeDeltaTable" option in its latest provider (they have an open issue for that).

Thanks.

Cheers,

Fabrice

Asked 1 year ago · Viewed 596 times
2 Answers

Thanks Gonzalo for your answer. Actually, my question is really about "delta" tables. I am able to create tables through the console or the CLI; I do it regularly :)

Through the console, I cannot select a "delta" format. I can select Avro, CSV, Parquet, among others, but not Delta. When I create a replica of the table created by the Delta source crawler, I get errors when querying it (through Athena or any other Glue job). When querying the mimicked table I get the error:

HIVE_CANNOT_OPEN_SPLIT: Error opening Hive split s3://.../part-00000-c3e64f54-ed8a-459c-8157-4235f25595b4.c000.snappy.parquet (offset=0, length=182912) using org.apache.hadoop.mapred.SequenceFileInputFormat: s3://com.diabeloop.dev.dta.lake/technical_logs/v3/curated_logs/resampled_information/environment=clinical/year=2022/part-00000-c3e64f54-ed8a-459c-8157-4235f25595b4.c000.snappy.parquet not a SequenceFile

Querying the Delta table created by the crawler (whose classification is indeed "delta"), on the other hand, works like a charm (from both Athena and a Glue job).

Answered 1 year ago
  • I see your point: you could use the wizard but would then have to update the table, using "ALTER TABLE" for instance, until it looks like the one from the crawler. You can do that in Athena by copying the DDL from another table; a rough boto3 equivalent is sketched below.
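For example, a rough boto3 sketch of that "make it look like the crawler's table" step. All database, table, and bucket names below are placeholders; it assumes a crawler-created Delta table and a wizard-created table already exist in the same database:

```python
import boto3

glue = boto3.client("glue")

# Placeholder names: copy the Delta-specific settings from the crawler-created
# table onto a table that was created through the console wizard.
source = glue.get_table(DatabaseName="my_database", Name="crawler_created_delta_table")["Table"]
target = glue.get_table(DatabaseName="my_database", Name="wizard_created_table")["Table"]

# Reuse the crawler's storage descriptor (input/output formats, SerDe) and table
# parameters (e.g. classification=delta), but keep the target's name and S3 location.
storage_descriptor = dict(source["StorageDescriptor"],
                          Location=target["StorageDescriptor"]["Location"])

table_input = {
    "Name": target["Name"],
    "TableType": source.get("TableType", "EXTERNAL_TABLE"),
    "StorageDescriptor": storage_descriptor,
    "PartitionKeys": source.get("PartitionKeys", []),
    "Parameters": source.get("Parameters", {}),
}

glue.update_table(DatabaseName="my_database", TableInput=table_input)
```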


There is nothing stopping you from creating the table yourself and doing the same thing the crawler does, as long as you enter the right parameters and configuration.
You can do it via the console, the AWS CLI, boto3, or Athena.
It's easiest if you get the table definition the crawler created and use it as a template, either by calling "aws glue get-table" or by asking Athena to provide the DDL for an existing table.
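For illustration, here is a minimal boto3 sketch of that template approach. It assumes a crawler-created Delta table already exists; the database, table, and S3 location names are placeholders to replace with your own:

```python
import boto3

glue = boto3.client("glue")

# Fetch the definition of a table the crawler already created and use it as a template.
# "my_database" and "crawler_created_delta_table" are placeholder names.
template = glue.get_table(
    DatabaseName="my_database",
    Name="crawler_created_delta_table",
)["Table"]

# Keep only the fields accepted by create_table; get_table returns extra read-only
# attributes (CreateTime, UpdateTime, CreatedBy, ...) that must be dropped.
table_input = {
    key: template[key]
    for key in (
        "Name", "Description", "Owner", "Retention", "StorageDescriptor",
        "PartitionKeys", "TableType", "Parameters",
    )
    if key in template
}

# Point the copy at the new Delta table location and give it a new name.
table_input["Name"] = "my_manual_delta_table"
table_input["StorageDescriptor"]["Location"] = "s3://my-bucket/path/to/delta-table/"

glue.create_table(DatabaseName="my_database", TableInput=table_input)
```

The key point is that the parameters and storage descriptor (including the classification) come from the crawler-created table, which is what the manually created table in the question was missing.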

AWS
EXPERT
Answered 1 year ago
