AWS Glue with CSV source data that changes over time


We have data that is being dumped into S3 every hour, with a basic Glue crawler running that enables us to query this data in Athena. The problem we're facing is that the source data is changing over time (columns added & removed) and the crawler doesn't seem to recognise this. Data from newer datasets is being put into columns that positionally align with the earlier datasets, rather than being placed into columns based on the column name. (e.g. if the first dataset has columns A, C & D, and a new dataset has columns A, B, C & D, the new column B data shows up in column "C" in Athena, and the new column C data shows up in column "D"). How can I fix this so that we can see all the columns, with the data assigned to each column based on its header name?

Asked 2 years ago · 240 views
2 Answers

The Glue crawler has options for exactly this schema-evolution scenario. To have new columns added to the table whenever new columns appear in the files, follow these steps:

  • Make sure you choose "Crawl all folders".

  • Under the advanced options in "Set output and scheduling", choose "Add new columns" or "Update the table definition in the data catalog" (see the boto3 sketch after this list).
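
The same settings can also be applied programmatically. Below is a minimal boto3 sketch, assuming hypothetical names for the crawler, IAM role, database, and S3 path; `UPDATE_IN_DATABASE` corresponds to updating the table definition in the Data Catalog, and `CRAWL_EVERYTHING` to crawling all folders.

```python
import boto3

glue = boto3.client("glue")

glue.create_crawler(
    Name="hourly-csv-crawler",                               # hypothetical crawler name
    Role="arn:aws:iam::123456789012:role/GlueCrawlerRole",   # hypothetical IAM role
    DatabaseName="analytics_db",                             # hypothetical database
    Targets={"S3Targets": [{"Path": "s3://my-bucket/hourly-dumps/"}]},  # hypothetical path
    # Schema-evolution settings: update the table definition when the schema changes,
    # and only log (don't delete) columns that disappear from newer files.
    SchemaChangePolicy={
        "UpdateBehavior": "UPDATE_IN_DATABASE",
        "DeleteBehavior": "LOG",
    },
    # Re-crawl the whole dataset on each run ("Crawl all folders").
    RecrawlPolicy={"RecrawlBehavior": "CRAWL_EVERYTHING"},
)
```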

If the column names still aren't picked up correctly, you may also need to create a custom CSV classifier and link it to this crawler, configured so that the first row of each file is read as the column headings.
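
Here is a hedged sketch of setting that up with boto3: a CSV classifier that treats the first row as a header, attached to the crawler created above. The classifier and crawler names are placeholders.

```python
import boto3

glue = boto3.client("glue")

# Custom CSV classifier that reads column names from the header row.
glue.create_classifier(
    CsvClassifier={
        "Name": "csv-with-header",       # hypothetical classifier name
        "Delimiter": ",",
        "QuoteSymbol": '"',
        "ContainsHeader": "PRESENT",     # first row contains the column names
        "DisableValueTrimming": False,
        "AllowSingleColumn": False,
    }
)

# Attach the classifier to the existing crawler (hypothetical crawler name).
glue.update_crawler(
    Name="hourly-csv-crawler",
    Classifiers=["csv-with-header"],
)
```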

AWS · EXPERT · Answered 2 years ago · Reviewed 2 years ago
  • I've tried with these exact settings, deleting the table and letting the crawler recreate it, but the columns when queried in Athena still have the wrong data in them.


Thanks for bringing this scenario up. Is it possible for you to perform a few tests using Parquet files and see if that works for your use case?
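
For a quick test, one way to produce Parquet from an existing hourly CSV is a small pandas conversion; Parquet stores the column names alongside the data, so Athena resolves columns by name rather than by position. The bucket paths below are placeholders, and the `s3fs` and `pyarrow` packages are assumed to be installed.

```python
import pandas as pd

# Read one hourly CSV dump from S3 (hypothetical path) ...
df = pd.read_csv("s3://my-bucket/hourly-dumps/2023/01/01/00/data.csv")

# ... and rewrite it as Parquet to a separate prefix for testing (hypothetical path).
df.to_parquet("s3://my-bucket/hourly-parquet/2023/01/01/00/data.parquet", index=False)
```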

AWS · EXPERT · Answered 2 years ago
