I need to read S3 data, transform it, and put it into the Data Catalog. Should I be using a Crawler?


Files are uploaded every hour to an S3 bucket. I currently have a Glue ETL job that reads the S3 bucket, transforms the data, and inserts it into the Glue Data Catalog. I have seen examples where people have a Glue Crawler that reads the S3 data and writes it to a Data Catalog table, and then an ETL job reads from that table, transforms, and writes back to another table (or wherever it needs to go). Should I be using a Crawler? I don't see the need for it if I can just use the ETL job to go S3 -> Transform -> Data Catalog. It seems the ETL job supports bookmarking (init/commit) just like Crawlers do.
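
A minimal sketch of such a job (the bucket, database, and table names are placeholders, and the transform is purely illustrative) might look like this:

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard Glue job setup; job.init/job.commit drive the job bookmark.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read the hourly files directly from S3. The transformation_ctx is what
# the bookmark uses to track which files have already been processed.
source = glue_context.create_dynamic_frame.from_options(
    connection_type="s3",
    connection_options={"paths": ["s3://my-bucket/incoming/"]},  # placeholder path
    format="json",
    transformation_ctx="source",
)

# Purely illustrative transform: rename one field.
transformed = source.rename_field("old_name", "new_name")

# Write to S3 and create/update the Data Catalog table in the same step.
sink = glue_context.getSink(
    connection_type="s3",
    path="s3://my-bucket/curated/",  # placeholder path
    enableUpdateCatalog=True,
    updateBehavior="UPDATE_IN_DATABASE",
    transformation_ctx="sink",
)
sink.setCatalogInfo(catalogDatabase="my_database", catalogTableName="my_table")
sink.setFormat("glueparquet")
sink.writeFrame(transformed)

# Commit advances the bookmark so already-processed files are skipped next run.
job.commit()
```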

bfeeny
Asked 2 years ago · 1,854 views
1 Answer
Accepted Answer

Hi,

AWS Glue Crawlers are used to automatically discover the schema of the data in Amazon S3 or other data sources. They also help in capturing schema evolution.

If your schema is fixed (does not change often), is already known, and you have no issue creating your tables manually via the console or in your code using the APIs, then you do not need to use them.
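
For example, a table with a known, fixed schema can be registered through the API instead of a crawler. A minimal sketch using boto3 (the database, table, bucket, and column names are placeholders, and the Parquet SerDe settings are just one common choice):

```python
import boto3

glue = boto3.client("glue")

# Register a table for Parquet data already sitting in S3.
glue.create_table(
    DatabaseName="my_database",  # placeholder
    TableInput={
        "Name": "my_table",  # placeholder
        "TableType": "EXTERNAL_TABLE",
        "StorageDescriptor": {
            "Columns": [
                {"Name": "id", "Type": "bigint"},
                {"Name": "event_time", "Type": "timestamp"},
                {"Name": "payload", "Type": "string"},
            ],
            "Location": "s3://my-bucket/curated/",  # placeholder
            "InputFormat": "org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat",
            "OutputFormat": "org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat",
            "SerdeInfo": {
                "SerializationLibrary": "org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe"
            },
        },
    },
)
```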

Consider also that Crawlers do have a cost, so cost optimization might be another reason not to use them if you are fine with managing the schemas of your datasets yourself.

For additional information on Crawlers, you can refer to this section of the AWS Glue documentation.

Hope this helps.

AWS
Expert
Answered 2 years ago
  • As Fabrizio correctly said, you only need to run the AWS Glue Crawler again if your schema changes. Also, ETL jobs support bookmarking, which is recommended when your data grows day by day: with bookmarking enabled, each run processes only the new data instead of re-running the ETL operations over data that has already been processed (see the sketch after the link below).

    Read more at: https://docs.aws.amazon.com/glue/latest/dg/monitor-continuations.html
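
    As a minimal sketch, bookmarks can be enabled when starting a run via the API (the job name is a placeholder; they can also be enabled in the job definition or in the console):

    ```python
    import boto3

    glue = boto3.client("glue")

    # Start a run with job bookmarks enabled; other values are
    # "job-bookmark-disable" and "job-bookmark-pause".
    glue.start_job_run(
        JobName="my-etl-job",  # placeholder
        Arguments={"--job-bookmark-option": "job-bookmark-enable"},
    )
    ```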
