1 Answer
I've tested a crawler using the same folder structure in S3 as mentioned.
Specified include path as: s3://my-datalake/projects/
Exclude pattern as: incremental_**/**
Using the above exclude pattern, the crawler ignores all files under folders whose names start with 'incremental_'. One additional thing to check: the existing crawler may have "UpdateBehavior" set to "LOG", in which case the already created tables are not being dropped or changed. You could try setting it to "UPDATE_IN_DATABASE", which will recreate the tables on the next crawl.
Reference - https://docs.aws.amazon.com/glue/latest/dg/define-crawler.html#crawler-data-stores-exclude
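For concreteness, here is a minimal sketch of how these settings could be applied with boto3. The crawler name, IAM role ARN, and database name are hypothetical placeholders; only the include path, the exclude pattern, and the "UPDATE_IN_DATABASE" behavior come from the answer above.

```python
import boto3

glue = boto3.client("glue")

glue.update_crawler(
    Name="projects-crawler",  # hypothetical crawler name
    Role="arn:aws:iam::123456789012:role/GlueCrawlerRole",  # hypothetical IAM role
    DatabaseName="my_datalake_db",  # hypothetical Glue database
    Targets={
        "S3Targets": [
            {
                "Path": "s3://my-datalake/projects/",  # include path from the answer
                "Exclusions": ["incremental_**/**"],   # exclude pattern from the answer
            }
        ]
    },
    SchemaChangePolicy={
        # "UPDATE_IN_DATABASE" instead of "LOG" so existing tables are updated on the next crawl
        "UpdateBehavior": "UPDATE_IN_DATABASE",
        "DeleteBehavior": "LOG",
    },
)
```

After updating the crawler, re-run it and confirm in the Glue console that the excluded 'incremental_' folders no longer produce tables.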
Answered 1 year ago