
Specify crawler to crawl from specific files?


I am keeping track of some data on S3, organized by date. The files are stored in directories like year=yyyy/month=mm/day=dd, and each directory contains multiple CSV files named in this format: regionA_yyyy_mm_dd.csv, regionB_yyyy_mm_dd.csv. I want a crawler to crawl only one of these files per directory, across a whole month or year. I was wondering if there is a way to give the crawler part of the file name, like regionA, so that it crawls data only from region A. Is there a way to do this?

Asked 1 year ago · Viewed 1,190 times
1 Answer
Accepted Answer

You can configure an AWS Glue crawler to selectively crawl specific files from your S3 bucket using glob patterns. Specifying a pattern such as regionA*.csv, for example, restricts the crawl to files whose names start with regionA; in practice, Glue expresses this through exclude patterns on the S3 data source, so you exclude the regions you don't want. This focuses the crawl on the desired subset of the data, improving efficiency and reducing processing time. Alternatively, you can create a table in the AWS Glue Data Catalog for the specific files you're interested in and configure the crawler to update that table. You can also automate this process with the AWS CLI or Boto3, which gives you greater control and customization options.
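
As a concrete illustration of the Boto3 route mentioned above, here is a minimal sketch that creates a crawler whose S3 target excludes the other regions' files, so only the regionA_*.csv files in every date partition are picked up. The bucket, IAM role ARN, and database name are hypothetical placeholders, and the exclusion list assumes only regions B and C exist alongside region A.

```python
import boto3

glue = boto3.client("glue")

# Hypothetical names -- substitute your own bucket, role ARN, and database.
glue.create_crawler(
    Name="regionA-crawler",
    Role="arn:aws:iam::123456789012:role/MyGlueCrawlerRole",
    DatabaseName="my_database",
    Targets={
        "S3Targets": [
            {
                "Path": "s3://my-bucket/data/",
                # Exclusions takes glob patterns evaluated against the keys
                # under Path; excluding the other regions leaves only the
                # regionA_*.csv files in every date partition.
                "Exclusions": ["**/regionB_*.csv", "**/regionC_*.csv"],
            }
        ]
    },
)

# Run it on demand (or attach a schedule via the Schedule parameter).
glue.start_crawler(Name="regionA-crawler")
```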

Expert
Answered 1 year ago
Reviewed by experts 1 year ago
  • Is there a way for the crawler to generate multiple sets of metadata? For example, can a single crawler generate separate metadata tables for regionA, regionB, regionC, and so on? Or can that only be done by assigning one crawler to each region?

  • In AWS Glue, a single crawler can generate metadata for multiple regions by using a combination of custom classifiers, filters, and partitioning strategies. If it is not too urgent, I can come up with something before tomorrow (see the sketch after these comments).

  • That would be awesome! Also, where can I use the 'patterns' so that I can specify the names of the files to crawl from?
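
Here is a minimal sketch of one way to address both follow-ups, assuming the same file layout as above: keep one crawler per region, each pointed at the same S3 path but excluding the other regions' files, with a TablePrefix so each region's table stays distinguishable in the catalog. The role ARN, bucket, and database name are hypothetical. As for where the patterns go: they belong in the crawler's S3 data source configuration, i.e. the "Exclude patterns" field when adding a data source in the Glue console, or the Exclusions list shown here.

```python
import boto3

glue = boto3.client("glue")
regions = ["regionA", "regionB", "regionC"]

for region in regions:
    # Exclude every region except the current one, so each crawler
    # sees only its own region's files under the shared path.
    exclusions = [f"**/{r}_*.csv" for r in regions if r != region]
    glue.create_crawler(
        Name=f"{region}-crawler",
        Role="arn:aws:iam::123456789012:role/MyGlueCrawlerRole",  # hypothetical
        DatabaseName="my_database",
        Targets={"S3Targets": [{
            "Path": "s3://my-bucket/data/",
            "Exclusions": exclusions,
        }]},
        # TablePrefix keeps each region's table name distinct in the catalog.
        TablePrefix=f"{region}_",
    )
```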

