Glue Jobs on Large Data Files


Hi Team,

I have a requirement to create an ETL process that transforms data from hundreds of data files (each with a unique schema) into a common-format CSV file. The source files are in S3 bucket folders (each folder is a unique dataset). Sometimes the requirement is to join multiple files in a folder and also apply business logic in the transformation. These files have millions of records.

I have tried Glue Crawlers and Glue jobs to create target files using limited data. My question is: how will Glue perform on millions of records, and will it be cost effective? Can you please share information on this? Also, I'm planning to orchestrate each Glue crawler and Glue job from Step Functions. Is this the correct approach? Thank you.

asked a year ago · 247 views
1 Answer
Accepted Answer

AWS Glue's main focus is exactly the kind of use case you describe, and much larger datasets.
Obviously, depending on the complexity of your joins and transformation logic, you can run into challenges if you don't have previous experience with Apache Spark (which Glue ETL is based on). It's probably worth investing some time in understanding how it works and how to monitor it.
Cost effectiveness depends on how efficient your logic is and how you tune your configuration. Glue 4.0 provides a number of improvements and optimizations out of the box that should really help you with that.
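As a rough illustration of the configuration side, here is a minimal sketch using boto3 to create a job on Glue 4.0; the job name, role ARN and script location are placeholders, and the worker settings are just a starting point you would tune against your own data volume:

```python
import boto3

glue = boto3.client("glue")

glue.create_job(
    Name="csv-normalize-job",                            # hypothetical job name
    Role="arn:aws:iam::123456789012:role/GlueJobRole",   # replace with your role
    Command={
        "Name": "glueetl",                               # Spark ETL job
        "ScriptLocation": "s3://my-bucket/scripts/normalize.py",
        "PythonVersion": "3",
    },
    GlueVersion="4.0",      # picks up the 4.0 runtime optimizations mentioned above
    WorkerType="G.1X",      # start small; move to larger workers only if needed
    NumberOfWorkers=10,     # main cost lever: you pay per DPU-hour
)
```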
Crawlers are an optional convenience; you could read the CSV files directly if you only need to read them once (i.e. if it's not a table you want to use for other purposes).
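For example, a Glue ETL script can read the files straight from S3 without any crawler or Data Catalog table. This is only a sketch (bucket, folder, file and column names are illustrative), assuming the script runs in the Glue environment where the awsglue library is available:

```python
import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read two files of one dataset folder directly from S3, no crawler involved.
orders = spark.read.csv("s3://my-bucket/dataset-a/orders.csv",
                        header=True, inferSchema=True)
customers = spark.read.csv("s3://my-bucket/dataset-a/customers.csv",
                           header=True, inferSchema=True)

# Join the files and apply the business logic, then write the common CSV format.
result = (orders.join(customers, on="customer_id", how="inner")
                .withColumnRenamed("order_total", "amount"))

result.write.mode("overwrite").option("header", True) \
      .csv("s3://my-bucket/output/dataset-a/")

job.commit()
```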
Step Functions require a bit of learning but allow you to build advanced workflows; for simple workflows, Glue provides triggers and visual workflows inside Glue itself.
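If you do go the Step Functions route, the state machine can call Glue directly. The sketch below (crawler name, job name and role ARN are hypothetical) starts a crawler through the AWS SDK integration, waits a fixed time for it to finish (in practice you would poll glue:GetCrawler in a loop instead), and then runs the Glue job with the `.sync` integration so the workflow waits for the job run to complete:

```python
import json
import boto3

definition = {
    "StartAt": "StartCrawler",
    "States": {
        "StartCrawler": {
            "Type": "Task",
            "Resource": "arn:aws:states:::aws-sdk:glue:startCrawler",
            "Parameters": {"Name": "dataset-a-crawler"},
            "Next": "WaitForCrawler",
        },
        # Simplification: a fixed wait; a Choice loop on GetCrawler is more robust.
        "WaitForCrawler": {"Type": "Wait", "Seconds": 120, "Next": "RunGlueJob"},
        "RunGlueJob": {
            "Type": "Task",
            "Resource": "arn:aws:states:::glue:startJobRun.sync",
            "Parameters": {"JobName": "csv-normalize-job"},
            "End": True,
        },
    },
}

sfn = boto3.client("stepfunctions")
sfn.create_state_machine(
    name="dataset-a-etl",
    roleArn="arn:aws:iam::123456789012:role/StepFunctionsGlueRole",  # placeholder
    definition=json.dumps(definition),
)
```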

AWS
EXPERT
answered a year ago
