Glue Jobs on Large Data Files


Hi Team,

I have a requirement to build an ETL process that transforms data from hundreds of data files (each with a unique schema) into a common-format CSV file. The source files live in S3 bucket folders, where each folder is a distinct dataset. Sometimes the requirement is to join multiple files within a folder and apply business logic during the transformation. These files contain millions of records.

I have tried Glue Crawlers and Glue jobs to create target files using a limited amount of data. My question is: how will Glue perform on millions of records, and will it be cost effective? Can you please share information on this? Also, I'm planning to orchestrate each Glue crawler and Glue job from Step Functions. Is this the correct approach? Thank you.

asked a year ago · 247 views
1 answer
Accepted Answer

AWS Glue's main focus is exactly the kind of use case you describe, and much larger datasets.
Obviously, depending on the complexity of your joins and transformation logic, you can run into challenges if you don't have prior experience with Apache Spark (which Glue ETL is based on). It's probably worth investing some time in understanding how it works and how to monitor it.
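To make that concrete, here is a minimal sketch of what such a Spark-based Glue job can look like: it reads two hypothetical CSV files from one dataset folder, joins them, applies a simple business rule, and writes a common-format CSV. All bucket names, paths, and column names below are made up, and your real schemas and logic will differ.

```python
# Minimal Glue ETL sketch (hypothetical paths and columns) -- not a drop-in script.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read two files from the same dataset folder (schemas inferred from headers).
orders = spark.read.option("header", "true").csv("s3://my-source-bucket/dataset-a/orders.csv")
customers = spark.read.option("header", "true").csv("s3://my-source-bucket/dataset-a/customers.csv")

# Join the files, apply example business logic, and map to the common output schema.
common = (
    orders.join(customers, on="customer_id", how="left")
          .withColumn("order_total",
                      F.col("quantity").cast("double") * F.col("unit_price").cast("double"))
          .select("order_id", "customer_id", "customer_name", "order_total")
)

# Write the common-format CSV for this dataset.
common.write.mode("overwrite").option("header", "true").csv("s3://my-target-bucket/common/dataset-a/")

job.commit()
```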
Cost effectiveness depends on how efficient your logic is and how you tune your configuration. Glue 4.0 provides a number of improvements and optimizations out of the box that should really help you with that.
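The Glue version and capacity are set on the job definition itself. As a hedged example, creating a job pinned to Glue 4.0 with a modest number of workers via boto3 might look like the following; the role ARN, script location, sizing, and log paths are placeholders you would replace, and the Spark UI / metrics arguments are there to make tuning easier:

```python
# Hypothetical job definition pinned to Glue 4.0 -- adjust role, paths, and sizing to your account.
import boto3

glue = boto3.client("glue")

glue.create_job(
    Name="dataset-a-to-common-csv",
    Role="arn:aws:iam::123456789012:role/MyGlueJobRole",   # placeholder IAM role
    Command={
        "Name": "glueetl",
        "ScriptLocation": "s3://my-scripts-bucket/jobs/dataset_a_job.py",
        "PythonVersion": "3",
    },
    GlueVersion="4.0",        # picks up the 4.0 runtime optimizations
    WorkerType="G.1X",        # start small; scale out after checking job metrics
    NumberOfWorkers=10,
    DefaultArguments={
        "--enable-metrics": "true",                         # CloudWatch job metrics
        "--enable-spark-ui": "true",                        # Spark UI event logs for tuning
        "--spark-event-logs-path": "s3://my-logs-bucket/spark-ui/",
    },
)
```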
Crawlers are an optional convenience; you could read the CSV files directly if you only need to read them once (that is, if the data isn't a table you want to reuse for other purposes).
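In other words, for a one-off read you can point Spark at the S3 path directly, whereas a crawled table is read through the Data Catalog. A small sketch of the two options, reusing the `spark` and `glue_context` handles from the job sketch above (the database, table, and path names are hypothetical):

```python
# Option 1: read the CSV files directly -- no crawler or catalog table needed.
df = spark.read.option("header", "true").csv("s3://my-source-bucket/dataset-a/")

# Option 2: read via the Data Catalog table that a crawler created.
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="my_source_db",      # hypothetical catalog database
    table_name="dataset_a",       # hypothetical crawled table
)
df = dyf.toDF()                   # convert to a Spark DataFrame if needed
```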
Step Functions requires a bit of learning but allows you to build advanced workflows; for simple workflows, Glue provides triggers and visual workflows natively.
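If you do go with Step Functions, a common pattern is a state machine that starts the crawler, waits, then runs the Glue job synchronously. A hedged sketch of such a definition, expressed here as a Python dict you could serialize and pass to the state machine (the crawler and job names are placeholders, and the fixed wait is a simplification of a proper polling loop):

```python
# Hypothetical Step Functions definition: start a crawler, wait, then run the Glue job.
# StartCrawler uses the AWS SDK integration; startJobRun.sync waits for job completion.
import json

definition = {
    "StartAt": "StartCrawler",
    "States": {
        "StartCrawler": {
            "Type": "Task",
            "Resource": "arn:aws:states:::aws-sdk:glue:startCrawler",
            "Parameters": {"Name": "dataset-a-crawler"},
            "Next": "WaitForCrawler",
        },
        "WaitForCrawler": {
            "Type": "Wait",
            "Seconds": 120,   # simple fixed wait; polling GetCrawler status is more robust
            "Next": "RunGlueJob",
        },
        "RunGlueJob": {
            "Type": "Task",
            "Resource": "arn:aws:states:::glue:startJobRun.sync",
            "Parameters": {"JobName": "dataset-a-to-common-csv"},
            "End": True,
        },
    },
}

print(json.dumps(definition, indent=2))  # paste into the console, or pass to create_state_machine
```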

AWS
EXPERT
answered a year ago
