Glue ETL generating too many files in S3


Hi team, can I ask why Glue is generating so many Parquet files from my ETL job?

AWS
Expert
Asked 8 months ago · 338 views
2 Answers

The number of output files corresponds to the number of partitions Spark is processing in your pipeline. You could lower a setting like spark.sql.shuffle.partitions, or you could repartition your DataFrame down to fewer partitions.
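
For example, in a Glue PySpark script it would look something like this (a minimal sketch; the bucket paths and partition counts are placeholders, not from the original post):

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    # Fewer shuffle partitions (default is 200) means fewer output files
    # from wide transformations such as joins and aggregations.
    .config("spark.sql.shuffle.partitions", "20")
    .getOrCreate()
)

df = spark.read.parquet("s3://my-bucket/input/")  # placeholder path

# coalesce() merges existing partitions without a full shuffle;
# repartition(n) shuffles but yields evenly sized partitions.
# Either way, each resulting partition becomes one output file.
df.coalesce(10).write.mode("overwrite").parquet("s3://my-bucket/output/")
```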

That being said, you might not want to do this, as it will slow your job down (fewer partitions to parallelize on), and whatever is consuming these files might also be slowed. For example, if you are loading these Parquet files into Redshift, it is certainly better to have multiple files so the load can be parallelized. Most consumers will prefer multiple files for the same reason.

tjtoll
Answered 8 months ago
AWS
Expert
Reviewed 8 months ago

Since you are using a visual job, add the "Autobalance Processing" transform before you save. In its optional box you can enter the number of files, but it's better to leave it empty; the component will optimize performance while keeping a reasonable number of files.
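
If you are working in script mode instead, a rough equivalent is to coalesce the DynamicFrame's underlying DataFrame before writing. This is only a sketch under the assumption of a catalog source and an S3 Parquet sink, not the exact code the Autobalance Processing transform generates; the database, table, and path names are placeholders:

```python
from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from pyspark.context import SparkContext

glue_context = GlueContext(SparkContext.getOrCreate())

# Placeholder catalog database/table; substitute your own.
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="my_db", table_name="my_table"
)

# Coalesce the underlying DataFrame to cap the number of output files,
# then wrap it back into a DynamicFrame for the Glue sink.
balanced = DynamicFrame.fromDF(dyf.toDF().coalesce(10), glue_context, "balanced")

glue_context.write_dynamic_frame.from_options(
    frame=balanced,
    connection_type="s3",
    connection_options={"path": "s3://my-bucket/output/"},  # placeholder
    format="parquet",
)
```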

AWS
Expert
Answered 8 months ago
