How can I configure an AWS Glue ETL job to output larger files?
I want to configure an AWS Glue ETL job to output a small number of large files instead of a large number of small files.
Use any of the following methods to reduce the number of output files for an AWS Glue ETL job.
Increase the value of the groupSize parameter
Grouping is automatically enabled when you use dynamic frames and the Amazon Simple Storage Service (Amazon S3) dataset has more than 50,000 files. Increase the groupSize value to create fewer, larger output files. For more information, see Reading input files in larger groups.
In the following example, groupSize is set to 10485760 bytes (10 MB):
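A minimal sketch of that read, assuming a JSON dataset under a hypothetical bucket path (s3://amzn-s3-demo-bucket/input/); adjust the path and format for your data:

from pyspark.context import SparkContext
from awsglue.context import GlueContext

glue_context = GlueContext(SparkContext.getOrCreate())

# Group input files into partitions of roughly 10 MB (10485760 bytes).
# Setting groupFiles to "inPartition" enables grouping even when the
# dataset has fewer than 50,000 files.
dyf = glue_context.create_dynamic_frame_from_options(
    connection_type="s3",
    connection_options={
        "paths": ["s3://amzn-s3-demo-bucket/input/"],
        "groupFiles": "inPartition",
        "groupSize": "10485760",
    },
    format="json",
)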
Use coalesce()

Use the coalesce() method to reduce the number of Spark partitions before you write the output. Note: coalesce() performs Spark data shuffles, which can significantly increase the job run time.
If you specify a small number of partitions, then the job might fail. For example, if you run coalesce(1), Spark tries to put all data into a single partition. This can lead to disk space issues.
You can also use repartition() to decrease the number of partitions. However, repartition() reshuffles all the data, whereas coalesce() uses existing partitions to minimize data movement. For more information about repartition(), see Spark Repartition on the eduCBA website.
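As a minimal sketch, assuming a DataFrame named df that the job produced earlier and a hypothetical S3 output path:

# Check how many partitions the DataFrame currently has
print(df.rdd.getNumPartitions())

# Merge the existing partitions down to 10 before writing. Unlike
# repartition(10), coalesce(10) avoids a full shuffle of the data.
df.coalesce(10).write.mode("overwrite").parquet("s3://amzn-s3-demo-bucket/output/")

If your job works with a dynamic frame, convert it to a DataFrame first with dyf.toDF().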
Use maxRecordsPerFile

Use the Spark write() method to control the maximum record count per file. The following example sets the maximum record count to 20:
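A minimal sketch, again assuming a DataFrame df and a hypothetical S3 output path:

# Limit each output file to at most 20 records
df.write.option("maxRecordsPerFile", 20).mode("overwrite").parquet(
    "s3://amzn-s3-demo-bucket/output/"
)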
Note: The maxRecordsPerFile option sets only an upper limit: each output file contains at most the specified number of records, but files can contain fewer. If the value is zero or negative, then there is no limit.