The number of output files correlates with the number of partitions Spark is processing in your pipeline. You could look at settings like spark.sql.shuffle.partitions, or you could repartition your DataFrame to fewer partitions.
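For illustration, here is a minimal PySpark sketch of both approaches; the bucket paths, DataFrame, and partition counts are hypothetical placeholders, not values from the original question:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("reduce-output-files").getOrCreate()

# Option 1: lower the number of shuffle partitions so stages after a
# shuffle (and therefore the files written) use fewer partitions.
spark.conf.set("spark.sql.shuffle.partitions", "50")

df = spark.read.parquet("s3://my-bucket/input/")  # hypothetical path

# Option 2: explicitly reduce partitions right before writing.
# coalesce() avoids a full shuffle; repartition() redistributes rows evenly.
df.coalesce(10).write.mode("overwrite").parquet("s3://my-bucket/output/")
```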
That being said, you might not want to do this, as it will slow your job down (fewer partitions to parallelize on), and whatever consumes these files may also be slowed. For example, if you are loading these Parquet files into Redshift, it is certainly better to have multiple files so the load can be parallelized. Most consumers will prefer multiple files for the same reason.
Since you are using a visual job, add the "Autobalance Processing" component before you save. In the optional box you can enter the number of files, but it is better to leave it empty; the component will optimize performance while keeping a reasonable number of files.