Hi,
Small disclaimer: I have not tested this, so my theory is unproven.
My understanding is that you are repartitioning the data into a single partition (to produce one file) using the repartition or coalesce command.
Now consider that Spark runs on a distributed cluster, where each partition is handled by a different executor. So in a normal execution, even if the data is sorted while it is being read from Oracle, it may be split across executors and re-merged afterwards without preserving the sort order. This is why the data is not sorted when Auto Scaling is unchecked.
When Auto Scaling is enabled, you are telling Glue to start only the number of executors actually needed. Combined with Spark's lazy evaluation and your repartition(1), this could lead Glue to start a single executor, which would then read and write the data in your sorted order.
To validate this, you could look at the Spark UI for the two jobs and see how many executors are running at any time during the job.
Hope this helps,
Thank you for your reply. Your explanation was a great help.