Glue job does not sort data unless "Automatically scale the number of workers" is checked


I have created an ETL job in AWS Glue Studio that executes the steps below (a rough script sketch follows the list).

  1. Reading the data source, an Oracle database table, through a Glue Data Catalog table.
  2. Executing the SQL statement "select * from tableA order by col1".
  3. Repartitioning the DynamicFrame to 1 partition so there is a single output file.
  4. Writing the DynamicFrame to a csv file.
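
For reference, here is roughly what those steps look like as a Glue ETL script. This is only a sketch, not my actual generated script, and the database, table, and S3 path names are placeholders:

```python
import sys
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from awsglue.job import Job
from awsglue.utils import getResolvedOptions

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# 1. Read the Oracle table through the Glue Data Catalog (names are placeholders).
source_dyf = glue_context.create_dynamic_frame.from_catalog(
    database="my_database", table_name="tablea"
)

# 2. Run the SQL statement against a temporary view of the source.
source_dyf.toDF().createOrReplaceTempView("tableA")
sorted_df = spark.sql("select * from tableA order by col1")

# 3. Repartition down to a single partition so only one file is written.
single_partition_df = sorted_df.repartition(1)

# 4. Write the result out as CSV (the S3 path is a placeholder).
output_dyf = DynamicFrame.fromDF(single_partition_df, glue_context, "output")
glue_context.write_dynamic_frame.from_options(
    frame=output_dyf,
    connection_type="s3",
    connection_options={"path": "s3://my-bucket/output/"},
    format="csv",
)

job.commit()
```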

With this job, if I check "Automatically scale the number of workers", the output data is sorted.

But if I leave that option unchecked, the output data is NOT sorted (the "order by" clause appears to have no effect).

What is the cause of this phenomenon?

Thank you.

Asked 1 year ago · Viewed 363 times
1 Answer
Accepted Answer

Hi,

Small disclaimer: I have not tested this, so my theory is unproven.

My understanding is that you are repartitioning the data to 1 partition (to get a single file) using the repartition or coalesce command.

Now consider that Spark runs on a distributed cluster and each partition is handled by a different executor. In a normal execution, even if the data is sorted when it is read from Oracle, it may be split across executors and re-merged afterwards without preserving the sort order. This is why, with Auto Scaling unchecked, the data is not sorted.

When Auto Scaling is enabled, you are telling Glue to start only the number of executors that are actually needed. This, combined with Spark's lazy evaluation and your repartition(1), could lead Glue to start a single executor and thus read and write the data in your sorted order.
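
To illustrate the idea with a minimal PySpark sketch (assumed names, not your actual job script): the order coming out of the SQL step can be lost by the shuffle that repartition(1) triggers, while sorting on the Spark side after collapsing to one partition stays deterministic regardless of how many executors took part in the read:

```python
# Minimal sketch (assumed column name "col1"): ordering from an upstream
# "order by" is not guaranteed to survive a repartition, because repartition
# shuffles rows across executors.
sorted_df = spark.sql("select * from tableA order by col1")

# May or may not come out sorted, depending on how many executors take part
# in the shuffle.
maybe_sorted = sorted_df.repartition(1)

# Deterministic alternative: collapse to one partition without a full shuffle,
# then sort the rows inside that single partition.
always_sorted = sorted_df.coalesce(1).sortWithinPartitions("col1")
```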

To validate this, you could look at the Spark UI for the two jobs and see how many executors are running at any time during the job.

Hope this helps,

AWS
EXPERT
Answered 1 year ago
