Glue job does not sort data without "Automatically scale the number of workers" checked


I have created an ETL job in AWS Glue Studio that executes the steps below (a simplified sketch of the script is shown after the list).

  1. Reading data from an Oracle database table through a Glue Data Catalog table.
  2. Executing the SQL statement "select * from tableA order by col1".
  3. Repartitioning the DynamicFrame into a single partition (one output file).
  4. Writing the DynamicFrame to a CSV file.
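
Here is a rough sketch of what the script does. This is not the exact Glue Studio-generated code (Glue Studio wraps the SQL step in a generated helper), and the database name, S3 path, and similar identifiers are placeholders:

```python
import sys
from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args["JOB_NAME"], args)

# 1. Read the Oracle table through the Glue Data Catalog.
source = glueContext.create_dynamic_frame.from_catalog(
    database="my_database",   # placeholder
    table_name="tableA",      # placeholder
    transformation_ctx="source",
)

# 2. Run the SQL statement with the ORDER BY clause.
source.toDF().createOrReplaceTempView("tableA")
sorted_df = spark.sql("select * from tableA order by col1")

# 3. Repartition to a single partition so only one file is written.
single_df = sorted_df.repartition(1)

# 4. Write the result as a CSV file.
out = DynamicFrame.fromDF(single_df, glueContext, "out")
glueContext.write_dynamic_frame.from_options(
    frame=out,
    connection_type="s3",
    connection_options={"path": "s3://my-bucket/output/"},  # placeholder
    format="csv",
)

job.commit()
```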

With this job, if "Automatically scale the number of workers" is checked, the output data is sorted.

But if the option is unchecked, the output data is NOT sorted (the "order by" clause has no effect).

What is the cause of this phenomenon?

Thank you.

asked a year ago
1 Answer
Accepted Answer

Hi,

Small disclaimer: I have not tested this, so my theory is not proven.

My understanding is that you are repartitioning the data into 1 partition (to get a single file) using the repartition or coalesce command.

Now you have to consider that Spark runs on a distributed cluster and each partition is handled by a different executor. In a normal execution, even if the data read from Oracle is sorted during ingestion, it may be split across executors and re-merged afterwards without preserving the sort order. This is why, with Autoscaling unchecked, the data is not sorted.

Now, when Autoscaling is enabled, you are telling Glue to start only the number of executors that are actually needed. This, combined with Spark's lazy evaluation and your repartition(1), could lead Glue to start only one executor and thus read and write the data in your sorted order.

To validate this, you could look at the Spark UI for the two jobs and see how many executors are running at any time during the job.
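
If you need the single output file to be sorted regardless of how many executors are started, one possible workaround (again, not tested on your job, and assuming the DataFrame produced by your SQL step is called sorted_df) is to collapse to one partition first and then sort within that partition, so the final order no longer depends on a shuffle:

```python
from awsglue.dynamicframe import DynamicFrame

# Collapse to a single partition, then sort inside that partition so the
# one output file is written in col1 order independently of executor count.
single_sorted_df = sorted_df.coalesce(1).sortWithinPartitions("col1")

# Convert back to a DynamicFrame for the CSV sink, as in the original job.
out = DynamicFrame.fromDF(single_sorted_df, glueContext, "single_sorted")
```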

hope this helps,

AWS
EXPERT
answered a year ago
