An error occurred while calling o352.pyWriteDynamicFrame. Job 1 cancelled because SparkContext was shut down caused by threshold for consecutive task creation reached

Hi, I have a Glue job script that ingests tables from a Postgres database into an AWS Glue Data Catalog database. The ingestion works in three steps (a minimal sketch follows the list):

  1. Read the Postgres table into a Spark DataFrame
  2. Convert the Spark DataFrame to a Glue DynamicFrame
  3. Write the DynamicFrame directly to a catalog table with the sink's writeFrame()
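
For reference, here is a minimal sketch of what the script does. The job argument names, JDBC options, catalog database, and S3 path are illustrative placeholders, not the actual job's values:

```python
import sys

from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Hypothetical job arguments; the real job passes the table name and target
# location as run parameters.
args = getResolvedOptions(sys.argv, ["JOB_NAME", "source_table", "target_path"])

sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session

# 1. Read the Postgres table into a Spark DataFrame (JDBC details are placeholders).
df = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://<host>:5432/<database>")
    .option("dbtable", args["source_table"])
    .option("user", "<user>")
    .option("password", "<password>")
    .option("driver", "org.postgresql.Driver")
    .load()
)

# 2. Convert the Spark DataFrame to a Glue DynamicFrame.
dyf = DynamicFrame.fromDF(df, glueContext, "dyf")

# 3. Write the DynamicFrame through a Data Catalog-updating S3 sink.
sink = glueContext.getSink(
    connection_type="s3",
    path=args["target_path"],
    enableUpdateCatalog=True,
    updateBehavior="UPDATE_IN_DATABASE",
    transformation_ctx="sink",
)
sink.setCatalogInfo(
    catalogDatabase="<catalog_database>",
    catalogTableName=args["source_table"],
)
sink.setFormat("glueparquet")
sink.writeFrame(dyf)
```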

We set "Maximum concurrency" to 8 for this job. Another Glue job, run as a workflow, triggers this job 8 times with different parameters to ingest 8 tables simultaneously (roughly as sketched after the error message below), and the 8 concurrent runs use about 100 DPUs in total. Sometimes all of the runs succeed, but sometimes some of them fail with the following error:

An error occurred while calling o352.pyWriteDynamicFrame. Job 1 cancelled because SparkContext was shut down caused by threshold for consecutive task creation reached
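
For context, the parent workflow job kicks off the 8 concurrent runs roughly like this; the job name, argument keys, and table names below are assumptions for illustration, not our actual configuration:

```python
import boto3

glue = boto3.client("glue")

# Illustrative table list; in our workflow each run receives its own parameters.
tables = ["table_a", "table_b", "table_c", "table_d",
          "table_e", "table_f", "table_g", "table_h"]

for table in tables:
    # One job run per table; with "Maximum concurrency" = 8 they run simultaneously.
    glue.start_job_run(
        JobName="postgres-ingestion-job",  # placeholder job name
        Arguments={
            "--source_table": table,
            "--target_path": f"s3://<bucket>/{table}/",
        },
    )
```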

The above error message indicates the job failed while calling o352.pyWriteDynamicFrame, but the same error has also occurred while calling o93.purgeS3Path, so I don't think it is tied to a specific call in the job; it seems more likely related to the job configuration. I couldn't find any answer on this online. I also checked our service quotas and don't believe the jobs exceed any limits, such as the maximum number of concurrently running DPUs or the maximum number of concurrent job runs. Do you have any suggestions on why this happens and how to fix it? Should I set "Maximum concurrency" to a higher number, such as 16, for this job?
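
For completeness, the purgeS3Path call that the same error has surfaced from is presumably the standard GlueContext call, along these lines (the S3 path and retention period are assumptions, not the job's actual values):

```python
# Assumed cleanup step before rewriting a table's data.
# "glueContext" is the same GlueContext created in the ingestion sketch above;
# the S3 path and retentionPeriod value are illustrative.
glueContext.purge_s3_path(
    "s3://<bucket>/<table_prefix>/",
    options={"retentionPeriod": 0},
)
```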

Asked 2 years ago · 404 views
No answers
