An error occurred while calling o352.pyWriteDynamicFrame. Job 1 cancelled because SparkContext was shut down caused by threshold for consecutive task creation reached


Hi, I have a Glue job script that ingests tables from a Postgres database into an AWS Glue Data Catalog database. The ingestion has these steps (a rough sketch of the script follows the list):

  1. Read the Postgres tables into a Spark DataFrame
  2. Convert the Spark DataFrame to a Glue DynamicFrame
  3. Write the DynamicFrame directly to a table using the sink's writeFrame()
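The relevant part of the script looks roughly like this. This is only a minimal sketch: the JDBC connection details, the S3 path, the catalog database name, and the `table_name` argument are placeholders, and the real script may read through a Glue connection or catalog source rather than a raw JDBC read.

```python
import sys

from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from awsglue.dynamicframe import DynamicFrame

# Job parameters passed in by the triggering workflow ("table_name" is a hypothetical argument name)
args = getResolvedOptions(sys.argv, ["JOB_NAME", "table_name"])

sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args["JOB_NAME"], args)

# 1. Read the Postgres table into a Spark DataFrame over JDBC (placeholder connection details)
df = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://<host>:5432/<database>")
    .option("dbtable", args["table_name"])
    .option("user", "<user>")
    .option("password", "<password>")
    .load()
)

# 2. Convert the Spark DataFrame to a Glue DynamicFrame
dyf = DynamicFrame.fromDF(df, glueContext, "dyf")

# 3. Write the DynamicFrame through a Glue sink, updating the Data Catalog table
sink = glueContext.getSink(
    connection_type="s3",
    path="s3://my-ingest-bucket/" + args["table_name"],  # hypothetical target path
    enableUpdateCatalog=True,
    updateBehavior="UPDATE_IN_DATABASE",
)
sink.setCatalogInfo(catalogDatabase="my_catalog_db", catalogTableName=args["table_name"])
sink.setFormat("glueparquet")
sink.writeFrame(dyf)

job.commit()
```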

We set the "Maximum concurrency" to 8 for this job. We have another Glue job running as a workflow which triggers this job 8 times to ingest 8 tables simultaneously with different parameters. The total number of DPUs of the 8 concurrent job runs is around 100. Sometimes, the jobs ran successfully. But sometimes, some of the jobs succeeded but some failed with the following error:

An error occurred while calling o352.pyWriteDynamicFrame. Job 1 cancelled because SparkContext was shut down caused by threshold for consecutive task creation reached

The error message above indicates the job failed while calling o352.pyWriteDynamicFrame, but the same failure has also occurred while calling o93.purgeS3Path. So I don't think it is tied to a specific function in the job; it seems more likely related to the job configuration. I couldn't find any answer on this online. I also checked our service quotas and don't think the jobs exceed any limits, such as the maximum number of concurrently running DPUs or the maximum number of concurrent job runs. Do you have any suggestions on why this happens and how to fix it? Should I set "Maximum concurrency" to a higher number, like 16, for this job?
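For context, the fan-out from the controller job looks roughly like the sketch below. This assumes the controller starts the runs with boto3 `start_job_run`; the job name, table list, and `--table_name` argument are placeholders, not the real values.

```python
import boto3

glue = boto3.client("glue")

# Hypothetical list of the 8 tables ingested in parallel
tables = ["table_1", "table_2", "table_3", "table_4",
          "table_5", "table_6", "table_7", "table_8"]

for table in tables:
    # Each run counts against the ingestion job's "Maximum concurrency" (MaxConcurrentRuns = 8)
    glue.start_job_run(
        JobName="postgres-to-catalog-ingest",   # placeholder job name
        Arguments={"--table_name": table},      # placeholder parameter name
    )
```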

Asked 2 years ago · 391 views
No answers
