An error occurred while calling o352.pyWriteDynamicFrame. Job 1 cancelled because SparkContext was shut down caused by threshold for consecutive task creation reached


Hi, I have a Glue job script that ingests tables from a Postgres database into an AWS Glue Data Catalog database. Here are the steps of the ingestion (a minimal sketch of these steps follows the list):

  1. Read the Postgres tables into a Spark DataFrame
  2. Convert the Spark DataFrame to a Glue DynamicFrame
  3. Write the DynamicFrame directly to the target table using the sink's writeFrame()
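
For reference, this is roughly what the job does. It is only a sketch: the JDBC endpoint, credentials, S3 path, catalog database name, and the `table_name` job argument are placeholders, not our actual values.

```python
import sys
from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME", "table_name"])
sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# 1. Read the Postgres table into a Spark DataFrame over JDBC (placeholder connection details).
df = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://<host>:5432/<db>")
    .option("dbtable", args["table_name"])
    .option("user", "<user>")
    .option("password", "<password>")
    .load()
)

# 2. Convert the Spark DataFrame to a Glue DynamicFrame.
dyf = DynamicFrame.fromDF(df, glue_context, "dyf")

# 3. Write the DynamicFrame through a Data Catalog sink (placeholder path and database).
sink = glue_context.getSink(
    connection_type="s3",
    path="s3://<bucket>/<prefix>/",
    enableUpdateCatalog=True,
    updateBehavior="UPDATE_IN_DATABASE",
)
sink.setFormat("glueparquet")
sink.setCatalogInfo(catalogDatabase="<catalog_db>", catalogTableName=args["table_name"])
sink.writeFrame(dyf)

job.commit()
```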

We set the "Maximum concurrency" to 8 for this job. We have another Glue job running as a workflow which triggers this job 8 times to ingest 8 tables simultaneously with different parameters. The total number of DPUs of the 8 concurrent job runs is around 100. Sometimes, the jobs ran successfully. But sometimes, some of the jobs succeeded but some failed with the following error:

An error occurred while calling o352.pyWriteDynamicFrame. Job 1 cancelled because SparkContext was shut down caused by threshold for consecutive task creation reached

The above error message indicates that the job failed while calling o352.pyWriteDynamicFrame, but the same error has also occurred while calling o93.purgeS3Path. So I don't think it's related to a specific function in the job; it seems more likely to be related to the job configuration. I couldn't find any answer on this online. I also checked our service quotas and don't think the jobs exceed any limits, such as the maximum number of concurrently running DPUs or the maximum number of concurrent job runs. Do you have any suggestions on why this happens and how to fix it? Should I set "Maximum concurrency" to a higher number, like 16, for the job?

Asked 2 years ago · 391 views
No answers
