AWS Glue CoarseGrainedExecutorBackend ERRORS


Hi, I have a number of Glue jobs, and they are up and running. Starting from 2023/08/14 I'm getting a lot of errors from CoarseGrainedExecutorBackend like this: ERROR CoarseGrainedExecutorBackend: Executor self-exiting due to : Driver x.x.x.x:x disassociated! Shutting down.

These errors greatly affect monitoring (no logs are available after the driver is disassociated). Changing the worker type from G.1X to G.2X didn't help.

Any ideas how to fix it?

asked 8 months ago · 933 views
1 Answer

Hello,

I understand that you are seeing the below error in the CloudWatch logs for multiple Glue jobs since 14th August 2023:

ERROR CoarseGrainedExecutorBackend: Executor self-exiting due to : Driver x.x.x.x:x disassociated! Shutting down.

This error can occur for multiple reasons, one of them being insufficient memory. Other causes of the driver becoming unresponsive are:

  1. Calling .collect() or .show(), which makes all executors send their data to the driver, so the entire dataset is loaded into the driver's memory; or
  2. Reading a very large number of small files, so the driver is not able to "track" all the files (see the sketch below for one way to mitigate both cases).
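
For illustration, here is a minimal PySpark sketch of both mitigations. It is not your job script: the S3 paths, the JSON format and the group size are placeholders, and it assumes the small files are read from S3 with a Glue DynamicFrame so that the groupFiles/groupSize connection options apply.

    import sys
    from pyspark.context import SparkContext
    from awsglue.context import GlueContext

    sc = SparkContext.getOrCreate()
    glue_context = GlueContext(sc)

    # Cause 2 mitigation: group many small S3 files into larger input splits
    # so the driver has fewer files/tasks to track. Paths and sizes are
    # placeholders for this example.
    dyf = glue_context.create_dynamic_frame.from_options(
        connection_type="s3",
        connection_options={
            "paths": ["s3://my-example-bucket/input/"],  # hypothetical path
            "groupFiles": "inPartition",
            "groupSize": "134217728",                    # ~128 MB per group
        },
        format="json",
    )

    df = dyf.toDF()

    # Cause 1 mitigation: avoid df.collect() / df.show() on large data;
    # inspect a small sample instead and let the executors write the full
    # result back to S3.
    print(df.take(20))
    df.write.mode("overwrite").parquet("s3://my-example-bucket/output/")  # hypothetical path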

You can refer to the following external article for workarounds to mitigate this error, if they fit your use case [1].

However, without inspecting the logs for the specific job run ID, we cannot provide a conclusive root cause for this error. To let us troubleshoot further by checking the Glue job details with our backend tools, please open a support case with AWS using the following link [2], attaching the sanitized script, the job run ID, the Spark UI logs for the job run, and any additional information as needed, and we would be happy to help.
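
As a side note on the monitoring concern, the Spark UI logs and the driver/executor logs can be preserved even after the driver disassociates by enabling the Spark UI event log and continuous CloudWatch logging on the job. Below is a minimal boto3 sketch of creating a job with those arguments; the job name, role ARN, S3 paths and region are placeholders, and the same arguments can equally be set on an existing job in the Glue console under Job details.

    import boto3

    glue = boto3.client("glue", region_name="us-east-1")  # hypothetical region

    # Hypothetical job definition: name, role ARN and S3 paths are placeholders.
    glue.create_job(
        Name="my-example-job",
        Role="arn:aws:iam::123456789012:role/MyGlueJobRole",
        Command={
            "Name": "glueetl",
            "ScriptLocation": "s3://my-example-bucket/scripts/job.py",
            "PythonVersion": "3",
        },
        DefaultArguments={
            # Persist Spark event logs to S3 so the Spark UI can still be
            # inspected after the driver has gone away.
            "--enable-spark-ui": "true",
            "--spark-event-logs-path": "s3://my-example-bucket/spark-ui-logs/",
            # Stream driver and executor logs to CloudWatch in near real time,
            # so they are not lost when the driver disassociates.
            "--enable-continuous-cloudwatch-log": "true",
        },
        GlueVersion="4.0",
        WorkerType="G.2X",
        NumberOfWorkers=10,
    )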

Thank you!

References:

  1. https://indatawetrust.blog/2017/09/11/spark-error-coarsegrainedexecutorbackend-driver-disassociated-shutting-down/
  2. https://support.console.aws.amazon.com/support/home#/case/create
AWS SUPPORT ENGINEER · answered 8 months ago
  • @Anyshka S, thank you very much for your reply. I went through the resource you referred to and tried increasing driver memory, executor memory and memoryOverhead, but it didn't give any result.

    1. No, not my case; some of my jobs have no .collect() or .show() and still consistently end with this error.
    2. Also not my case.
