Hello,
I understand that you have been seeing the following error in the CloudWatch logs for multiple Glue jobs since 14th August 2023:
ERROR CoarseGrainedExecutorBackend: Executor self-exiting due to : Driver x.x.x.x:x disassociated! Shutting down.
This error can appear for multiple reasons, one of them being insufficient memory. Other causes of the driver becoming unresponsive are:
- Calling .collect() (or .show() on a large dataset), which makes every executor send its data to the driver, so all of the data is loaded into driver memory; or
- Reading a very large number of small files, so the driver is not able to track all of the input splits.
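If the small-files cause fits your job, Glue's file-grouping options can reduce how many input splits the driver has to track. A minimal sketch, assuming a hypothetical S3 path (`s3://my-bucket/input/`) and JSON input; this fragment only runs inside a Glue job, so treat it as illustrative rather than a drop-in fix:

```python
# Illustrative AWS Glue ETL fragment (runs only in a Glue job, not locally).
from pyspark.context import SparkContext
from awsglue.context import GlueContext

glue_context = GlueContext(SparkContext.getOrCreate())

# groupFiles/groupSize coalesce many small S3 objects into larger read
# groups, so the driver tracks far fewer tasks. groupSize is in bytes.
dyf = glue_context.create_dynamic_frame.from_options(
    connection_type="s3",
    connection_options={
        "paths": ["s3://my-bucket/input/"],   # placeholder path
        "groupFiles": "inPartition",
        "groupSize": "134217728",             # ~128 MB per group
    },
    format="json",
)
```

Similarly, for the first cause, prefer `df.take(n)` or `df.show(n)` with a small `n` over `df.collect()`, so only a bounded number of rows is ever pulled to the driver.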
You can refer to the following external article [1] for workarounds that may mitigate this error if they fit your use case.
However, without inspecting the logs for this job run ID, we cannot determine a conclusive root cause for the error. So that we can troubleshoot further by checking the Glue job details with our backend tools, please feel free to open a support case with AWS using the following link [2], attaching the sanitized script, the job run ID, the Spark UI logs for the job run, and any additional relevant information, and we would be happy to help.
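For the Spark UI logs mentioned above: if they are not already being produced, they can be enabled per job via Glue's documented special job parameters. A minimal sketch (the S3 path is a placeholder you would replace with your own bucket):

```
--enable-spark-ui          true
--spark-event-logs-path    s3://my-bucket/spark-ui-logs/
```

These are set under the job's parameters (console: Job details → Job parameters, or DefaultArguments in the API); the event logs written to that path can then be attached to the support case.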
Thank you!
References:
@Anyshka S, thank you very much for your reply. I went through the resource you referred to and tried increasing the driver memory, executor memory, and memoryOverhead, but it didn't help.