AWS EMR [System Error] Fail to delete the temp folder


In AWS EMR, I encountered the following error message when running a pyspark job that runs successfully on my local machine.

[System Error] Fail to delete the temp folder

Is there a way to troubleshoot this? Could this be a permissions issue with the temp folder that EMR uses across all jobs?

asked 4 months ago · 207 views
1 Answer
Accepted Answer

Hello,

Yes, this looks like either a permissions issue or the temp files still being in use.

  1. Check whether you can open the pyspark shell as the hadoop user on EMR, or try sudo pyspark as the hadoop user.
  2. Check whether spark-shell works without the issue, instead of the pyspark shell.
  3. Point spark.local.dir to a different local directory on the primary node to see if this fixes the issue (a sketch follows after this list).
  4. Restart the Spark service (sudo systemctl restart spark-history-server.service).
  5. Set the log level to debug (rootLogger.level = debug) in the log4j file /etc/spark/conf/log4j2.properties and retry the pyspark shell. This may give more insight into the issue (a session-level alternative is also sketched below).
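
For item 3, a minimal pyspark sketch is below. The path /mnt/tmp/spark-local is a hypothetical example; use any local directory on the primary node that the hadoop user can write to. Note that on YARN the executors' scratch directories come from YARN's own local-dirs settings, so this mainly affects the driver side; passing the same value with --conf when launching pyspark, or setting it in spark-defaults.conf, are equivalent options.

    from pyspark.sql import SparkSession

    # Hypothetical alternate scratch directory; replace /mnt/tmp/spark-local
    # with any local path on the primary node that the hadoop user can write to.
    spark = (
        SparkSession.builder
        .appName("local-dir-check")
        .config("spark.local.dir", "/mnt/tmp/spark-local")
        .getOrCreate()
    )

    # Run a trivial action to confirm the session works with the new scratch dir.
    print(spark.range(10).count())

    spark.stop()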
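
For item 5, besides editing /etc/spark/conf/log4j2.properties, you can raise the driver's log verbosity for a single session with SparkContext.setLogLevel. This complements the log4j change rather than replacing it, but it is often enough to surface the stack trace behind the temp-folder error.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("debug-logging-check").getOrCreate()

    # Raise driver-side log verbosity for this session only; this complements
    # the rootLogger.level = debug change in log4j2.properties.
    spark.sparkContext.setLogLevel("DEBUG")

    # Re-run the failing step here and watch the driver logs for the
    # "Fail to delete the temp folder" stack trace.
    spark.range(5).collect()

    spark.stop()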
AWS SUPPORT ENGINEER
answered 4 months ago
