How do I resolve "Container killed on request. Exit code is 137" errors in Spark on Amazon EMR?


My Apache Spark job on Amazon EMR fails with a "Container killed on request" stage failure similar to the following:

Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 2 in stage 3.0 failed 4 times, most recent failure: Lost task 2.3 in stage 3.0 (TID 23, ip-xxx-xxx-xx-xxx.compute.internal, executor 4): ExecutorLostFailure (executor 4 exited caused by one of the running tasks) Reason: Container marked as failed: container_1516900607498_6585_01_000008 on host: ip-xxx-xxx-xx-xxx.compute.internal. Exit status: 137. Diagnostics: Container killed on request. Exit code is 137

Short description

When a container (Spark executor) runs out of memory, YARN automatically kills it. This causes the "Container killed on request. Exit code is 137" error. These errors can happen in different job stages, in both narrow and wide transformations. The operating system's oom_reaper can also kill YARN containers when the OS itself runs out of memory, which produces the same error.

Resolution

Use one or more of the following methods to resolve "Exit status: 137" stage failures.

Increase driver or executor memory

Increase container memory by tuning the spark.executor.memory or spark.driver.memory parameters (depending on which container caused the error).

On a running cluster:

Modify spark-defaults.conf on the master node. Example:

sudo vim /etc/spark/conf/spark-defaults.conf
spark.executor.memory 10g
spark.driver.memory 10g

For a single job:

Use the --executor-memory or --driver-memory option to increase memory when you run spark-submit. Example:

spark-submit --executor-memory 10g --driver-memory 10g ...
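If you start the Spark session from application code instead of through spark-submit, you can pass the same setting through the session builder. The following is a minimal Scala sketch; the application name and the 10g value are illustrative:

import org.apache.spark.sql.SparkSession

// spark.executor.memory must be set before executors launch, so pass it when the
// session is built. Driver memory usually has to be set through spark-submit or
// spark-defaults.conf, because the driver JVM is already running at this point.
val spark = SparkSession.builder()
  .appName("memory-tuning-example")        // illustrative application name
  .config("spark.executor.memory", "10g")  // illustrative value
  .getOrCreate()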

Add more Spark partitions

If you can't increase container memory (for example, if you're using maximizeResourceAllocation on the cluster), then increase the number of Spark partitions. Doing this reduces the amount of data that a single Spark task processes, which reduces the overall memory that a single executor uses. Use the following Scala code to add more Spark partitions:

val numPartitions = 500
val newDF = df.repartition(numPartitions)
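To confirm the effect, you can compare the partition counts before and after repartitioning. This short Scala sketch continues the example above (df and newDF come from the preceding snippet):

// Partition counts before and after repartitioning
println(s"before: ${df.rdd.getNumPartitions}")
println(s"after: ${newDF.rdd.getNumPartitions}")   // 500, matching numPartitions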

Increase the number of shuffle partitions

If the error happens during a wide transformation (for example, a join or groupBy), increase the number of shuffle partitions. The default value of spark.sql.shuffle.partitions is 200.

On a running cluster:

Modify spark-defaults.conf on the master node. Example:

sudo vim /etc/spark/conf/spark-defaults.conf
spark.sql.shuffle.partitions 500

For a single job:

Use the --conf spark.sql.shuffle.partitions option to add more shuffle partitions when you run spark-submit. Example:

spark-submit --conf spark.sql.shuffle.partitions=500 ...
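Because spark.sql.shuffle.partitions can be changed on a running session, you can also set it from within the job before the wide transformation runs. The following is a minimal Scala sketch; the value 500 and the column name are illustrative:

// Applies to shuffles that are triggered after the property is set
spark.conf.set("spark.sql.shuffle.partitions", "500")

// Subsequent wide transformations, such as this illustrative aggregation,
// now use 500 shuffle partitions instead of the default 200.
val aggregated = df.groupBy("customer_id").count()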

Reduce the number of executor cores

Reducing the number of executor cores reduces the maximum number of tasks that the executor processes simultaneously. Doing this reduces the amount of memory that the container uses.

On a running cluster:

Modify spark-defaults.conf on the master node. Example:

sudo vim /etc/spark/conf/spark-defaults.conf
spark.executor.cores  1

For a single job:

Use the --executor-cores option to reduce the number of executor cores when you run spark-submit. Example:

spark-submit --executor-cores 1 ...
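As with executor memory, you can set the core count when the session is built if you don't use spark-submit. The following is a minimal Scala sketch; the application name is illustrative:

import org.apache.spark.sql.SparkSession

// With spark.executor.cores set to 1, each executor runs a single task at a time,
// so that task no longer shares the executor's memory with other concurrent tasks.
val spark = SparkSession.builder()
  .appName("executor-cores-example")   // illustrative application name
  .config("spark.executor.cores", "1")
  .getOrCreate()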

Increase instance size

YARN containers can also be killed by the OS oom_reaper when the operating system is running out of memory. If this error happens because of oom_reaper, use a larger instance type with more RAM. You can also lower yarn.nodemanager.resource.memory-mb so that YARN containers can't use all of the Amazon EC2 instance's RAM.
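For example, on a new cluster you can lower this setting with a yarn-site configuration classification. The following is a minimal sketch; the value shown is illustrative, and Amazon EMR already sets a default for each instance type, so choose a value below that default:

[
  {
    "Classification": "yarn-site",
    "Properties": {
      "yarn.nodemanager.resource.memory-mb": "57344"
    }
  }
]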

To confirm that the error is caused by oom_reaper, review the dmesg command output in your Amazon EMR instance state logs. Start by finding the core or task node where the killed YARN container was running. You can find this information in the YARN ResourceManager UI or logs. Then, check the instance state logs on that node from before and after the container was killed to see what killed the process.

In the following example, process 36787, which corresponds to YARN container_165487060318_0001_01_000244, was killed by the kernel (the Linux OOM killer):

# Check the most recent kernel messages for OOM killer activity
dmesg | tail -n 25

[ 3910.032284] Out of memory: Kill process 36787 (java) score 96 or sacrifice child
[ 3910.043627] Killed process 36787 (java) total-vm:15864568kB, anon-rss:13876204kB, file-rss:0kB, shmem-rss:0kB
[ 3910.748373] oom_reaper: reaped process 36787 (java), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB

Related information

How do I resolve the error "Container killed by YARN for exceeding memory limits" in Spark on Amazon EMR?

How can I troubleshoot stage failures in Spark jobs on Amazon EMR?
