How do I resolve the "java.lang.ClassNotFoundException" in Spark on Amazon EMR?


When I use custom JAR files in a spark-submit or PySpark job on Amazon EMR, I get a java.lang.ClassNotFoundException error.

Short description

This error occurs when either of the following conditions is true:

  • The spark-submit job can't find the relevant files in the class path.
  • A bootstrap action or custom configuration is overriding the class paths. When this happens, the class loader picks up only the JAR files that exist in the location that you specified in your configuration.

Resolution

Check the stack trace to find the name of the missing class. Then, add the path of your custom JAR (containing the missing class) to the Spark class path. You can do this while the cluster is running, when you launch a new cluster, or when you submit a job.
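
For example, a ClassNotFoundException stack trace looks similar to the following. The class name shown here is a hypothetical placeholder:

java.lang.ClassNotFoundException: com.example.MyCustomClass
	at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)

In this case, add the path of the JAR file that contains com.example.MyCustomClass to the Spark class path.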

On a running cluster

In /etc/spark/conf/spark-defaults.conf, append the path of your custom JAR to the existing class paths in the spark.driver.extraClassPath and spark.executor.extraClassPath properties. In the following example, /home/hadoop/extrajars/* is the custom JAR path.

sudo vim /etc/spark/conf/spark-defaults.conf
spark.driver.extraClassPath <other existing jar locations>:/home/hadoop/extrajars/*
spark.executor.extraClassPath <other existing jar locations>:/home/hadoop/extrajars/*
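
Jobs that you submit after the change pick up the new class path. To confirm that the change is in place, you can check the updated properties:

grep extraClassPath /etc/spark/conf/spark-defaults.conf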

On a new cluster

Append the custom JAR path to the existing class paths in /etc/spark/conf/spark-defaults.conf by supplying a configuration object when you create a cluster.

Note: To use this option, you must create a cluster using Amazon EMR release version 5.14.0 or later.

For Amazon EMR 5.14.0 to Amazon EMR 5.17.0, include the following:

[
  {
    "Classification": "spark-defaults",
    "Properties": {
      "spark.driver.extraClassPath": "/usr/lib/hadoop-lzo/lib/*:/usr/lib/hadoop/hadoop-aws.jar:/usr/share/aws/aws-java-sdk/*:/usr/share/aws/emr/emrfs/conf:/usr/share/aws/emr/emrfs/lib/*:/usr/share/aws/emr/emrfs/auxlib/*:/usr/share/aws/emr/security/conf:/usr/share/aws/emr/security/lib/*:/usr/share/aws/hmclient/lib/aws-glue-datacatalog-spark-client.jar:/usr/share/java/Hive-JSON-Serde/hive-openx-serde.jar:/usr/share/aws/sagemaker-spark-sdk/lib/sagemaker-spark-sdk.jar:/home/hadoop/extrajars/*",
      "spark.executor.extraClassPath": "/usr/lib/hadoop-lzo/lib/*:/usr/lib/hadoop/hadoop-aws.jar:/usr/share/aws/aws-java-sdk/*:/usr/share/aws/emr/emrfs/conf:/usr/share/aws/emr/emrfs/lib/*:/usr/share/aws/emr/emrfs/auxlib/*:/usr/share/aws/emr/security/conf:/usr/share/aws/emr/security/lib/*:/usr/share/aws/hmclient/lib/aws-glue-datacatalog-spark-client.jar:/usr/share/java/Hive-JSON-Serde/hive-openx-serde.jar:/usr/share/aws/sagemaker-spark-sdk/lib/sagemaker-spark-sdk.jar:/home/hadoop/extrajars/*"
    }
  }
]
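
For example, if you save the configuration object as configurations.json, you can supply it when you create the cluster with the AWS CLI. The file name and instance settings in the following command are placeholders for your own values:

aws emr create-cluster \
  --release-label emr-5.14.0 \
  --applications Name=Spark \
  --instance-type m5.xlarge \
  --instance-count 3 \
  --use-default-roles \
  --configurations file://configurations.json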

For Amazon EMR 5.17.0 to Amazon EMR 5.18.0, include /usr/share/aws/emr/s3select/lib/emr-s3-select-spark-connector.jar as the additional JAR path:

[
  {
    "Classification": "spark-defaults",
    "Properties": {
      "spark.driver.extraClassPath": "/usr/lib/hadoop-lzo/lib/*:/usr/lib/hadoop/hadoop-aws.jar:/usr/share/aws/aws-java-sdk/*:/usr/share/aws/emr/emrfs/conf:/usr/share/aws/emr/emrfs/lib/*:/usr/share/aws/emr/emrfs/auxlib/*:/usr/share/aws/emr/security/conf:/usr/share/aws/emr/security/lib/*:/usr/share/aws/hmclient/lib/aws-glue-datacatalog-spark-client.jar:/usr/share/java/Hive-JSON-Serde/hive-openx-serde.jar:/usr/share/aws/sagemaker-spark-sdk/lib/sagemaker-spark-sdk.jar:/usr/share/aws/emr/s3select/lib/emr-s3-select-spark-connector.jar:/home/hadoop/extrajars/*",
      "spark.executor.extraClassPath": "/usr/lib/hadoop-lzo/lib/*:/usr/lib/hadoop/hadoop-aws.jar:/usr/share/aws/aws-java-sdk/*:/usr/share/aws/emr/emrfs/conf:/usr/share/aws/emr/emrfs/lib/*:/usr/share/aws/emr/emrfs/auxlib/*:/usr/share/aws/emr/security/conf:/usr/share/aws/emr/security/lib/*:/usr/share/aws/hmclient/lib/aws-glue-datacatalog-spark-client.jar:/usr/share/java/Hive-JSON-Serde/hive-openx-serde.jar:/usr/share/aws/sagemaker-spark-sdk/lib/sagemaker-spark-sdk.jar:/usr/share/aws/emr/s3select/lib/emr-s3-select-spark-connector.jar:/home/hadoop/extrajars/*"
    }
  }
]

For Amazon EMR 5.19.0 to Amazon EMR 5.32.0, update the JAR path as follows:

[
  {
    "Classification": "spark-defaults",
    "Properties": {
      "spark.driver.extraClassPath": "/usr/lib/hadoop-lzo/lib/*:/usr/lib/hadoop/hadoop-aws.jar:/usr/share/aws/aws-java-sdk/*:/usr/share/aws/emr/emrfs/conf:/usr/share/aws/emr/emrfs/lib/*:/usr/share/aws/emr/emrfs/auxlib/*:/usr/share/aws/emr/goodies/lib/emr-spark-goodies.jar:/usr/share/aws/emr/security/conf:/usr/share/aws/emr/security/lib/*:/usr/share/aws/hmclient/lib/aws-glue-datacatalog-spark-client.jar:/usr/share/java/Hive-JSON-Serde/hive-openx-serde.jar:/usr/share/aws/sagemaker-spark-sdk/lib/sagemaker-spark-sdk.jar:/usr/share/aws/emr/s3select/lib/emr-s3-select-spark-connector.jar:/home/hadoop/extrajars/*",
      "spark.executor.extraClassPath": "/usr/lib/hadoop-lzo/lib/*:/usr/lib/hadoop/hadoop-aws.jar:/usr/share/aws/aws-java-sdk/*:/usr/share/aws/emr/emrfs/conf:/usr/share/aws/emr/emrfs/lib/*:/usr/share/aws/emr/emrfs/auxlib/*:/usr/share/aws/emr/goodies/lib/emr-spark-goodies.jar:/usr/share/aws/emr/security/conf:/usr/share/aws/emr/security/lib/*:/usr/share/aws/hmclient/lib/aws-glue-datacatalog-spark-client.jar:/usr/share/java/Hive-JSON-Serde/hive-openx-serde.jar:/usr/share/aws/sagemaker-spark-sdk/lib/sagemaker-spark-sdk.jar:/usr/share/aws/emr/s3select/lib/emr-s3-select-spark-connector.jar:/home/hadoop/extrajars/*"
    }
  }
]

For Amazon EMR 5.33.0 to Amazon EMR 5.34.0, update the JAR path as follows:

[
  {
    "Classification": "spark-defaults",
    "Properties": {
      "spark.driver.extraClassPath": "/usr/lib/hadoop-lzo/lib/*:/usr/lib/hadoop/hadoop-aws.jar:/usr/share/aws/aws-java-sdk/*:/usr/share/aws/emr/goodies/lib/emr-spark-goodies.jar:/usr/share/aws/emr/security/conf:/usr/share/aws/emr/security/lib/*:/usr/share/aws/hmclient/lib/aws-glue-datacatalog-spark-client.jar:/usr/share/java/Hive-JSON-Serde/hive-openx-serde.jar:/usr/share/aws/sagemaker-spark-sdk/lib/sagemaker-spark-sdk.jar:/usr/share/aws/emr/s3select/lib/emr-s3-select-spark-connector.jar:/home/hadoop/extrajars/",
      "spark.executor.extraClassPath": "/usr/lib/hadoop-lzo/lib/*:/usr/lib/hadoop/hadoop-aws.jar:/usr/share/aws/aws-java-sdk/*:/usr/share/aws/emr/goodies/lib/emr-spark-goodies.jar:/usr/share/aws/emr/security/conf:/usr/share/aws/emr/security/lib/*:/usr/share/aws/hmclient/lib/aws-glue-datacatalog-spark-client.jar:/usr/share/java/Hive-JSON-Serde/hive-openx-serde.jar:/usr/share/aws/sagemaker-spark-sdk/lib/sagemaker-spark-sdk.jar:/usr/share/aws/emr/s3select/lib/emr-s3-select-spark-connector.jar:/home/hadoop/extrajars/"
    }
  }
]

For Amazon EMR release version 6.0.0 and later, you can't update the JAR path with a configuration object. In these versions, the spark-defaults.conf file includes several JAR file paths, and the value of each property that you update can't exceed 1,024 characters. However, you can add a bootstrap action that passes the custom JAR location to spark-defaults.conf. For more information, see How do I update all Amazon EMR nodes after the bootstrap phase?

Create a bash script similar to the following:

Note:

  • Be sure to replace s3://doc-example-bucket/Bootstraps/script_b.sh with the Amazon Simple Storage Service (Amazon S3) path of your choice.
  • Be sure to replace /home/hadoop/extrajars/* with your custom JAR file path.
  • Be sure that the Amazon EMR execution role has permissions to access this S3 bucket.

#!/bin/bash
#
# This is an example of script_b.sh for changing /etc/spark/conf/spark-defaults.conf
#
while [ ! -f /etc/spark/conf/spark-defaults.conf ]
do
  sleep 1
done
#
# The file exists; append the custom JAR path to both
# spark.driver.extraClassPath and spark.executor.extraClassPath
#
sudo sed -i '/spark.*.extraClassPath/s/$/:\/home\/hadoop\/extrajars\/\*/' /etc/spark/conf/spark-defaults.conf
exit 0
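
After the script runs, both properties in /etc/spark/conf/spark-defaults.conf end with the custom JAR path, similar to the following:

spark.driver.extraClassPath <other existing jar locations>:/home/hadoop/extrajars/*
spark.executor.extraClassPath <other existing jar locations>:/home/hadoop/extrajars/*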

Launch the EMR cluster and add a bootstrap action similar to the following:

#!/bin/bash
pwd
aws s3 cp s3://doc-example-bucket/Bootstraps/script_b.sh .
chmod +x script_b.sh
nohup ./script_b.sh &
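
For example, if you save the preceding bootstrap script to Amazon S3, you can attach it when you launch the cluster with the AWS CLI. The script name and cluster settings in the following command are placeholders for your own values:

aws emr create-cluster \
  --release-label emr-6.5.0 \
  --applications Name=Spark \
  --instance-type m5.xlarge \
  --instance-count 3 \
  --use-default-roles \
  --bootstrap-actions Path="s3://doc-example-bucket/Bootstraps/bootstrap_wrapper.sh"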

For a single job

Use the --jars option to pass the custom JAR path when you run spark-submit. The --jars option accepts a comma-separated list of JAR paths and, like all spark-submit options, must come before the application JAR. Arguments that follow the application JAR are passed to the application itself.

Example:

spark-submit --deploy-mode client --class org.apache.spark.examples.SparkPi --master yarn --jars /home/hadoop/extrajars/* spark-examples.jar 100

Note: To prevent class conflicts, don't include standard JARs when using the --jars option. For example, don't include spark-core.jar because it already exists in the cluster.
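
The --jars option works the same way for a PySpark job. In the following example, custom.jar and my_script.py are placeholders for your own file names:

spark-submit --deploy-mode client --master yarn --jars /home/hadoop/extrajars/custom.jar my_script.py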

For more information about configuring classifications, see Configure Spark.


Related information

Spark configuration

How do I resolve the error "Container killed by YARN for exceeding memory limits" in Spark on Amazon EMR?
