EMR Spark: No module named 'pandas'


Hi,

I'm trying to run a Python job on EMR with some dependencies installed in a virtual environment, created as follows:

python -m venv pyspark_venv
source pyspark_venv/bin/activate
pip install pyarrow pandas venv-pack
venv-pack -o pyspark_venv.tar.gz

and the job runs with the following conf:

{
   "spark.yarn.appMasterEnv.PYSPARK_PYTHON":"./environment/bin/python",
   "spark.yarn.appMasterEnv.PYSPARK_DRIVER_PYTHON":"./environment/bin/python",
   "spark.yarn.dist.archives":"s3://DOC-EXAMPLE-BUCKET/prefix/my_pyspark_venv.tar.gz#environment",
   "spark.submit.deployMode":"cluster"
}

but when I run the job I get "No module named 'pandas'". Locally the script runs correctly with the same venv, and printing sys.path suggests that Spark is using the system Python instead of the venv.
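For reference, this is roughly the kind of check I ran from inside the job (a minimal sketch; the mapPartitions call is just one way to see which interpreter the executors spawn):

import sys
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("venv-check").getOrCreate()
sc = spark.sparkContext

# Interpreter and module search path used by the driver
print("driver python:", sys.executable)
print("driver sys.path:", sys.path)

# Interpreter used by the Python workers on the executors (one value per partition)
def worker_python(_):
    import sys
    yield sys.executable

print("executor python:", sc.parallelize(range(2), 2).mapPartitions(worker_python).collect())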

Any idea which conf to apply so that the venv is used? Thanks

Paolo
Asked 1 year ago · Viewed 839 times
1 Answer
Accepted Answer

It looks like your EMR Spark job is not able to find the packages installed in your virtual environment. To make sure Spark uses the Python interpreter from that environment, you can try the following:

  1. Add the following line to your EMR Spark job configuration so that the executors also pick up the Python binary from your virtual environment (see the combined conf sketch after this list):
"spark.executorEnv.PYSPARK_PYTHON":"./environment/bin/python"
  2. In your PySpark code, add the following lines to explicitly set the Python environment to use:
import os
os.environ['PYSPARK_PYTHON'] = './environment/bin/python'
os.environ['PYSPARK_DRIVER_PYTHON'] = './environment/bin/python'
  3. Make sure that the pyspark_venv.tar.gz file is uploaded to your S3 bucket with read permissions.
  4. Verify that the virtual environment is successfully extracted by checking the logs in the yarn/userlogs directory.
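Putting these together, the full job conf would look roughly like this (a sketch only; the bucket and archive names are taken from your question):

{
   "spark.yarn.appMasterEnv.PYSPARK_PYTHON":"./environment/bin/python",
   "spark.yarn.appMasterEnv.PYSPARK_DRIVER_PYTHON":"./environment/bin/python",
   "spark.executorEnv.PYSPARK_PYTHON":"./environment/bin/python",
   "spark.yarn.dist.archives":"s3://DOC-EXAMPLE-BUCKET/prefix/my_pyspark_venv.tar.gz#environment",
   "spark.submit.deployMode":"cluster"
}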
hash
Answered 1 year ago
AWS
Support Engineer
Reviewed 1 month ago
  • Thanks for the answer. After further testing, it turned out in the end that the Python version was not compatible.
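For anyone hitting the same issue: venv-pack does not package the Python interpreter itself, so the environment has to be built with the same Python version that is installed on the EMR nodes. A rough sketch, assuming the cluster ships Python 3.7 (check with python3 --version on a cluster node first; 3.7 here is only an assumption):

# On the EMR master node, check which interpreter version the cluster provides
python3 --version          # e.g. Python 3.7.x

# Rebuild and repack the venv locally with that same minor version
python3.7 -m venv pyspark_venv
source pyspark_venv/bin/activate
pip install pyarrow pandas venv-pack
venv-pack -o pyspark_venv.tar.gz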
