EMR Spark: No module named 'pandas'


Hi,

I'm trying to run a Python job on EMR with some dependencies installed in a venv, created as follows:

python -m venv pyspark_venv
source pyspark_venv/bin/activate
pip install pyarrow pandas venv-pack
venv-pack -o pyspark_venv.tar.gz

and the job runs with the following conf:

{
   "spark.yarn.appMasterEnv.PYSPARK_PYTHON":"./environment/bin/python",
   "spark.yarn.appMasterEnv.PYSPARK_DRIVER_PYTHON":"./environment/bin/python",
   "spark.yarn.dist.archives":"s3://DOC-EXAMPLE-BUCKET/prefix/my_pyspark_venv.tar.gz#environment",
   "spark.submit.deployMode":"cluster"
}

but when I run the job I get "No module named 'pandas'". Locally the script runs correctly with the same venv, and printing sys.path suggests that Spark is using the system Python instead of the venv.
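
For reference, this is roughly the check I run inside the job (a minimal sketch; the app name and helper name are just placeholders):

import sys
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("venv-check").getOrCreate()

# interpreter and module search path seen by the driver
print("driver python:", sys.executable)
print("driver sys.path:", sys.path)

# interpreter seen by a Python worker on one executor
def worker_python(_):
    import sys
    return sys.executable

print("executor python:", spark.sparkContext.parallelize([0], 1).map(worker_python).collect())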

Any idea which conf to apply so that the job uses the venv? Thanks

Paolo
Asked 1 year ago, 840 views
1 Answer
Accepted Answer

It looks like your EMR Spark job is not able to find the packages installed in your virtual environment. To ensure that Spark uses the Python interpreter from your virtual environment, try the following:

  1. Add the following line to your EMR Spark job configuration so that the executors, not just the application master, pick up the Python binary from your virtual environment:
"spark.executorEnv.PYSPARK_PYTHON":"./environment/bin/python"
  2. In your PySpark code, add the following lines to explicitly set the Python environment to use:
import os
os.environ['PYSPARK_PYTHON'] = './environment/bin/python'
os.environ['PYSPARK_DRIVER_PYTHON'] = './environment/bin/python'
  3. Make sure that the pyspark_venv.tar.gz file is uploaded to your S3 bucket with read permissions.
  4. Verify that the virtual environment is successfully extracted by checking the logs in the yarn/userlogs directory; a quick in-job check is sketched below.
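
As a quicker alternative to digging through the logs, the extraction can also be checked from inside the job itself; a minimal sketch, assuming the #environment alias from the conf above:

import os

# spark.yarn.dist.archives unpacks the archive into the YARN container's
# working directory under the alias given after "#"
print(os.listdir("."))                             # should list "environment"
print(os.path.exists("./environment/bin/python"))  # the interpreter the conf points at
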
hash
Answered 1 year ago
AWS
Support Engineer
Reviewed 1 month ago
  • Thanks for the answer; after further testing it turned out that the Python version was not compatible.
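
For anyone hitting the same thing: venv-pack does not bundle the interpreter itself, so the venv must be built with the same Python version that is installed on the EMR nodes. A minimal sketch of the kind of check I mean:

import sys

# Run inside the EMR job: prints the version of the interpreter actually used
# (./environment/bin/python when the venv is picked up). Compare it with the
# version on the machine where venv-pack was run.
print("job python:", sys.version)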
