1 Answer
I found how to fix this issue.
I had to use the "spark-defaults" classification and set a property such as {"spark.yarn.appMasterEnv.YOUR_ENV_VARIABLE": "the value"}.
This is explained in the Spark documentation (https://spark.apache.org/docs/latest/configuration.html#environment-variables), though I still don't understand why it worked differently on EMR 5.x.
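For reference, here is a minimal sketch of what the full EMR cluster configuration JSON could look like with this classification. YOUR_ENV_VARIABLE and its value are placeholders, not names from an actual setup:

```json
[
  {
    "Classification": "spark-defaults",
    "Properties": {
      "spark.yarn.appMasterEnv.YOUR_ENV_VARIABLE": "the value"
    }
  }
]
```

The `spark.yarn.appMasterEnv.[NAME]` prefix sets the variable in the YARN Application Master (the driver, in cluster mode); for executors the analogous prefix is `spark.executorEnv.[NAME]`.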
answered 3 years ago
Ran into an issue with EMR 6.6.0 where the encoding in Jupyter Spark was for some reason ASCII instead of UTF-8, unlike in Zeppelin and elsewhere. The problem only existed in Jupyter for whatever reason, and the settings I had used on EMR 5.x for the same issue stopped working. Your suggestion of using spark-defaults fixed it. Thanks!
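For anyone hitting the same encoding problem, a sketch of what that spark-defaults classification could look like. This assumes the fix is forcing Python's I/O encoding via the standard PYTHONIOENCODING environment variable on both driver and executors; the exact variable your environment needs may differ:

```json
[
  {
    "Classification": "spark-defaults",
    "Properties": {
      "spark.yarn.appMasterEnv.PYTHONIOENCODING": "utf8",
      "spark.executorEnv.PYTHONIOENCODING": "utf8"
    }
  }
]
```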