Hi Max,
My job is running with this config: "--conf spark.executor.cores=1 --conf spark.executor.memory=2g --conf spark.driver.cores=1 --conf spark.driver.memory=2g --conf spark.executor.instances=1", and the ServiceQuotaLimit is 16 vCPUs. I am not able to understand how this adds up to 16 vCPUs. I need to understand that in order to calculate what limit I should request.
From the documentation, it seems that spark.dynamicAllocation.enabled is true by default, and the default value of spark.dynamicAllocation.maxExecutors is infinite (for release 6.10.0 and higher): https://docs.aws.amazon.com/emr/latest/EMR-Serverless-UserGuide/jobs-spark.html. As a result, our job was creating a large number of workers beyond the one executor we configured. We are going to disable this option and see whether we still hit the vCPU limits.
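For anyone hitting the same issue, here is a sketch of the relevant Spark properties. These are standard Spark configuration keys; whether capping works better than disabling for your workload is something you'd want to test:

```shell
# Option 1: disable dynamic allocation entirely, so the job is pinned
# to the fixed executor count given by spark.executor.instances
--conf spark.dynamicAllocation.enabled=false \
--conf spark.executor.instances=1

# Option 2: keep dynamic allocation, but cap how far it can scale up
# (4 is an arbitrary example value, not a recommendation)
--conf spark.dynamicAllocation.maxExecutors=4
```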
You can read more about vCPU account limits here: https://aws.amazon.com/blogs/compute/preview-vcpu-based-instance-limits/
To request an increase, first determine how many vCPUs you need, then open a support case and ask for a limit increase to that number of vCPUs. Follow the process described in the EC2 Knowledge Center article on vCPU limit increases.
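As a rough back-of-the-envelope sketch for the "determine how many vCPUs you need" step (my own arithmetic, not an official AWS formula): with dynamic allocation disabled, peak concurrent vCPU demand is roughly the driver's cores plus the per-executor cores times the executor count.

```python
def peak_vcpus(driver_cores: int, executor_cores: int, executor_instances: int) -> int:
    """Rough peak vCPU estimate for a Spark job with a fixed executor count."""
    return driver_cores + executor_cores * executor_instances

# The config from the question: 1 driver core, 1 core per executor, 1 executor
print(peak_vcpus(driver_cores=1, executor_cores=1, executor_instances=1))  # → 2
```

With dynamic allocation enabled and no maxExecutors cap, executor_instances is effectively unbounded, which is how a job configured with 1 executor can still exhaust a 16-vCPU quota.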
Disabling dynamicAllocation worked for us