BEST_FIT_PROGRESSIVE works weird, maximum vCPUs = 4, gave me 96


Essentially, I asked for 4 vCPUs (m5.xlarge) and it gave me an m5.24xlarge.

The Auto Scaling group then tried to spin up 4 instances for 1 job; I'm assuming the other 3 instances would also have been m5.24xlarges.

looloo
asked 4 years ago · 529 views
3 Answers

Can you be a little more detailed about what happened here?

One thing with the new allocation strategies: ASG for these strategies is attempting to spin up capacity in terms of vCPUs, not instances. So if you're looking at the "desired" field, that number is a total vCPU, not a desired instance count. This will be made more explicit in later updates.
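A quick sketch of that vCPU-vs-instance distinction (the type-to-vCPU mapping below comes from public EC2 specs; the helper function itself is purely illustrative, not part of any AWS API):

```python
# Illustrative only: with the newer allocation strategies, the ASG's
# "desired" value is a total vCPU count, not an instance count.
VCPUS_PER_INSTANCE = {"m5.xlarge": 4, "m5.24xlarge": 96}

def instances_for(desired_vcpus: int, instance_type: str) -> int:
    """How many instances a vCPU-denominated 'desired' value maps to."""
    size = VCPUS_PER_INSTANCE[instance_type]
    return -(-desired_vcpus // size)  # ceiling division

# A "desired" of 96 is one m5.24xlarge, not 96 instances.
print(instances_for(96, "m5.24xlarge"))  # 1
print(instances_for(96, "m5.xlarge"))    # 24
```

So a "desired" reading of 96 in the console can mean a single large instance, which matches what the poster saw.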

AWS
answered 4 years ago

I made a new compute environment with the best_fit_progressive strategy and maximum vCPUs set to 4.
When I submitted a job to that compute environment, it ignored the maximum vCPUs number and gave me an instance with 96 vCPUs. I have since deleted that compute environment and made a new one with allocation_strategy set to best_fit, and it honoured the maximum vCPUs setting. The first environment also tried to spin up 4 separate instances for one job. I had different jobs on different queues connected to different compute environments, so I'm assuming the scheduler picked up jobs for a queue it wasn't supposed to.

Edited by: looloo on Nov 4, 2019 11:21 PM

looloo
answered 4 years ago

From the docs: "With both BEST_FIT_PROGRESSIVE and SPOT_CAPACITY_OPTIMIZED strategies, AWS Batch may need to go above maxvCpus to meet your capacity requirements. In this event, AWS Batch will never go above maxvCpus by more than a single instance."

In this case, I would take the largest-vCPU instance types out of your allowed instance parameters.
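To see why that helps, here is a rough sketch of the worst-case capacity implied by the quoted doc text. The helper is hypothetical (not an AWS API) and the vCPU counts come from public EC2 specs; the point is that the overshoot ceiling scales with the largest allowed instance type:

```python
VCPUS = {"m5.xlarge": 4, "m5.4xlarge": 16, "m5.24xlarge": 96}

def worst_case_vcpus(max_vcpus: int, allowed_types: list) -> int:
    # Per the quoted docs, Batch may exceed maxvCpus by at most one
    # instance, so the bound is driven by the biggest allowed type
    # (and a single instance can exceed maxvCpus on its own).
    biggest = max(VCPUS[t] for t in allowed_types)
    return max(max_vcpus - 1 + biggest, biggest)

# With maxvCpus=4, allowing m5.24xlarge permits a single 96-vCPU
# overshoot; restricting to m5.xlarge caps capacity at 7 vCPUs.
print(worst_case_vcpus(4, ["m5.xlarge", "m5.24xlarge"]))  # 99
print(worst_case_vcpus(4, ["m5.xlarge"]))                 # 7
```

That is consistent with the behaviour in the question: a 96-vCPU m5.24xlarge is "only one instance" past a maxvCpus of 4.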

AWS
answered 4 years ago
