BEST_FIT_PROGRESSIVE works weird, maximum vCPUs = 4, gave me 96.
Essentially I asked for 4 vCPUs (m5.xlarge) and it gave me an m5.24xlarge.

The auto scaling group also tried to spin up 4 instances for 1 job, assuming the other 3 instances were also m5.24xlarges.

looloo
asked 4 years ago · 507 views
3 Answers

Can you be a little more detailed about what happened here?

One thing with the new allocation strategies: ASG for these strategies is attempting to spin up capacity in terms of vCPUs, not instances. So if you're looking at the "desired" field, that number is a total vCPU, not a desired instance count. This will be made more explicit in later updates.
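To make the vCPU-versus-instance distinction concrete, here is a rough sketch (not AWS code) of how a vCPU-based capacity target can overshoot when the scaler rounds up to whole instances. The vCPU counts are the published specs for these m5 sizes; the function names are illustrative:

```python
import math

# vCPUs per instance for two m5 sizes (published AWS specs)
VCPUS = {"m5.xlarge": 4, "m5.24xlarge": 96}

def instances_needed(desired_vcpus, instance_type):
    """Whole instances required to cover a vCPU target."""
    return math.ceil(desired_vcpus / VCPUS[instance_type])

def vcpus_provisioned(desired_vcpus, instance_type):
    """vCPUs actually launched after rounding up to whole instances."""
    return instances_needed(desired_vcpus, instance_type) * VCPUS[instance_type]

# A 4-vCPU target lands exactly on target with m5.xlarge...
print(vcpus_provisioned(4, "m5.xlarge"))    # 4
# ...but if the strategy falls back to m5.24xlarge, one instance is 96 vCPUs.
print(vcpus_provisioned(4, "m5.24xlarge"))  # 96
```

This is why a "desired" value of 4 in the ASG can still produce a 96-vCPU instance: the smallest whole instance of a large type already exceeds the target.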

AWS
answered 4 years ago

I made a new compute environment with the best_fit_progressive strategy and maximum vCPUs set to 4.
When I submitted a job to that compute environment, it ignored the maximum vCPUs setting and gave me an instance with 96 vCPUs. I have since deleted that compute environment and made a new one with allocation_strategy set to best_fit, which honoured the maximum vCPUs setting. The first environment also tried to spin up 4 separate instances for one job. I had different jobs on different queues connected to different compute environments, so I'm assuming the scheduler picked up jobs for a queue it wasn't supposed to.

Edited by: looloo on Nov 4, 2019 11:21 PM

looloo
answered 4 years ago

From the docs: With both BEST_FIT_PROGRESSIVE and SPOT_CAPACITY_OPTIMIZED strategies, AWS Batch may need to go above maxvCpus to meet your capacity requirements. In this event, AWS Batch will never go above maxvCpus by more than a single instance.

In this case, I would remove the largest-vCPU instance types from your compute environment's allowed instance types, so that the single-instance overshoot stays small.
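As a sketch of what that restriction might look like, here is an illustrative compute-resources block in the shape boto3's `batch.create_compute_environment` expects. The subnet and role values are placeholders, not real identifiers:

```python
# Compute-environment settings that list only small m5 sizes, instead of the
# bare "m5" family (which permits every size, including the 96-vCPU
# m5.24xlarge). Subnet and role values below are placeholders.
compute_resources = {
    "type": "EC2",
    "allocationStrategy": "BEST_FIT_PROGRESSIVE",
    "minvCpus": 0,
    "maxvCpus": 4,
    "instanceTypes": ["m5.large", "m5.xlarge"],  # no large sizes allowed
    "subnets": ["subnet-placeholder"],           # placeholder
    "instanceRole": "ecsInstanceRole",           # placeholder
}

# This dict would then be passed as the computeResources argument, e.g.:
#   import boto3
#   boto3.client("batch").create_compute_environment(
#       computeEnvironmentName="my-env",   # illustrative name
#       type="MANAGED",
#       computeResources=compute_resources,
#   )
print(compute_resources["instanceTypes"])  # ['m5.large', 'm5.xlarge']
```

With the largest allowed type capped at m5.xlarge (4 vCPUs), even a one-instance overshoot past maxvCpus cannot jump to 96 vCPUs.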

AWS
answered 4 years ago
