Can you be a little more detailed about what happened here?
One thing to note with the new allocation strategies: the ASG for these strategies spins up capacity in terms of vCPUs, not instances. So if you're looking at the "desired" field, that number is a total vCPU count, not a desired instance count. This will be made more explicit in later updates.
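To illustrate reading the ASG "desired" value as a vCPU total rather than an instance count, here's a small sketch (the instance types and vCPU counts are just examples, not taken from this thread's environment):

```python
# The ASG "desired" value under the new allocation strategies is a
# total vCPU count, not an instance count. How many instances that
# translates to depends on the instance type chosen.
desired_vcpus = 96  # example "desired" value from the ASG

# vCPU counts for a couple of example EC2 instance types
vcpus_per_instance = {"m5.24xlarge": 96, "c5.xlarge": 4}

# Ceiling division: instances needed to cover the desired vCPU total
instances_needed = {
    itype: -(-desired_vcpus // vcpus)
    for itype, vcpus in vcpus_per_instance.items()
}
print(instances_needed)  # {'m5.24xlarge': 1, 'c5.xlarge': 24}
```

So a "desired" of 96 can mean a single 96-vCPU instance, which is consistent with the behaviour described below.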
I made a new compute environment with the best_fit_progressive strategy and maximum vCPUs set to 4.
When I submitted a job to that compute environment, it ignored the maximum vCPUs setting and gave me an instance with 96 vCPUs. I have since deleted that compute environment and made a new one with allocation_strategy set to best_fit, which honoured the maximum vCPUs setting. The first environment also tried to spin up 4 separate instances for one job. I had different jobs on different queues connected to different compute environments, so I'm assuming the scheduler picked up jobs for a queue it wasn't supposed to.
Edited by: looloo on Nov 4, 2019 11:21 PM
From the docs: "With both BEST_FIT_PROGRESSIVE and SPOT_CAPACITY_OPTIMIZED strategies, AWS Batch may need to go above maxvCpus to meet your capacity requirements. In this event, AWS Batch will never go above maxvCpus by more than a single instance."
In this case, I would take the largest-vCPU instance types out of your allowed instance types, so that the one-instance overshoot stays small.
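Since Batch can exceed maxvCpus by at most one instance, restricting the allowed instance types to small ones bounds the worst-case overshoot. A minimal sketch of the computeResources block for boto3's create_compute_environment, assuming hypothetical subnet/security-group IDs and the default ecsInstanceRole:

```python
# Sketch of the computeResources block passed to
# batch.create_compute_environment(...) via boto3.
# Subnet and security-group IDs below are placeholders.
compute_resources = {
    "type": "EC2",
    "allocationStrategy": "BEST_FIT_PROGRESSIVE",
    "minvCpus": 0,
    "maxvCpus": 4,
    # Restrict to small instance types: Batch may exceed maxvCpus
    # by at most one instance, so the largest allowed type bounds
    # the overshoot (here c5.xlarge = 4 vCPUs at most extra).
    "instanceTypes": ["c5.large", "c5.xlarge"],
    "subnets": ["subnet-xxxxxxxx"],
    "securityGroupIds": ["sg-xxxxxxxx"],
    "instanceRole": "ecsInstanceRole",
}

# Worst case: maxvCpus plus one largest allowed instance (4 vCPUs)
worst_case_vcpus = compute_resources["maxvCpus"] + 4
print(worst_case_vcpus)  # 8
```

With a 96-vCPU type still in the allowed list, the same one-instance overshoot rule would permit 4 + 96 = 100 vCPUs, which matches the behaviour reported above.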