Can you be a little more detailed about what happened here?
One thing to note with the new allocation strategies: the ASG for these strategies attempts to spin up capacity in terms of vCPUs, not instances. So if you're looking at the "desired" field, that number is total vCPUs, not a desired instance count. This will be made more explicit in later updates.
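To make the "desired is vCPUs, not instances" point concrete, here is a minimal sketch (a hypothetical helper, not an AWS API) that recovers an instance count from a vCPU-denominated desired value, assuming a single instance type in the environment:

```python
import math

def instances_for_desired(desired_vcpus: int, instance_vcpus: int) -> int:
    """With the new allocation strategies the ASG 'desired' value is
    denominated in vCPUs. Dividing by the instance type's vCPU count
    (and rounding up, since capacity comes in whole instances) gives
    the equivalent instance count."""
    return math.ceil(desired_vcpus / instance_vcpus)

# A 'desired' of 96 on a 96-vCPU type is one instance, not 96:
print(instances_for_desired(96, 96))  # 1
# A 'desired' of 8 on a 4-vCPU type is two instances:
print(instances_for_desired(8, 4))    # 2
```

With mixed instance types the mapping is not this simple, but the rounding-up is why capacity moves in whole-instance steps.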
I made a new compute environment with best_fit_progressive strategy with maximum vCPUs set to 4.
When I submitted a job to that compute environment, it ignored the maximum vCPUs number and gave me an instance with 96 vCPUs. I have since deleted that compute environment and made a new one with allocation_strategy set to best_fit, and it honoured the maximum vCPUs setting. It also tried to spin up 4 separate instances for one job. I had different jobs on different queues connected to different compute environments, so I'm assuming the scheduler picked up jobs for a queue it wasn't supposed to.
Edited by: looloo on Nov 4, 2019 11:21 PM
From the docs: "With both BEST_FIT_PROGRESSIVE and SPOT_CAPACITY_OPTIMIZED strategies, AWS Batch may need to go above maxvCpus to meet your capacity requirements. In this event, AWS Batch will never go above maxvCpus by more than a single instance."
In this case, I would remove the largest-vCPU instance types from your compute environment's allowed instance types.
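The behaviour described above follows from the docs quote: capacity is added in whole-instance increments, so the overshoot past maxvCpus can be up to one instance's worth of vCPUs. This toy model (an assumption about the mechanism, not the real Batch scheduler) shows why allowing a 96-vCPU type next to maxvCpus=4 produces the result the original poster saw:

```python
def scale_up(needed_vcpus: int, instance_vcpus: int, max_vcpus: int) -> int:
    """Toy model: add one instance at a time until the need is met or
    capacity reaches max_vcpus. Because the final increment is a whole
    instance, the returned capacity can exceed max_vcpus by up to one
    instance, matching the documented guarantee."""
    capacity = 0
    while capacity < needed_vcpus and capacity < max_vcpus:
        capacity += instance_vcpus
    return capacity

# maxvCpus=4 but only a 96-vCPU type allowed: one instance blows past the cap.
print(scale_up(needed_vcpus=2, instance_vcpus=96, max_vcpus=4))  # 96
# With a 4-vCPU type the cap is honoured exactly.
print(scale_up(needed_vcpus=2, instance_vcpus=4, max_vcpus=4))   # 4
```

This is why trimming the largest instance types out of the compute environment bounds the worst-case overshoot.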