Can I force each job to run on a dedicated instance?


Hi folks,

I have some ETL jobs that are already containerized, and today we do all the orchestration ourselves with a kind of in-house AWS Batch equivalent.

I'm considering moving to AWS Batch, but to do so I would like each job to run on a dedicated instance due to disk space requirements. If jobs share an instance, we risk running out of space in the middle of a job's execution.

From what I can see of how AWS Batch handles compute requirements, the decision on when to launch a new instance is based solely on vCPU and memory. Is there a way to instruct Batch to never send two jobs to the same running instance?

Thanks in advance,

Asked 4 years ago · 649 views
2 Answers
Accepted Answer

There's no way to tell AWS Batch to schedule a particular job to a dedicated instance.
However, if you configure your compute environment to spawn EC2 instances that all have the same number of vCPUs, then a job that requests as many vCPUs as an instance provides will end up with an instance to itself: the job consumes all of the instance's vCPUs, so Batch won't schedule any additional job on it.
You will lose some flexibility in instance choice, but that should fit your requirements.
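For illustration, a minimal boto3 sketch of that setup (the instance type, subnets, security group, image, and role ARNs below are placeholders, not values from the question):

    import boto3

    batch = boto3.client("batch")

    # Compute environment that only launches a single instance type, so every
    # instance has a known, fixed number of vCPUs (c5.xlarge -> 4 vCPUs).
    batch.create_compute_environment(
        computeEnvironmentName="etl-dedicated",
        type="MANAGED",
        state="ENABLED",
        computeResources={
            "type": "EC2",
            "minvCpus": 0,
            "maxvCpus": 64,
            "instanceTypes": ["c5.xlarge"],  # one instance type only
            "subnets": ["subnet-xxxxxxxx"],
            "securityGroupIds": ["sg-xxxxxxxx"],
            "instanceRole": "ecsInstanceRole",
        },
        serviceRole="arn:aws:iam::123456789012:role/AWSBatchServiceRole",
    )

    # Job definition that requests all 4 vCPUs of a c5.xlarge; since the job
    # consumes the whole instance, Batch cannot pack a second job onto it.
    batch.register_job_definition(
        jobDefinitionName="etl-job",
        type="container",
        containerProperties={
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/etl:latest",
            "vcpus": 4,      # = all vCPUs on a c5.xlarge
            "memory": 7000,  # just under the instance's usable memory in MiB
        },
    )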
BR,
Arthur

Answered 4 years ago

OK, answering my own question: I managed to make this work by creating a compute environment with a specific EC2 instance type and always submitting jobs that fully consume the instance's memory and vCPUs.
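For illustration, a submission matching that pattern might look like the following boto3 sketch; the job, queue, and job definition names are placeholders, and the values assume a 4-vCPU instance:

    import boto3

    batch = boto3.client("batch")

    # Request the whole instance's worth of vCPUs and memory, so the scheduler
    # has no headroom left to place a second job on the same instance.
    batch.submit_job(
        jobName="etl-run",
        jobQueue="etl-queue",
        jobDefinition="etl-job",
        containerOverrides={
            "resourceRequirements": [
                {"type": "VCPU", "value": "4"},
                {"type": "MEMORY", "value": "7000"},  # MiB
            ],
        },
    )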

It would be great if this could be achieved programmatically (with a flag or something).

Regards.

Answered 4 years ago
