Can I force each job to run on a dedicated instance?


Hi folks,

I have some ETL jobs that are already containerized, and as of today we do all the orchestration ourselves with a kind of in-house version of AWS Batch.

I'm considering moving to AWS Batch, but in order to do so I would like each job to run on a dedicated instance due to disk space requirements. If jobs share an instance, we risk running out of disk space in the middle of a job's execution.

From what I could see of how AWS Batch handles compute requirements, the decision on when to launch a new instance is based solely on vCPU and memory. Is there a way to instruct Batch to never place two jobs on the same running instance?

Thanks in advance,

asked 4 years ago · 649 views
2 Answers
Accepted Answer

There's no way to tell AWS Batch to pin a particular job to a dedicated instance.
However, if you configure your compute environment to spawn EC2 instances that all have the same number of vCPUs, then a job that requests as many vCPUs as an instance provides will end up running on an instance dedicated to itself (since the job consumes all of the instance's vCPUs, Batch won't schedule any additional job on it).
You will lose some flexibility in instance choice, but that should fit your requirements.
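For example, such a setup could look roughly like the sketch below (using boto3; the instance type, subnets, security group, roles, image, and names are placeholder assumptions, and the memory request is deliberately a bit below the instance's 16 GiB because the ECS agent reserves part of it):

import boto3

batch = boto3.client("batch")

# Compute environment restricted to a single instance type
# (m5.xlarge: 4 vCPUs, 16 GiB). Subnets, security group and roles are placeholders.
batch.create_compute_environment(
    computeEnvironmentName="etl-dedicated-ce",
    type="MANAGED",
    state="ENABLED",
    computeResources={
        "type": "EC2",
        "minvCpus": 0,
        "maxvCpus": 64,
        "instanceTypes": ["m5.xlarge"],
        "subnets": ["subnet-0123456789abcdef0"],
        "securityGroupIds": ["sg-0123456789abcdef0"],
        "instanceRole": "ecsInstanceRole",
    },
    serviceRole="arn:aws:iam::111122223333:role/AWSBatchServiceRole",
)

# Job definition that asks for all 4 vCPUs of an m5.xlarge; the memory request is
# kept below 16 GiB because the ECS agent reserves some memory for itself.
batch.register_job_definition(
    jobDefinitionName="etl-full-instance",
    type="container",
    containerProperties={
        "image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/my-etl:latest",
        "command": ["python", "run_etl.py"],
        "resourceRequirements": [
            {"type": "VCPU", "value": "4"},
            {"type": "MEMORY", "value": "15000"},
        ],
    },
)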
BR,
Arthur

answered 4 years ago

OK, answering my own question: I managed to make this work by creating a compute environment with a specific EC2 instance type and always submitting jobs that fully consume the instance's memory and vCPUs (see the sketch below).
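A minimal sketch of such a submission, assuming a job queue named etl-queue attached to that compute environment and the hypothetical etl-full-instance job definition from the accepted answer:

import boto3

batch = boto3.client("batch")

# Since the job definition requests the whole instance's vCPUs and memory,
# Batch ends up placing at most one of these jobs per instance.
batch.submit_job(
    jobName="etl-daily-load",
    jobQueue="etl-queue",
    jobDefinition="etl-full-instance",
)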

It would be great if this could be achieved programmatically (like using a flag or something).

Regards.

answered 4 years ago
