Can I force each job to run on a dedicated instance?


Hi folks,

I have some ETL jobs that are already containerized, and today we handle all orchestration ourselves with a kind of in-house AWS Batch equivalent.

I'm considering moving to AWS Batch, but to do so I would like each job to run on a dedicated instance due to disk space requirements. If jobs share an instance, we risk running out of space in the middle of a job's execution.

From what I can see of how AWS Batch handles compute requirements, the decision on when to launch a new instance is based solely on vCPU and memory. Is there a way to instruct Batch to never place two jobs on the same running instance?

Thanks in advance,

Asked 4 years ago · 649 views
2 Answers
Accepted Answer

There's no way to tell AWS Batch to schedule a particular job onto a dedicated instance.
However, if you configure your compute environment to launch EC2 instances that all have the same number of vCPUs, then a job that requests all of those vCPUs will end up on an instance of its own: the job consumes the instance's entire vCPU capacity, so Batch won't schedule any additional job on it.
You will lose some flexibility on the instance-choice front, but that should fit your requirements.
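
For instance, here is a rough boto3 sketch of what that could look like (instance type, subnets, roles, image and resource names below are placeholders, not anything from your setup): pin the compute environment to a single instance type and register a job definition that requests all of its vCPUs.

```python
import boto3

batch = boto3.client("batch")

# Compute environment restricted to one instance type (m5.xlarge: 4 vCPUs, 16 GiB).
# Names, subnets and roles are placeholders.
batch.create_compute_environment(
    computeEnvironmentName="etl-one-job-per-instance",
    type="MANAGED",
    state="ENABLED",
    computeResources={
        "type": "EC2",
        "minvCpus": 0,
        "maxvCpus": 16,                    # up to 4 concurrent m5.xlarge instances
        "instanceTypes": ["m5.xlarge"],
        "subnets": ["subnet-aaaa1111"],
        "securityGroupIds": ["sg-bbbb2222"],
        "instanceRole": "ecsInstanceRole",
    },
    serviceRole="arn:aws:iam::123456789012:role/AWSBatchServiceRole",
)

# Job definition that claims all 4 vCPUs, so Batch cannot co-locate a second job.
# Memory is kept slightly below 16 GiB because the ECS agent and OS reserve part
# of the instance's memory.
batch.register_job_definition(
    jobDefinitionName="etl-dedicated",
    type="container",
    containerProperties={
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/etl:latest",
        "resourceRequirements": [
            {"type": "VCPU", "value": "4"},
            {"type": "MEMORY", "value": "15000"},  # MiB
        ],
    },
)
```

With this setup, maxvCpus effectively caps how many dedicated instances (and therefore concurrent jobs) Batch will run at the same time.
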
BR,
Arthur

Answered 4 years ago

OK, answering my own question: I managed to make this work by creating a compute environment with a single specific EC2 instance type and always submitting jobs that fully consume the instance's memory and vCPUs (see the snippet below).
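
In case it helps someone, here is roughly how such a job gets submitted with boto3 (the queue and job definition names are placeholders for my actual ones, and the values assume a 4-vCPU instance):

```python
import boto3

batch = boto3.client("batch")

# Request the instance's full vCPU count (and nearly all of its memory), so Batch
# places this job on an instance of its own.
batch.submit_job(
    jobName="etl-run-example",
    jobQueue="etl-queue",
    jobDefinition="etl-dedicated",
    containerOverrides={
        "resourceRequirements": [
            {"type": "VCPU", "value": "4"},
            {"type": "MEMORY", "value": "15000"},  # MiB
        ]
    },
)
```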

It would be great if this could be achieved programmatically (e.g., with a flag or something similar).

Regards.

Answered 4 years ago
