There's no way to tell AWS Batch to schedule a particular job to a dedicated instance.
However, if you configure your compute environment to spawn EC2 instances that all have the same number of vCPUs, then a job that requests that many vCPUs will end up on an instance dedicated to itself: the job consumes all of the instance's vCPUs, so Batch won't schedule any additional job on it.
You will lose some flexibility on the instance-choice front, but that should fit your requirements.
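As a minimal sketch of the approach above, the snippet below builds the `containerOverrides` for a Batch `submit_job` call that requests the full capacity of the instance. The instance type (c5.xlarge: 4 vCPUs, 8192 MiB) and the memory headroom value are assumptions for illustration — in practice the ECS agent reserves some instance memory, so a job requesting the literal full amount would never be placed.

```python
# Sketch: pin one AWS Batch job per instance by requesting the instance's
# full capacity, so Batch cannot co-schedule another job on it.
# Sizes below are assumptions for a c5.xlarge; adjust to your instance type.

INSTANCE_VCPUS = 4          # c5.xlarge vCPU count (assumed)
INSTANCE_MEMORY_MIB = 8192  # c5.xlarge memory (assumed)
MEMORY_HEADROOM_MIB = 512   # assumed ECS-agent/OS memory reservation

def full_instance_overrides():
    """Build containerOverrides that consume (almost) the whole instance."""
    return {
        "resourceRequirements": [
            {"type": "VCPU", "value": str(INSTANCE_VCPUS)},
            {"type": "MEMORY",
             "value": str(INSTANCE_MEMORY_MIB - MEMORY_HEADROOM_MIB)},
        ]
    }

# Submitting with these overrides via boto3 would look like
# (queue and job-definition names are placeholders):
#
# import boto3
# batch = boto3.client("batch")
# batch.submit_job(
#     jobName="dedicated-job",
#     jobQueue="my-queue",
#     jobDefinition="my-job-def",
#     containerOverrides=full_instance_overrides(),
# )
```

Keeping the vCPU/memory constants next to the compute environment's instance type makes it harder for the two to drift apart; if they do, jobs either stop filling the instance or stop being schedulable at all.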
OK, answering my own question: I managed to make this work by creating a compute environment with a specific EC2 instance type and always submitting jobs that fully consume the instance's memory and vCPUs.
It would be great if this could be achieved programmatically (with a flag or something).