
CPU & Memory Allocation for Task Definition for EC2 launch type in ECS Cluster


I have created a capacity provider for EC2 instances and set a desired capacity of 2 in my Auto Scaling Group (ASG). This configuration launches two t3.micro instances that are registered under the ECS capacity provider. Now I need to create a task definition for two containers. However, I ran into an issue when allocating the appropriate CPU and memory for the task size, and as a result the task is stuck in the PROVISIONING state.
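For reference, my capacity provider setup looks roughly like the following sketch (the cluster name, provider name, and ASG ARN are placeholders, not my real values):

```python
import boto3

ecs = boto3.client("ecs")

# Register the Auto Scaling Group as an ECS capacity provider
# (all names and the ARN below are placeholders).
ecs.create_capacity_provider(
    name="demo-ec2-capacity-provider",
    autoScalingGroupProvider={
        "autoScalingGroupArn": "arn:aws:autoscaling:us-east-1:111122223333:autoScalingGroup:uuid:autoScalingGroupName/demo-asg",
        "managedScaling": {"status": "ENABLED", "targetCapacity": 100},
        "managedTerminationProtection": "DISABLED",
    },
)

# Attach the capacity provider to the cluster and make it the default.
ecs.put_cluster_capacity_providers(
    cluster="demo-cluster",
    capacityProviders=["demo-ec2-capacity-provider"],
    defaultCapacityProviderStrategy=[
        {"capacityProvider": "demo-ec2-capacity-provider", "weight": 1}
    ],
)
```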

Could you please help me use the CPU and memory of both instances to run a single task with multiple containers across multiple EC2 instances?

Being new to ECS, I have a question: if multiple EC2 instances are registered under an ECS capacity provider, will ECS combine the CPU and memory of all of those instances?

1 Answer

To make use of the CPU and memory of multiple EC2 instances for your containers, you can use ECS capacity providers.

When you register an Auto Scaling Group as a capacity provider for your ECS cluster, ECS can see the combined capacity of all the EC2 instances in that ASG.

You can then create a task definition specifying the CPU and memory your containers need. ECS schedules tasks onto instances registered with the capacity provider that have enough free capacity; each individual task (with all of its containers) runs on one instance, but different tasks can be placed on different instances to fulfill the overall resource requirements.
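As a rough sketch (not a tuned configuration), registering a task definition with a task-level size and per-container reservations could look like this with boto3; the family, image URIs, and CPU/memory values are placeholder assumptions:

```python
import boto3

ecs = boto3.client("ecs")

# Task-level cpu/memory is the budget the whole task must fit into on one
# instance; per-container values divide that budget between the containers.
# Family, images, and sizes below are placeholders.
ecs.register_task_definition(
    family="demo-two-containers",
    requiresCompatibilities=["EC2"],
    networkMode="bridge",
    cpu="1024",    # 1 vCPU for the whole task
    memory="800",  # MiB for the whole task
    containerDefinitions=[
        {
            "name": "service-a",
            "image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/service-a:latest",
            "cpu": 512,                # CPU units reserved for this container
            "memoryReservation": 400,  # soft memory limit in MiB
            "essential": True,
        },
        {
            "name": "service-b",
            "image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/service-b:latest",
            "cpu": 512,
            "memoryReservation": 400,
            "essential": True,
        },
    ],
)
```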

Some key points:

- Make sure the capacity provider ASG has enough instances to meet the task resource needs. With managed scaling enabled, ECS can scale out the ASG when tasks cannot be placed.

- Use placement constraints and strategies as needed to control task placement across instances (see the sketch after this list).

- Tasks do not need to be pinned to specific instances; the scheduler picks an instance with enough free capacity for each task. Note, however, that all containers defined in the same task definition run together on a single instance.

- Instance resources are not literally combined into one pool that a single task can span. Instead, ECS transparently schedules tasks across the registered instances based on their available capacity.

- Tasks remain isolated even when distributed across instances; CPU and memory are not shared between tasks.
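As a minimal sketch of the placement point above, running two copies of a task through the capacity provider and asking the scheduler to spread them across container instances could look like this (the cluster, capacity provider, and task family names are the placeholders from the earlier example):

```python
import boto3

ecs = boto3.client("ecs")

# Launch two tasks via the capacity provider and spread them across
# different container instances (all names are placeholders).
ecs.run_task(
    cluster="demo-cluster",
    taskDefinition="demo-two-containers",
    count=2,
    capacityProviderStrategy=[
        {"capacityProvider": "demo-ec2-capacity-provider", "weight": 1}
    ],
    placementStrategy=[
        {"type": "spread", "field": "instanceId"}
    ],
)
```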

EXPERT
answered 10 months ago
  • Hi Giovanni, thanks for answering the question.

    I'm unable to decide where I should specify the CPU and memory while creating the task definition. Should it be done on the infrastructure requirements tab, or should it be specified on the containers tab?

    Scenario: two t3.micro machines registered to the ECS cluster, and I want to run two Java microservice containers. What should I specify under infrastructure requirements? Should I give the combined value of the two micro machines, i.e. CPU as 2 vCPU and memory as 2 GB? Or should it be done on the containers tab by setting the CPU and memory limits, specifying 1 vCPU and 1 GB of memory for each container?

    I had created a task definition where I specified 1 vCPU and 0.8 GB of memory under infrastructure requirements. This was able to create and run the containers, but it launched both containers on the same machine, which was too much for a micro machine, and the machine crashed. I even tried adding a placement strategy telling it to spread across instance IDs, but it still launched both containers on a single machine. Please provide guidance on this.
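    For concreteness, the container-level option I'm asking about (limits set per container rather than a task-level size) would look roughly like the sketch below; the names, images, and values are placeholders:

```python
import boto3

ecs = boto3.client("ecs")

# Container-level option: no task-level size, each container carries its
# own CPU/memory limits (1 vCPU, 1 GB each). Names/images are placeholders.
ecs.register_task_definition(
    family="java-microservices-container-limits",
    requiresCompatibilities=["EC2"],
    networkMode="bridge",
    containerDefinitions=[
        {
            "name": "service-a",
            "image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/service-a:latest",
            "cpu": 1024,     # 1 vCPU in CPU units
            "memory": 1024,  # 1 GB hard limit in MiB
            "essential": True,
        },
        {
            "name": "service-b",
            "image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/service-b:latest",
            "cpu": 1024,
            "memory": 1024,
            "essential": True,
        },
    ],
)
```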
