When you set an EC2 Capacity Provider reservation target to 100% (which is the value I generally recommend to customers), you're telling ECS to provision only the amount of compute needed to satisfy the current number of running tasks: no more, no less.
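For reference, here is a minimal boto3 sketch of what that configuration looks like. The provider name and Auto Scaling group ARN are placeholders, and I have left managed termination protection disabled to keep the example simple; treat it as an illustration of the 100% target, not a drop-in config.

```python
import boto3

ecs = boto3.client("ecs")

# Create a capacity provider whose managed scaling target is 100%,
# i.e. keep EC2 supply exactly matched to ECS task demand.
ecs.create_capacity_provider(
    name="my-capacity-provider",  # placeholder name
    autoScalingGroupProvider={
        # Placeholder ARN: point this at your own Auto Scaling group.
        "autoScalingGroupArn": "arn:aws:autoscaling:region:account-id:autoScalingGroup:...",
        "managedScaling": {
            "status": "ENABLED",
            "targetCapacity": 100,  # the 100% reservation target discussed above
        },
        "managedTerminationProtection": "DISABLED",
    },
)
```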
The CloudWatch metric CapacityProviderReservation reflects the current ratio of resource demand (i.e., your tasks) to resource supply (i.e., the EC2 instances on hand). When the current value is 100, it means everything is in equilibrium: supply is sufficient to meet demand. If the value goes over 100, it means the Auto Scaler needs to provision more instances, and when the value falls below 100, it means the Auto Scaler can terminate some instances.
So, in a steady state -- even when the number of ECS tasks and EC2 instances is 0 -- the value can be equal to 100. It's perfectly normal and it means "don't change anything."
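Under the hood, the metric is roughly the ratio of instances needed (M) to instances running (N), times 100. The empty-cluster values in the sketch below (100 when nothing is running or needed, 200 when tasks are waiting but there is zero capacity) reflect the documented and observed behavior, not an API contract, so take them as an approximation:

```python
def capacity_provider_reservation(instances_needed: int, instances_running: int) -> int:
    """Approximate the CapacityProviderReservation value ECS publishes to CloudWatch."""
    if instances_running == 0 and instances_needed == 0:
        return 100   # empty cluster in equilibrium: "don't change anything"
    if instances_running == 0 and instances_needed > 0:
        return 200   # tasks waiting but no capacity at all: scale out
    return round(100 * instances_needed / instances_running)

print(capacity_provider_reservation(0, 0))  # 100 -> steady state, no action
print(capacity_provider_reservation(1, 0))  # 200 -> scale out from zero
print(capacity_provider_reservation(2, 2))  # 100 -> supply matches demand
print(capacity_provider_reservation(1, 2))  # 50  -> excess supply, scale in
```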
Thank you for the clarification. So there is no problem with my config: it is normal that when I start a task, CapacityProviderReservation jumps to 200 and two instances are launched, and after a couple of minutes it returns to 100 and only the instance that is actually needed is kept. It is a pity that we have to pay for about 15 minutes of usage for an instance that is not needed each time we run a task.