Questions tagged with Amazon EC2 Auto Scaling
Content language: English
Sort by most recent
I have an Auto Scaling Group using Lifecycle Hooks; the ASG is hooked up to an Elastic Load Balancer as well. When I perform an instance refresh on the ASG, the instances complete their launching lifecycle action relatively quickly and enter the `InService` state. However, the instance refresh then pauses for about 90 seconds after each instance enters the `InService` state, and the instance refresh status reason reads as follows:
```
Waiting for remaining instances to be available. For example: i-XXXXXXXXXXXX has insufficient data to evaluate its health with Elastic Load Balancing.
```
Both the health check grace period and instance warmup are set to zero, and the associated target group is configured to consider instances healthy after 2 successful health checks with a 10-second interval. Given that, I'd expect to spend at most 20 seconds waiting for ELB health data, not the nearly 90 seconds I observe in practice. How can I configure my ASG to proceed with the next instance replacement immediately after the previously-replaced instance enters the `InService` state?
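For reference, this is roughly how I trigger the refresh, with the warmup pinned to 0 explicitly (the ASG name is a placeholder; the flags follow the `aws autoscaling` CLI):

```shell
# Sketch: start an instance refresh with an explicit zero warmup.
# "my-asg" is a placeholder; adjust MinHealthyPercentage to taste.
aws autoscaling start-instance-refresh \
  --auto-scaling-group-name my-asg \
  --preferences '{"InstanceWarmup": 0, "MinHealthyPercentage": 90}'
```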
I have a question. I'm running my entire infrastructure on EC2. To set up Auto Scaling, the first step is to create an AMI. But after creating the image, how should I capture and save live data from users? Auto Scaling is working fine, but how should I persist the data? I'm new to this; if anyone has an idea, please help.
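To make the question concrete: since instances in an Auto Scaling group come and go, I understand state usually has to live off-instance (S3, EFS, or a database). Is a periodic copy like the sketch below the kind of approach people use? (The bucket name and data path are hypothetical placeholders.)

```shell
# Sketch: keep user data off the instance by syncing it to S3 on a schedule.
# "my-app-user-data" and /var/app/data are hypothetical placeholders.
aws s3 sync /var/app/data "s3://my-app-user-data/$(hostname)/"
```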
I am trying to use ECS together with an EC2 Auto Scaling group (ASG) and managed scaling.
- AutoScalingGroupProvider has ManagedScaling `ENABLED` and TargetCapacity is set to 100.
- ASG has MinSize=0 and MaxSize=5.
When I create a service in the cluster (service DesiredCount=1), the service fails to place the task with the following error:
```
<<serviceName>> was unable to place a task because no container instance met all of its requirements. Reason: No Container Instances were found in your cluster.
```
I would expect ECS to automatically scale the ASG (add an instance) and place the task on the new instance. If I manually change the ASG's DesiredCapacity to 1, the service's task gets deployed. Shouldn't managed scaling take care of this? Can someone tell me what I'm doing wrong? Perhaps I'm missing a permission somewhere.
NOTE: I am using CloudFormation to provision my infrastructure.
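For reference, the capacity provider I create is roughly of this shape (shown via the CLI here; in my stack it's the equivalent `AWS::ECS::CapacityProvider` resource, and all names and the ASG ARN are placeholders). My understanding is that the provider must also be associated with the cluster, or named in the service's capacity provider strategy:

```shell
# Sketch: capacity provider with managed scaling (placeholders throughout).
aws ecs create-capacity-provider \
  --name my-capacity-provider \
  --auto-scaling-group-provider '{
    "autoScalingGroupArn": "arn:aws:autoscaling:REGION:ACCOUNT:autoScalingGroup:...",
    "managedScaling": {"status": "ENABLED", "targetCapacity": 100}
  }'

# Associate it with the cluster as (part of) the default strategy.
aws ecs put-cluster-capacity-providers \
  --cluster my-cluster \
  --capacity-providers my-capacity-provider \
  --default-capacity-provider-strategy capacityProvider=my-capacity-provider,weight=1
```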
I am deploying to EC2 Auto Scaling instances through CodeDeploy.
I was deploying without a problem, but suddenly I got an error saying "CodeDeploy agent was not able to receive the lifecycle event. Check the CodeDeploy agent logs on your host and make sure the agent is running and can connect to the CodeDeploy server".
When I investigated the cause, I found that the CodeDeploy agent was not installed or running on the EC2 instance. After installing and starting it, deploying again worked normally.
There is nothing related to installing the CodeDeploy agent in my deployment process. What caused this sudden error, and how can I deal with it?
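In case it helps, this is a minimal sketch of the user-data I'm considering so the agent is installed and started at boot (following the documented S3 install path for Amazon Linux; the region is hard-coded as an example):

```shell
#!/bin/bash
# Sketch: install and start the CodeDeploy agent on Amazon Linux.
# Replace "us-east-1" with your instance's region in both places.
yum install -y ruby wget
cd /home/ec2-user
wget https://aws-codedeploy-us-east-1.s3.us-east-1.amazonaws.com/latest/install
chmod +x ./install
./install auto
systemctl status codedeploy-agent
```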
Hello everyone! I am currently working on a multiplayer game. While testing online matches using c5.xlarge fleets, I found that, at least in the us-east-1 region, the maximum number of instances for that instance type is 15. We know that AWS allows us to request a quota increase for it.
Example case:
Assume we have 2 active fleets. What happens if both fleets are full and there are players wanting to join a game session? We use FlexMatch matchmaking, and we know that a matchmaking ticket can wait until a game session becomes available for a player. But how could we create new fleets based on player demand, so that a game session is always available?
My questions are:
1. Is there a way to automatically create additional fleets when needed based on some metrics?
2. Does GameLift have an auto scaler for fleets or could we use a similar AWS service for that purpose, like creating new fleets when needed?
3. Or are there any recommendations on how to solve this requirement?
Thank you everyone for your help beforehand.
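To make question 2 concrete: as far as I can tell, GameLift's built-in auto scaling adds or removes instances *within* a fleet rather than creating new fleets. This is the kind of target-based policy I mean, which keeps a buffer of available game sessions (the fleet ID is a placeholder):

```shell
# Sketch: GameLift target-based scaling within one fleet.
# Keeps ~10% of game session slots free; fleet ID is a placeholder.
aws gamelift put-scaling-policy \
  --fleet-id fleet-00000000-0000-0000-0000-000000000000 \
  --name keep-10pct-available \
  --policy-type TargetBased \
  --metric-name PercentAvailableGameSessions \
  --target-configuration TargetValue=10
```

That still leaves my original question of scaling the *number of fleets* open.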
Hello!
I want to create an app that requires a lot of computing power (an API that generates images with Stable Diffusion). So I'll use EC2 instances to do the calculations. The entry point of my back-end will be an Amazon API Gateway, which will only handle a few routes (around 3), each with a very consistent (and known) workload. The number of user requests could vary greatly in a (relatively) short period of time (up and down).
What's the best (and most cost-effective) way to scale this workload? I looked at load balancers, but I didn't find a good way to use one for this purpose. I was thinking about creating an SQS queue to store requests, and scaling up my EC2 instances when too many requests stack up. Is that a good idea? If so, what's the best way to do it?
I'm all ears! Thanks in advance.
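To sketch the SQS idea (all names are placeholders, and the `BacklogPerInstance` custom metric would have to be published separately, e.g. from a small script or Lambda that divides queue depth by the number of running instances):

```shell
# Sketch: target tracking on an SQS-backlog custom metric.
# "sd-workers-asg", "MyApp"/"BacklogPerInstance" and the target of
# 10 messages per instance are all placeholder assumptions.
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name sd-workers-asg \
  --policy-name sqs-backlog-target \
  --policy-type TargetTrackingScaling \
  --target-tracking-configuration '{
    "CustomizedMetricSpecification": {
      "MetricName": "BacklogPerInstance",
      "Namespace": "MyApp",
      "Statistic": "Average"
    },
    "TargetValue": 10
  }'
```

Would something of this shape be the recommended pattern here?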
I have almost finished converting all of my Auto Scaling groups from launch configurations to launch templates, but I am stuck on one Auto Scaling group in eu-north-1.
When I click the "Switch to launch template" link, I get the message "We encountered an unexpected error. Please refresh the page and try again." and a JavaScript error in the browser console (in both Firefox and Chromium):
"TypeError: this.props.createASGReq.LaunchTemplate is undefined"
I have been able to convert all other Auto Scaling groups, including one that is also in eu-north-1.
Is this a known bug, and if so, is there a way to circumvent it without disrupting my services?
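A CLI-based route I'm considering as a workaround: create a launch template mirroring the launch configuration's settings, then attach it to the group directly (names and version are placeholders; as far as I know this call does not replace running instances by itself):

```shell
# Sketch: attach an existing launch template to the ASG from the CLI,
# bypassing the broken console flow. Names are placeholders.
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name my-asg \
  --launch-template 'LaunchTemplateName=my-template,Version=$Latest'
```

Can anyone confirm whether this is safe for an in-service group?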
I tried setting a policy to apply auto scaling.
However, it was difficult to choose values without any data to go on.
So I made rough estimates: I set the CPU or memory target to about 10% below the maximum, set the maximum task count to about twice the usual number, and set the cooldown to the deployment time plus about 1 minute.
I configured it this way, but I wonder if there are any reference materials or standards for setting such a policy.
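To make it concrete, the policy I described is roughly of this shape (cluster and service names are placeholders, and the numbers are my rough guesses from above, not recommendations):

```shell
# Sketch: ECS service target tracking with my guessed values.
# 90 = "about 10% below the maximum"; cooldowns are rough estimates.
aws application-autoscaling put-scaling-policy \
  --service-namespace ecs \
  --scalable-dimension ecs:service:DesiredCount \
  --resource-id service/my-cluster/my-service \
  --policy-name cpu-target \
  --policy-type TargetTrackingScaling \
  --target-tracking-scaling-policy-configuration '{
    "PredefinedMetricSpecification": {
      "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
    },
    "TargetValue": 90,
    "ScaleOutCooldown": 60,
    "ScaleInCooldown": 300
  }'
```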
I have an ECS cluster with a capacity provider backed by EC2 container instances and managed autoscaling. The autoscaling group permits scaling to 0 when no tasks are running/requested.
When a task is requested and there are no currently running instances, I'm finding the Auto Scaling group provisions two instances when one would have been enough to fulfill the request. Is anybody else seeing this behaviour? Is it expected? This only happens when the ASG desired count is 0 - i.e. there are no requested tasks and no container instances running in the cluster.
Sometimes after I trigger a deployment of an ECS service, the new task set is stuck with 1 desired task and 0 pending. There are no new ECS events (the last one is "service xxx has reached a steady state."). Creating new deployments does not help; it just replaces the primary task set, which also gets stuck at 1 desired and 0 pending.

The service is using EC2 capacity providers and the ECS deployment controller. My settings are minimum healthy percent 100% and maximum percent 200%. There is 1 task running prior to the deployment. There are multiple container instances available, and the agent logs on the instances do not show anything unusual. CloudTrail does not show any failed calls for the ECS service.
Changing the desired count from 1 to 2 immediately creates 2 pending tasks.

Is there any extra information I can find? Is it possible that this is caused by a bug, given that there is no trace in the ECS service events?
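For reference, these are the calls I've been using to dig for more detail (cluster/service names and the task ARN are placeholders):

```shell
# Sketch: inspect the stuck deployment's task sets and recent events.
aws ecs describe-services --cluster my-cluster --services my-service \
  --query 'services[0].{deployments:deployments,events:events[0:5]}'

# Stopped tasks keep their stoppedReason for a while after stopping.
aws ecs list-tasks --cluster my-cluster --desired-status STOPPED
aws ecs describe-tasks --cluster my-cluster --tasks <task-arn> \
  --query 'tasks[].{stoppedReason:stoppedReason,containers:containers[].reason}'
```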
I started to follow the AWS Skill Builder training for AWS Solutions Architect Professional. But I'm not sure whether there is any way I can get the scripts and files they are using, so that I can try these labs on my own. Any ideas?
Hi guys,
I've found that it is possible to use metric math in a target tracking scaling policy for Amazon EC2 Auto Scaling (https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-target-tracking-metric-math.html).
I've tried to use the same config in Application Auto Scaling for the ECS service namespace, without success.
This is my config:
```
{
  "CustomizedMetricSpecification": {
    "Metrics": [
      {
        "Label": "ack_total",
        "Id": "m1",
        "MetricStat": {
          "Metric": {
            "MetricName": "rabbitmq_queue_messages_ack_total[switch_events]",
            "Namespace": "Prometheus"
          },
          "Stat": "Average"
        },
        "ReturnData": false
      },
      {
        "Label": "published_total",
        "Id": "m2",
        "MetricStat": {
          "Metric": {
            "MetricName": "rabbitmq_queue_messages_published_total[switch_events]",
            "Namespace": "Prometheus"
          },
          "Stat": "Average"
        },
        "ReturnData": false
      },
      {
        "Label": "Relation (ack_total + 1) / (published_total + 1)",
        "Id": "e1",
        "Expression": "(m1 + 1)/(m2 + 1)",
        "ReturnData": true
      }
    ]
  },
  "TargetValue": 1.0
}
```
This is the command I've used:
```
aws application-autoscaling put-scaling-policy --service-namespace ecs --policy-name rabbitmq-pub-ack-scaling-policy --scalable-dimension ecs:service:DesiredCount --resource-id "service/XXXXX/events" --policy-type TargetTrackingScaling --target-tracking-scaling-policy-configuration file://alarm-definition.json
```
I'm getting this error:
```
Parameter validation failed:
Missing required parameter in TargetTrackingScalingPolicyConfiguration.CustomizedMetricSpecification: "MetricName"
Missing required parameter in TargetTrackingScalingPolicyConfiguration.CustomizedMetricSpecification: "Namespace"
Missing required parameter in TargetTrackingScalingPolicyConfiguration.CustomizedMetricSpecification: "Statistic"
Unknown parameter in TargetTrackingScalingPolicyConfiguration.CustomizedMetricSpecification: "Metrics", must be one of: MetricName, Namespace, Dimensions, Statistic, Unit
```
Based on the doc available at https://docs.aws.amazon.com/autoscaling/application/APIReference/API_CustomizedMetricSpecification.html, it seems impossible to use a custom metric with metric math here.
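A workaround I'm considering (not yet verified): step scaling can be driven by a CloudWatch alarm, and `put-metric-alarm` does accept metric math, so the expression could live in the alarm instead of the policy (the alarm name, period, and thresholds below are placeholder guesses):

```shell
# Sketch: move the metric-math expression into a CloudWatch alarm,
# then attach it to a StepScaling policy instead of target tracking.
aws cloudwatch put-metric-alarm \
  --alarm-name rabbitmq-pub-ack-ratio-low \
  --evaluation-periods 3 \
  --comparison-operator LessThanThreshold \
  --threshold 1.0 \
  --metrics '[
    {"Id":"m1","ReturnData":false,"MetricStat":{"Period":60,"Stat":"Average",
      "Metric":{"Namespace":"Prometheus",
        "MetricName":"rabbitmq_queue_messages_ack_total[switch_events]"}}},
    {"Id":"m2","ReturnData":false,"MetricStat":{"Period":60,"Stat":"Average",
      "Metric":{"Namespace":"Prometheus",
        "MetricName":"rabbitmq_queue_messages_published_total[switch_events]"}}},
    {"Id":"e1","Expression":"(m1 + 1)/(m2 + 1)","ReturnData":true}
  ]'
```

The alarm's actions would then reference a StepScaling policy created with `put-scaling-policy`. Has anyone made this pattern work for ECS?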
Best regards.