Questions tagged with Amazon EC2 Auto Scaling
The deployment failed because a non-empty field was discovered on your Auto Scaling group that Code Deploy does not currently support copying
We are, as of an hour ago, getting the following error in all of our environments (multiple AWS accounts). Everything was working fine 6 hours ago, and no changes to the configuration of CodeDeploy or the Auto Scaling groups have been made today.

```
The deployment failed because a non-empty field was discovered on your Auto Scaling group that Code Deploy does not currently support copying. Unsupported fields: [DescribeAutoScalingGroupsResponse.DescribeAutoScalingGroupsResult.AutoScalingGroups.member.TrafficSources.member.Type]
```

Are there any issues here that AWS is aware of? If not, how do I see what the value of that field is?
I have a CloudFormation template that creates an Auto Scaling group (`AWS::AutoScaling::AutoScalingGroup`) using a launch configuration (`AWS::AutoScaling::LaunchConfiguration`). I'm trying to convert it to use a launch template (`AWS::EC2::LaunchTemplate`). I've updated the CloudFormation template, creating a new launch template resource. I created a CloudFormation change set and see that the launch configuration will be deleted, a launch template will be created, and the Auto Scaling group will be updated. I used the same parameters as the existing CloudFormation template. The error occurs when I attempt to execute the change set:

```
You must use a valid fully-formed launch template. The parameter groupName cannot be used with the parameter subnet
```

I'm supplying the security group ID via a parameter:

```
SecurityGroups:
  - !Ref InstanceSecurityGroup
```

If I switch to using the group name, I get a different error regarding default VPCs. Looking at a partial execution, I see that the security group ID is listed in the launch template under "Security Groups" rather than under "Security Group IDs", which is where I would expect it. How can I update my CloudFormation template to use a launch template?
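A minimal CloudFormation sketch of the fix described in the question above, assuming hypothetical logical IDs (`MyLaunchTemplate`, `MyAutoScalingGroup`) and parameters (`AmiId`, `SubnetIds`) not present in the original template: inside `LaunchTemplateData`, the `SecurityGroups` property is interpreted as security group *names*, which is what triggers the `groupName`/`subnet` conflict; `SecurityGroupIds` takes the group IDs instead.

```yaml
Resources:
  MyLaunchTemplate:                        # hypothetical logical ID
    Type: AWS::EC2::LaunchTemplate
    Properties:
      LaunchTemplateData:
        ImageId: !Ref AmiId                # assumed parameter
        InstanceType: t3.micro             # assumed instance type
        # Use SecurityGroupIds (group IDs), not SecurityGroups (group names),
        # when the instances launch into a VPC subnet.
        SecurityGroupIds:
          - !Ref InstanceSecurityGroup

  MyAutoScalingGroup:                      # hypothetical logical ID
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      MinSize: "1"
      MaxSize: "3"
      VPCZoneIdentifier: !Ref SubnetIds    # assumed parameter (list of subnets)
      LaunchTemplate:
        LaunchTemplateId: !Ref MyLaunchTemplate
        Version: !GetAtt MyLaunchTemplate.LatestVersionNumber
```

Referencing `LatestVersionNumber` keeps the ASG pointed at the newest template version after stack updates.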
I have an Auto Scaling Group using Lifecycle Hooks; the ASG is hooked up to an Elastic Load Balancer as well. When I perform an instance refresh on the ASG, the instances complete their launching lifecycle action relatively quickly and enter the `InService` state. However, the instance refresh then pauses for about 90 seconds after each instance enters the `InService` state, and the instance refresh status reason reads as follows:

```
Waiting for remaining instances to be available. For example: i-XXXXXXXXXXXX has insufficient data to evaluate its health with Elastic Load Balancing.
```

Both the health check grace period and instance warmup are set to zero, and the associated target group is configured to consider instances healthy after 2 successful health checks with a 10-second interval. Given that, I'd expect to spend at most 20 seconds waiting for ELB health data, not the nearly 90 seconds I observe in practice. How can I configure my ASG to proceed with the next instance replacement immediately after the previously-replaced instance enters the `InService` state?
I have a question. I'm running my entire infrastructure on EC2. To enable Auto Scaling, the first step is to create an AMI. But after creating the image, how should I capture and save live user data? Auto Scaling itself is working fine, but how do I persist the data? I'm new to this, so if anyone has an idea, please help.
I am trying to use ECS together with an EC2 Auto Scaling group (ASG) and managed scaling.

- The AutoScalingGroupProvider has ManagedScaling `ENABLED` and TargetCapacity set to 100.
- The ASG has MinSize=0 and MaxSize=5.

When I create a service in the cluster (service DesiredCount=1), the service fails to place the task with the following error:

```
<<serviceName>> was unable to place a task because no container instance met all of its requirements. Reason: No Container Instances were found in your cluster.
```

I would expect ECS to automatically scale the ASG (add an instance) and place the task on the new instance. If I manually change the DesiredCapacity of the ASG to 1, the service (task) gets deployed. Shouldn't managed scaling take care of this? Can someone tell me what I am doing wrong? Maybe I need to add some permissions somewhere. NOTE: I am using CloudFormation to provision my infrastructure.
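A hedged CloudFormation sketch of the setup described above, using hypothetical logical IDs (`MyCapacityProvider`, `MyAsg`, `MyCluster`) not taken from the question. One common cause of managed scaling never firing is that the service launches with `LaunchType: EC2` instead of a capacity provider strategy; ECS only emits the scaling signal when tasks are placed through a capacity provider.

```yaml
Resources:
  MyCapacityProvider:                      # hypothetical logical ID
    Type: AWS::ECS::CapacityProvider
    Properties:
      AutoScalingGroupProvider:
        AutoScalingGroupArn: !Ref MyAsg    # assumed ASG resource
        ManagedScaling:
          Status: ENABLED
          TargetCapacity: 100
        ManagedTerminationProtection: DISABLED

  MyClusterCPAssociation:
    Type: AWS::ECS::ClusterCapacityProviderAssociations
    Properties:
      Cluster: !Ref MyCluster              # assumed cluster resource
      CapacityProviders:
        - !Ref MyCapacityProvider
      # Services that omit LaunchType fall back to this default strategy,
      # which is what lets managed scaling grow the ASG from zero.
      DefaultCapacityProviderStrategy:
        - CapacityProvider: !Ref MyCapacityProvider
          Weight: 1
```

If the service template sets `LaunchType: EC2` explicitly, removing it (or setting a `CapacityProviderStrategy` on the service) is worth trying first.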
I am deploying to EC2 Auto Scaling instances through CodeDeploy. Deployments were working without a problem, but suddenly I got an error saying "CodeDeploy agent was not able to receive the lifecycle event. Check the CodeDeploy agent logs on your host and make sure the agent is running and can connect to the CodeDeploy server". When I investigated the cause, the CodeDeploy agent was not installed or running on the EC2 instance. After installing and starting it, deploying again worked normally. There is nothing related to installing the CodeDeploy agent in my deployment process. What is the cause of this sudden error, and how can I deal with it?
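One way to make sure every scaled-out instance has the agent, regardless of which AMI it launched from, is to install it at boot from user data. A minimal sketch as a launch template fragment, assuming Amazon Linux and a hypothetical logical ID (`MyLaunchTemplate`); the S3 URL follows the documented regional bucket pattern for the CodeDeploy agent installer:

```yaml
  MyLaunchTemplate:                        # hypothetical logical ID
    Type: AWS::EC2::LaunchTemplate
    Properties:
      LaunchTemplateData:
        ImageId: !Ref AmiId                # assumed parameter
        InstanceType: t3.micro             # assumed instance type
        UserData:
          Fn::Base64: !Sub |
            #!/bin/bash
            # Install and start the CodeDeploy agent at boot so new
            # Auto Scaling instances can receive lifecycle events.
            yum install -y ruby wget
            cd /home/ec2-user
            wget https://aws-codedeploy-${AWS::Region}.s3.${AWS::Region}.amazonaws.com/latest/install
            chmod +x ./install
            ./install auto
            systemctl enable --now codedeploy-agent
```

Alternatively, baking the agent into the AMI used by the Auto Scaling group avoids the boot-time download entirely.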
Hello everyone! I am currently working on a multiplayer game. While testing online matches using c5.xlarge fleets, I found that, at least in the us-east-1 region, the maximum number of instances for that instance type is 15; we know that AWS allows us to request a quota increase for it. Example case: assume we have 2 active fleets. What happens if both fleets are full and there are players wanting to join a game session? We use FlexMatch matchmaking, and we know that a matchmaking ticket can wait until a game session is available for a player, but how could we create new fleets based on player demand so that there will always be a game session available? My questions are:

1. Is there a way to automatically create additional fleets when needed, based on some metrics?
2. Does GameLift have an auto scaler for fleets, or could we use a similar AWS service for that purpose, such as creating new fleets when needed?
3. Are there any other recommendations for solving this requirement?

Thank you everyone for your help.
Hello! I want to create an app that requires a lot of computing power (an API that generates images with Stable Diffusion), so I'll use EC2 instances to do the calculations. The entry point of my back end will be an Amazon API Gateway that only handles a few request types (around 3), each with a very consistent (and known) workload. The number of user requests could vary greatly (up and down) over a relatively short period of time. What's the best (and most cost-effective) way to scale this workload? I looked at load balancers, but I didn't find a good way to use one for this purpose. I was thinking about creating an SQS queue to store requests and scaling up my EC2 instances when too many requests stack up. Is that a good idea? If so, what's the best way to do it? I'm all ears! Thanks in advance.
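The queue-based approach described above can be sketched in CloudFormation as a CloudWatch alarm on the SQS backlog driving a simple scaling policy. The logical IDs (`RequestQueue`, `WorkerAsg`), the 10-message threshold, and the cooldown are all assumptions for illustration, not values from the question:

```yaml
Resources:
  # Fires when the visible backlog stays above the threshold,
  # signalling that workers are falling behind.
  QueueDepthAlarm:
    Type: AWS::CloudWatch::Alarm
    Properties:
      Namespace: AWS/SQS
      MetricName: ApproximateNumberOfMessagesVisible
      Dimensions:
        - Name: QueueName
          Value: !GetAtt RequestQueue.QueueName  # assumed queue resource
      Statistic: Sum
      Period: 60
      EvaluationPeriods: 2
      Threshold: 10                              # assumed backlog threshold
      ComparisonOperator: GreaterThanThreshold
      AlarmActions:
        - !Ref ScaleOutPolicy

  # Adds one worker instance per alarm breach, with a cooldown
  # to let the new instance start draining the queue.
  ScaleOutPolicy:
    Type: AWS::AutoScaling::ScalingPolicy
    Properties:
      AutoScalingGroupName: !Ref WorkerAsg       # assumed ASG resource
      PolicyType: SimpleScaling
      AdjustmentType: ChangeInCapacity
      ScalingAdjustment: 1
      Cooldown: "300"
```

A mirror-image alarm and policy (scale in when the backlog is near zero) completes the loop; for a known per-request workload, a target on "backlog per instance" is often a better metric than raw queue depth.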
I tried setting a policy to apply autoscaling. However, it was difficult to choose values without any historical data, so I made rough guesses: I set the CPU/memory target at roughly 10% below the maximum, set the maximum task count to about twice the normal count, and set the cooldown to roughly the deployment time plus 1 minute. I configured it like this, but I wonder if there are any reference materials or standard guidelines for setting such a policy.
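For an ECS service, the kind of policy described above is usually expressed as target tracking via Application Auto Scaling. A minimal sketch, assuming hypothetical parameters (`ClusterName`, `ServiceName`) and illustrative numbers (a 70% CPU target and 2–4 tasks are common starting points, not values from the question):

```yaml
Resources:
  ServiceScalingTarget:
    Type: AWS::ApplicationAutoScaling::ScalableTarget
    Properties:
      ServiceNamespace: ecs
      ScalableDimension: ecs:service:DesiredCount
      ResourceId: !Sub service/${ClusterName}/${ServiceName}  # assumed parameters
      MinCapacity: 2
      MaxCapacity: 4            # e.g. roughly double the steady-state count

  ServiceScalingPolicy:
    Type: AWS::ApplicationAutoScaling::ScalingPolicy
    Properties:
      PolicyName: cpu-target-tracking
      PolicyType: TargetTrackingScaling
      ScalingTargetId: !Ref ServiceScalingTarget
      TargetTrackingScalingPolicyConfiguration:
        PredefinedMetricSpecification:
          PredefinedMetricType: ECSServiceAverageCPUUtilization
        TargetValue: 70          # assumed target; tune against observed load
        ScaleOutCooldown: 60
        ScaleInCooldown: 120
```

With target tracking, the service continuously adjusts the desired count toward the target, so the guessed thresholds matter less than they would with manual step scaling; the numbers can then be refined once real utilization data accumulates.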
I have an ECS cluster with a capacity provider backed by EC2 container instances and managed autoscaling. The autoscaling group permits scaling to 0 when no tasks are running or requested. When a task is requested and there are no currently running instances, I'm finding that the autoscaling group provisions two instances where one would have been enough to fulfill the request. Is anybody else seeing this behaviour? Is it expected? This only happens when the ASG desired count is 0, i.e. there are no requested tasks and no container instances running in the cluster.
Sometimes after I trigger a deployment of an ECS service, the new task set is stuck with 1 desired task and 0 pending. There are no new ECS events (the last one is "service xxx has reached a steady state."). Creating new deployments does not help; each one just replaces the primary task set, which also gets stuck at 1 desired and 0 pending. ![Stuck state](/media/postImages/original/IMbLhbZ2hmQlCU982yEEpjdA) The service is using EC2 capacity providers and the ECS deployment controller. My settings are min 100% healthy, max 200%. There is 1 task running prior to deployment. There are multiple container instances available, and agent logs on the instances do not show anything unusual. CloudTrail does not show any failed calls for the ECS service. Changing the desired count from 1 to 2 immediately creates 2 pending tasks. ![Unstuck](/media/postImages/original/IMgvsjFA8pSzCOadxKnpnmAg) Is there any extra information I can find? Is it possible that this is caused by a bug, since there is no trace in the ECS service events?