Questions tagged with AWS Auto Scaling
**Environment**

* ECS on Fargate
* Scaling policy ![Enter image description here](/media/postImages/original/IMynRiZIPgSvS-APSgV7VUig)

I got a scale-in alarm like this: ![Enter image description here](/media/postImages/original/IMzIBcGhwqQBWiTD6KyyLZbg) But ECS has not scaled in since 18:43: ![Enter image description here](/media/postImages/original/IMVCikRhtUQOiOM6Dy2_7bDg) ![Enter image description here](/media/postImages/original/IMihLesHBIQnebIl7snFwd1g) I am not using the scale-in protection option. If you have experienced a similar problem, please help me find the reason.
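A first diagnostic step is to ask Application Auto Scaling why scale-in did not run. Below is a minimal sketch of the request parameters (the cluster and service names are placeholders, not from the question); each returned activity carries a `StatusCode` and `StatusMessage` explaining whether a scale-in was attempted and why it succeeded or failed.

```python
# Sketch: parameters for Application Auto Scaling's
# DescribeScalingActivities call for an ECS service. The cluster and
# service names are hypothetical placeholders.
request = {
    "ServiceNamespace": "ecs",
    # For an ECS service, ResourceId is "service/<cluster>/<service>".
    "ResourceId": "service/my-cluster/my-service",
    "ScalableDimension": "ecs:service:DesiredCount",
    "MaxResults": 20,
}
# With credentials configured, this would be sent as:
#   boto3.client("application-autoscaling").describe_scaling_activities(**request)
print(request["ResourceId"])
```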
I have a question. I'm running my entire infrastructure on EC2. To autoscale it, the first step is to create an AMI. But after creating the image, how should I capture and save live data from users? Autoscaling is working fine, but how should I save the data? I'm new to this, so if anyone has an idea, please help.
I am trying to use ECS together with an EC2 Auto Scaling group (ASG) and managed scaling.

- The AutoScalingGroupProvider has ManagedScaling `ENABLED` and TargetCapacity set to 100.
- The ASG has MinSize=0 and MaxSize=5.

When I create a service in the cluster (service DesiredCount=1), the service fails to place the task with the following error:

```
<<serviceName>> was unable to place a task because no container instance met all of its requirements. Reason: No Container Instances were found in your cluster.
```

I would expect ECS to automatically scale the ASG (add an instance) and place the task on the new instance. If I manually change the DesiredCapacity of the ASG to 1, the service (task) gets deployed. Shouldn't managed scaling take care of this? Can someone guide me on what I am doing wrong? Maybe I need to add some permissions somewhere. NOTE: I am using CloudFormation to provision my infrastructure.
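For comparison, here is a minimal CloudFormation sketch of the relevant pieces (resource names are placeholders, not taken from the question). The detail that most often causes this symptom: managed scaling only scales an ASG up from zero when the service is launched through a `CapacityProviderStrategy`; a service created with `LaunchType: EC2` bypasses the capacity provider and just fails placement.

```yaml
# Sketch, with placeholder logical names (MyAsg, MyCluster, MyTaskDef).
CapacityProvider:
  Type: AWS::ECS::CapacityProvider
  Properties:
    AutoScalingGroupProvider:
      AutoScalingGroupArn: !Ref MyAsg
      ManagedScaling:
        Status: ENABLED
        TargetCapacity: 100

ClusterCPAssociation:
  Type: AWS::ECS::ClusterCapacityProviderAssociations
  Properties:
    Cluster: !Ref MyCluster
    CapacityProviders: [!Ref CapacityProvider]
    DefaultCapacityProviderStrategy:
      - CapacityProvider: !Ref CapacityProvider
        Weight: 1

Service:
  Type: AWS::ECS::Service
  Properties:
    Cluster: !Ref MyCluster
    DesiredCount: 1
    TaskDefinition: !Ref MyTaskDef
    # Use a capacity provider strategy instead of LaunchType: EC2,
    # otherwise managed scaling will not add instances for this service.
    CapacityProviderStrategy:
      - CapacityProvider: !Ref CapacityProvider
        Weight: 1
```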
Looking to build a 3-tier web application (not serverless) for lab purposes. I plan to leverage services in the following order: User > Route 53 > ALB > EC2 > RDS, and will integrate S3 into the architecture as well. I'm tired of the boring default template I can use out of the box with WordPress. So I was wondering if you could point me to a not-so-boring web app I can use in my architecture, for example a ride-sharing, dental, or any other interesting template. Thanks in advance :-)
Hi, I was doing a POC of AWS DevOps Guru. Per the documentation, it supports analyzing EC2 instances hosted by an ASG, so I launched an EC2 instance using an Auto Scaling group and enabled AWS DevOps Guru for the instance by resource tag. According to the documentation, DevOps Guru detects anomalies automatically and generates insights. We have no visibility into how DevOps Guru analyzes the EC2 instance, and no way to set a threshold for generating insights. So, to test it, I put a 99% CPU load on the EC2 instance. The metrics showed CPU utilization at 99%, and the manual stress lasted 4-6 hours, but DevOps Guru generated no insight while CPU stayed at 99-100%. We tried this daily for 2 weeks with the same result: no insight generated. We stressed the EC2 instance using the 'stress' package (stress demo: https://www.cyberciti.biz/faq/stress-test-linux-unix-server-with-stress-ng/). So how can we detect an EC2 anomaly via DevOps Guru when CPU is at 99%? It is not working as described in the DevOps Guru documentation. Any help will be appreciated. ![DevOps Guru analyze resource](/media/postImages/original/IMGtkqGraOSwOBf9eOJE-D3Q)![CPU Utilization 99%](/media/postImages/original/IM06nz_UmxT36gIgHdF09H1g)
Hello everyone! I am currently working on a multiplayer game. While testing online matches using c5.xlarge fleets, I found that, at least in the us-east-1 region, the maximum number of instances for that instance type is 15; we know that AWS allows us to request a quota increase for it. Example case: assuming we have 2 active fleets, what happens if both fleets are full and there are players wanting to join a game session? We use FlexMatch matchmaking, and we know that a matchmaking ticket can wait until a game session is available for a player, but how could we create new fleets based on player demand so there is always a game session available? My questions are:

1. Is there a way to automatically create additional fleets when needed, based on some metric?
2. Does GameLift have an auto scaler for fleets, or could we use a similar AWS service for that purpose, like creating new fleets when needed?
3. Are there any other recommendations for solving this requirement?

Thank you all in advance for your help.
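For context on question 2: GameLift's built-in target-based auto scaling works *within* a fleet, adjusting the instance count (up to the quota) to keep a percentage of game sessions available; creating whole new fleets automatically is not a built-in feature. A minimal sketch of such a policy's parameters (the fleet ID is a placeholder) as they would be passed to GameLift's `PutScalingPolicy` API:

```python
# Sketch: target-based scaling policy parameters for a single GameLift
# fleet, keeping ~15% of game session capacity free as headroom.
# FleetId is a hypothetical placeholder.
policy = {
    "FleetId": "fleet-EXAMPLE",
    "Name": "keep-sessions-available",
    "PolicyType": "TargetBased",
    "MetricName": "PercentAvailableGameSessions",
    "TargetConfiguration": {"TargetValue": 15.0},
}
# With credentials configured, this would be sent as:
#   boto3.client("gamelift").put_scaling_policy(**policy)
```

Scaling *across* fleets (creating new ones on demand) would have to be built yourself, e.g. from queue depth or matchmaking wait-time metrics.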
Dear gurus, I have configured the unified CloudWatch agent on my Linux- and Windows-based systems, and I can successfully see the memory and disk metrics. The catch is what happens when Auto Scaling is enabled: sometimes the number of EC2 instances increases when load increases, and sometimes it decreases. How can we handle CloudWatch metrics in that scenario? Thanks, Malik Adeel Imtiaz
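One common approach is to tag each instance's metrics with its Auto Scaling group name and aggregate on that dimension, so dashboards and alarms track the group as a whole rather than individual, short-lived instances. A sketch of the relevant part of the agent's JSON config file (the exact metrics collected are up to you):

```json
{
  "metrics": {
    "append_dimensions": {
      "AutoScalingGroupName": "${aws:AutoScalingGroupName}",
      "InstanceId": "${aws:InstanceId}"
    },
    "aggregation_dimensions": [["AutoScalingGroupName"]],
    "metrics_collected": {
      "mem":  { "measurement": ["mem_used_percent"] },
      "disk": { "measurement": ["used_percent"], "resources": ["/"] }
    }
  }
}
```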
Hi, I have configured step scaling policies for my ECS service. CPU and memory utilization always remain low because we have not released yet. I want the scale-in CloudWatch alarm to go into the OK state once the desired count of the ECS service is reached. How can I stop the scale-in CloudWatch alarm from staying in the ALARM state even though the desired number of instances in the auto scaling group has been met?
We have an Aurora RDS cluster with replica autoscaling. For analysis-intensive tasks, we want to temporarily launch a separate reader until the analysis job is done. How can we exclude this reader from traffic on the cluster reader endpoint, and exclude its load from affecting the autoscaling actions? We can do it by script or CLI if necessary.
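One angle worth noting: the cluster's built-in reader endpoint always includes every replica, but a *custom* endpoint can exclude specific instances. The application would then read through the custom endpoint, while the analysis job connects to the analytics reader's own instance endpoint. A minimal sketch of the parameters (all identifiers are placeholders) for RDS's `CreateDBClusterEndpoint` API:

```python
# Sketch: a custom READER endpoint that excludes the ad-hoc analytics
# replica from application read traffic. Identifiers are hypothetical.
params = {
    "DBClusterIdentifier": "my-aurora-cluster",
    "DBClusterEndpointIdentifier": "app-readers",
    "EndpointType": "READER",
    "ExcludedMembers": ["analytics-reader-1"],
}
# With credentials configured, this would be sent as:
#   boto3.client("rds").create_db_cluster_endpoint(**params)
```

Whether the analytics reader's load still influences autoscaling depends on the policy's metric: the predefined Aurora metrics average over all replicas, so keeping it out may require a target-tracking policy on a custom metric covering only the app readers (an assumption to verify against your setup).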
Hi, I am using a Memcached cluster with one node. I set up alarms that trigger at either 50% CPU or 50% memory. Can I add a node when the alarm is triggered, using auto scaling? FYI: I'm using Terraform to build my AWS infrastructure. Thanks in advance.
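For background: ElastiCache for Memcached has no built-in auto scaling, but a common pattern is CloudWatch alarm → SNS → Lambda, where the Lambda adds a node via the `ModifyCacheCluster` API. A minimal sketch of the Lambda's core logic (the cluster ID and node cap are hypothetical):

```python
# Sketch: build a ModifyCacheCluster request that adds one node to a
# Memcached cluster, up to a self-imposed maximum. CacheClusterId is a
# placeholder; the request would be sent with boto3's elasticache
# client from a Lambda triggered by the alarm's SNS notification.
def build_add_node_request(current_nodes, max_nodes=5):
    if current_nodes >= max_nodes:
        return None  # respect an upper bound so the alarm can't grow the cluster forever
    return {
        "CacheClusterId": "my-memcached",
        "NumCacheNodes": current_nodes + 1,
        "ApplyImmediately": True,
    }
```

Note that adding/removing Memcached nodes remaps the key space, so clients should use consistent hashing (auto discovery) to limit cache misses during the change.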
I configured autoscaling using AWS ECS Fargate. For example, if I set the CPU target value to 80 among the service metrics, I understand that when CPU utilization is over 80% the number of tasks is increased by scaling out, and when it falls below 80% in that increased state, the service scales in and the number of tasks is reduced. My concern is with scale-in: since the CPU target value is 80, the task count decreases as soon as utilization drops below 80. So right after a scale-out, utilization immediately falls below 80 and the service would scale back in right away (if there were no cooldown period). I think a scale-in cooldown period can prevent the task count from dropping immediately, but it would be more useful if I could additionally set a separate target value that applies only to scale-in. Is there a way to set a target value that applies only to scale-in for an ECS Fargate service? Or is there another option besides the cooldown period?
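A target tracking policy allows only one `TargetValue`, but one workaround is to disable its scale-in (`DisableScaleIn`) and pair it with a separate step scaling policy whose alarm uses a lower threshold, giving an effective scale-in target distinct from the scale-out one. A sketch of the two policy configurations (resource IDs and thresholds are placeholders) as they would be passed to Application Auto Scaling's `PutScalingPolicy`:

```python
# Sketch: target tracking handles scale-out only; a step policy
# (driven by a CloudWatch alarm on, say, CPU < 40%) handles scale-in.
# ResourceId and numbers are hypothetical.
scale_out_policy = {
    "PolicyName": "cpu-target-tracking",
    "ServiceNamespace": "ecs",
    "ResourceId": "service/my-cluster/my-service",
    "ScalableDimension": "ecs:service:DesiredCount",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingScalingPolicyConfiguration": {
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
        "TargetValue": 80.0,
        "DisableScaleIn": True,  # leave scale-in to the step policy below
    },
}
scale_in_policy = {
    "PolicyName": "cpu-step-scale-in",
    "ServiceNamespace": "ecs",
    "ResourceId": "service/my-cluster/my-service",
    "ScalableDimension": "ecs:service:DesiredCount",
    "PolicyType": "StepScaling",
    "StepScalingPolicyConfiguration": {
        "AdjustmentType": "ChangeInCapacity",
        "Cooldown": 300,
        # The attached alarm fires when CPU drops below its (lower)
        # threshold; each step removes one task.
        "StepAdjustments": [
            {"MetricIntervalUpperBound": 0.0, "ScalingAdjustment": -1}
        ],
    },
}
```

If a single target tracking policy is kept instead, `ScaleInCooldown` in its configuration is the supported knob for slowing scale-in down.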
I tried setting a policy to apply autoscaling, but it was difficult to choose values without any data. So I made rough guesses: I set the CPU or memory target about 10% below the maximum, set the maximum task count to about twice the normal count, and set the cooldown time to the deployment time plus about one minute. I configured it like this, but I wonder if there are any reference materials or standards for setting such a policy.