
Questions tagged with Amazon EC2 Auto Scaling



In CDK, how do you enable `associatePublicIpAddress` in an AutoScalingGroup that has a `mixedInstancesPolicy`?

I'm using AWS CDK and am trying to enable the `associatePublicIpAddress` property for an AutoScalingGroup that uses a launch template. My first attempt was simply to set `associatePublicIpAddress: true`, but I get this error (https://github.com/aws/aws-cdk/blob/master/packages/%40aws-cdk/aws-autoscaling/lib/auto-scaling-group.ts#L1526-L1528):

```typescript
// first attempt
new asg.AutoScalingGroup(this, 'ASG', {
  associatePublicIpAddress: true, // here
  minCapacity: 1,
  maxCapacity: 1,
  vpc,
  vpcSubnets: {
    subnetType: SubnetType.PUBLIC,
    onePerAz: true,
    availabilityZones: [availabilityZone],
  },
  mixedInstancesPolicy: {
    instancesDistribution: {
      spotMaxPrice: '1.00',
      onDemandPercentageAboveBaseCapacity: 0,
    },
    launchTemplate: new LaunchTemplate(this, 'LaunchTemplate', {
      securityGroup: this._securityGroup,
      role,
      instanceType,
      machineImage,
      userData: UserData.forLinux(),
    }),
    launchTemplateOverrides: [
      {
        instanceType: InstanceType.of(InstanceClass.T4G, InstanceSize.NANO),
      },
    ],
  },
  keyName,
})
```

```typescript
// I hit this error from the CDK
if (props.associatePublicIpAddress) {
  throw new Error('Setting \'associatePublicIpAddress\' must not be set when \'launchTemplate\' or \'mixedInstancesPolicy\' is set');
}
```

My second attempt was to leave `associatePublicIpAddress` unset and see whether it gets set automatically because the AutoScalingGroup is in a public subnet with an internet gateway. However, it still doesn't provision a public IP address. Has anyone been able to create an Auto Scaling group with a mixed instances policy and an associated public IP?
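Since the CDK blocks the ASG-level property whenever a launch template or mixed instances policy is used, the public-IP setting has to live on the launch template itself (newer releases of the `LaunchTemplate` construct appear to expose an `associatePublicIpAddress` prop directly; worth checking your CDK version). A minimal sketch of what the synthesized CloudFormation would need to contain, with illustrative resource names not taken from the question:

```yaml
# Illustrative launch template fragment: the public IP is requested on the
# primary network interface, and the security group moves onto that
# interface entry as well (it cannot stay in SecurityGroupIds).
LaunchTemplate:
  Type: AWS::EC2::LaunchTemplate
  Properties:
    LaunchTemplateData:
      NetworkInterfaces:
        - DeviceIndex: 0                  # primary interface
          AssociatePublicIpAddress: true  # what the ASG-level prop would have done
          Groups:
            - !Ref InstanceSecurityGroup  # hypothetical security group
```

With this in place the ASG-level property is left unset, so the CDK validation error no longer applies.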
1
answers
0
votes
13
views
asked 2 days ago

EC2 instances unhealthy when created via ASG using CDK

I am creating an ASG fronted by a classic load balancer. The desired number of instances is 5. I start the ASG with user data, but even after experimenting multiple times the load balancer shows unhealthy hosts. I changed the subnet type of the VPC to public, but the number of healthy hosts for the ELB remains 0. Below is the code segment:

```
Vpc vpc = new Vpc(this, "MyVPC");

AutoScalingGroup asg = AutoScalingGroup.Builder.create(this, "AutoScalingGroup")
        .vpcSubnets(SubnetSelection.builder()
                .subnetType(SubnetType.PUBLIC)
                .build())
        .vpc(vpc)
        .instanceType(InstanceType.of(InstanceClass.BURSTABLE2, InstanceSize.MICRO))
        .machineImage(new AmazonLinuxImage())
        .minCapacity(1)
        .desiredCapacity(5)
        .maxCapacity(10)
        .build();

asg.addUserData("#!/bin/bash\n"
        + "# Use this for your user data (script from top to bottom)\n"
        + "# install httpd (Linux 2 version)\n"
        + "yum update -y\n"
        + "yum install -y httpd\n"
        + "systemctl start httpd\n"
        + "systemctl enable httpd\n"
        + "echo \"<h1>Hello World from $(hostname -f)</h1>\" > /var/www/html/index.html");

LoadBalancer loadbalancer = LoadBalancer.Builder.create(this, "ElasticLoadBalancer")
        .vpc(vpc)
        .internetFacing(Boolean.TRUE)
        .healthCheck(software.amazon.awscdk.services.elasticloadbalancing.HealthCheck.builder()
                .port(80)
                .build())
        .build();
loadbalancer.addTarget(asg);
ListenerPort listenerPort = loadbalancer.addListener(
        LoadBalancerListener.builder().externalPort(80).build());
```

Also, the instances created by the ASG cannot be reached on the web (by hitting their public IP) even after changing the security groups and putting them all in a public subnet. They are not accessible from Instance Connect either, and the load balancer never shows these hosts as healthy.
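Two things commonly produce 0 healthy hosts in a setup like this: the health check probing something other than the page Apache actually serves, and the instance security group not allowing the load balancer to reach port 80. A sketch of the health check in CloudFormation terms, as an assumption about intent rather than the poster's actual stack:

```yaml
# Illustrative classic ELB health check fragment (resource name hypothetical):
# probe the exact file the user data writes, expecting HTTP 200 from httpd.
ElasticLoadBalancer:
  Type: AWS::ElasticLoadBalancing::LoadBalancer
  Properties:
    HealthCheck:
      Target: HTTP:80/index.html   # page created by the user-data script
      Interval: 30
      Timeout: 5
      HealthyThreshold: 2
      UnhealthyThreshold: 5
```

If the check passes from the ELB but the public IP is still unreachable, the remaining suspect is usually the instance security group's inbound rules rather than the health check itself.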
1
answers
0
votes
17
views
asked 20 days ago

ECS Capacity Provider Auto-Scaler Instance Selection

Hello, I am working with AWS ECS capacity providers to scale out instances for jobs we run. Those jobs have a large variation in the amount of memory needed per ECS task; the memory needs are set at the task and container level. We have a capacity provider connected to an EC2 Auto Scaling group (ASG). The ASG uses attribute-based instance selection, so we specify instance attributes; we gave it a large range for memory and CPU, and it shows hundreds of possible instances. When we run a small job (1 GB of memory), it scales up an `m5.large` and an `m6i.large` instance and the job runs. This is great because our task runs, but the instances it selected are much larger than our needs. We then let the ASG scale back down to 0. When we then run a large job (16 GB), it begins scaling up, but it starts the same instance types as before. Those instance types have 8 GB of memory, when our task needs double that on a single instance. For the small job, I would have expected the capacity provider to scale up only one instance closer in size to the memory needs of the job (1 GB). And for the larger job, I would have expected it to scale up only one instance with more than 16 GB of memory to accommodate the job (16 GB). Questions:

* Is there a way to get capacity providers and Auto Scaling groups to be more responsive to the resource needs of the pending tasks?
* Are there any configs I might have wrong?
* Am I understanding something incorrectly? Are there any resources you would point me towards?
* Is there a better approach to accomplish what I want with ECS?
* Is the behavior I outlined actually to be expected?

Thank you
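For context on where the type choice happens: ECS cluster auto scaling only adjusts the ASG's desired capacity, while the instance *type* is resolved by the ASG from its attribute-based selection, so narrowing the attributes is the main lever available. A sketch of the relevant block in CloudFormation terms, with illustrative values and names (an assumption about intent, not the poster's actual template):

```yaml
# Illustrative ASG fragment: raising MemoryMiB.Min is what rules out
# the 8 GiB instance types for the 16 GB workload.
MixedInstancesPolicy:
  LaunchTemplate:
    LaunchTemplateSpecification:
      LaunchTemplateId: !Ref LaunchTemplate          # hypothetical
      Version: !GetAtt LaunchTemplate.LatestVersionNumber
    Overrides:
      - InstanceRequirements:
          VCpuCount:
            Min: 2
            Max: 8
          MemoryMiB:
            Min: 16384   # only consider types with >= 16 GiB
```

Since one ASG can only express one such range, mixed small/large workloads are often served by separate ASGs (and capacity providers) per size class.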
1
answers
0
votes
19
views
asked 20 days ago

Auto Scaling Group not scaling based on ECS desired task count

I have an EC2-backed ECS cluster containing an ASG (using cluster auto scaling) that is allowed to scale between 1 and 5 EC2 instances. There is also a service defined on this cluster, set to scale between 1 and 5 tasks, with each task reserving almost the full resources of a single instance. I have configured the service to scale its desired task count depending on the size of various queues within an Amazon MQ instance, all handled by CloudWatch alarms. The scaling of the desired task count works as expected, but the ASG doesn't provision new EC2 instances to fit the number of desired tasks unless I manually change the desired capacity of the ASG. This means the new tasks never get deployed, as ECS can't find any suitable instances to deploy them to. I don't know if I'm missing something, but all the documentation I have found on ECS Auto Scaling groups says it should scale instances to fit the total resources requested by the desired number of tasks. If I manually increase the desired capacity in the ASG and add an additional task that gets deployed on that new instance, the `CapacityProviderReservation` still remains at 100%. If I then remove that second task, after a while the ASG scales in and removes the instance that no longer has any tasks running on it, which is the expected behaviour. Any pointers would be greatly appreciated. As a side note, this is all set up using the Python CDK. Edit: Clarified that the ASG is currently using CAS (as far as I can tell) and added details about scaling in working as expected. Many thanks, Tom
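The scale-out behaviour the documentation describes relies on two things: managed scaling being enabled on the capacity provider, and the service actually being launched through that capacity provider (not the cluster's default EC2 launch type). A minimal sketch in CloudFormation terms, with illustrative names, showing what the synthesized template would need to contain:

```yaml
# Illustrative capacity provider with managed scaling enabled.
CapacityProvider:
  Type: AWS::ECS::CapacityProvider
  Properties:
    AutoScalingGroupProvider:
      AutoScalingGroupArn: !Ref AsgArnParam   # hypothetical reference
      ManagedScaling:
        Status: ENABLED      # lets ECS drive the ASG's desired capacity
        TargetCapacity: 100  # target CapacityProviderReservation of 100%
      ManagedTerminationProtection: DISABLED
```

If the service runs with a plain launch type instead of a capacity provider strategy naming this provider, `CapacityProviderReservation` stays flat and the ASG never scales out, which matches the symptom described.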
1
answers
0
votes
40
views
asked a month ago

How to configure stickiness and autoscaling in an Elastic Beanstalk application

Hello, we have an application running on Elastic Beanstalk that listens for client requests and returns a stream segment. We have some requirements for the application:

1) Client sessions should be sticky (all requests for a given session should go to the same EC2 instance) for a specified time, without any changes on the client side (we can't add cookie sending via the client). As per my understanding, the Application Load Balancer supports this, and I enabled stickiness on the load balancer. As I understand it, load-balancer-generated cookies are managed by the load balancer, so we do not need to send the cookie from the client side.

2) Based on CPU utilisation we need to auto scale instances: when CPU load > 80%, we need to scale out by one instance.

Problems:

1) When I send requests from multiple clients on the same IP address, CPU load goes above 80% and a new instance is launched. But after some time I see CPU load going down. Does this mean that one of these clients is now connected to the new instance and the load is shared? That would mean stickiness is not working, though it is not clear how to test it properly. However, sometimes when I stopped the new instance manually, no client got any errors, while stopping the first instance gives all clients 404 errors for a while. How do I check whether stickiness is working properly?

2) If I get stickiness to work, then as per my understanding the load will not be shared by the new instance, so average CPU usage will stay the same and autoscaling will keep launching new instances until the max limit. How do I make stickiness work together with autoscaling? I set the stickiness duration to 86400 seconds (24 hours) to be safe. Can someone please guide me on how to configure stickiness and autoscaling the proper way?
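Both settings can also be pinned in an `.ebextensions` config file rather than set in the console, which makes them easy to inspect and reproduce. A minimal sketch (the namespaces are the documented Elastic Beanstalk option namespaces; the threshold and increment values are illustrative):

```yaml
option_settings:
  # ALB-managed sticky sessions for the default process
  aws:elasticbeanstalk:environment:process:default:
    StickinessEnabled: true
    StickinessLBCookieDuration: 86400   # seconds, as in the question
  # Scale out by one instance when average CPU goes above 80%
  aws:autoscaling:trigger:
    MeasureName: CPUUtilization
    Statistic: Average
    Unit: Percent
    UpperThreshold: 80
    UpperBreachScaleIncrement: 1
```

Note the interaction the question worries about is real: with stickiness, existing sessions stay on the old instance and only *new* sessions land on the fresh one, so CPU on the old instance falls only as sessions expire.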
3
answers
0
votes
34
views
asked a month ago

Design questions on ASG, backup/restore, EBS and EFS

Hi experts, we are designing the deployment of a BI application in AWS. We have a default policy to repave the EC2 instances every 14 days, which means rebuilding the whole cluster of instances with their services and bringing it back to the last known good state. We want a solution with no or minimal downtime. The application has different services provisioned on different EC2 instances: the first server acts as a main node, and the rest are additional nodes with different services running on them. We install all additional nodes the same way but configure the services later in the code deploy step.

1. Can we use an ASG? If yes, how can we distribute the topology? That is, out of 5 instances, if one server is repaved, that server should come up with the same services as the previous one. Is there a way to label instances in an ASG, saying that this server should be configured as a certain service?
2. Each server should have its own EBS volume and stores some data on it. What is the fastest way to copy or attach the EBS volume to the newly repaved server without downtime?
3. For shared data we want to use EFS.
4. For metadata from the embedded Postgres, we need to take a backup periodically and restore it after the repave (create a new instance with the install and the same service). How can we achieve this without downtime?

We do not want to use a customized AMI, as we have a big process for AMI creation and we would often need to change it whenever we want to add an install or config step. Sorry if this is a lot to answer; some guidance would be helpful.
1
answers
0
votes
10
views
asked 3 months ago

LoadBalancer health check fails but instance is not terminating

Hello, I have a load balancer which, as you know, keeps the health check for the web app/website. I have deployed nothing on my instances (no app/site), so when anyone visits the load balancer URL they see a 502 Bad Gateway error, which is fine. The target group also shows that an instance has failed the health check, but the Auto Scaling group is not terminating the failed instance and replacing it. Below is the CloudFormation code:

```
  AutoScailingGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      VPCZoneIdentifier:
        - Fn::ImportValue: !Sub ${EnvironmentName}-PR1
        - Fn::ImportValue: !Sub ${EnvironmentName}-PR2
      LaunchConfigurationName: !Ref AppLaunchConfiguration
      MinSize: 1
      MaxSize: 4
      TargetGroupARNs:
        - Ref: WebAppTargetGroup

  AppLoadBalancer:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      SecurityGroups:
        - Ref: ApplicationLoadBalancerSecurityGroup
      Subnets:
        - Fn::ImportValue: !Sub ${EnvironmentName}-PU1
        - Fn::ImportValue: !Sub ${EnvironmentName}-PU2
      Tags:
        - Key: Name
          Value: !Ref EnvironmentName

  Listener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      DefaultActions:
        - Type: forward
          TargetGroupArn: !Ref WebAppTargetGroup
      LoadBalancerArn: !Ref AppLoadBalancer
      Port: "80"
      Protocol: HTTP

  LoadBalancerListenerRule:
    Type: AWS::ElasticLoadBalancingV2::ListenerRule
    Properties:
      Actions:
        - Type: forward
          TargetGroupArn: !Ref WebAppTargetGroup
      Conditions:
        - Field: path-pattern
          Values: [/]
      ListenerArn: !Ref Listener
      Priority: 1

  WebAppTargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      HealthCheckIntervalSeconds: 10
      HealthCheckPath: /
      HealthCheckProtocol: HTTP
      HealthCheckTimeoutSeconds: 8
      HealthyThresholdCount: 2
      Port: 80
      Protocol: HTTP
      UnhealthyThresholdCount: 5
      VpcId:
        Fn::ImportValue:
          Fn::Sub: "${EnvironmentName}-VPCID"
```
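By default an Auto Scaling group only replaces instances that fail EC2 status checks, not instances the target group marks unhealthy. For the replacement behaviour described, the ASG needs its health check type switched to ELB. A minimal sketch of the two properties to add to the existing resource (the grace period value is illustrative):

```yaml
  AutoScailingGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      # ... existing properties as above ...
      HealthCheckType: ELB          # terminate instances the target group reports unhealthy
      HealthCheckGracePeriod: 300   # seconds to let the app boot before checks count
```

With `HealthCheckType: ELB`, an instance that keeps returning 502 past the grace period will be terminated and replaced; until an app is actually deployed, that means the ASG will cycle instances continuously.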
1
answers
0
votes
103
views
asked 3 months ago

How frequently does an ASG attempt to remove instances when current size is greater than desired?

I have an EC2 ASG with scaling triggers based on CPU utilization. Usually it follows the predictable pattern of scaling out during times of usage and removing instances as load decreases. My instances will sometimes mark themselves as protected from scale-in if they are working on something longer-running than their normal tasks. If all instances are protected, I get the message "Could not scale to desired capacity because all remaining instances are protected from scale-in" in CloudWatch. It appears that following that message, the next scale-in attempt doesn't occur for quite a while: 10 hours later, when this happened yesterday. Since my instances only protect themselves for a short amount of time, the scale-in would have succeeded during most of those 10 hours. My question: is there a way to configure the ASG so that it retries the scale-in sooner than 10 hours later? Or is there a way I could respond to the failed attempt so that an instance could take itself offline? (I do understand that ideally the instances wouldn't protect themselves in the first place, and that's part of a larger update to the architecture. But a short-term fix for the existing solution would be great.) To respond to the questions: the alarm triggered based on low utilization and immediately reduced the desired count; at that point the alarm was no longer set. I'm looking at the ASG Activity History pane, where there isn't anything between message 1, which indicates that the desired size was reduced and that no instance could be removed, and message 2, that a particular instance was removed due to a difference between current and desired.
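One short-term mitigation along the lines asked about: rather than waiting for the ASG's next attempt, each instance can release its own scale-in protection as soon as its long-running work finishes, so the standing gap between desired and current capacity can be closed immediately. The decision helper below and its parameters are illustrative (not from the question); the boto3 call named in the comment is the real `SetInstanceProtection` API.

```python
def should_release_protection(busy: bool, protected_since: float,
                              max_protect_secs: float, now: float) -> bool:
    """Decide whether an instance should drop its scale-in protection.

    Release when the long-running work is done, or as a safety valve when
    protection has been held longer than intended (illustrative policy).
    """
    if not busy:
        return True
    return (now - protected_since) > max_protect_secs

# When this returns True, the instance would call something like:
#   autoscaling.set_instance_protection(
#       InstanceIds=[instance_id],
#       AutoScalingGroupName=asg_name,
#       ProtectedFromScaleIn=False)
# after which the existing desired-vs-current difference lets the ASG
# remove an instance without waiting hours for a retry.

print(should_release_protection(busy=False, protected_since=0,
                                max_protect_secs=600, now=100))   # True
print(should_release_protection(busy=True, protected_since=0,
                                max_protect_secs=600, now=100))   # False
```

Run periodically (cron or a systemd timer) on each instance, this keeps protection scoped to the actual duration of the long-running task.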
1
answers
0
votes
20
views
asked 5 months ago

aws_ssm_document AD domain join error

I am struggling to get EC2 instances deployed via an ASG joined to the domain. I get the following error each time:

*New-SSMAssociation : Document schema version, 2.2, is not supported by association that is created with instance id*

I have tried the various schema versions detailed [Here](https://docs.aws.amazon.com/systems-manager/latest/userguide/document-schemas-features.html), however all fail with the same error.

**SSMdoc.tf**

```
resource "aws_ssm_document" "ad-join-domain" {
  name          = "ad-join-domain"
  document_type = "Command"

  content = jsonencode(
    {
      "schemaVersion" = "2.2"
      "description"   = "aws:domainJoin"
      "parameters" : {
        "directoryId" : {
          "description" : "(Required) The ID of the directory.",
          "type" : "String"
        },
        "directoryName" : {
          "description" : "(Required) The name of the domain.",
          "type" : "String"
        },
        "dnsIpAddresses" : {
          "description" : "(Required) The IP addresses of the DNS servers for your directory.",
          "type" : "StringList"
        },
      },
      "mainSteps" = [
        {
          "action" = "aws:domainJoin",
          "name"   = "domainJoin",
          "inputs" = {
            "directoryId" : data.aws_directory_service_directory.adgems.id,
            "directoryName" : data.aws_directory_service_directory.adgems.name,
            "dnsIpAddresses" : [data.aws_directory_service_directory.adgems.dns_ip_addresses]
          }
        }
      ]
    }
  )
}
```

**template.tf**

```
data "template_file" "ad-join-template" {
  template = <<EOF
<powershell>
Set-DefaultAWSRegion -Region eu-west-2
Set-Variable -name instance_id -value (Invoke-Restmethod -uri http://169.254.169.254/latest/meta-data/instance-id)
New-SSMAssociation -InstanceId $instance_id -Name "${aws_ssm_document.ad-join-domain.name}"
</powershell>
EOF
}
```

The template is then referenced in the ASG launch template `user_data` section. Getting onto the instance, I can see the script/logs and have confirmed the variables are set (the instance ID, for example).

Full error message from the PowerShell run below:

```
New-SSMAssociation : Document schema version, 2.2, is not supported by association that is created with instance id
At C:\Windows\system32\config\systemprofile\AppData\Local\Temp\EC2Launch228430162\UserScript.ps1:3 char:5
+     New-SSMAssociation -InstanceId $instance_id -Name "ad-join-domain ...
+     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidOperation: (Amazon.PowerShe...sociationCmdlet:NewSSMAssociationCmdlet) [New-SSMAssociation], InvalidOperationException
    + FullyQualifiedErrorId : Amazon.SimpleSystemsManagement.Model.InvalidDocumentException,Amazon.PowerShell.Cmdlets.SSM.NewSSMAssociationCmdlet
```
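The error comes from the legacy instance-id form of associations, which only accepts schema 1.2 documents; schema 2.2 documents have to be associated via targets. One way to sketch this, keeping the association in Terraform instead of creating it from user data (resource and document names follow the question's files; the tag key and the StringList handling are illustrative assumptions):

```
resource "aws_ssm_association" "ad-join-domain" {
  name = aws_ssm_document.ad-join-domain.name

  # Target by tag rather than by instance id, so the schema 2.2 document
  # is accepted and new ASG instances are picked up automatically.
  targets {
    key    = "tag:DomainJoin"   # illustrative tag set via the launch template
    values = ["true"]
  }

  parameters = {
    directoryId    = data.aws_directory_service_directory.adgems.id
    directoryName  = data.aws_directory_service_directory.adgems.name
    # StringList shown comma-joined; adjust to your provider version's expectations.
    dnsIpAddresses = join(",", tolist(data.aws_directory_service_directory.adgems.dns_ip_addresses))
  }
}
```

This also removes the need for the `New-SSMAssociation` call in the PowerShell user data entirely.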
1
answers
0
votes
39
views
asked 5 months ago

Should an ECS/EC2 ASG capacity provider be able to scale up from zero, 0->1?

Following on from the earlier thread https://repost.aws/questions/QU6QlY_u2VQGW658S8wVb0Cw/should-ecs-service-task-start-be-triggered-by-asg-capacity-0-1 , I've now attached a proper capacity provider, an Auto Scaling group provider, to my ECS cluster. Question TL;DR: should scaling an ECS service 0->1 desired tasks be able to wake up a previously scaled-to-zero ASG and have it scale 0->1 desired/running? I started with an ECS service with a single task definition and Desired=1, backed by the ASG with capacity provider scaling, also starting with 1 Desired/InService ASG instance. I can then set the ECS service desired tasks to 0, and it stops the single running task; `CapacityProviderReservation` goes from 100 to 0, 15 minutes/samples later the alarm is triggered, and the ASG shuts down its only instance, 1->0 desired/running. If I later change the ECS service desired count back to 1, nothing happens, other than ECS noting that it has no capacity to place the task. Should this work? I have previously seen something similar working: `CapacityProviderReservation` jumps to 200 and an instance gets created. But this is not working for me now: the metric is stuck at 100, no scale-up from zero (to one) occurs in the ASG, and the task cannot be started. Should this be expected to work? The reference blog https://aws.amazon.com/blogs/containers/deep-dive-on-amazon-ecs-cluster-auto-scaling/ suggests that `CapacityProviderReservation` should move to 200 if `M > 0 and N = 0`, but this seems to rely on a task in "Provisioning" state. Will that even happen here, or is the ECS service/cluster giving up and not getting that far, due to zero capacity?
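The rule from the deep-dive blog post referenced above can be written down directly; a small sketch of how `CapacityProviderReservation` is defined there (M = instances needed, counting tasks in Provisioning; N = instances currently in the ASG):

```python
def capacity_provider_reservation(m: int, n: int) -> int:
    """CapacityProviderReservation per the ECS cluster auto scaling
    deep-dive: 100 * M / N, with two special cases so the target-tracking
    policy behaves at the edges."""
    if m == 0 and n == 0:
        return 100        # nothing needed, nothing running: at target
    if n == 0:            # M > 0, N = 0: report 200 to scale up from zero
        return 200
    return round(100 * m / n)

# Steady state: one instance needed, one running -> 100, no scaling.
print(capacity_provider_reservation(1, 1))   # 100
# A task reaching Provisioning with the ASG at zero drives the metric to 200.
print(capacity_provider_reservation(1, 0))   # 200
# But if the task never reaches Provisioning, M stays 0 and the metric
# sticks at 100, matching the stuck behaviour described above.
print(capacity_provider_reservation(0, 0))   # 100
```

So the observed symptom is consistent with the desired task never entering Provisioning, which is the condition worth verifying first.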
2
answers
0
votes
107
views
asked 6 months ago