Questions tagged with Amazon EC2 Auto Scaling

AWS AMI ami-0c7328b3dbb11f540 (Windows_Server-2012-R2_RTM-English-64Bit-Base-2022.07.13) seems corrupt: it can't run basic PowerShell commands on Auto Scaling launch due to missing files?

AWS released the latest Windows Server 2012 R2 AMIs on 7/13/2022. Upon testing this morning, I found that ami-0c7328b3dbb11f540 is corrupt and will not properly run PowerShell commands on launch due to missing files in the filesystem. Reverting to the June image (ami-09e13647920b2ba1d) allowed our Auto Scaling EC2 instances to launch properly. For specifics, below is the error that triggered an Auto Scaling build failure.

```
2022-07-15 13:43:32,501 [ERROR] -----------------------BUILD FAILED!------------------------
2022-07-15 13:43:32,501 [ERROR] Unhandled exception during build: [Errno 2] No such file or directory: 'C:\\Users\\ADMINI~1\\AppData\\Local\\Temp\\tmp2_4a69w_/subp-stderr-8e0905c1-1293-4379-bbb8-bf88a2978290.txt'
Traceback (most recent call last):
  File "cfn-init", line 176, in <module>
  File "cfnbootstrap\construction.pyc", line 137, in build
  File "cfnbootstrap\construction.pyc", line 564, in build
  File "cfnbootstrap\construction.pyc", line 578, in run_config
  File "cfnbootstrap\construction.pyc", line 146, in run_commands
  File "cfnbootstrap\command_tool.pyc", line 92, in apply
  File "cfnbootstrap\util.pyc", line 587, in call
  File "cfnbootstrap\util.pyc", line 562, in call
FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\ADMINI~1\\AppData\\Local\\Temp\\tmp2_4a69w_/subp-stderr-8e0905c1-1293-4379-bbb8-bf88a2978290.txt'
```
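
For reference, a minimal CDK (TypeScript) sketch of how one might pin the known-good June AMI instead of resolving the latest image while this is investigated. The region key, instance size, and construct names are assumptions for illustration, not taken from the post.

```typescript
import { Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as autoscaling from 'aws-cdk-lib/aws-autoscaling';
import * as ec2 from 'aws-cdk-lib/aws-ec2';

// Hypothetical stack: pins the known-good June 2022 AMI instead of resolving "latest".
export class PinnedWindowsAmiStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const vpc = new ec2.Vpc(this, 'Vpc');

    // Map of region -> AMI ID; us-east-1 is an assumed example region.
    const machineImage = ec2.MachineImage.genericWindows({
      'us-east-1': 'ami-09e13647920b2ba1d', // June 2022 Windows Server 2012 R2 image from the post
    });

    new autoscaling.AutoScalingGroup(this, 'Asg', {
      vpc,
      machineImage,
      instanceType: ec2.InstanceType.of(ec2.InstanceClass.T3, ec2.InstanceSize.LARGE),
      minCapacity: 1,
      maxCapacity: 2,
    });
  }
}
```

Pinning the AMI this way trades automatic patching for predictable launches, so it is only a stopgap until a fixed image is published.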
4 answers · 0 votes · 197 views · asked 5 months ago

In CDK, how do you enable `associatePublicIpAddress` in an AutoScalingGroup that has a `mixedInstancesPolicy`?

I'm using AWS CDK and am trying to enable the `associatePublicIpAddress` property for an AutoScalingGroup that's using a launch template. My first attempt was to just set `associatePublicIpAddress: true`, but I get this error (https://github.com/aws/aws-cdk/blob/master/packages/%40aws-cdk/aws-autoscaling/lib/auto-scaling-group.ts#L1526-L1528):

```typescript
// first attempt
new asg.AutoScalingGroup(this, 'ASG', {
  associatePublicIpAddress: true, // here
  minCapacity: 1,
  maxCapacity: 1,
  vpc,
  vpcSubnets: {
    subnetType: SubnetType.PUBLIC,
    onePerAz: true,
    availabilityZones: [availabilityZone],
  },
  mixedInstancesPolicy: {
    instancesDistribution: {
      spotMaxPrice: '1.00',
      onDemandPercentageAboveBaseCapacity: 0,
    },
    launchTemplate: new LaunchTemplate(this, 'LaunchTemplate', {
      securityGroup: this._securityGroup,
      role,
      instanceType,
      machineImage,
      userData: UserData.forLinux(),
    }),
    launchTemplateOverrides: [
      {
        instanceType: InstanceType.of(InstanceClass.T4G, InstanceSize.NANO),
      },
    ],
  },
  keyName,
})
```

```typescript
// I hit this error from the CDK
if (props.associatePublicIpAddress) {
  throw new Error('Setting \'associatePublicIpAddress\' must not be set when \'launchTemplate\' or \'mixedInstancesPolicy\' is set');
}
```

My second attempt was to not set `associatePublicIpAddress` and see if it gets set automatically because the AutoScalingGroup is in a public subnet with an internet gateway. However, it still doesn't provision a public IP address. Has anyone been able to create an Auto Scaling group with a mixed instances policy and an associated public IP?
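
For context, a hedged sketch of one possible workaround: since the L2 props forbid the combination, drop down to the underlying `CfnLaunchTemplate` and request the public IP on the primary network interface, which is where `AssociatePublicIpAddress` lives in the launch template schema. The escape-hatch property paths are standard CloudFormation, but whether this plays nicely with the rest of the mixed instances policy is an assumption I have not verified.

```typescript
import * as ec2 from 'aws-cdk-lib/aws-ec2';

// Declared only so this fragment type-checks; these correspond to the constructs in the question.
declare const launchTemplate: ec2.LaunchTemplate;
declare const securityGroup: ec2.ISecurityGroup;

// Escape hatch: reach the L1 CfnLaunchTemplate behind the L2 construct.
const cfnLaunchTemplate = launchTemplate.node.defaultChild as ec2.CfnLaunchTemplate;

// Ask for a public IP on the primary network interface and move the security
// group there as well, since EC2 generally rejects launches that specify security
// groups both at the instance level and on a network interface.
cfnLaunchTemplate.addPropertyOverride('LaunchTemplateData.NetworkInterfaces', [
  {
    DeviceIndex: 0,
    AssociatePublicIpAddress: true,
    Groups: [securityGroup.securityGroupId],
  },
]);
cfnLaunchTemplate.addPropertyDeletionOverride('LaunchTemplateData.SecurityGroupIds');
```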
1 answer · 0 votes · 78 views · asked 5 months ago

EC2 instances unhealthy when created via an ASG using CDK

I am creating an ASG fronted by a Classic Load Balancer. The desired number of instances is 5. I am creating the ASG with user data, but even after experimenting multiple times the load balancer shows unhealthy hosts. I changed the subnet type of the VPC to public, but the number of healthy hosts for the ELB remains 0. Below is the code segment:

```
Vpc vpc = new Vpc(this, "MyVPC");

AutoScalingGroup asg = AutoScalingGroup.Builder.create(this, "AutoScalingGroup")
        .vpcSubnets(SubnetSelection.builder()
                .subnetType(SubnetType.PUBLIC)
                .build())
        .vpc(vpc)
        .instanceType(InstanceType.of(InstanceClass.BURSTABLE2, InstanceSize.MICRO))
        .machineImage(new AmazonLinuxImage())
        .minCapacity(1)
        .desiredCapacity(5)
        .maxCapacity(10)
        .build();

asg.addUserData("#!/bin/bash\n"
        + "# Use this for your user data (script from top to bottom)\n"
        + "# install httpd (Linux 2 version)\n"
        + "yum update -y\n"
        + "yum install -y httpd\n"
        + "systemctl start httpd\n"
        + "systemctl enable httpd\n"
        + "echo \"<h1>Hello World from $(hostname -f)</h1>\" > /var/www/html/index.html");

LoadBalancer loadbalancer = LoadBalancer.Builder.create(this, "ElasticLoadBalancer")
        .vpc(vpc)
        .internetFacing(Boolean.TRUE)
        .healthCheck(software.amazon.awscdk.services.elasticloadbalancing.HealthCheck.builder()
                .port(80)
                .build())
        .build();

loadbalancer.addTarget(asg);
ListenerPort listenerPort = loadbalancer.addListener(LoadBalancerListener.builder().externalPort(80).build());
```

Also, the instances created by the ASG cannot be accessed on the web (by hitting their public IP). Even after changing the security groups and placing them all in a public subnet, they are not accessible from EC2 Instance Connect, nor does the load balancer show these hosts as healthy.
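
One thing worth ruling out, sketched below in TypeScript for brevity (the equivalent `getConnections().allowFrom(...)` calls exist in the Java CDK): whether the instances' security group actually allows traffic on port 80 from the load balancer and, for direct browser testing, from the internet. This is a guess based on the symptoms, not a confirmed diagnosis of the code above.

```typescript
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as autoscaling from 'aws-cdk-lib/aws-autoscaling';
import * as elb from 'aws-cdk-lib/aws-elasticloadbalancing';

// Declared only so the fragment type-checks; these correspond to the
// `asg` and `loadbalancer` variables in the question's Java code.
declare const asg: autoscaling.AutoScalingGroup;
declare const loadbalancer: elb.LoadBalancer;

// Let the Classic Load Balancer reach the instances on the web port.
asg.connections.allowFrom(loadbalancer, ec2.Port.tcp(80));

// Optional while debugging: open port 80 to the world so the instances'
// public IPs can be hit directly from a browser.
asg.connections.allowFrom(ec2.Peer.anyIpv4(), ec2.Port.tcp(80));
```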
1 answer · 0 votes · 40 views · asked 6 months ago

ECS Capacity Provider Auto-Scaler Instance Selection

Hello, I am working with AWS ECS capacity providers to scale out instances for jobs we run. Those jobs vary widely in the amount of memory needed per ECS task; the memory requirements are set at the task and container level. We have a capacity provider connected to an EC2 Auto Scaling group (ASG). The ASG uses attribute-based instance type selection, so we specify instance attributes. We gave it a large range for memory and CPU, and it shows hundreds of possible instance types.

When we run a small job (1 GB of memory), it scales up an `m5.large` and an `m6i.large` instance and the job runs. This is great because our task runs, but the instances it selected are much larger than our needs. We then let the ASG scale back down to 0. When we then run a large job (16 GB), it begins scaling up, but it starts the same instance types as before. Those instance types have 8 GB of memory, while our task needs double that on a single instance.

For the small job I would have expected the capacity provider to scale up only one instance that was closer in size to the memory needs of the job (1 GB). For the larger job I would have expected it to scale up only one instance with more than 16 GB of memory to accommodate the job.

Questions:

* Is there a way to get capacity providers and Auto Scaling groups to be more responsive to the resource needs of the pending tasks?
* Are there any configs I might have wrong?
* Am I understanding something incorrectly? Are there any resources you would point me towards?
* Is there a better approach to accomplish what I want with ECS?
* Is the behavior I outlined actually to be expected?

Thank you
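
For what it's worth, as far as I understand a capacity provider only scales the ASG it is attached to: managed scaling decides how many instances to launch, while which instance types launch is purely the ASG's own configuration, so it will not pick a bigger type for a bigger pending task. One pattern sometimes used is a capacity provider per size class, each backed by an ASG whose instance types fit that class, chosen per task via a capacity provider strategy. A minimal CDK TypeScript sketch, with all names and instance sizes as illustrative assumptions:

```typescript
import { Stack } from 'aws-cdk-lib';
import * as autoscaling from 'aws-cdk-lib/aws-autoscaling';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as ecs from 'aws-cdk-lib/aws-ecs';

// Declared only so the fragment type-checks; in a real stack these already exist.
declare const stack: Stack;
declare const vpc: ec2.IVpc;
declare const cluster: ecs.Cluster;

// An ASG restricted to instance types big enough for the memory-heavy tasks.
const largeAsg = new autoscaling.AutoScalingGroup(stack, 'LargeAsg', {
  vpc,
  machineImage: ecs.EcsOptimizedImage.amazonLinux2(),
  instanceType: new ec2.InstanceType('r5.xlarge'), // 32 GiB, comfortably fits a 16 GB task
  minCapacity: 0,
  maxCapacity: 5,
});

// Capacity provider with managed scaling: ECS sizes this ASG to the pending tasks
// placed on the provider, but only launches the instance types configured above.
const largeProvider = new ecs.AsgCapacityProvider(stack, 'LargeProvider', {
  autoScalingGroup: largeAsg,
  enableManagedScaling: true,
  targetCapacityPercent: 100,
});
cluster.addAsgCapacityProvider(largeProvider);
```

A service or `RunTask` call would then select `LargeProvider` (or a similarly defined small provider) through its capacity provider strategy.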
1 answer · 0 votes · 123 views · asked 6 months ago