Questions tagged with Amazon EC2 Auto Scaling
Hello, I deployed REDCap with the CloudFormation template from Vanderbilt University [https://github.com/vanderbilt-redcap/redcap-aws-cloudformation]. The issue I have is that I am not able to upload files larger than 2 MB. I need to update the php.ini file to increase some parameters such as max_execution_time, max_file_uploads, max_file_size, etc.
I tried to download the zip file of the deployed application version and add a config file to the .ebextensions directory, but every time I re-deployed it, my Beanstalk environment crashed and reverted to the old version. Below is my custom config file:
```
files:
  "/etc/php.d/project.ini":
    mode: "000644"
    owner: root
    group: root
    content: |
      upload_max_filesize=64M
      post_max_size=64M
```
Is there any step I missed?
Thanks
We have set "Not protected from scale in" on the Auto Scaling group, but scale-in protection is still being enabled on newly launched instances.
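For reference, this is roughly the group-level setting involved, as a minimal CloudFormation sketch (the resource names, launch template reference, and subnet are placeholders, not taken from the actual stack):
```
# Hypothetical sketch: an Auto Scaling group whose newly launched
# instances should NOT be protected from scale in.
MyAutoScalingGroup:
  Type: AWS::AutoScaling::AutoScalingGroup
  Properties:
    MinSize: "1"
    MaxSize: "4"
    NewInstancesProtectedFromScaleIn: false   # the "Not protected from scale in" setting
    LaunchTemplate:
      LaunchTemplateId: !Ref MyLaunchTemplate               # placeholder
      Version: !GetAtt MyLaunchTemplate.LatestVersionNumber
    VPCZoneIdentifier:
      - subnet-0123456789abcdef0                            # placeholder subnet
```
Note that this group-level flag only affects instances launched after it is set; per-instance protection that has already been applied stays until it is removed explicitly (for example with `aws autoscaling set-instance-protection`).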
**Context:** I've created an ELB and connected it to a target group, which in turn is connected to an ASG.
**ASG - Working:**
I can see that the ASG is working fine, i.e. it creates instances automatically as per the scaling policy.
**Target Group - Working:**
I can also see that the target group reports the instance as healthy.
**ELB - Not Working:**
However, when I try to hit the ELB's public URL from a browser, it fails with a timeout error. I've enabled access logging for the ELB but don't see any logs appearing in the S3 bucket (although there is a file created by the ELB, which means I've given it proper access).
I don't know how else to trace a request through the ELB, and I'm not seeing any logs generated on my application instances either.
On the other hand, when I hit the public URL of the instance directly, the application does work.
Any inputs would be helpful!
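For context, when targets are healthy but the load balancer URL times out, the security-group chain is usually the first thing to check; a minimal sketch of the rules such a setup typically needs (the group names, VPC ID, and port are placeholders, not taken from the question):
```
# Hypothetical sketch: ALB security group open to the internet on 80,
# instance security group only open to traffic from the ALB group.
LoadBalancerSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: Allow HTTP from the internet to the load balancer
    VpcId: vpc-0123456789abcdef0            # placeholder VPC
    SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: 80
        ToPort: 80
        CidrIp: 0.0.0.0/0
InstanceSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: Allow HTTP only from the load balancer
    VpcId: vpc-0123456789abcdef0            # placeholder VPC
    SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: 80
        ToPort: 80
        SourceSecurityGroupId: !Ref LoadBalancerSecurityGroup
```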
I have hosted my EC2 instance in the Mumbai Region, and it goes down every day for a few hours, giving a Host Error 524. (The billing section is working fine.)
Hello,
I'm trying to understand why an ECS cluster configured with an EC2 capacity provider but with no defined services or running tasks (an empty cluster, in other words) has a CapacityProviderReservation of 100 at all times.
Following the explanation of https://aws.amazon.com/fr/blogs/containers/deep-dive-on-amazon-ecs-cluster-auto-scaling/
CapacityProviderReservation = M / N x 100
where N = Already running instances and M = Needed instances
In my case, there are no needed or running instances because there is nothing to run, so N and M are both 0. Why, then, is CapacityProviderReservation always at 100?
When there is only one server in the Auto Scaling group there is no issue, but if the max capacity is > 1, then when I start a task CapacityProviderReservation goes to 200, so 2 instances are started when only 1 is needed.
If anyone can enlighten me I would be very grateful :)
Valentin
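For reference, the capacity provider being described is configured roughly like the sketch below (resource names and the Auto Scaling group reference are placeholders); the managed-scaling target capacity is the value that CapacityProviderReservation is steered toward:
```
# Hypothetical sketch of an EC2 capacity provider with managed scaling.
EcsCapacityProvider:
  Type: AWS::ECS::CapacityProvider
  Properties:
    AutoScalingGroupProvider:
      AutoScalingGroupArn: !Ref EcsAsgArnParameter   # placeholder: ARN of the EC2 Auto Scaling group
      ManagedScaling:
        Status: ENABLED
        TargetCapacity: 100   # managed scaling tries to keep CapacityProviderReservation at this value
      ManagedTerminationProtection: DISABLED
```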
Hi experts,
We are designing the deployment of a BI application in AWS. We have a default policy to repave the EC2 instances every 14 days, which means the whole cluster of instances and services is rebuilt and brought back to the last known good state. We want a solution with no/minimal downtime.
The application has different services provisioned on different EC2 instances. The first server acts as a main node and the rest are additional nodes with different services running on them. We install all additional nodes the same way but configure the services later in the code deploy step.
1. Can we use an ASG? If yes, how can we distribute the topology? That is, out of 5 instances, if one server is repaved, that server should come back with the same services as the previous one. Is there a way to label instances in an ASG saying that this server should be configured for a certain service? (See the sketch below.)
2. Each server should have its own EBS volume and store some data on it. What is the fastest way to copy or attach the EBS volume to the newly repaved server without downtime?
3. For shared data we want to use EFS.
4. For metadata from the embedded Postgres, we need to take a backup periodically and restore it after the repave (i.e. a new instance is created with the install and the same service). How can we achieve this without downtime?
We do not want to use a customized AMI, as we have a lengthy process for AMI creation and we would often need to change it whenever we want to add installation or configuration steps.
Sorry if this is a lot to answer. Any guidance would be helpful.
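On the labelling point in question 1, one pattern is one Auto Scaling group per node role, with a role tag propagated to the instances that user data or the deployment code can read to decide which services to configure. A minimal sketch, with placeholder names, subnet, and launch template (not taken from the actual design):
```
# Hypothetical sketch: one ASG per node role; each instance inherits a
# Role tag that provisioning code can inspect at configure time.
MainNodeGroup:
  Type: AWS::AutoScaling::AutoScalingGroup
  Properties:
    MinSize: "1"
    MaxSize: "1"
    VPCZoneIdentifier:
      - subnet-0123456789abcdef0                          # placeholder subnet
    LaunchTemplate:
      LaunchTemplateId: !Ref NodeLaunchTemplate           # placeholder
      Version: !GetAtt NodeLaunchTemplate.LatestVersionNumber
    Tags:
      - Key: Role
        Value: main-node
        PropagateAtLaunch: true
# An AdditionalNodeGroup (or one group per additional service) would
# follow the same shape, with Value: additional-node, etc.
```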
I first created a launch template with Auto Scaling guidance (no networking info except for providing a security group).
Next, I created an Auto Scaling group in the console. When the launch template is selected in the first step, the security group IDs field shows just a dash (-), which suggests that the Auto Scaling group is not taking the security group from the launch template.
Moreover, there is no way in this screen to override/correct the security group.
I confirmed this when the Auto Scaling group created the instances: those instances have the default security group instead of the one specified in the launch template.
Could this be a bug, or a PEBKAC problem?
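For reference, a launch template can carry the security group either at the top level of LaunchTemplateData or inside a network-interface definition; for comparison, a sketch of the former with placeholder IDs (not the actual template):
```
# Hypothetical sketch: launch template with the security group set
# directly in LaunchTemplateData rather than on a network interface.
WebLaunchTemplate:
  Type: AWS::EC2::LaunchTemplate
  Properties:
    LaunchTemplateName: web-launch-template       # placeholder name
    LaunchTemplateData:
      ImageId: ami-0123456789abcdef0              # placeholder AMI
      InstanceType: t3.micro
      SecurityGroupIds:
        - sg-0123456789abcdef0                    # placeholder security group
```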
Hello,
I have a load balancer which, as you know, performs the health checks for the web app/website.
I have deployed nothing on my instance (no app/site), so when anyone visits the load balancer URL they see a 502 Bad Gateway error, which is fine.
The target group also shows that the instance has failed the health check, but the Auto Scaling group is not terminating the unhealthy instance and replacing it.
Below is the CloudFormation code:
```
AutoScalingGroup:
  Type: AWS::AutoScaling::AutoScalingGroup
  Properties:
    VPCZoneIdentifier:
      - Fn::ImportValue: !Sub ${EnvironmentName}-PR1
      - Fn::ImportValue: !Sub ${EnvironmentName}-PR2
    LaunchConfigurationName: !Ref AppLaunchConfiguration
    MinSize: 1
    MaxSize: 4
    TargetGroupARNs:
      - Ref: WebAppTargetGroup

AppLoadBalancer:
  Type: AWS::ElasticLoadBalancingV2::LoadBalancer
  Properties:
    SecurityGroups:
      - Ref: ApplicationLoadBalancerSecurityGroup
    Subnets:
      - Fn::ImportValue: !Sub ${EnvironmentName}-PU1
      - Fn::ImportValue: !Sub ${EnvironmentName}-PU2
    Tags:
      - Key: Name
        Value: !Ref EnvironmentName

Listener:
  Type: AWS::ElasticLoadBalancingV2::Listener
  Properties:
    DefaultActions:
      - Type: forward
        TargetGroupArn: !Ref WebAppTargetGroup
    LoadBalancerArn: !Ref AppLoadBalancer
    Port: "80"
    Protocol: HTTP

LoadBalancerListenerRule:
  Type: AWS::ElasticLoadBalancingV2::ListenerRule
  Properties:
    Actions:
      - Type: forward
        TargetGroupArn: !Ref WebAppTargetGroup
    Conditions:
      - Field: path-pattern
        Values: [/]
    ListenerArn: !Ref Listener
    Priority: 1

WebAppTargetGroup:
  Type: AWS::ElasticLoadBalancingV2::TargetGroup
  Properties:
    HealthCheckIntervalSeconds: 10
    HealthCheckPath: /
    HealthCheckProtocol: HTTP
    HealthCheckTimeoutSeconds: 8
    HealthyThresholdCount: 2
    Port: 80
    Protocol: HTTP
    UnhealthyThresholdCount: 5
    VpcId:
      Fn::ImportValue:
        Fn::Sub: "${EnvironmentName}-VPCID"
```
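For context, the group above does not set a health check type, so it presumably falls back to the default EC2 status checks rather than the target-group health check; the properties that control this are sketched below (illustrative only, not a verified fix for this stack):
```
# Hypothetical sketch: Auto Scaling group properties that tie instance
# replacement to the ELB/target-group health check instead of the
# default EC2 status checks.
AutoScalingGroup:
  Type: AWS::AutoScaling::AutoScalingGroup
  Properties:
    HealthCheckType: ELB          # treat targets failing the target-group check as unhealthy
    HealthCheckGracePeriod: 60    # seconds after launch before health checks count
    # ... remaining properties as in the template above ...
```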
We want to run containers on an ECS cluster (EC2/Fargate) that will run jobs of arbitrary duration (30 minutes on average). We want to scale out based on active TCP connection metrics from the AWS load balancer (1 TCP connection carries a stream that needs to be handled by 1 container). This seems possible with CloudWatch metrics. The question is how to **scale in** once a job is done. We do not want to stop a container while it is still processing an active connection, and it could take hours for a job to finish (that is OK).
Is there a way to remove a container once it is done processing (i.e. no active TCP connections anymore)?
If the process inside the container stops, is it removed from the desired active container total?
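For what it's worth, the scale-in knob that can be expressed directly on the load balancer side is the target group's deregistration delay, which keeps a draining target around for a while before it is removed; it is capped at 3600 seconds, so on its own it presumably would not cover jobs that run for hours. A sketch with placeholder values:
```
# Hypothetical sketch: target group with the deregistration delay raised
# to its maximum, giving a draining target up to an hour before removal.
JobTargetGroup:
  Type: AWS::ElasticLoadBalancingV2::TargetGroup
  Properties:
    Port: 8080                          # placeholder port
    Protocol: TCP
    TargetType: ip
    VpcId: vpc-0123456789abcdef0        # placeholder VPC
    TargetGroupAttributes:
      - Key: deregistration_delay.timeout_seconds
        Value: "3600"
```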
Hi all, I have a situation where service A (a Spring Cloud application on Tomcat) is deployed on EC2 (in an Auto Scaling group) using Chef (deployment time ~10 minutes). The servers are behind an NLB (cross-zone load balancing enabled, sticky sessions disabled).
The issue is that as soon as a server is brought in service, the NLB health check passes before the Chef run completes; the target becomes healthy while the service A deployment is still in progress. This causes a problem for service B (running on different EC2 instances in a different ASG), which is trying to connect to service A.
Service B's connection to service A fails if the NLB routes the request to the service A instance in question.
Since there is no option to set a health check path with a TCP health check, one root cause I can think of is that as soon as Tomcat comes up, the NLB gets a health check response, which is enough to mark the target healthy, even though the service is still being deployed on Tomcat.
Is there a way to handle this situation, other than replacing the NLB with an ALB?
(PS: the application uses Spring Cloud Netflix patterns: Eureka, Config, Zuul, etc.)
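For reference, one mechanism that fits this kind of gap is an Auto Scaling lifecycle hook: as I understand it, a new instance stays in Pending:Wait, and is not put in service behind the load balancer, until something (for example the end of the Chef run) completes the lifecycle action. A minimal sketch with placeholder names and timeout:
```
# Hypothetical sketch: hold new instances in Pending:Wait until the
# deployment signals completion, so they are not put in service early.
ServiceALaunchHook:
  Type: AWS::AutoScaling::LifecycleHook
  Properties:
    AutoScalingGroupName: !Ref ServiceAAutoScalingGroup   # placeholder ASG
    LifecycleTransition: autoscaling:EC2_INSTANCE_LAUNCHING
    HeartbeatTimeout: 1800          # room for the ~10 minute Chef run to finish
    DefaultResult: ABANDON          # drop the instance if nothing completes the hook
# A final step of the Chef run would then call
# `aws autoscaling complete-lifecycle-action` to release the instance.
```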
I have an EC2 instance of type t2.micro running a web service that communicates with services outside the server, so it needs internet access. I have noticed that internet connectivity is lost at least once a month and I have to restart the instance for it to go back to normal, and I would like to solve that problem. It is worth mentioning that I have a load balancer in front of that server and a security certificate for my domain.
How do we extract the Tomcat 8 JVM memory usage metrics and send them to CloudWatch?