Questions tagged with Elastic Load Balancing
I have an instance running Ubuntu with a Laravel application on Apache, and I installed an SSL certificate in this application. I'm using a load balancer in my architecture, but when I try to access my application through its DNS name, I get a connection refused message.
The load balancer and instance security groups allow access on ports 443 and 80.
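For reference, this is roughly how I've been verifying the setup from the CLI (the ARNs are placeholders for my own):
```
# Placeholder ARNs. Confirm the load balancer actually has listeners on
# 80/443 and that the targets behind them are healthy.
aws elbv2 describe-listeners --load-balancer-arn <my-load-balancer-arn>
aws elbv2 describe-target-health --target-group-arn <my-target-group-arn>
```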

I am trying to set up OpenSearch in a private VPC subnet behind a load balancer in a public subnet. The load balancer endpoint is in turn placed behind a CloudFront distribution. Right now I am testing this with HTTP; I will try HTTPS once we are able to set up our DNS.

After configuring the security groups to allow OpenSearch and the ALB to communicate, and after adding the listener/target group, I am able to connect to OpenSearch through the load balancer endpoint. However, if I try to access it via the CloudFront endpoint, I get a 504 error: The Request Could Not Be Satisfied. Timing requests to the ALB endpoint with curl, I noticed it takes 75 seconds to respond with 200 OK, so it seems CloudFront is giving up because of the load balancer's late responses. It always takes exactly 75 seconds, except that sometimes when I first fire up the cluster, the first response comes back in a fraction of a second as it should, and then all subsequent attempts take 75 seconds.

I am in Maryland and the cluster is set up in the Oregon region. I tried this with three progressively larger OpenSearch instance types, and the compute power made no difference. I've been trying to figure this out for weeks. Any suggestions on what I am doing wrong? Thanks!
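For what it's worth, this is how I've been timing the ALB directly (the hostname is a placeholder for mine). As far as I understand, CloudFront's default origin response timeout is around 30 seconds, which would explain the 504 when the origin takes 75 seconds:
```
# Placeholder hostname. Measure how long the ALB takes to start responding.
curl -s -o /dev/null \
  -w 'start-transfer: %{time_starttransfer}s  total: %{time_total}s\n' \
  http://my-alb-1234567890.us-west-2.elb.amazonaws.com/
```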
Dear Support,
We are using App Runner to run our dockerized Spring Boot app and want to expose the particular port on which the service responds.
However, the domain exposed by **App Runner** does not include the port that was configured, which means the service answers on the default secure port, i.e. **443**.
Is this a **bug**? And does this mean the App Runner service is secure even without **basic authentication** set up? Please kindly clarify.
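For context, this is a sketch of how we understand the port is configured (placeholder names and values): App Runner forwards its public HTTPS endpoint on 443 to the container port set in the service's source configuration, and the port never appears in the URL.
```
# Placeholder names/values. The container port App Runner forwards to is set
# in the service's source configuration; the public URL always stays on 443.
aws apprunner update-service \
  --service-arn <service-arn> \
  --source-configuration '{
    "ImageRepository": {
      "ImageIdentifier": "<account>.dkr.ecr.<region>.amazonaws.com/my-app:latest",
      "ImageRepositoryType": "ECR",
      "ImageConfiguration": { "Port": "8080" }
    }
  }'
```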
Thanks and Regards
Thulsi Doss Krishnan
Hello!
I want to create an app that requires a lot of computing power (an API that generates images with Stable Diffusion), so I'll use EC2 instances to do the computation. The entry point of my backend will be an Amazon API Gateway, which only has to handle a few request types (about three), each with a very consistent (and known) workload. The number of user requests could vary greatly (up and down) over a relatively short period of time.
What's the best (and most cost-effective) way to scale this workload? I looked at load balancers, but I didn't find a good way to use one for this purpose. I was thinking about creating an SQS queue to store requests and scaling up my EC2 instances when too many requests stack up. Is that a good idea? If so, what's the best way to do it?
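Concretely, the kind of setup I have in mind (all names are hypothetical) is an Auto Scaling group of EC2 workers with a simple scaling policy triggered by a CloudWatch alarm on the queue backlog:
```
# Hypothetical names throughout. Scale the worker Auto Scaling group out by
# one instance whenever the SQS backlog stays above 10 messages for 2 minutes.
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name sd-workers \
  --policy-name scale-out-on-backlog \
  --adjustment-type ChangeInCapacity \
  --scaling-adjustment 1
# The command above returns a PolicyARN; wire it into the alarm below.
aws cloudwatch put-metric-alarm \
  --alarm-name sd-queue-backlog-high \
  --namespace AWS/SQS \
  --metric-name ApproximateNumberOfMessagesVisible \
  --dimensions Name=QueueName,Value=sd-requests \
  --statistic Average --period 60 --evaluation-periods 2 \
  --threshold 10 --comparison-operator GreaterThanThreshold \
  --alarm-actions <policy-arn-from-above>
```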
I'm all ears! Thanks in advance.
I have tried creating an ECS cluster using both Fargate and EC2 to run one of my prebuilt containers from ECR. I have done this using both the console and Terraform code based on this blog: https://medium.com/avmconsulting-blog/how-to-deploy-a-dockerised-node-js-application-on-aws-ecs-with-terraform-3e6bceb48785, but I still face target deregistration / unhealthy targets. For reference, my container listens on port 8000. Right now the container is hosted on a plain EC2 instance, but I'd like to use ECS for scalability. Is there a simple, foolproof guide I can follow for this?
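These are the things I've been checking so far, in case it helps narrow it down (names are placeholders for mine):
```
# Placeholder names. 1) The task definition maps the container port the
# target group expects:
aws ecs describe-task-definition --task-definition my-app \
  --query 'taskDefinition.containerDefinitions[0].portMappings'
# 2) The health check hits a path the app actually serves with a 200:
aws elbv2 modify-target-group --target-group-arn <target-group-arn> \
  --health-check-path / --health-check-port traffic-port
# 3) The task/instance security group allows traffic from the ALB's security
#    group on port 8000 (or the full ephemeral range for dynamic host ports).
```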
I have Jenkins running on an EC2 instance, with NGINX on the same instance listening on port 80 and forwarding to port 8080 for Jenkins. In front of this I have an ALB listening on port 443 with a certificate set up.
When I go to https://jenkins.example.com I can log in, but then I get 400 Bad Request "The plain HTTP request was sent to HTTPS port" and the URL changes to http://jenkins.example.com:443/loginError.
I tried adding another listener on port 80 with a redirect to 443. That did nothing. I even changed it to just print out a message, but I never got the message.
Any idea where I might be missing something?
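For reference, here is a sketch of the kind of NGINX server block I understand is needed in front of Jenkins when the ALB terminates TLS (names simplified); I'm not sure mine sets these forwarded headers:
```
server {
    listen 80;
    server_name jenkins.example.com;

    location / {
        # Jenkins listens on 8080 on the same host.
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # The ALB terminates TLS, so tell Jenkins the original scheme and
        # port; otherwise it builds redirects like http://...:443/.
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-Port 443;
    }
}
```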
We have an application deployed in EKS that dynamically registers ingress rules in ALB.
Each ingress rule maps to a distinct hostname on a common domain (e.g. `foo-001.example.com`, `foo-002.example.com`, etc.).
At the moment we are hitting the ALB target group limit of 100, as each ingress rule creates both an ALB rule *and* an ALB target group. We have had the rule limit increased to 200, but the target group limit cannot be changed.
Is there a way to reuse/share target groups when creating the EKS ingress objects?
We currently use the following annotation when creating the ingress object:
```
'alb.ingress.kubernetes.io/target-type': 'ip',
```
The documentation implies that changing this to `instance` would allow us to have one target group per Kubernetes node the services are deployed to... but we aren't sure.
This is what we're reading: https://catalog.workshops.aws/eks-immersionday/en-US/services-and-ingress/targetgroupbinding
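From that workshop page, our understanding is that a pre-created target group can be attached to a Service with a `TargetGroupBinding` object instead of letting each Ingress create its own, something like the sketch below (the name and ARN are placeholders):
```
apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: foo-001-binding
spec:
  serviceRef:
    name: foo-001      # existing Service whose endpoints become targets
    port: 80
  targetGroupARN: <pre-created-target-group-arn>
  targetType: ip
```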
We are looking for an appropriate VPN implementation to provide access to applications behind an Application Load Balancer (ALB) to our internal team only.
We are using an internet-facing ALB which exposes several applications, such as a backend API (for a CloudFront distribution) and others based on EC2 instances.
We have already implemented Client VPN with routing via a NAT gateway with an Elastic IP address, and we filter with ALB rules based on host header (the DNS records of the applications point to the ALB) and source IP address (the Elastic IP address of the NAT gateway).
This means our developers establish a connection with Client VPN, which has a static outbound IP address. When they try to access the applications, the ALB checks the host header and source IP address and then forwards the requests.
This works correctly in full-tunnel mode but not with split-tunnel.
Is there a solution or additional configuration we need to set up to be able to use split-tunnel?
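Our working assumption is that in split-tunnel mode only the routes in the Client VPN route table are pushed to clients, so traffic to the internet-facing ALB leaves via the developer's local connection instead of the NAT gateway. This is a sketch of what we are considering (IDs and the CIDR are placeholders):
```
# Placeholder IDs. Push a route for the ALB-facing CIDR through the VPN so
# split-tunnel clients still egress via the NAT gateway's Elastic IP.
aws ec2 create-client-vpn-route \
  --client-vpn-endpoint-id cvpn-endpoint-0123456789abcdef0 \
  --destination-cidr-block 203.0.113.0/24 \
  --target-vpc-subnet-id subnet-0123456789abcdef0
aws ec2 authorize-client-vpn-ingress \
  --client-vpn-endpoint-id cvpn-endpoint-0123456789abcdef0 \
  --target-network-cidr 203.0.113.0/24 \
  --authorize-all-groups
```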
Hi Team,
It would be much appreciated if someone could advise on the question below.
There is an EC2 instance in VPC1 (account 1). When it connects to a public-facing ELB in the vendor's VPC2, does the traffic go over the internet or does it stay on the AWS global/private network?
FYI, no VPC peering is configured.
The VPC FAQ says:
===================
Q. Does traffic go over the internet when two instances communicate using public IP addresses, or when instances communicate with a public AWS service endpoint?
No. When using public IP addresses, all communication between instances and services hosted in AWS use AWS's private network. Packets that originate from the AWS network with a destination on the AWS network stay on the AWS global network, except traffic to or from AWS China Regions.
===================
Hi,
I am trying to use a third-party service with my EC2 instances.
The third-party service has some security rules; one of them is that my instances' IPs have to be whitelisted.
I am working with multiple instances that can be scaled over time, and I don't want to set an Elastic IP for each instance or register a new IP every time I add an instance.
Is there a way to use a service (maybe a proxy) that sits in front of all my instances and forwards their outgoing requests from the same IP?
I also believe it is more secure to put my outgoing requests behind a proxy.
Can I get an explanation/tutorial for doing that?
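From what I've read so far, a NAT gateway might be what I'm describing; this is a sketch of what I think that looks like (all IDs are placeholders):
```
# Placeholder IDs. With instances in private subnets routing through a NAT
# gateway, all outbound traffic shares the NAT gateway's Elastic IP, so only
# one address ever needs whitelisting.
aws ec2 allocate-address --domain vpc
aws ec2 create-nat-gateway --subnet-id <public-subnet-id> \
  --allocation-id <eipalloc-id-from-above>
aws ec2 create-route --route-table-id <private-route-table-id> \
  --destination-cidr-block 0.0.0.0/0 --nat-gateway-id <nat-gateway-id>
```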
Thank you!
Asaf
Hi everyone!
I've been working with a Fargate app recently and I've managed to make everything work (load balancer, listeners, routing, a CNAME in Route 53, HTTPS and HTTP protocols), except for one thing:
The app itself is just a website, bundled into a Docker image that an ECS task deploys. It generally has no task running; it starts the task once a call to the website is received. If a call arrives while the task is not running, the user gets a 503 response. As soon as that happens, the task spawns and takes about 5 to 7 seconds to actually get up and running; the website does respond after that, and the target registers. After 15 seconds or so, it deregisters again (and again, anyone who visits the website after that receives the 503), and the loop begins again.
There's another Fargate app, built by John Doe with the same logic, that keeps its task alive for longer once it's up and running. So far I've tested it with pings every 3 minutes and it keeps responding; I'm still testing, but from the logs I can see that it doesn't go down as soon as a response is given, it reaches an *idle* state, whereas mine doesn't.
So the issue is that my task takes about 5 to 7 seconds to spawn and the website shows a 503 response until it does, even though the other app has exactly the same configuration regarding the LB idle timeout, target groups, inbound/outbound rules, scaling rules, and so on.
I don't think the problem is in the load balancer or its target groups, because they are configured correctly and the targets do register. The only difference I've noticed is the behaviour of the task itself: theirs, once up and running, won't deregister after 15-30 seconds, while mine does.
I need to know how to make that "task lifecycle" longer, i.e. the time the actual task lives. I've read somewhere that tasks can run for as long as we want. So my question is: how do I do that? How do I set the "idle" time of a task in a Fargate app before it goes down again?
If the task is failing health checks, how do I troubleshoot them without going through the target group? Could ECS somehow be deciding that the task is failing health checks and taking it down? If so, is there a way of telling ECS to keep it alive?
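For reference, these are the kinds of commands I've been poking at to understand why the task goes down (names are placeholders for mine):
```
# Placeholder names. First, see why ECS stopped the task and what the
# service has been doing recently:
aws ecs describe-services --cluster my-cluster --services my-service \
  --query 'services[0].events[:5]'
aws ecs describe-tasks --cluster my-cluster --tasks <stopped-task-id> \
  --query 'tasks[0].stoppedReason'
# Then keep one task running permanently and give it time to pass its first
# ALB health checks before ECS starts counting failures:
aws ecs update-service --cluster my-cluster --service my-service \
  --desired-count 1 --health-check-grace-period-seconds 120
```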
Thank you in advance!
I recently migrated an IIS .NET application (lift and shift) from on-premises to the cloud.
How do I implement a health check for this IIS application in the target group, and what path should it point to? The target group has 3 EC2 instances.
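For context, my current guess is something like the sketch below (the ARN and path are placeholders): point the target group's health check at a lightweight page that IIS can serve anonymously, without redirects, on every instance.
```
# Placeholder ARN and path. The page should return a plain 200 from each of
# the 3 instances without authentication or redirects.
aws elbv2 modify-target-group \
  --target-group-arn <target-group-arn> \
  --health-check-protocol HTTP \
  --health-check-path /healthcheck.html \
  --health-check-port traffic-port \
  --matcher HttpCode=200
```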