Questions tagged with Elastic Load Balancing
Hello Team,
In EC2, I have deployed a Flask application on port 8000. For security, I am redirecting HTTP requests to HTTPS using an Application Load Balancer and Route 53.
I configured the security groups to allow only HTTP (8000) and HTTPS (443) with source 0.0.0.0/0, and I applied the same security group to the load balancer.
In the Network ACL, I am allowing all traffic.
The issue is:
Some unwanted/unconfigured IP addresses are hitting my application. Let's say I have configured the EC2 instance at 12.23.42.23 and configured domain requests (https://example.com/api/hit), but the EC2 instance is accepting calls from other IP addresses (e.g., 32.43.23.23). I see many calls from IP addresses I never configured hitting the application.
So I tried restricting the Network ACL to allow only 8000 and 443, but then no requests reached the server at all.
Please help me with the details of which security groups to use for the EC2 instance and the load balancer, and how to configure the network ACL to allow only 12.23.42.23 and the configured domain requests (https://example.com/api/hit).
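For reference, the usual pattern is to let only the ALB face the internet: the ALB's security group allows 443 from 0.0.0.0/0, and the instance's security group allows port 8000 only *from the ALB's security group*, not from 0.0.0.0/0. Also note that network ACLs are stateless, so an NACL restricted to inbound 443/8000 still needs the ephemeral port range (1024-65535) allowed for return traffic, which is why locking it down cut off all requests. A minimal sketch with hypothetical group IDs (nothing below calls AWS; the dicts only mirror the shape boto3's `authorize_security_group_ingress` expects):

```python
# Hypothetical security group IDs: sg-...alb on the load balancer,
# sg-...app attached to the EC2 instance running Flask.
ALB_SG = "sg-0alb0000000000000"
APP_SG = "sg-0app0000000000000"

alb_ingress = [
    {   # HTTPS from anywhere: the ALB is the only public entry point
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    },
]

app_ingress = [
    {   # Flask port reachable only from the ALB's security group, so
        # direct hits from arbitrary client IPs never reach the instance
        "IpProtocol": "tcp", "FromPort": 8000, "ToPort": 8000,
        "UserIdGroupPairs": [{"GroupId": ALB_SG}],
    },
]

# NACLs are stateless: if you restrict one, also allow the ephemeral
# port range so response/return traffic is not dropped.
nacl_ephemeral_rule = {"FromPort": 1024, "ToPort": 65535, "Action": "allow"}
```

With this split, arbitrary IPs can still reach the ALB (it is public by design), but they can no longer bypass it and hit port 8000 on the instance directly.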
I have the following architecture in mind. Does this require two separate ALBs (i.e., configuring one of the two to perform auth), or can a single ALB be configured to do Cognito auth on specific routes?
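For reference, authentication on an ALB is a per-listener-rule action, so a single ALB can authenticate only specific paths: one rule matching those paths carries an `authenticate-cognito` action before its `forward` action, while the default rule forwards without auth. A sketch of such a rule; all ARNs, IDs, and the `/admin/*` path are hypothetical placeholders, and the actual API call is left commented out:

```python
# Listener rule: Cognito auth on /admin/* only, then forward.
rule_kwargs = {
    "ListenerArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
                   "listener/app/my-alb/abc123/def456",
    "Priority": 10,
    "Conditions": [
        {"Field": "path-pattern",
         "PathPatternConfig": {"Values": ["/admin/*"]}},
    ],
    "Actions": [
        {   # step 1: authenticate against the Cognito user pool
            "Type": "authenticate-cognito",
            "Order": 1,
            "AuthenticateCognitoConfig": {
                "UserPoolArn": "arn:aws:cognito-idp:us-east-1:123456789012:"
                               "userpool/us-east-1_EXAMPLE",
                "UserPoolClientId": "example-client-id",
                "UserPoolDomain": "example-domain",
            },
        },
        {   # step 2: forward authenticated requests to the target group
            "Type": "forward",
            "Order": 2,
            "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:"
                              "123456789012:targetgroup/app/0123456789abcdef",
        },
    ],
}
# boto3.client("elbv2").create_rule(**rule_kwargs)
```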

I would like to implement an upload API to handle large files (e.g., 1 GB+). The architecture I have in mind looks like this (I am intentionally avoiding uploads to S3).

I know that some services, like API Gateway, limit request payloads to a maximum of 10 MB per request. Does ELB impose a similar limitation that would make the above architecture unworkable?
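For what it's worth, ALB does not document a small fixed body-size cap like API Gateway's 10 MB for EC2/ECS targets (Lambda targets are the exception, with a roughly 1 MB body limit), so the constraint tends to be the idle timeout rather than payload size, and the client should stream rather than buffer. A minimal streaming sketch; the chunk size and the `requests.post` usage shown in the comment are assumptions, not part of the original post:

```python
import io

CHUNK_SIZE = 8 * 1024 * 1024  # 8 MiB per chunk; tune as needed

def chunked(fileobj, chunk_size=CHUNK_SIZE):
    """Yield a file in fixed-size chunks so the whole 1 GB+ body is
    never held in memory; a generator like this can be passed as the
    request body to an HTTP client that supports chunked uploads."""
    while True:
        block = fileobj.read(chunk_size)
        if not block:
            return
        yield block

# e.g. with the `requests` library:
#   requests.post(upload_url, data=chunked(open("big.bin", "rb")))
```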
I recently set up an AWS Elastic Beanstalk load-balanced environment on t2.medium instances. I noticed a CPU spike event (lasting a few seconds), followed by high latency that stays at around 80 s and hangs forever; it never auto-recovers until a manual reboot. (See chart.)

The new environment is a clone of our old instance (Tomcat 8.5 with Java 8 running on 64bit Amazon Linux/3.4.18). The CPU spike might be caused by a batch job, but why does the latency stay at 80 s after CPU usage recovers? This has happened twice in two weeks. I checked the logs and found no other suspicious events.
The old instance (a t2.small running the same code) never behaved like this and never had latency anywhere near this high. Can anyone give me some hints?
According to https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-limits.html, the default limit for certificates per Application Load Balancer is 25. The page also states that it's adjustable, but I couldn't find any answer as to what the absolute maximum number of certificates is.
We're currently planning a new application for existing customers, where each customer gets its own subdomain, which should be delivered over TLS. So my question is: is there an actual limit on how many certificates can be added to a single ALB? Just for completeness: there will be only one rule and one target group to handle all customer requests.
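For reference, additional certificates beyond the listener's default are attached as SNI certificates. A sketch of the attach call with hypothetical ARNs; the call itself is left commented out. (If every customer subdomain shares one parent domain, a single wildcard certificate may sidestep the per-ALB limit entirely.)

```python
# Attach an extra SNI certificate to an existing HTTPS listener.
add_cert_kwargs = {
    "ListenerArn": "arn:aws:elasticloadbalancing:eu-west-1:123456789012:"
                   "listener/app/my-alb/abc123/def456",
    "Certificates": [
        {"CertificateArn": "arn:aws:acm:eu-west-1:123456789012:"
                           "certificate/11111111-2222-3333-4444-555555555555"},
    ],
}
# boto3.client("elbv2").add_listener_certificates(**add_cert_kwargs)
```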
I have an ECS Fargate container app that serves API requests over the public internet.
My understanding is that this API service container can be deployed in a public subnet and configured behind an ALB (DNS name plus target group).
Since the target group forwards traffic to the private IP of the ECS task, I assume we don't need a public IP enabled when launching the task.
However, when I attempt this, the ECS task launch fails with the error: "Resourceinitializationerror: unable to pull secrets or registry auth: execution resource retrieval failed: unable to retrieve ecr registry auth: service call has been retried 3 time(s): RequestError: send request failed caused by: Post "https://api.ecr.eu-west-2.amazonaws.com/": dial tcp 52.94.53.88:443: i/o timeout"
If this is not workable and we need to enable a public IP on the task launch, I'd prefer to restrict access to the public IP's port to the ALB only, as a security best practice. Could someone suggest a workable approach for this use case, please? Thanks.
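The timeout in the error suggests the task cannot reach ECR from its subnet. Without a public IP, a task in that subnet generally needs either a NAT gateway route or VPC endpoints for the services it calls at startup. A sketch of the endpoint set typically needed for ECR pulls; the VPC/subnet/route-table IDs are hypothetical, and the actual API calls are left commented out:

```python
# Interface endpoints a no-public-IP Fargate task typically needs to pull
# from ECR, plus an S3 *gateway* endpoint (ECR image layers live in S3)
# and CloudWatch Logs if the task uses the awslogs driver.
REGION = "eu-west-2"

interface_endpoint_kwargs = [
    {
        "VpcId": "vpc-00000000000000000",            # hypothetical
        "ServiceName": f"com.amazonaws.{REGION}.{svc}",
        "VpcEndpointType": "Interface",
        "SubnetIds": ["subnet-00000000000000000"],   # the task's subnet
        "PrivateDnsEnabled": True,
    }
    for svc in ("ecr.api",   # ECR auth / API calls
                "ecr.dkr",   # Docker registry pulls
                "logs")      # awslogs log driver
]

s3_endpoint_kwargs = {
    "VpcId": "vpc-00000000000000000",
    "ServiceName": f"com.amazonaws.{REGION}.s3",
    "VpcEndpointType": "Gateway",
    "RouteTableIds": ["rtb-00000000000000000"],
}

# for kw in interface_endpoint_kwargs:
#     boto3.client("ec2").create_vpc_endpoint(**kw)
# boto3.client("ec2").create_vpc_endpoint(**s3_endpoint_kwargs)
```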
I have two AWS accounts.
1. Account A has several ECS services, an ALB with target groups targeting IPs and ports on those ECS services, and a hosted zone with a Route 53 alias record tying an API URL (e.g., api.example.com) to the ALB DNS name.
2. Now there is a requirement to have another ECS service, but in a separate account (account B) for security reasons. We still need to use the existing "api.example.com".
What options do I have to deploy the new ECS service to account B but have its traffic routed through account A's ALB, so we can still use the same API URL? Is my best option VPC peering, PrivateLink, or something else? I'm struggling to find a good example of this.
Also, account B does not have any ALB set up right now. Just an ECS service not exposed by any ALB target group. Could I potentially create a separate ALB in account B, add a target group that targets the new ECS service, and then somehow DNS my way into using the API URL configured in account A's hosted zone?
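For reference, once the two VPCs are connected (e.g., via VPC peering or Transit Gateway), account A's ALB can use an IP-type target group that registers the account-B task IPs directly; IP targets outside the load balancer's own VPC are registered with `AvailabilityZone="all"`. A sketch with hypothetical ARN, IPs, and port, the call itself commented out:

```python
# Register account-B task IPs (reachable over peering) in account A's
# IP-type target group.
register_kwargs = {
    "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111111111111:"
                      "targetgroup/svc-b/abcdef1234567890",
    "Targets": [
        {"Id": "10.1.2.30", "Port": 8080, "AvailabilityZone": "all"},
        {"Id": "10.1.2.31", "Port": 8080, "AvailabilityZone": "all"},
    ],
}
# boto3.client("elbv2").register_targets(**register_kwargs)
```

Note that task IPs change as ECS replaces tasks, so in practice something has to keep this registration up to date.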
Thank you!
Hello,
I am using Elastic Beanstalk, and I create a minimum of 3 EC2 instances with auto scaling. I have one REST API as a service, and I am using MongoDB Cloud Atlas; the REST API gets its data from MongoDB Atlas. The service and MongoDB are in the same region.
I am doing load tests with 5 tasks and a concurrency of 200. I am watching the EC2 instances' CPUs, the load balancer's requests, and MongoDB's monitoring. Everything seems OK (for example, CPU usage is around 10%), but some tests fail; sometimes only 30% succeed. How can I find the reason for the failed tests?
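One place to look: failures often don't show in CPU graphs, so pulling the load balancer's 5XX counters (split into load-balancer-generated vs. target-generated errors) usually narrows down whether the targets erred or the ALB rejected/timed out requests. A sketch assuming the Beanstalk environment uses an ALB; the `LoadBalancer` dimension value is a hypothetical placeholder, and the query calls are left commented out:

```python
from datetime import datetime, timedelta, timezone

end = datetime.now(timezone.utc)
metric_kwargs = [
    {
        "Namespace": "AWS/ApplicationELB",
        "MetricName": name,
        "Dimensions": [{"Name": "LoadBalancer",
                        "Value": "app/my-alb/abcdef1234567890"}],  # hypothetical
        "StartTime": end - timedelta(hours=1),
        "EndTime": end,
        "Period": 60,
        "Statistics": ["Sum"],
    }
    for name in ("HTTPCode_ELB_5XX_Count",     # errors generated by the ALB
                 "HTTPCode_Target_5XX_Count",  # errors returned by targets
                 "TargetConnectionErrorCount") # failed connections to targets
]
# cw = boto3.client("cloudwatch")
# for kw in metric_kwargs:
#     print(kw["MetricName"], cw.get_metric_statistics(**kw)["Datapoints"])
```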
Hi AWS community 👋
I've set up an Application Load Balancer - ECS stack and noticed the ALB is causing latency issues. Direct requests to the Fargate instance have a latency of ~15ms consistently, but requests to the ALB's DNS or public IP have a latency of ~170ms.
Strangely, measuring the latency with **Desktop Safari** and all the **mobile browsers** gives ~15ms, but not with **Desktop Chrome, curl or postman** (all ~170ms).
Another odd behavior I have observed is that the ALB exhibits a warm-up effect: the latency increases to over 2 s after some idle time and converges back to 170 ms after 1-3 requests. There is no such behavior with the Fargate instance.
Can you help me identify the cause of the latency issue and suggest potential solutions?
Edit: It appears that ALB may not be the cause of the issue, as the [ALB log](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-access-logs.html) shows that `request_processing_time`, `target_processing_time` and `response_processing_time` are all less than 1ms. What could be the source of the problem? I have no idea. ¯\\(°_o)/¯
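One way to narrow this down: client-side differences between Chrome/curl and Safari often come down to which DNS answer (IPv4 vs. IPv6) each client picks, or to connection reuse, so timing each phase separately against both the ALB hostname and the Fargate IP can reveal where the 170 ms goes. A plain-HTTP sketch (hostname/port are whatever you test against; HTTPS would add a TLS handshake phase not measured here):

```python
import socket
import time
from http.client import HTTPConnection

def time_request(host, port=80, path="/"):
    """Break client-side latency into DNS, TCP connect, and TTFB phases."""
    t0 = time.perf_counter()
    # DNS resolution; getaddrinfo may return both IPv6 and IPv4 entries,
    # and different clients can pick different families
    addrinfo = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)
    t_dns = time.perf_counter()

    conn = HTTPConnection(host, port, timeout=10)
    conn.connect()                      # TCP handshake
    t_connect = time.perf_counter()

    conn.request("GET", path)
    resp = conn.getresponse()           # blocks until the status line arrives
    t_ttfb = time.perf_counter()
    resp.read()
    conn.close()

    return {
        "dns_ms": (t_dns - t0) * 1000,
        "connect_ms": (t_connect - t_dns) * 1000,
        "ttfb_ms": (t_ttfb - t_connect) * 1000,
        "families": sorted({ai[0].name for ai in addrinfo}),
    }
```

Comparing the `families` field and the per-phase timings for the ALB vs. the Fargate IP should show whether the gap is DNS, connection setup, or server-side.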
Screenshots: Chrome (to ALB), Chrome (to Fargate), Safari, curl.
Hello Everyone,
I have one point of confusion about ALB pricing: what is "average connection duration (in seconds)" in LCU pricing?
Also, is ALB pre-warming still needed for handling flash traffic, and if so, how do I enable pre-warming on an ALB?
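On the first question: per AWS's pricing page at the time of writing, an ALB LCU bundles roughly 25 new connections/second, 3,000 active connections/minute, 1 GB/hour of processed bytes (for EC2 targets), and 1,000 rule evaluations/second, and you are billed on whichever dimension is highest. Average connection duration matters because it converts the new-connection rate into active connections: active ≈ new connections/second × average duration in seconds. A small worked sketch (the traffic numbers are made up):

```python
def alb_lcus(new_conns_per_sec, avg_duration_sec, gb_per_hour, rule_evals_per_sec):
    """Estimate ALB LCUs per hour; billing uses the max dimension."""
    active_conns = new_conns_per_sec * avg_duration_sec  # avg duration's role
    dims = {
        "new_connections": new_conns_per_sec / 25.0,
        "active_connections": active_conns / 3000.0,
        "processed_bytes": gb_per_hour / 1.0,
        "rule_evaluations": rule_evals_per_sec / 1000.0,
    }
    return max(dims.values()), dims

# e.g. 50 new conns/s with a 180 s average duration -> 9,000 active
# connections, so the active-connections dimension dominates at 3 LCU
# even though the new-connection dimension is only 2 LCU.
```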
Good morning,
I have an Elastic Beanstalk environment stuck updating. I switched the auto scaling to zero (desired, min, max) and the instance has been removed, but the status is still "updating".
The last operation was adding a listener to the ELB through the EB UI to open port 443. I tried to manually remove the listeners from the ELB, but nothing changed.
I can't deploy, abort the operation, or clone the environment.
What should I do?