Questions tagged with Elastic Load Balancing
Hello,
I'm not a web developer. I created my AWS Elastic Beanstalk web app with a Classic Load Balancer. I'm now setting up a CloudFront distribution as a CDN, with a custom domain that I bought through AWS Route 53. CloudFront is working, but it's not responding to POST requests. From what I've read online, it sounds like my Elastic Beanstalk web app should be migrated to an Application Load Balancer. Could you help, please? - Haile
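For background, and regardless of the load balancer type behind it, CloudFront only forwards the HTTP methods listed in a behavior's `AllowedMethods` setting, which defaults to GET and HEAD. A sketch of the relevant cache-behavior fragment (illustrative only, everything around it is omitted):

```yaml
# Fragment of a CloudFront cache behavior; POST is only forwarded
# to the origin when all seven methods are allowed.
AllowedMethods:
  - GET
  - HEAD
  - OPTIONS
  - PUT
  - POST
  - PATCH
  - DELETE
```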
When Elastic Beanstalk auto-generates resources, the NLB is created with network mappings for subnets using "Assigned by AWS" IPv4 addresses.
How would it be possible to associate an Elastic IP with a Beanstalk environment's Network Load Balancer for **inbound** traffic? *(This is not to be confused with a [static "source" IP address](https://repost.aws/knowledge-center/elastic-beanstalk-static-IP-address) in Beanstalk.)*
I reviewed the [related CloudFormation resources](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-elasticloadbalancingv2-loadbalancer-subnetmapping.html) to see if or how I could make use of them, but I am not sure whether they are applicable to Elastic Beanstalk environments.
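For context, one pattern sometimes attempted here (a hypothetical sketch, not a confirmed supported configuration: `AWSEBV2LoadBalancer` is Beanstalk's internal logical resource ID, and the subnet and allocation IDs below are placeholders) is an `.ebextensions` override that replaces the load balancer's `Subnets` property with `SubnetMappings` carrying an Elastic IP allocation:

```yaml
# .ebextensions/nlb-eip.config -- hypothetical sketch; IDs are placeholders.
Resources:
  AWSEBV2LoadBalancer:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Type: network
      Scheme: internet-facing
      SubnetMappings:
        - SubnetId: subnet-0123456789abcdef0
          AllocationId: eipalloc-0123456789abcdef0
```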
I have created 4 EC2 instances, three of them in us-east-1c and one in us-east-1d. I created two target groups with two instances each, created a simple HTML page on each server using PuTTY, and created an Application Load Balancer whose default HTTP rule forwards to target group 1.
But when I try to divert traffic to target group 2 using a path-based listener rule, I get a 404 Not Found error.
I created an index.html file in each server's document root, /var/www/html. When I use the ALB DNS name, it displays the target group 1 servers, i.e. server 1 and server 2.
In the document root of server 3 and server 4 I created a folder, e.g. images. When I set the listener rule path to /images and forwarded it to target group 2 (i.e. server 3 and server 4), the URL shows 404 Not Found.
What mistake am I making here? Kindly explain.
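For reference, a rule like the one described can be expressed with the AWS CLI roughly as below (the listener and target group ARNs are placeholders supplied via variables). Note that a path pattern of `/images` alone matches only that exact path; `/images/` and `/images/index.html` need a wildcard such as `/images*`, and the requested file must actually exist at that path on the target servers:

```shell
# Hypothetical sketch -- LISTENER_ARN and TG2_ARN are placeholders.
aws elbv2 create-rule \
  --listener-arn "$LISTENER_ARN" \
  --priority 10 \
  --conditions Field=path-pattern,Values='/images*' \
  --actions Type=forward,TargetGroupArn="$TG2_ARN"
```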
Afternoon all. If I am reading things correctly, a Network Load Balancer has a 55k connection limit, and as things stand right now I am hovering around a 52k active flow count.
So, if I am correct about the above, I can't simply split the load by changing a DNS endpoint. The current setup is a single DNS name with a CNAME to the NLB, so I can't just add a second NLB and have two values in Route 53 (that I know of). What is considered best practice here? What would be a good solution?
Thanks much
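For what it's worth, Route 53 does allow multiple records under the same name when they carry a routing policy. A sketch of a weighted-alias change batch splitting one name across two NLBs (the domain, the NLB DNS names, and the alias hosted-zone IDs are placeholders; the `HostedZoneId` inside `AliasTarget` must be the NLB's own alias zone for its region, not your hosted zone):

```json
{
  "Changes": [
    {"Action": "UPSERT",
     "ResourceRecordSet": {
       "Name": "service.example.com", "Type": "A",
       "SetIdentifier": "nlb-1", "Weight": 50,
       "AliasTarget": {
         "HostedZoneId": "ZZZZZZZZZZZZZZ",
         "DNSName": "my-nlb-1.elb.us-east-1.amazonaws.com",
         "EvaluateTargetHealth": true}}},
    {"Action": "UPSERT",
     "ResourceRecordSet": {
       "Name": "service.example.com", "Type": "A",
       "SetIdentifier": "nlb-2", "Weight": 50,
       "AliasTarget": {
         "HostedZoneId": "ZZZZZZZZZZZZZZ",
         "DNSName": "my-nlb-2.elb.us-east-1.amazonaws.com",
         "EvaluateTargetHealth": true}}}
  ]
}
```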

I am trying to get a handle on how you define an ALB, its listeners, target group, and security groups in a CloudFormation template, so I wrote out this pseudocode listing. Is this correct if the ALB is internal, listening on port 443 for traffic and sending that traffic to port 80 on the instance web server?
* ALB
  * Properties:
    * Type: internal
    * Listener: 80
    * Listener: 443
    * Subnets
    * SecurityGroups
    * LBAttributes
* ALBListener80
  * Properties:
    * Reference: ALB
    * Port: 80
    * Redirect rule to port 443
* ALBListener443
  * Properties:
    * Reference: ALB
    * Port: 443
    * SSL Policy
    * Certificate
    * Forward rule to ALBTarget80
* ALBTarget80
  * Properties:
    * Port: 80
    * VPCid
    * TargetgroupAttributes
    * Registered instance(s)
    * Healthcheck
      * Check port 80
* ALBSecurityGroup
  * Ingress rules:
    * Allow port 80 from VPC CIDR
    * Allow port 443 from VPC CIDR
  * Egress rules:
    * Allow port 80 to InstanceSecurityGroup
    * Allow port 443 to InstanceSecurityGroup
    * Allow all traffic to 127.0.0.1/32
* InstanceSecurityGroup
  * Ingress rules:
    * Allow port 80 from VPC CIDR
    * Allow port 443 from ALBSecurityGroup
  * Egress rules:
    * Allow all to 0.0.0.0/0
Am I looking at this correctly?
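As a rough illustration only (this is a sketch under the assumption of an internal HTTPS-terminating ALB, not a validated template; the subnet, VPC, and certificate identifiers are placeholders, and the security group resources are omitted), the outline above maps to CloudFormation resources along these lines:

```yaml
# Hypothetical sketch; IDs and the certificate ARN are placeholders.
Resources:
  ALB:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Scheme: internal
      Subnets: [subnet-aaaa1111, subnet-bbbb2222]
      SecurityGroups: [!Ref ALBSecurityGroup]
  ALBListener80:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      LoadBalancerArn: !Ref ALB
      Port: 80
      Protocol: HTTP
      DefaultActions:
        - Type: redirect
          RedirectConfig: {Protocol: HTTPS, Port: "443", StatusCode: HTTP_301}
  ALBListener443:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      LoadBalancerArn: !Ref ALB
      Port: 443
      Protocol: HTTPS
      SslPolicy: ELBSecurityPolicy-TLS13-1-2-2021-06
      Certificates:
        - CertificateArn: arn:aws:acm:us-east-1:123456789012:certificate/placeholder
      DefaultActions:
        - Type: forward
          TargetGroupArn: !Ref ALBTarget80
  ALBTarget80:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      Port: 80
      Protocol: HTTP
      VpcId: vpc-cccc3333
      HealthCheckPort: "80"
```

One detail worth noting: on an internal ALB, the port 80 to 443 redirect listener only matters if internal clients actually call port 80.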
I need to forward traffic received from a Site-to-Site VPN to another VPN, but SNAT is required before packets can be sent through the IPsec tunnel.
I am thinking of using PrivateLink with a private NAT gateway as the target of the Network Load Balancer.
Is a private NAT gateway as an NLB target a supported configuration?
If yes, how do I set up health checks for the target group?
When I create an internal NLB and attach it to target instances in a set of private subnets, the NLB is assigned private IPs from those subnets. Are these IPs subject to change over the NLB's lifetime?
I'm aware that a public-facing NLB can be given a static Elastic IP, but this is strictly about an internal NLB.
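Related context: the CloudFormation `SubnetMapping` property does allow pinning an internal NLB's addresses explicitly via `PrivateIPv4Address`, which is one way to keep them fixed by construction rather than relying on automatic assignment (a sketch only; the subnet ID and IP address are placeholders):

```yaml
# Hypothetical sketch; subnet ID and IP address are placeholders.
Resources:
  InternalNLB:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Type: network
      Scheme: internal
      SubnetMappings:
        - SubnetId: subnet-0123456789abcdef0
          PrivateIPv4Address: 10.0.1.50
```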
So, I have a Fargate proxy service whose public IP is disabled. Because the service is in a private subnet, I configured a load balancer, a NAT gateway, and an internet gateway for it. Now I am trying to access the service through a CloudFront distribution with a behavior, `token`, whose origin is the load balancer. The service works fine sometimes, but at other times it gives me this 504 CloudFront error:
"504 ERROR
The request could not be satisfied.
CloudFront attempted to establish a connection with the origin, but either the attempt failed or the origin closed the connection. We can't connect to the server for this app or website at this time. There might be too much traffic or a configuration error. Try again later, or contact the app or website owner.
If you provide content to customers through CloudFront, you can find steps to troubleshoot and help prevent this error by reviewing the CloudFront documentation.
Generated by cloudfront (CloudFront)
Request ID: RCHf8wHj1tiIdHY1XGCIjAYl2PClTVwR4F3k5hzUbiTsEsfbb0-Oug=="
For testing purposes, I have configured the security group of the load balancer to allow all traffic from the internet, and the same for the security group of my Fargate service. I have also configured a NAT gateway so the service can reach the internet (it forwards requests to another service on the internet), and an internet gateway so that the service on the internet can talk to the Fargate service.
How should I resolve this error? Could this be a CloudFront-specific error, and if so, how should I resolve it?
Also, I cannot see any issue in the Fargate service logs or in the load balancer logs; they show that the traffic is being forwarded.
Hi, I'm a newbie taking the AWS Cloud Architect course on Coursera and currently on Course 1, Module 4, Exercise 7. I believe I followed all the instructions to a T and have tried it twice now and continue to get stuck on the following Task within the assignment:
Task 5: Testing the application
In this task, you will stress-test the application and confirm that it scales.
Return to the Amazon EC2 console.
In the navigation pane, under Load Balancing, choose Target Groups.
Make sure that app-target-group is selected and choose the Targets tab.
You should see two additional instances launching.
Wait until the Status for both instances is healthy.
My status never reaches the "healthy" state; the targets keep failing with "Unhealthy" and then "Draining" (Target deregistration is in progress).
Can someone tell me why this would happen and where I should check to correct it?
Thank you in advance.
I have an NLB -> Target Group -> Targets setup.
I added a new target, which is healthy. However, the traffic distribution is still not even after 4 hours.
I came across a couple of posts indicating possibilities around:
- Target IP caching
- Long-lived TCP connections
As I don't have control over the client, is there a way I can reach a balanced distribution?
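One attribute sometimes relevant to uneven NLB traffic distribution (hedged: it addresses imbalance across Availability Zones rather than long-lived connections or client-side DNS caching) is cross-zone load balancing, which is disabled by default on NLBs. The load balancer ARN below is a placeholder:

```shell
# Hypothetical sketch -- NLB_ARN is a placeholder.
aws elbv2 modify-load-balancer-attributes \
  --load-balancer-arn "$NLB_ARN" \
  --attributes Key=load_balancing.cross_zone.enabled,Value=true
```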
My supervisor asked this question, but I'm not sure how to measure it scientifically. Currently I make several requests to an HTTP service and record cURL's `time_starttransfer` statistic twice: once with the domain name resolving to the IP address of the ELB, and once resolving directly to the EC2 instance. I then subtract the numbers to produce an answer, but I'm not sure this is the proper way to do so.
I also checked the CloudWatch dashboard and couldn't find a similar metric. Is there one?
I would like answers for both the NLB (OSI layer 4) and the ALB (OSI layer 7). Thanks in advance!
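To make the subtraction approach a bit more robust, comparing medians over many samples helps, since `time_starttransfer` also includes DNS, connect, and TLS time, which vary per request. A minimal sketch (the curl loop that would collect real samples is shown in a comment because it needs live endpoints; the sample numbers below are placeholders):

```shell
# Collect real samples with something like (hostname is a placeholder):
#   curl -s -o /dev/null -w '%{time_starttransfer}\n' https://via-elb.example.com/
median() { printf '%s\n' "$@" | sort -n | awk '{a[NR]=$1} END {print a[int((NR+1)/2)]}'; }

elb_samples="0.052 0.047 0.061 0.049 0.055"      # placeholder timings (seconds)
direct_samples="0.041 0.038 0.045 0.040 0.043"   # placeholder timings (seconds)

m_elb=$(median $elb_samples)
m_direct=$(median $direct_samples)
# The overhead estimate is the difference of the two medians.
awk -v a="$m_elb" -v b="$m_direct" 'BEGIN {printf "estimated overhead: %.3f s\n", a - b}'
```

On the CloudWatch side, the ALB does publish a `TargetResponseTime` metric (the time from the request leaving the load balancer until the target's response begins), which can serve as a cross-check; I am not aware of an equivalent per-request latency metric for the NLB.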
Hi,
I am using an NLB to serve RTMP connections. The NLB's targets are multiple nodes in an EKS cluster, and nginx-rtmp pods run on those nodes. When I stream multiple streams, connections get dropped on the client side, and nginx-rtmp logs "drop idle stream". My idle-timeout configuration on nginx-rtmp is 30 seconds. I am using EC2 instances with 5 Gbps of bandwidth to generate the load.
I cannot figure out why this is happening. Multiple connections drop within a single second, and sometimes all of them are on the same node.
Also, when I check the NLB access logs I find only two IPs listed as target IPs, and I cannot find either IP on any pod or node.