Questions tagged with Elastic Load Balancing
Route 53 A record with Load Balancer DNS not propagating
I've configured a Load Balancer, but when I add an A record in the Hosted Zone, the DNS does not propagate. Let me explain my current configuration (say the domain is 'something.com'; security groups are allowing traffic, as are the Lightsail firewall rules):

1. Lightsail instance and VPC peered (the AWS default VPC and the Lightsail VPC are in the same Availability Zones and currently peered). I'll call this the 'previous VPC' in the following points.
2. A target group pointing to the private IP address of the Lightsail instance (Type: IP addresses, Network: 'Other private IP address', previous VPC, HTTPS protocol, Healthy state).
3. Load Balancer with an imported certificate, internet-facing, IPv4, previous VPC, 2 subnets selected (including the one the Lightsail instance belongs to).
4. Hosted Zone for 'something.com' with a DNS A record for 'dummy.something.com' pointing to the Load Balancer DNS, as an Alias that routes traffic to 'Classic Load Balancer and applications', same region, and the previously created Load Balancer.

I've done this before to protect an OWASP Juice Shop and it worked perfectly. The differences with the current setup are:

1. A DNS zone on Lightsail with an A record for 'dummy.something.com' pointing to the instance's public IP (I delete that record when creating the Route 53 one from point 4 above), among other record types for 'something.com' (for example, an A record for apidummy.something.com).
2. The hosted zone is NOT 'created by Route 53 Registrar'.

After all of this, and after creating the DNS A record of point 4, the DNS does not propagate and the application hosted on 'dummy.something.com' is not accessible (a DNS error is returned). What am I doing wrong or missing? Should I create a CNAME record on Lightsail for 'dummy.something.com' resolving to the Load Balancer DNS? Should I register 'dummy.something.com' with Route 53? Something completely different? Any help would be really appreciated.
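For reference, the alias A record described in point 4 can be expressed as a Route 53 change batch like the sketch below. All zone IDs and DNS names here are placeholders, not values from the question; the load balancer's own hosted zone ID and `dualstack` DNS name come from the load balancer's description, and the alias must be created in whichever hosted zone is actually authoritative (per the domain's NS records) for 'something.com'.

```python
# Sketch of the Route 53 change that creates an alias A record for a load
# balancer. All IDs and names below are placeholders, not real values.
change_batch = {
    "Changes": [
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "dummy.something.com",
                "Type": "A",  # an alias record is an A record, not a CNAME
                "AliasTarget": {
                    # Hosted zone ID of the *load balancer*, not of something.com
                    "HostedZoneId": "Z35SXDOTRQ7X7K",
                    "DNSName": "dualstack.my-alb-123456789.us-east-1.elb.amazonaws.com",
                    "EvaluateTargetHealth": False,
                },
            },
        }
    ]
}

# Applying it requires AWS credentials, so the call is shown but not executed:
# import boto3
# boto3.client("route53").change_resource_record_sets(
#     HostedZoneId="ZAAAAAAAAAAAAA",  # the hosted zone of something.com itself
#     ChangeBatch=change_batch,
# )
```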
API Gateway certificate error
Hi, I have the following setup: api.mydomain.com (Route 53) -> API GW REST API instance as an HTTP proxy -> ELB DNS name -> ECS. I can convert the REST API to an HTTP API if required. When I make a call to api.mydomain.com, I get the following error in the CloudWatch console: "Execution failed due to configuration error: Host name '<ELB_DNS_NAME>' does not match the certificate subject provided by the peer (CN=mydomain.com)". What is the root cause of this in detail, and what is the best way to solve the problem? Is my approach correct? Any help appreciated, thanks.
[🚀Launch Announcement] - AWS Gateway Load Balancer launches Target Failover feature
Hello, the ELB team is happy to announce that we just launched a new Target Failover feature, which provides an option to define flow-handling behavior for AWS Gateway Load Balancer. Using this option, customers can now rebalance existing flows to a healthy target when a target fails or deregisters. This helps reduce failover time when a target becomes unhealthy, and also allows customers to gracefully patch or upgrade appliances during maintenance windows.

Launch Details:

* This feature uses the existing ELB API/Console and provides new attributes to specify the flow-handling behavior. You can use the existing `modify-target-group-attributes` API to define flow-handling behavior using the two new attributes `target_failover.on_unhealthy` and `target_failover.on_deregistration`.
* This feature does not change the default behavior, and existing GWLBs are not affected.
* The feature is available using the API and the AWS Console.
* The feature is available in all commercial, GovCloud, and China regions. It will be deployed in ADC regions at a later date based on demand.
* Customers should evaluate the effect of enabling this feature on availability and check their third-party appliance provider's documentation.
* AWS appliance partners should consider taking the following actions: (a) validate whether rebalancing existing flows to a healthy target has implications for their appliance, as it will start receiving the flow midway, i.e. without getting the TCP SYN; (b) update public documentation on how this feature will affect their appliance; (c) partners may use this capability to improve stateful flow handling on their appliances.
Launch Materials:

* Launch Blog - https://aws.amazon.com/blogs/networking-and-content-delivery/introducing-aws-gateway-load-balancer-target-failover-for-existing-flows/
* Feature Documentation - https://docs.aws.amazon.com/elasticloadbalancing/latest/gateway/target-groups.html#target-failover
* Attribute Documentation - https://docs.aws.amazon.com/elasticloadbalancing/latest/gateway/target-groups.html#target-group-attributes

Thank you!
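As a quick sketch of the launch details above, the two new attributes can be set through `modify-target-group-attributes`; the target group ARN below is a placeholder, and both attributes are switched here from the default `no_rebalance` to `rebalance`.

```python
# Sketch of setting the new Target Failover attributes on a GWLB target group.
# The ARN is a placeholder; "rebalance" opts in, the default is "no_rebalance".
attributes = [
    {"Key": "target_failover.on_deregistration", "Value": "rebalance"},
    {"Key": "target_failover.on_unhealthy", "Value": "rebalance"},
]

# The API call itself needs AWS credentials, so it is shown but not executed:
# import boto3
# boto3.client("elbv2").modify_target_group_attributes(
#     TargetGroupArn="arn:aws:elasticloadbalancing:region:acct:targetgroup/my-gwlb-tg/0123456789abcdef",
#     Attributes=attributes,
# )
```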
Scope of encryption when running ECS on Nitro instances
If I have an ECS cluster running a single service with an ALB in front of that service, am I right in thinking that if the whole cluster is running on Nitro instances, the section of the network between the ALB and an instance within a target group would NOT be encrypted? Does the Nitro encryption only work between instances in the cluster, and not from the ALB to an instance? Would multiple services in a cluster need to use e.g. service discovery and go point-to-point between themselves, rather than via an ALB, in order to benefit from the network-level Nitro encryption?
ALB Custom Stickiness options
I'm trying to figure out whether Application Load Balancer has a way to support repeatably routing to specific targets within an auto scaling target group based on, e.g., a header value or origin IP. Our environment generally involves many devices per customer, and our application would be more efficient if we could route all or most devices from one customer to a single target within the auto scaling group. I've been looking at ALB's stickiness options, but there doesn't seem to be anything that immediately fits this pattern: individual devices can only be made sticky to targets after their initial connection. Is there a way to achieve this using ALB, or should I be looking for a more configurable product for this? I appreciate that NLB can do IP-based stickiness, but we use quite a lot of application-level features in our routing.
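For comparison, the closest ALB primitive to this is a header-based listener rule, sketched below. The header name, value, and ARNs are hypothetical, not from the question. Note this pins a customer to a dedicated target *group* (which could contain a single target), rather than to one target inside a shared auto scaling target group, so it may not fully match the pattern being asked about.

```python
# Sketch: routing requests that carry a hypothetical "X-Customer-Id" header to
# a dedicated target group via an ALB listener rule. All ARNs, header names,
# and values are placeholders.
rule = {
    "Priority": 10,
    "Conditions": [
        {
            "Field": "http-header",
            "HttpHeaderConfig": {
                "HttpHeaderName": "X-Customer-Id",
                "Values": ["customer-42"],
            },
        }
    ],
    "Actions": [
        {
            "Type": "forward",
            "TargetGroupArn": "arn:aws:elasticloadbalancing:region:acct:targetgroup/customer-42-tg/0123456789abcdef",
        }
    ],
}

# Creating the rule needs AWS credentials, so the call is shown but not executed:
# import boto3
# boto3.client("elbv2").create_rule(
#     ListenerArn="arn:aws:elasticloadbalancing:region:acct:listener/app/my-alb/0123/4567",
#     **rule,
# )
```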
Mapping Load Balancer Ports in the ECS Service
Please understand that I am not good at English. Suppose ECR has a repository called `AAA` whose image uses EXPOSE port `8080`. I create an ECS cluster and configure the container when defining the task, entering `8080` in the container port mapping. Now I am about to create a cluster service. If I add the task definition I just created and set the load balancer type to Application, the container selection box for load balancing automatically shows `AAA 8080:8080`. But I want the host side to be port `443`. **Is there any way I can set it to `8080:443`?**
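As a sketch of how this mapping is usually expressed: the public port lives on the ALB listener (443), while the target group keeps the container port (8080), so no `8080:443` mapping is needed in the task itself. The ARNs below are placeholders.

```python
# Sketch: clients reach the ALB on 443; the ALB forwards to a target group that
# registers the ECS tasks on container port 8080. All ARNs are placeholders.
listener = {
    "Protocol": "HTTPS",
    "Port": 443,  # the public-facing port
    "Certificates": [{"CertificateArn": "arn:aws:acm:region:acct:certificate/aaaa-bbbb"}],
    "DefaultActions": [
        {
            "Type": "forward",
            # a target group created with Port=8080, matching the container port
            "TargetGroupArn": "arn:aws:elasticloadbalancing:region:acct:targetgroup/aaa-8080/0123456789abcdef",
        }
    ],
}

# Creating the listener needs AWS credentials, so it is shown but not executed:
# import boto3
# boto3.client("elbv2").create_listener(
#     LoadBalancerArn="arn:aws:elasticloadbalancing:region:acct:loadbalancer/app/my-alb/0123456789abcdef",
#     **listener,
# )
```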
NLB preserving client IP addresses in combination with NACL having source CIDR constraint
I have a VPC with two subnets, each subnet containing an EC2 instance accessible via port 80. There is a NACL associated with both subnets restricting inbound traffic to a certain source CIDR outside of AWS. An internet-facing NLB is configured to route traffic to the instances via instance ID. If "preserve client IP addresses" is *disabled*, everything works fine: requests originating from the correct CIDR reach port 80. But if it is *enabled*, my requests time out. A solution is to add a rule to the NACL allowing inbound traffic from the VPC itself. This is in line with what the [documentation](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/target-group-register-targets.html#network-acls) says. But I don't understand why this is required only when preserving client IP addresses. If it is *preserving* the source IP address, it should be covered by the original NACL. I guess the answer is something like "... because Hyperplane", but I would like to have a deeper understanding.
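For concreteness, the workaround mentioned above (allowing inbound traffic from the VPC itself, which among other things covers health checks originating from the load balancer nodes' private addresses) looks like the sketch below. The ACL ID, rule number, and CIDR are placeholders.

```python
# Sketch: the extra NACL entry allowing inbound traffic from the VPC's own
# CIDR, as the linked documentation requires when client IP preservation is
# enabled. The ACL ID and CIDR below are placeholders.
nacl_entry = {
    "NetworkAclId": "acl-0123456789abcdef0",
    "RuleNumber": 150,
    "Protocol": "6",             # TCP
    "RuleAction": "allow",
    "Egress": False,             # inbound rule
    "CidrBlock": "10.0.0.0/16",  # the VPC CIDR
    "PortRange": {"From": 80, "To": 80},
}

# Applying it needs AWS credentials, so the call is shown but not executed:
# import boto3
# boto3.client("ec2").create_network_acl_entry(**nacl_entry)
```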
What is the role of ports in the target group of the application load balancer?
When setting up a load balancer, I understand that it involves a listener port, a target group port, and an instance port (IP port). I think traffic enters through the listener port, reaches the target group, and is distributed to targets on their instance ports. In that case, the target group port does not seem to affect the load balancing. What is the role of the target group port?
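One way to see the relationship between the two ports: the target group's port serves as the *default* port for its targets, and it can be overridden per target at registration time. A sketch with placeholder instance IDs and ARN:

```python
# Sketch: the target group port acts as the default target port; individual
# targets may override it when registered. IDs and ARNs are placeholders.
targets = [
    {"Id": "i-0123456789abcdef0"},                # uses the target group's port
    {"Id": "i-0fedcba9876543210", "Port": 9090},  # per-target port override
]

# Registering needs AWS credentials, so the call is shown but not executed:
# import boto3
# boto3.client("elbv2").register_targets(
#     TargetGroupArn="arn:aws:elasticloadbalancing:region:acct:targetgroup/my-tg/0123456789abcdef",
#     Targets=targets,
# )
```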
Problem with Application Load Balancer rules: health check only responds on the default rule
Hi everyone. I have 3 microservices running on an **ECS cluster**. Each microservice is launched by a **Fargate task** and runs in its own Docker container.

* *Microservice A* responds on port 8083.
* *Microservice B* responds on port 8084.
* *Microservice C* responds on port 8085.

My configuration consists of two public subnets, two private subnets, an internet gateway and a NAT gateway, as well as two security groups: one for the Fargate services and one for the ALB. On the security groups I have enabled inbound traffic on all ports. I have defined a listener for the ALB that responds on port 80 and wrote some path-based rules to route requests to the appropriate target group (every target group uses the IP target type):

![Enter image description here](/media/postImages/original/IM8oFOWQXjQEuDjdKe3PeGgw)

Only the health check of the target group that responds to the default rule succeeds (but I suspect it all happens randomly), and consequently only the service reachable on port 8083 works.

![Enter image description here](/media/postImages/original/IMtOk5-EqJRrmxLa49ium6hg)

The remaining target groups are **unreachable**. What I notice is that in the "*Registered targets*" section the assigned IP addresses change continuously. For example:

![Enter image description here](/media/postImages/original/IMkdJ_RNqsTJazJ3J8j4foqw)
![Enter image description here](/media/postImages/original/IMCm7LLgy1QJKk0JsLC3XlGg)

But every assigned IP generates a timeout. It can happen quite randomly that a certain IP address is registered correctly. These are the ECS configurations of one of the unresponsive services:

![Enter image description here](/media/postImages/original/IMOdt86JdpS_2paN_elspK5g)

What is the problem and how can I solve it? Thank you.

**UPDATE 1** I tried to add a new instance for microservice A. For the new IP (10.0.0.137) the health check is not responding.
After a few minutes, a new IP (10.0.0.151) is provisioned and registered correctly:

![Enter image description here](/media/postImages/original/IMUcZubrfCRrGo-fpqYAvSJQ)

**UPDATE 2** It is really strange behavior. **All services are now connected correctly**, after several hours of failed attempts. It looks like an IP address assignment problem. Before finding the correct address, AWS makes several attempts with different IP addresses until it randomly finds the correct one. These are the CIDRs of my PRIVATE subnets:

* private_subnets = ["10.0.0.128/28", "10.0.0.144/28"]
* public_subnets = ["10.0.0.0/28", "10.0.0.16/28"]

While these are the IPs that connected successfully:

1. 10.0.0.136 (microservice A instance 1)
2. 10.0.0.151 (microservice A instance 2)
3. 10.0.0.153 (microservice A instance 3)
4. 10.0.0.152 (microservice B)
5. 10.0.0.142 (microservice C)
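As a quick sanity check on the update above, every successfully registered task IP does fall inside one of the two private subnet CIDRs, which is consistent with Fargate tasks launched into the private subnets. A small stdlib-only sketch:

```python
# Check that the successfully registered task IPs from the update fall inside
# the private subnet CIDRs listed above (pure stdlib, no AWS calls).
import ipaddress

private_subnets = [ipaddress.ip_network(c) for c in ("10.0.0.128/28", "10.0.0.144/28")]
successful_ips = ["10.0.0.136", "10.0.0.151", "10.0.0.153", "10.0.0.152", "10.0.0.142"]

in_private = {
    ip: any(ipaddress.ip_address(ip) in net for net in private_subnets)
    for ip in successful_ips
}
print(in_private)  # every value is True: all IPs lie in a private subnet
```

So the addresses themselves are consistent with the subnet layout; the intermittent registrations would have to be explained by something other than the task IPs falling outside the subnets.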