
Questions tagged with Elastic Load Balancing



ApplicationLoadBalancedFargateService with load balancer, target groups, targets on non-standard port

I have an ECS service that exposes port 8080. I want to have the load balancer, target groups and target use that port as opposed to port 80. Here is a snippet of my code:

```
const servicePort = 8888;
const metricsPort = 8888;

const taskDefinition = new ecs.FargateTaskDefinition(this, 'TaskDef');
const repository = ecr.Repository.fromRepositoryName(this, 'cloud-config-server', 'cloud-config-server');
taskDefinition.addContainer('Config', {
  image: ecs.ContainerImage.fromEcrRepository(repository),
  portMappings: [{containerPort : servicePort, hostPort: servicePort}],
});

const albFargateService = new ecsPatterns.ApplicationLoadBalancedFargateService(this, 'AlbConfigService', {
  cluster,
  publicLoadBalancer : false,
  taskDefinition: taskDefinition,
  desiredCount: 1,
});

const applicationTargetGroup = new elbv2.ApplicationTargetGroup(this, 'AlbConfigServiceTargetGroup', {
  targetType: elbv2.TargetType.IP,
  protocol: elbv2.ApplicationProtocol.HTTP,
  port: servicePort,
  vpc,
  healthCheck: {path: "/CloudConfigServer/actuator/env/profile", port: String(servicePort)}
});

const addApplicationTargetGroupsProps: elbv2.AddApplicationTargetGroupsProps = {
  targetGroups: [applicationTargetGroup],
};

albFargateService.loadBalancer.addListener('alb-listener', {
  protocol: elbv2.ApplicationProtocol.HTTP,
  port: servicePort,
  defaultTargetGroups: [applicationTargetGroup]
});
```

This does not work. The health check is taking place on port 80 with the default URL of "/", which fails, and the tasks are constantly recycled. A target group on port 8080, with the appropriate health check, is added, but it has no targets. What is the recommended way to achieve load balancing on a port other than 80? thanks
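A minimal CDK (v2) sketch of one way this is commonly wired up: let the ecs_patterns construct create the listener and target group on the service port via `listenerPort`, then adjust the generated target group's health check, rather than adding a second target group by hand. It reuses `cluster` and `taskDefinition` from the snippet above and is untested against this exact stack:

```typescript
import { Duration } from 'aws-cdk-lib';
import * as ecsPatterns from 'aws-cdk-lib/aws-ecs-patterns';

const servicePort = 8888;

// listenerPort puts the ALB listener (and the pattern's generated target
// group) on the service port instead of the default 80.
const albFargateService = new ecsPatterns.ApplicationLoadBalancedFargateService(this, 'AlbConfigService', {
  cluster,
  publicLoadBalancer: false,
  taskDefinition,
  desiredCount: 1,
  listenerPort: servicePort,
});

// The pattern registers the service's default container as the target, so
// only the generated target group's health check needs adjusting.
albFargateService.targetGroup.configureHealthCheck({
  path: '/CloudConfigServer/actuator/env/profile',
  port: String(servicePort),
  interval: Duration.seconds(30),
});
```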
1 answer · 0 votes · 6 views · asked 16 days ago

Instances can't reach classic ELB in VPC after ENI change

Four or five times in the past 6-8 weeks, we've had situations where one of our EC2 instances (running CentOS) cannot reach the private IP address of a Classic ELB. I believe this is due to scaling events (or something else causing replacement of ELB components) happening on the ELB. From what I see in CloudTrail, the network interface is replaced with one having the same IP address but a different MAC address. Sometimes, but not all the time, the old MAC address gets stuck in the instance's ARP cache (in REACHABLE state), preventing the instance from communicating with the ELB and causing drastic issues for our application. If I manually delete the entry from the ARP cache, things start working again.

This is happening across different environments, so multiple subnets, multiple ELBs and multiple EC2 instances. These environments and components have been running for years without seeing this issue before. The only network config change we've recently made is to disable jumbo frames earlier this year, but I don't see how that would impact this. Any ideas how to fix this? Thanks

EDIT: this happened again today and I was able to examine things more closely. The new ENI is actually re-using an IP address that had been used over a month prior. The old entry for said IP address is still listed in the ARP cache with the prior MAC address, despite not having been used for about four weeks. This explains why it's starting to happen more frequently, as the chance that an IP address gets re-used increases as new ENIs are created for the ELBs. It's a /26 subnet, so there are not a lot of addresses to choose from.
1 answer · 0 votes · 2 views · asked 24 days ago

Horizontal Scaling concerns, SSL issue with NLB

Note: I'm new to scaling and am firstly seeking advice on best practices for horizontal scaling.

**I have the following setup:** *EC2 Instances <-> ASG (created from launch template) -> TG <-> ALB <-> TG <-> NLB*

Traffic flows through the NLB to the ALB and finally to the EC2 instances configured via the ASG. Note: I'm assuming the above setup is the best one to go with for horizontal scaling; if not, please let me know. The above setup works fine for HTTP, whereas when I try to configure HTTPS, I don't see options to do so.

**Issue 1:** a target group (TG) with the Load Balancer target type can't be created with TLS port 443; only TCP port 80 is allowed.

**Question 1:** how else should I redirect HTTPS traffic to the ALB? Note: I need the NLB because the ALB doesn't provide static IPs.

**Question 2:** regarding static IPs: the NLB doesn't allow fewer than 2 AZs, which means I need to have 2 static IPs linked to my domain?

Any inputs would be really helpful!

**Update 1:** I've configured it like below.

In the ALB listeners:
- HTTP (80) gets redirected to HTTPS
- HTTPS (443) gets forwarded to the ASG

In the NLB listeners:
- HTTP (80) gets forwarded to the ALB

Note: the ALB's public URL is added to my domain (sample-alb.domain.com), and the NLB's public URL is added to my domain (sample-nlb.domain.com).

SSL works fine if the user enters by hitting sample-alb.domain.com, whereas if the user enters by hitting sample-nlb.domain.com, it always fails with "ERR_CERT_INVALID". Any inputs on why this fails?

**Update 2:** I've got the answer to my Issue 1/Question 1 on how to redirect HTTPS traffic to the ALB from here: https://docs.aws.amazon.com/elasticloadbalancing/latest/network/application-load-balancer-target.html#configure-application-load-balancer-target

> **Listeners and routing**
> For Listeners, the default is a listener that accepts TCP traffic on port 80. Only TCP listeners can forward traffic to an Application Load Balancer target group. Keep the listener protocol set to TCP, but you can modify the port as required.
>
> This setup allows you to use HTTPS listeners on the Application Load Balancer to terminate the TLS protocol.

So I created a TG with TCP port 80 and a listener on the NLB, which redirects to the ALB (say for example my NLB's public URL is 'nlb34323.amazonaws.com'). Now, when I hit my NLB's public URL with 'http://nlb34323.amazonaws.com', it does get redirected to 'https://nlb34323.amazonaws.com', but it eventually fails with a timeout error. Note: when I hit the ALB's public URL, it works fine. Does it have anything to do with TLS termination as mentioned in the above documentation?

> This setup allows you to use HTTPS listeners on the Application Load Balancer to terminate the TLS protocol.

What am I doing wrong here?
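Regarding the quoted docs, a minimal CDK (v2) sketch of that pattern: plain TCP listeners on the NLB forwarding to ALB-type target groups (one per port), with TLS terminated only on the ALB. It assumes the `AlbTarget` helper from aws-elasticloadbalancingv2-targets; `alb`, `nlb`, `vpc`, `asgTargetGroup` and `certificateArn` stand in for the resources described above, and the certificate has to cover whichever hostname clients actually use, since TLS still terminates at the ALB:

```typescript
import * as elbv2 from 'aws-cdk-lib/aws-elasticloadbalancingv2';
import * as targets from 'aws-cdk-lib/aws-elasticloadbalancingv2-targets';

// ALB terminates TLS: 80 redirects to 443, 443 forwards to the ASG target group.
alb.addListener('HttpRedirect', {
  port: 80,
  defaultAction: elbv2.ListenerAction.redirect({ protocol: 'HTTPS', port: '443', permanent: true }),
});
alb.addListener('Https', {
  port: 443,
  certificates: [elbv2.ListenerCertificate.fromArn(certificateArn)],
  defaultTargetGroups: [asgTargetGroup],
});

// NLB listeners stay plain TCP, one per port, each forwarding to the ALB on
// the matching port; the HTTPS path only works if TCP 443 is forwarded too.
const albOn80 = new elbv2.NetworkTargetGroup(this, 'AlbOnPort80', {
  vpc,
  port: 80,
  targets: [new targets.AlbTarget(alb, 80)],
});
const albOn443 = new elbv2.NetworkTargetGroup(this, 'AlbOnPort443', {
  vpc,
  port: 443,
  targets: [new targets.AlbTarget(alb, 443)],
});
nlb.addListener('Tcp80', { port: 80, defaultTargetGroups: [albOn80] });
nlb.addListener('Tcp443', { port: 443, defaultTargetGroups: [albOn443] });
```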
2 answers · 0 votes · 7 views · asked a month ago

Problem receiving IP 127.0.0.1 at service startup instead of local IP

**Context:** We've got a number of load-balanced web servers running on Windows in AWS using C# .NET (5). We have a web server application as well as a Windows Service running on the same machine, and we have problems with logging from the Windows Service.

**Problem description:** Since we have many servers running load balanced, we name the log stream with the private IP number in order to distinguish which machine potentially has problems. This private IP is extracted at startup of the application (for both the Windows Service and the web server). This is usually successful, but yesterday we had an incident where one Windows Service log stream was labeled with 127.0.0.1 instead of the local IP number. Eventually I was able to pinpoint which server it was and restarted the Windows Service, which made the private IP number appear in the new log stream name.

**Suggested reason with possible solution:** I'm guessing this is a race condition. The machine had not yet received its private IP number from the AWS network before our service asked for it. If so, we can wait for the real IP to appear just to make sure we get the right IP number in our log.

I have three questions related to this:

1. **Do you see any other reason than the one I suggested why the IP number 127.0.0.1 appears?**
2. **Is there a better solution available than the one I suggested?**
3. **Is there a way, using an AWS API of some sort, to get hold of the public IP for the server?**

Here's the code for how we extract the private IP address in this context:

```
var hostName = System.Net.Dns.GetHostName();
var ipAddresses = System.Net.Dns.GetHostAddresses(hostName);
var ipv4Address = ipAddresses.FirstOrDefault(ip => ip.AddressFamily == System.Net.Sockets.AddressFamily.InterNetwork);
```
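The code above is C#, but as an illustration of question 3: the instance metadata service (IMDSv2) returns the instance's private and public IPv4 addresses from the hypervisor side, independently of the OS hostname/DNS lookup. A small Node/TypeScript sketch under that assumption; the endpoint and paths are the standard IMDS ones, the function name and error handling are illustrative only:

```typescript
// Node 18+ (built-in fetch). IMDSv2: get a session token, then read the
// instance's addresses from the instance metadata service.
const IMDS = 'http://169.254.169.254/latest';

async function getInstanceIps(): Promise<{ privateIp: string; publicIp: string | null }> {
  const token = await (
    await fetch(`${IMDS}/api/token`, {
      method: 'PUT',
      headers: { 'X-aws-ec2-metadata-token-ttl-seconds': '21600' },
    })
  ).text();
  const headers = { 'X-aws-ec2-metadata-token': token };

  const privateIp = await (await fetch(`${IMDS}/meta-data/local-ipv4`, { headers })).text();

  // public-ipv4 is only present when the instance actually has a public address.
  const publicRes = await fetch(`${IMDS}/meta-data/public-ipv4`, { headers });
  const publicIp = publicRes.ok ? await publicRes.text() : null;

  return { privateIp, publicIp };
}

getInstanceIps().then(console.log).catch(console.error);
```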
2 answers · 0 votes · 14 views · asked a month ago

ALB: unexpected 504 timeouts.

Hello, we're running an application with the following setup: ALB -> TargetGroup -> ECS service. The ECS service is running on Fargate and it is a web server. The ALB timeout is set to 20 seconds, and the service's idle timeout is set to 120 seconds. We've confirmed that the ECS service keeps connections open for at least 120 seconds. We've also configured request logging on this ALB and we've been querying it with Athena.

However, it seems like the ALB still issues 504s to some requests immediately; we regularly see log entries like these:

```
# type time elb client_ip client_port target_ip target_port request_processing_time target_processing_time response_processing_time elb_status_code target_status_code received_bytes sent_bytes request_verb request_url request_proto user_agent ssl_cipher ssl_protocol target_group_arn trace_id domain_name chosen_cert_arn matched_rule_priority request_creation_time actions_executed redirect_url lambda_error_reason target_port_list target_status_code_list classification classification_reason
1 h2 2022-04-05T12:29:35.833859Z app/ProdV2PublicApi/0a57bb243c2024e7 REDACTED 60232 10.42.17.243 18000 0.001 -1.0 -1.0 504 - 307 202 PUT https://apibeta.centralapp.com:443/api/v2/inbox/table-booking/request-status?urn=urn%3Acentralapp%3Acompany%3A9211291968452165724 HTTP/2.0 Mozilla/5.0 (Windows NT 6.0; rv:52.0) Gecko/20100101 Firefox/52.0 ECDHE-RSA-AES128-GCM-SHA256 TLSv1.2 arn:aws:elasticloadbalancing:eu-west-1:899041400740:targetgroup/ProdV2-Quasar-ECS/3ee8cde1808fdb22 Root=1-624c35f3-5852eff9590d1fd5285bc6e7 apibeta.centralapp.com arn:aws:acm:eu-west-1:899041400740:certificate/7f1122d9-2470-46d6-b28b-53a48c99bdd1 5 2022-04-05T12:28:35.831000Z waf,forward - - 10.42.17.243:18000 - - -
```

From the backend logs, it seems to us that the request never really gets to the ECS service, or, from the ECS service's standpoint, the connection gets prematurely terminated by the ALB. For the premature termination, we see logs in our backend like these:

```
[CRITICAL] [05/Apr/2022:10:18:47 +0000] ["Quasar/###WARP###"] Exception on request:Just Request { requestMethod = "POST" , httpVersion = HTTP/1.1 , rawPathInfo = "/api/v2/distributor/companies/d-sales/leads" , rawQueryString = "?id=centralapp_saas&page=0" ,
  requestHeaders = [ ( "X-Forwarded-For" , "217.111.215.151" ) , ( "x-amzn-tls-version" , "TLSv1.2" ) , ( "x-amzn-tls-cipher-suite" , "ECDHE-RSA-AES128-GCM-SHA256" ) , ( "X-Forwarded-Proto" , "https" ) , ( "X-Forwarded-Port" , "443" ) , ( "Host" , "apibeta.centralapp.com" ) , ( "X-Amzn-Trace-Id" , "Root=1-624c1784-223215ba597701b31758046f" ) , ( "Content-Length" , "40" ) , ( "sec-ch-ua" , "" Not A;Brand";v="99", "Chromium";v="99", "Google Chrome";v="99"" ) , ( "content-type" , "application/json; charset=UTF-8" ) , ( "sec-ch-ua-mobile" , "?0" ) , ( "user-agent" , "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/99.0.4844.84 Safari/537.36" ) , ( "sec-ch-ua-platform" , ""macOS"" ) , ( "accept" , "*/*" ) , ( "origin" , "https://beta.centralapp.com" ) , ( "sec-fetch-site" , "same-site" ) , ( "sec-fetch-mode" , "cors" ) , ( "sec-fetch-dest" , "empty" ) , ( "referer" , "https://beta.centralapp.com/" ) , ( "accept-encoding" , "gzip, deflate, br" ) , ( "accept-language" , "fr-FR,fr;q=0.9,en-US;q=0.8,en;q=0.7" ) , ( "x-amzn-waf-name" , "ACL_Public_Resources_Default" ) ] ,
  isSecure = False , remoteHost = 10.42.17.44:12692 , pathInfo = [ "api" , "v2" , "distributor" , "companies" , "d-sales" , "leads" ] , queryString = [ ( "id" , Just "centralapp_saas" ) , ( "page" , Just "0" ) ] , requestBody = <IO ByteString> , vault = <Vault> , requestBodyLength = KnownLength 40 , requestHeaderHost = Just "apibeta.centralapp.com" , requestHeaderRange = Nothing }
Exception was:Warp: Client closed connection prematurely
```

We've also tried setting things up without an ALB and we've not been able to reproduce this issue locally, so we're pretty suspicious of what the ALB is doing here. What should be our next course of action on this issue?

We're also noticing that the ALB tries to keep some connections open longer than the timeout:

```
[DEBUG] [05/Apr/2022:12:58:32 +0000] ["Quasar/###WARP###"] Socket: 10.42.17.105:39070 was open for: 90.897913261s
[DEBUG] [05/Apr/2022:12:58:35 +0000] ["Quasar/###WARP###"] Socket: 10.42.2.83:56654 was open for: 107.734655922s
[DEBUG] [05/Apr/2022:12:58:38 +0000] ["Quasar/###WARP###"] Socket: 10.42.2.83:56552 was open for: 129.749085682s
[DEBUG] [05/Apr/2022:13:14:17 +0000] ["Quasar/###WARP###"] Socket: 10.42.17.105:41128 was open for: 625.691958838s
```

Why?
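For reference, the two timeouts being compared above live in different places: the 20-second value is the ALB attribute `idle_timeout.timeout_seconds`, while the 120-second keep-alive is a setting on the backend web server itself. A minimal CDK (v2) sketch of the ALB side only; the construct ID and `vpc` are placeholders, and this does not diagnose the 504s:

```typescript
import * as elbv2 from 'aws-cdk-lib/aws-elasticloadbalancingv2';

// idle_timeout.timeout_seconds is the load balancer attribute behind the
// "ALB timeout is set to 20 seconds" statement; the backend keep-alive is
// configured separately on the server and is normally kept longer than this.
const alb = new elbv2.ApplicationLoadBalancer(this, 'PublicApiAlb', {
  vpc,
  internetFacing: true,
});
alb.setAttribute('idle_timeout.timeout_seconds', '20');
```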
0 answers · 0 votes · 5 views · asked a month ago

Built a dynamic website using Wordpress hosted on a 3-tier architecture

I created my presentation tier (web layer) with 3 public subnets containing one EC2 instance each and used an internet-facing ELB to distribute traffic to all of them. I also installed Apache on all of the instances. The ELB health check is healthy and so far everything is working.

On my application layer, I created 3 private subnets containing one EC2 instance each and used an internal-facing ALB to distribute traffic to all of them. My ALB receives traffic only from my web servers, and I installed WordPress on all 3 of them (the script to install WordPress also includes Apache and MySQL). The ALB health check says that the health check failed, the reason being "unhealthy threshold 2 consecutive health check failures". I also created a NAT gateway for these application servers.

I created my database on the database layer with its security group allowing traffic only from the app servers through port 3306.

From my understanding of a 3-tier architecture, they are all connected to one another through the security groups and even the route tables. Since I can use Session Manager to connect to all my web servers and app servers, I would like to believe that my security group ports are "ok". Here is their flow: INTERNET --> Internet-facing ELB-SG --> Web-SG --> Internal-facing ALB-SG --> App-SG --> DB-SG. The flow is unsecured, using HTTP (80).

1. How do I troubleshoot "unhealthy threshold 2 consecutive health check failures"?
2. How do I build my application so that it will be accessible using only the DNS name of the internet-facing ELB?
2 answers · 0 votes · 6 views · asked 2 months ago

EC2 instance can’t access the internet

Apparently, my EC2 instance can’t access the internet properly. Here is what happens when I try to install a Python module:

```
[ec2-user@ip-172-31-90-31 ~]$ pip3 install flask
Defaulting to user installation because normal site-packages is not writeable
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7fab198cbe10>: Failed to establish a new connection: [Errno 101] Network is unreachable')': /simple/flask/
```

etc. Besides, inbound ping requests to the instance's Elastic IP fail (Request Timed Out). However, the website that is hosted on the same EC2 instance can be accessed using both HTTP and HTTPS.

The security group is configured as follows: the inbound rules are

| Port range | Protocol | Source |
| ---------- | -------- | --------- |
| 80 | TCP | 0.0.0.0/0 |
| 22 | TCP | 0.0.0.0/0 |
| 80 | TCP | ::/0 |
| 22 | TCP | ::/0 |
| 443 | TCP | 0.0.0.0/0 |
| 443 | TCP | ::/0 |

the outbound rules are

| IP Version | Type | Protocol | Port range | Source |
| ---------- | ----------- | -------- | ---------- | --------- |
| IPv4 | All traffic | All | All | 0.0.0.0/0 |

The ACL inbound rules are:

| Type | Protocol | Port range | Source | Allow/Deny |
| --------------- | -------- | ---------- | --------- | ---------- |
| HTTP (80) | TCP (6) | 80 | 0.0.0.0/0 | Allow |
| SSH (22) | TCP (6) | 22 | 0.0.0.0/0 | Allow |
| HTTPS (443) | TCP (6) | 443 | 0.0.0.0/0 | Allow |
| All ICMP - IPv4 | ICMP (1) | All | 0.0.0.0/0 | Allow |
| All traffic | All | All | 0.0.0.0/0 | Deny |

and the outbound rules are:

| Type | Protocol | Port range | Source | Allow/Deny |
| --------------- | -------- | ------------ | --------- | ---------- |
| Custom TCP | TCP (6) | 1024 - 65535 | 0.0.0.0/0 | Allow |
| HTTP (80) | TCP (6) | 80 | 0.0.0.0/0 | Allow |
| SSH (22) | TCP (6) | 22 | 0.0.0.0/0 | Allow |
| HTTPS (443) | TCP (6) | 443 | 0.0.0.0/0 | Allow |
| All ICMP - IPv4 | ICMP (1) | All | 0.0.0.0/0 | Allow |
| All traffic | All | All | 0.0.0.0/0 | Deny |

This is what the route table associated with the subnet looks like (no explicit or edge associations):

| Destination | Target | Status | Propagated |
| ------------- | --------------------- | ------ | ---------- |
| 172.31.0.0/16 | local | Active | No |
| 0.0.0.0/0 | igw-09b554e4da387238c | Active | No |

As for the firewall, executing `sudo iptables -L` results in

```
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
```

and `sudo iptables -L -t nat` gives

```
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
```

What am I missing here? Any suggestions or ideas on this would be greatly appreciated. Thanks
2 answers · 0 votes · 19 views · asked 2 months ago

WAF blocking requests because of the ELB cookie values

Hi. I've noticed that the WAF **AWSManagedRulesCommonRuleSet** is **BLOCKING** (or COUNTING) legitimate requests because it matches the value of the **Elastic Load Balancer** cookie ("AWSALBTG") as a **false positive** for the rule **CrossSiteScripting_COOKIE**.

This is an example request that I extracted from the WAF CloudWatch logs (only the relevant info):

```
httpRequest.headers.13.name: cookie
httpRequest.headers.13.value: AWSALBTG=0naHdSsqK2TVnPXcAgo8cGqiA0X1v/4rqyWrE/OsL7eubnXAm8tJRmtFzcv5XbAmDVq6UpKw2ZY0BHcOMwuQLRh7lU3TMoHbHnA00gY2R+yG/4vtzy2meQptVHelSdfnAPR5heRTALuqaHUf/oNyw1kZibZHTTkzpONuiJZkpUIr2pVVqsQ=; AWSALBTGCORS=0naHdSsqK2TVnPXcAgo8cGqiA0X1v/4rqyWrE/OsL7eubnXAm8tJRmtFzcv5XbAmDVq6UpKw2ZY0BHcOMwuQLRh7lU3TMoHbHnA00gY2R+yG/4vtzy2meQptVHelSdfnAPR5heRTALuqaHUf/oNyw1kZibZHTTkzpONuiJZkpUIr2pVVqsQ=; AWSALB=zyyDqgOFJzOv2HVSswKA0mw8yNNjHrAyJkhe7SRNFzOJSD6jFX6+5/T8ELUvvHIYeKW0XuxPDTBTG0gZO3d2FSCohf1jHsk2mDmTkoOh7BZCQKTmtJn4X4jbDDjL; .....
nonTerminatingMatchingRules.0.action: COUNT
nonTerminatingMatchingRules.0.ruleId: AWS-AWSManagedRulesCommonRuleSet
nonTerminatingMatchingRules.0.ruleMatchDetails.0.conditionType: XSS
nonTerminatingMatchingRules.0.ruleMatchDetails.0.location: HEADER
nonTerminatingMatchingRules.0.ruleMatchDetails.0.matchedData.0: oNyw1kZibZHTTkzpONuiJZkpUIr2pVVqsQ
nonTerminatingMatchingRules.0.ruleMatchDetails.0.matchedData.1: ;
```

As you can see, the "**matchedData**" field contains a string ("oNyw1kZibZHTTkzpONuiJZkpUIr2pVVqsQ") that is inside the AWSALBTG cookie value generated by the ELB. This means that currently we can't use WAF and the ELB together, because WAF is blocking legitimate requests on account of the ELB cookie. Am I correct, or am I missing something? Is there any way to avoid this?
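Not a confirmed fix, but the mechanism usually discussed for this kind of false positive is excluding the offending rule from the managed group so it only counts (which trades away that particular cookie inspection). A fragment of an AWS::WAFv2::WebACL rule expressed with the CDK L1 construct, using the rule names from the logs above; all other WebACL plumbing (scope, default action, ALB association) is omitted:

```typescript
import * as wafv2 from 'aws-cdk-lib/aws-wafv2';

// Runs AWSManagedRulesCommonRuleSet but demotes CrossSiteScripting_COOKIE to
// COUNT via excludedRules, so the cookie match no longer blocks requests.
const commonRuleSet: wafv2.CfnWebACL.RuleProperty = {
  name: 'AWS-AWSManagedRulesCommonRuleSet',
  priority: 1,
  overrideAction: { none: {} },
  statement: {
    managedRuleGroupStatement: {
      vendorName: 'AWS',
      name: 'AWSManagedRulesCommonRuleSet',
      excludedRules: [{ name: 'CrossSiteScripting_COOKIE' }],
    },
  },
  visibilityConfig: {
    cloudWatchMetricsEnabled: true,
    metricName: 'CommonRuleSet',
    sampledRequestsEnabled: true,
  },
};
```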
0 answers · 0 votes · 4 views · asked 2 months ago

Network Load Balancer stickiness seems to fail sometimes

We have a SignalR JavaScript client connecting to a .NET Core hub, hosted in AWS. Both client and server use version 6. More than one backend server may exist, so there is an internet-facing Network Load Balancer forwarding the traffic to the backend servers. The NLB is configured with these options:

- Stickiness
- Preserve client IP addresses

Most of the time, everything works great: the negotiation and the connection upgrade. Sometimes, however, something strange happens: the negotiation fails (error in WebSocket transport), then the client tries again with another transport (SSE). This transport also fails and, while retrying, the client hits the other host, starting the negotiation again. Finally, the connection succeeds. All this takes no more than 5 seconds.

This was happening for our clients, from outside, so we set up an isolated environment to debug this situation, with the NLB and 2 backend hosts. This is internal, so no one else is connecting, for sure, so there is no chance the hosts are overloaded. We are completely sure our IP does not change while the test is being done. We enabled client and server debug logs, shown below. This still happens sometimes, no matter which host we hit on the first attempt.

We know that we can configure the client to skip the negotiation, but that would make us lose about 10% of our clients, because we would be limited to the WebSockets transport.

From the logs, IP stickiness seems to be failing somehow... What is misconfigured in our setup? How can the negotiation fail if just one client is connecting and the IP does not change? What else can we configure in the AWS NLB to ensure IP stickiness? Thanks in advance!

**When the connection succeeds at the first attempt**

```
Client logs
Debug: Starting connection with transfer format 'Text'.
Debug: Sending negotiation request: https://<server>...
Debug: Selecting transport 'WebSockets'.
Trace: (WebSockets transport) Connecting.
Information: WebSocket connected to wss://<server>...
Debug: The HttpConnection connected successfully.

Server logs
[DBG] New connection r3Gv5PrBgTA2T6lijqwTrA created.
[DBG] Sending negotiation response.
[DBG] Establishing new connection.
[DBG] Socket opened using Sub-Protocol: 'null'.
[DBG] OnConnectedAsync started.
[DBG] Found protocol implementation for requested protocol: json.
[DBG] Completed connection handshake. Using HubProtocol 'json'.
```

**When the connection fails at the first attempt**

```
Client logs
Debug: Starting connection with transfer format 'Text'.
Debug: Sending negotiation request: https://<server>...
Debug: Selecting transport 'WebSockets'.
Trace: (WebSockets transport) Connecting.
WebSocket connection to 'wss://<server>...' failed:
Information: (WebSockets transport) There was an error with the transport.
Error: Failed to start the transport 'WebSockets': Error: WebSocket failed to connect. The connection could not be found on the server, either the endpoint may not be a SignalR endpoint, the connection ID is not present on the server, or there is a proxy blocking WebSockets. If you have multiple servers check that sticky sessions are enabled.
Debug: Selecting transport 'ServerSentEvents'.
Debug: Sending negotiation request: https://<server>...
Trace: (SSE transport) Connecting.
Information: SSE connected to https://<server>...
Debug: The HttpConnection connected successfully.
Trace: (SSE transport) sending data. String data of length 32.
POST https://<server>... 404 (Not Found)
Debug: HttpConnection.stopConnection(undefined) called while in state Disconnecting.
Error: Connection disconnected with error 'Error: No Connection with that ID: Status code '404''.
Debug: Starting connection with transfer format 'Text'.
Debug: Sending negotiation request: https://<server>...
Debug: Selecting transport 'WebSockets'.
Trace: (WebSockets transport) Connecting.
Information: WebSocket connected to wss://<server>...
Debug: The HttpConnection connected successfully.

Server logs (server 1)
[DBG] New connection _cm5IaOtqY7tD7suKOb08Q created.
[DBG] Sending negotiation response. (1)
[DBG] New connection GuXhVydEzL-8xXcSxibysA created.
[DBG] Sending negotiation response. (2)
[DBG] Establishing new connection.
[DBG] OnConnectedAsync started.
[DBG] Failed connection handshake.

(server 2)
[DBG] New connection RjoZW-BKBNMOa2UBW9yo-g created.
[DBG] Sending negotiation response.
[DBG] Establishing new connection.
[DBG] Socket opened using Sub-Protocol: 'null'.
[DBG] OnConnectedAsync started.
[DBG] Found protocol implementation for requested protocol: json.
[DBG] Completed connection handshake. Using HubProtocol 'json'.
```
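For reference, the two options mentioned at the top (source-IP stickiness and client IP preservation) are NLB target-group attributes rather than listener settings. A minimal CDK (v2) sketch of where they live; the construct ID, port, protocol, and `vpc` are placeholders, and this is not a diagnosis of why negotiation sometimes lands on the other host:

```typescript
import * as elbv2 from 'aws-cdk-lib/aws-elasticloadbalancingv2';

// Stickiness and client IP preservation are set as attributes on the NLB
// target group that fronts the SignalR backends.
const signalrTargetGroup = new elbv2.NetworkTargetGroup(this, 'SignalrTargets', {
  vpc,
  port: 443,
  protocol: elbv2.Protocol.TLS,
});

signalrTargetGroup.setAttribute('stickiness.enabled', 'true');
signalrTargetGroup.setAttribute('stickiness.type', 'source_ip');
signalrTargetGroup.setAttribute('preserve_client_ip.enabled', 'true');
```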
1 answer · 0 votes · 25 views · asked 3 months ago

HTTP API GW + API VPC Link + Cloudmap + Fargate - How does it load balance

I am using an infrastructure setup as described in the title. This setup is also somewhat shown in this picture: https://d2908q01vomqb2.cloudfront.net/1b6453892473a467d07372d45eb05abc2031647a/2021/02/04/5-CloudMap-example.png

In the official AWS blog here: https://aws.amazon.com/blogs/compute/configuring-private-integrations-with-amazon-api-gateway-http-apis/ the following is stated about using such a setup:

> As AWS Cloud Map provides client-side service discovery, you can replace the load balancer with a service registry. Now, connections are routed directly to backend resources, instead of being proxied. This involves fewer components, making deployments safer and with less management, and reducing complexity.

My question is simple: what load balancing algorithm does the HTTP API GW use when distributing traffic to resources (the Fargate tasks) registered in a service registry? Is it round-robin, just as it is with an ALB?

The only thing I was able to find is this:

> For integrations with AWS Cloud Map, API Gateway uses DiscoverInstances to identify resources. You can use query parameters to target specific resources. The registered resources' attributes must include IP addresses and ports. API Gateway distributes requests across healthy resources that are returned from DiscoverInstances.

https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-develop-integrations-private.html#http-api-develop-integrations-private-Cloud-Map
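The DiscoverInstances call quoted above can be exercised directly to see what API Gateway has to work with: a set of healthy instances with IP/port attributes, with no ordering or weighting documented in the response. A small AWS SDK for JavaScript (v3) sketch; the region, namespace and service names are placeholders:

```typescript
import { ServiceDiscoveryClient, DiscoverInstancesCommand } from '@aws-sdk/client-servicediscovery';

// Calls the same DiscoverInstances API the HTTP API private integration uses.
async function main() {
  const client = new ServiceDiscoveryClient({ region: 'eu-west-1' });

  const { Instances } = await client.send(
    new DiscoverInstancesCommand({
      NamespaceName: 'internal.example.local',
      ServiceName: 'orders-service',
      HealthStatus: 'HEALTHY',
      MaxResults: 10,
    })
  );

  // ECS service discovery typically registers AWS_INSTANCE_IPV4 and
  // AWS_INSTANCE_PORT attributes, which is what API Gateway needs.
  for (const instance of Instances ?? []) {
    console.log(instance.InstanceId, instance.Attributes);
  }
}

main().catch(console.error);
```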
2 answers · 0 votes · 19 views · asked 3 months ago

Hi AWS, only testing

Centuries ago there lived-- "A king!" my little readers will say immediately. No, children, you are mistaken. Once upon a time there was a piece of wood. It was not an expensive piece of wood. Far from it. Just a common block of firewood, one of those thick, solid logs that are put on the fire in winter to make cold rooms cozy and warm. I do not know how this really happened, yet the fact remains that one fine day this piece of wood found itself in the shop of an old carpenter. His real name was Mastro Antonio, but everyone called him Mastro Cherry, for the tip of his nose was so round and red and shiny that it looked like a ripe cherry. As soon as he saw that piece of wood, Mastro Cherry was filled with joy. Rubbing his hands together happily, he mumbled half to himself: "This has come in the nick of time. I shall use it to make the leg of a table." He grasped the hatchet quickly to peel off the bark and shape the wood. But as he was about to give it the first blow, he stood still with arm uplifted, for he had heard a wee, little voice say in a beseeching tone: "Please be careful! Do not hit me so hard!" What a look of surprise shone on Mastro Cherry's face! His funny face became still funnier. He turned frightened eyes about the room to find out where that wee, little voice had come from and he saw no one! He looked under the bench--no one! He peeped inside the closet--no one! He searched among the shavings--no one! He opened the door to look up and down the street--and still no one! "Oh, I see!" he then said, laughing and scratching his Wig. "It can easily be seen that I only thought I heard the tiny voice say the words! Well, well--to work once more." He struck a most solemn blow upon the piece of wood. "Oh, oh! You hurt!" cried the same far-away little voice. Mastro Cherry grew dumb, his eyes popped out of his head, his mouth opened wide, and his tongue hung down on his chin. As soon as he regained the use of his senses, he said, trembling and stuttering from fright: "Where did that voice come from, when there is no one around? Might it be that this piece of wood has learned to weep and cry like a child? I can hardly believe it. Here it is--a piece of common firewood, good only to burn in the stove, the same as any other. Yet--might someone be hidden in it? If so, the worse for him. I'll fix him!" With these words, he grabbed the log with both hands and started to knock it about unmercifully. He threw it to the floor, against the walls of the room, and even up to the ceiling. He listened for the tiny voice to moan and cry. He waited two minutes--nothing; five minutes--nothing; ten minutes--nothing. "Oh, I see," he said, trying bravely to laugh and ruffling up his wig with his hand. "It can easily be seen I only imagined I heard the tiny voice! Well, well--to work once more!" The poor fellow was scared half to death, so he tried to sing a gay song in order to gain courage. He set aside the hatchet and picked up the plane to make the wood smooth and even, but as he drew it to and fro, he heard the same tiny voice. This time it giggled as it spoke: "Stop it! Oh, stop it! Ha, ha, ha! You tickle my stomach." This time poor Mastro Cherry fell as if shot. When he opened his eyes, he found himself sitting on the floor. His face had changed; fright had turned even the tip of his nose from red to deepest purple.
1 answer · 0 votes · 5 views · asked 3 months ago

Create ECS service using existing load balancer with existing target group

I'm using the AWS console to create an ECS service (using Fargate) in an existing cluster. In the second step of the wizard (configure network) I choose an existing application load balancer. The "container to load-balance" section shows my container to add. When I click "add to load balancer" it initially shows the "production listener port" and "target group name" dropdowns showing "create new". When I select an existing target group in the dropdown, this grays out (disables) the "production listener port" dropdown. When clicking the "next step" button, validation complains that the "production listener port" is not filled in (validation message: "please select a listener"), which isn't possible because the control is disabled.

First selecting a listener port in the wizard and switching to an existing target group after that doesn't remedy the situation, as choosing an existing target group blanks out and disables the "production listener port" dropdown, causing the same problem when clicking the "next step" button. How do I get the container to register in an existing target group?

**Update**

The target group is an empty IP address group. Interestingly, it does work if the target group is not empty (the "production listener port" is then filled with 443:HTTPS), but an empty target group (even of the correct type) clears the "production listener port".

**Reproduction**

1. Create a new target group of type "IP address", leaving the other default settings. Do not register any targets in this group.
2. Next, add this target group to a (new or existing) load balancer (for testing I added the group to an existing load balancer with a single source IP address filter, e.g. 192.168.1.1/32, so it doesn't disrupt normal operations).
3. When creating a new ECS service, select a task definition that has a container with an exposed port 80.
4. On the second screen, "Configure Network", choose the VPC and subnets and under "Load balancing" pick "Application load balancer".
5. Select the load balancer to which you added the empty target group.
6. Now click "Add to load balancer" next to the container. This shows the "Production listener port" and "Target group name" dropdowns, both initially set to "Create new".
7. Choosing the empty target group disables the "Production listener port" dropdown and clears it.
8. If the target group is not empty, the "Production listener port" is automatically filled with the correct port.
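Outside the console wizard, the association between a service and an existing target group is just the `loadBalancers` parameter of CreateService, which doesn't involve the greyed-out listener dropdown at all. A sketch using the AWS SDK for JavaScript (v3) with placeholder names and ARNs; it sidesteps the wizard rather than fixing its validation, and the target group generally needs to already be attached to a listener on the load balancer:

```typescript
import { ECSClient, CreateServiceCommand } from '@aws-sdk/client-ecs';

// Creates a Fargate service that registers its container into an existing
// target group by ARN; no listener is selected anywhere in this call.
// All names, ARNs, subnets and security groups below are placeholders.
async function createService() {
  const ecs = new ECSClient({ region: 'eu-west-1' });

  await ecs.send(
    new CreateServiceCommand({
      cluster: 'my-cluster',
      serviceName: 'my-service',
      taskDefinition: 'my-task-def:1',
      desiredCount: 1,
      launchType: 'FARGATE',
      networkConfiguration: {
        awsvpcConfiguration: {
          subnets: ['subnet-aaaa1111', 'subnet-bbbb2222'],
          securityGroups: ['sg-cccc3333'],
        },
      },
      loadBalancers: [
        {
          targetGroupArn: 'arn:aws:elasticloadbalancing:eu-west-1:123456789012:targetgroup/my-empty-tg/0123456789abcdef',
          containerName: 'web',
          containerPort: 80,
        },
      ],
    })
  );
}

createService().catch(console.error);
```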
1 answer · 1 vote · 25 views · asked 3 months ago

Can't set 'access_logs.s3.bucket' back to 'false' for ALB using CloudFormation

I'm trying to turn on ALB access logs conditionally using CloudFormation as follows:

```
LoadBalancer:
  Type: AWS::ElasticLoadBalancingV2::LoadBalancer
  Properties:
    Name: !Join ['-', [!Ref 'AWS::StackName', ALB]]
    Type: application
    IpAddressType: !If [DoEnableIPv6Support, dualstack, ipv4]
    Scheme: internet-facing
    Subnets:
      - !Ref PublicSubnet1
      - !Ref PublicSubnet2
      - !Ref PublicSubnet3
      - !Ref PublicSubnet4
      - !Ref PublicSubnet5
      - !Ref PublicSubnet6
    SecurityGroups:
      - !Ref ALBSecurityGroup
    LoadBalancerAttributes:
      - Key: access_logs.s3.enabled
        Value: !If [EnableLoadBalancerAccessLogs, "true", "false"]
      - Key: access_logs.s3.bucket
        Value: !If [EnableLoadBalancerAccessLogs, !Ref AccessLogsBucket, ""]
      - Key: access_logs.s3.prefix
        Value: !If [EnableLoadBalancerAccessLogs, !Sub "${AWS::StackName}-ALB", ""]
    Tags:
      - Key: 'Stack'
        Value: !Ref 'AWS::StackName'
```

However, when `EnableLoadBalancerAccessLogs` is false, I'm running into this error:

```
The value of 'access_logs.s3.bucket' cannot be empty (Service: AmazonElasticLoadBalancing; Status Code: 400; Error Code: ValidationError; Request ID: REDACTED; Proxy: null)
```

The condition `EnableLoadBalancerAccessLogs` was defined as follows:

```
EnableLoadBalancerAccessLogs: !Not [!Equals [AccessLogsBucket, ""]]
```

I've also tried some potential workarounds for `LoadBalancerAttributes`, like

```
LoadBalancerAttributes:
  - Key: access_logs.s3.enabled
    Value: !If [EnableLoadBalancerAccessLogs, "true", "false"]
  - !If
    - EnableLoadBalancerAccessLogs
    - Key: access_logs.s3.bucket
      Value: !Ref AccessLogsBucket
    - !Ref AWS::NoValue
  - !If
    - EnableLoadBalancerAccessLogs
    - Key: access_logs.s3.prefix
      Value: !Sub "${AWS::StackName}-ALB"
    - !Ref AWS::NoValue
```

or

```
LoadBalancerAttributes: !If
  - EnableLoadBalancerAccessLogs
  - - Key: access_logs.s3.enabled
      Value: "true"
    - Key: access_logs.s3.bucket
      Value: !Ref AccessLogsBucket
    - Key: access_logs.s3.prefix
      Value: !Sub "${AWS::StackName}-ALB"
  - !Ref AWS::NoValue
```

but none of them worked. The [CloudFormation docs](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-elasticloadbalancingv2-loadbalancer-loadbalancerattributes.html) say:

> access_logs.s3.bucket - The name of the S3 bucket for the access logs. This attribute is required if access logs are enabled. The bucket must exist in the same region as the load balancer and have a bucket policy that grants Elastic Load Balancing permissions to write to the bucket.

Is this a bug in the AWS API?
1 answer · 0 votes · 25 views · asked 4 months ago

AWS Zone Apex challenge with older DNS server

The University that I work for has its own DNS servers. They are older and need an IP address to point to for the zone apex record. DNS migration is not an option. We have a site in AWS Amplify. We want to use the Amplify website for our root domain, "example.edu". RFC 1034 says that the zone apex must be an A record, and not a CNAME.

According to the article at https://aws.amazon.com/blogs/networking-and-content-delivery/solving-dns-zone-apex-challenges-with-third-party-dns-providers-using-aws/, there are three options: Route 53, Elastic IPs with EC2 instances, and Global Accelerator. Since we are using AWS Amplify, we can't do the EC2 option. The Route 53 option won't work with our old DNS server, which only works with IP addresses. The third option is to use AWS Global Accelerator and an Application Load Balancer (ALB) which does a 301 redirect to our CloudFront distribution that has the custom SSL cert for our Amplify instance.

When we point our DNS at the IP associated with AWS Global Accelerator, the redirect is working, but the URL is showing the CloudFront distribution instead of example.com. I was told that whitelisting the Host header would fix this, but it just returns a 403 error saying that the request could not be satisfied. I am not sure if I am on the right track and need some adjustment somewhere, or if I need to do something completely different.
2 answers · 0 votes · 5 views · asked 4 months ago

AWS NLB sends 120 byte of unknown TCP Keep-alive packet to clients

We have an IoT device that connects to our MQTT broker behind the NLB. We keep the connection between the IoT device and the broker alive by using the MQTT keep-alive time and the broker's heartbeat interval. Our IoT device sleeps most of the time. It wakes up in the following situations: whenever it wants to send a PINGREQ (every 340s, the MQTT keep-alive time) to the broker, or when other microservices publish some data and the broker sends that information to the IoT device. Our objective is to let the IoT device sleep as much as possible while maintaining the connection, to save battery.

**Problem:** Normally, this particular IoT device sleeps most of the time. Our objective is to keep it sleeping as much as possible while maintaining a connection between the IoT device and the MQTT broker. The problem is that the IoT device continuously wakes up every 20s whenever the broker sends some downstream data to it. This usually happens whenever the IoT device receives downstream data from the broker. Based on our vendor's packet analysis, we found that the NLB sends 120 bytes of TCP keep-alive packets to the IoT device every 20s right after the broker publishes some downstream data. This is entirely sent by the NLB and not by the broker.

**Only happens with TLS:** We found that this happens if we use TLS (8883) on the NLB and terminate the TLS at the NLB. If we remove the TLS, add the listener on a non-secure port (1883), and forward the traffic to the target's non-secure port, things work as expected, and there are no 20s wake-ups or keep-alive packets sent by the NLB every 20s. We also tested the same setup with a CLB on an SSL port. It works without any problem and does not send a keep-alive to the client (IoT device). We have removed the TLS and opened the non-secure port as a temporary workaround.

Why does the NLB send keep-alive packets every 20s if we use TLS? Is this an intended behaviour of the NLB? Any idea how we could resolve it?

**The overview of the cloud setup:**

* The MQTT broker runs in ECS Fargate, multi-AZ, in a private subnet.
* The NLB is in between the client (IoT device) and the target (MQTT broker).
* The NLB idle timeout keeps getting reset by two things: the keep-alive sent by the client (IoT device) every 340s and the heartbeat published by the target (MQTT broker) every 340s, so the connection remains open.
* The NLB offloads TLS on port 8883 and forwards the traffic to target port 1883.
1 answer · 0 votes · 33 views · asked 4 months ago