EKS Auto - can't connect to my load balancer ingress but can connect to individual pods? How to set up subnets?

I've started up EKS Auto, then applied a YAML manifest with a namespace fib, which created pods running a basic web server calculating Fibonacci numbers on port 5000, plus a HorizontalPodAutoscaler & a Service of type LoadBalancer. Possibly specifying the HorizontalPodAutoscaler & LoadBalancer here is too much for Auto, but it seems like a reasonably simple server.
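
For context, the Service portion of the manifest looks roughly like this (the Deployment & HorizontalPodAutoscaler are omitted, & the app label is a placeholder):

    apiVersion: v1
    kind: Service
    metadata:
      name: fib
      namespace: fib
    spec:
      type: LoadBalancer
      selector:
        app: fib          # placeholder - matches the Deployment's pod labels
      ports:
        - name: tcp-port
          port: 5000      # port exposed by the load balancer
          targetPort: 5000  # port the web server listens on in the pod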

My cluster has spun up & has a load balancer. I got some details with kubectl describe services/fib -n fib:

Type:                     LoadBalancer
IP Families:              IPv4
IP:                       10.100.124.170 (whatever)
IPs:                      same-as-above
LoadBalancer Ingress:     some-fib-endpoint.elb.region.amazonaws.com
Port:                     tcp-port  5000/TCP
TargetPort:               5000/TCP

which seems good. When I kubectl exec into a pod, I can run curl some-fib-endpoint.elb.region.amazonaws.com:5000/calculate from inside it & get the expected response, & kubectl port-forward pod/pod-name -n fib 5000 followed by curl localhost:5000/calculate also works. However, the same curl some-fib-endpoint.elb.region.amazonaws.com:5000/calculate run from outside the cluster fails with "Failed to connect endpoints... : Could not connect to the server".

I followed this guide, which was useful: https://repost.aws/knowledge-center/eks-load-balancers-troubleshooting. Some of the troubleshooting tips recommended there which I've looked into:

  • On the networking tab in the AWS console, my cluster is public, with public & private access + a "0.0.0.0/0 (open to all traffic)" allowlist.

Then I looked into the subnets, and here I think I've found a problem. My cluster has 2 subnets attached, but they both seem to be private, with the tag kubernetes.io/role/internal-elb = 1, & they seem to be using a NAT gateway (the CLI check I used is shown after the list below). I think I need a public subnet with different tags, as per: https://docs.aws.amazon.com/eks/latest/userguide/tag-subnets-auto.html But how do I set that up? So far I've just run some simple commands:

  • eksctl create cluster --enable-auto-mode=True...
  • eksctl utils update-cluster-vpc-config --cluster=my-cluster --region=my-region --private-access=true --public-access=true --approve
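
To double-check the tagging, I listed the subnet details with something like this (the VPC ID is a placeholder):

    # List each subnet's ID, whether it auto-assigns public IPs, & its tags
    aws ec2 describe-subnets \
      --filters "Name=vpc-id,Values=vpc-0123456789abcdef0" \
      --query "Subnets[].{id:SubnetId,autoAssignPublicIp:MapPublicIpOnLaunch,tags:Tags}"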

Should I have passed a different flag to the create command? Do I need to create more subnets, or should I swap the existing subnets to be public instead of private? If I have existing subnets, can I attach EKS Auto to them, or are the ones I have fine? I'm happy to delete my cluster & start over.

Update: I've looked back into the subnets created, and I see that running eksctl create cluster --enable-auto-mode=True created 4 subnets with names like:

  • eksctl-myname-cluster/SubnetPublicREGION1A
  • eksctl-myname-cluster/SubnetPublicREGION1B
  • eksctl-myname-cluster/SubnetPrivateREGION1A
  • eksctl-myname-cluster/SubnetPrivateREGION1B

Looking at the networking tab, I only see the 2 private ones linked, but this might be a visual bug. The linked VPC has all 4 subnets (2 private & 2 public) in it.
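
To rule out a console display issue, I also queried which subnets the cluster itself has registered (cluster name & region are placeholders):

    # Show the subnet IDs registered in the cluster's VPC config
    aws eks describe-cluster --name my-cluster --region my-region \
      --query "cluster.resourcesVpcConfig.subnetIds"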

asked 19 days ago · 62 views
2 Answers

The issue is with the subnets your LoadBalancer is using. It was created in private subnets (with the tag kubernetes.io/role/internal-elb=1), so it can’t receive external traffic.

You need at least one public subnet, with the following tags:

  • kubernetes.io/role/elb=1
  • kubernetes.io/cluster/<your-cluster-name>=owned

This subnet also needs a route to an Internet Gateway.
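
If you already have a public subnet that's just missing the tags, you can add them with the AWS CLI, roughly like this (the subnet ID and cluster name are placeholders):

    # Tag an existing subnet for public (internet-facing) load balancers
    aws ec2 create-tags --resources subnet-0123456789abcdef0 \
      --tags Key=kubernetes.io/role/elb,Value=1 \
             Key=kubernetes.io/cluster/my-cluster,Value=owned

To confirm the Internet Gateway route, you can list the subnet's route table and look for a route whose GatewayId starts with igw- (if nothing comes back, the subnet is using the VPC's main route table):

    aws ec2 describe-route-tables \
      --filters "Name=association.subnet-id,Values=subnet-0123456789abcdef0" \
      --query "RouteTables[].Routes"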

Since you used eksctl create cluster --enable-auto-mode, it likely only created private subnets. If you can recreate the cluster, the best option is to use a YAML config file with properly defined public and private subnets. Or you can manually create a public subnet and associate it with the ELB.

There’s an example here: https://eksctl.io/usage/vpc-networking/#customize-vpc-subnets

answered 18 days ago
  • I updated my question - it looks like eksctl create cluster creates 4 subnets - 2 public & 2 private. I've checked those & the tags look good.


The issue you're experiencing is likely related to the subnet configuration for your EKS Auto cluster. It appears that your cluster is currently using only private subnets, which is preventing external access to your load balancer ingress.

To resolve this, you need to ensure that your EKS cluster has both public and private subnets. Here's what you should do:

  1. Create or identify public subnets in your VPC. These subnets should have the following tags:

    • kubernetes.io/role/elb = 1
    • kubernetes.io/cluster/your-cluster-name = shared
  2. Keep your existing private subnets, but ensure they have these tags:

    • kubernetes.io/role/internal-elb = 1
    • kubernetes.io/cluster/your-cluster-name = shared
  3. When creating your EKS Auto cluster, specify both public and private subnets. You can do this by modifying your cluster creation command:

    eksctl create cluster --enable-auto-mode=True --vpc-public-subnets=subnet-id1,subnet-id2 --vpc-private-subnets=subnet-id3,subnet-id4 ...

    Replace subnet-id1, subnet-id2, etc. with your actual subnet IDs.

  4. If you've already created the cluster, you can update its VPC configuration:

    eksctl utils update-cluster-vpc-config --cluster=my-cluster --region=my-region --vpc-public-subnets=subnet-id1,subnet-id2 --vpc-private-subnets=subnet-id3,subnet-id4 --approve

  5. After updating the subnet configuration, you may need to recreate your load balancer to ensure it's properly configured with the new subnet settings.
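
For step 5, a simple way to recreate the load balancer is to delete the Service and reapply your manifest (the manifest filename is a placeholder); deleting a Service of type LoadBalancer deprovisions the load balancer, and reapplying creates a new one in the updated subnets:

    kubectl delete service fib -n fib
    kubectl apply -f fib.yaml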

By setting up both public and private subnets correctly, your load balancer should be able to receive external traffic and route it to your pods. The public subnets will allow the load balancer to have a public-facing endpoint, while the private subnets will host your worker nodes and pods.

Remember to ensure that your security groups are properly configured to allow the necessary inbound traffic to your load balancer and from the load balancer to your pods.
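
If a security group rule turns out to be the blocker, allowing inbound traffic on the listener port would look something like this (the group ID is a placeholder; scope the CIDR to what you actually need):

    # Open TCP 5000 on the load balancer's security group
    aws ec2 authorize-security-group-ingress \
      --group-id sg-0123456789abcdef0 \
      --protocol tcp --port 5000 --cidr 0.0.0.0/0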

If you're still having issues after making these changes, you may want to check the load balancer controller logs for any specific errors or configuration issues that might be preventing proper target registration or traffic routing.
Sources:

  • The ingress object doesn't creates an ALB to direct traffic to the Kubernetes pods | AWS re:Post
  • EKS AutoMode - LoadBalancer Functionality | AWS re:Post
  • Learn about VPC Networking and Load Balancing in EKS Auto Mode - Amazon EKS
  • Targets are not created in Target Groups | AWS re:Post

answered 19 days ago
