Unable to reach application running inside a pod on EKS


Hi, with my data plane running inside a private VPC and the nodes in private subnets, I have deployed frontend and backend applications and exposed them through Services of type NodePort.

All the Services and pods look healthy, and I am able to exec into a pod and curl the health check endpoint on localhost, which returns a proper response, so the applications themselves are running fine. However, the health check from the load balancer does not succeed, and curl from outside the pods does not work either; it reports that the host is unreachable. All the port mappings look correct. Could somebody point me to where the issue is?
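In rough terms, the check that succeeds from inside a pod and the one that fails from outside look like this (the container port and health check path are placeholders for my actual values; 31593 is the node port of the appfront Service shown below):

    # Succeeds: health check against localhost from inside the pod
    $ kubectl exec frtb-labs-appfront-5b94f9fcdc-g77wv -- curl -s http://localhost:<container-port>/<health-path>

    # Fails: same check against the node port from another host inside the VPC
    $ curl -v http://<node-private-ip>:31593/<health-path>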

$ kubectl get all --all-namespaces
NAMESPACE     NAME                                                READY   STATUS    RESTARTS   AGE
default       pod/frtb-labs-admback-6694c556cb-phg79              1/1     Running   0          23d
default       pod/frtb-labs-admfront-86848d6bb6-fxv6c             1/1     Running   0          4h10m
default       pod/frtb-labs-appback-79d4b94497-tqgr2              1/1     Running   0          23d
default       pod/frtb-labs-appfront-5b94f9fcdc-g77wv             1/1     Running   0          4h11m
default       pod/troubleshoot                                    1/1     Running   0          14d
kube-system   pod/aws-load-balancer-controller-57c7b89bf4-69zcn   1/1     Running   0          38d
kube-system   pod/aws-load-balancer-controller-57c7b89bf4-stcjt   1/1     Running   0          38d
kube-system   pod/aws-node-lf482                                  1/1     Running   0          73d
kube-system   pod/aws-node-rgd96                                  1/1     Running   0          73d
kube-system   pod/coredns-6bc4667bcc-cjbc5                        1/1     Running   0          73d
kube-system   pod/coredns-6bc4667bcc-qm4v2                        1/1     Running   0          73d
kube-system   pod/kube-proxy-6c55b                                1/1     Running   0          73d
kube-system   pod/kube-proxy-lzkq2                                1/1     Running   0          73d

NAMESPACE     NAME                                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
default       service/frtb-labs-admback                   NodePort    172.20.80.72    <none>        80:30863/TCP    23d
default       service/frtb-labs-admfront                  NodePort    172.20.63.215   <none>        80:31350/TCP    23d
default       service/frtb-labs-appback                   NodePort    172.20.73.199   <none>        80:31187/TCP    23d
default       service/frtb-labs-appfront                  NodePort    172.20.107.0    <none>        80:31593/TCP    23d
default       service/kubernetes                          ClusterIP   172.20.0.1      <none>        443/TCP         73d
kube-system   service/aws-load-balancer-webhook-service   ClusterIP   172.20.53.174   <none>        443/TCP         38d
kube-system   service/kube-dns                            ClusterIP   172.20.0.10     <none>        53/UDP,53/TCP   73d

NAMESPACE     NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
kube-system   daemonset.apps/aws-node     2         2         2       2            2           <none>          73d
kube-system   daemonset.apps/kube-proxy   2         2         2       2            2           <none>          73d

NAMESPACE     NAME                                           READY   UP-TO-DATE   AVAILABLE   AGE
default       deployment.apps/frtb-labs-admback              1/1     1            1           23d
default       deployment.apps/frtb-labs-admfront             1/1     1            1           23d
default       deployment.apps/frtb-labs-appback              1/1     1            1           23d
default       deployment.apps/frtb-labs-appfront             1/1     1            1           23d
kube-system   deployment.apps/aws-load-balancer-controller   2/2     2            2           38d
kube-system   deployment.apps/coredns                        2/2     2            2           73d

NAMESPACE     NAME                                                      DESIRED   CURRENT   READY   AGE
default       replicaset.apps/frtb-labs-admback-6694c556cb              1         1         1       23d
default       replicaset.apps/frtb-labs-admfront-68b6447f79             0         0         0       23d
default       replicaset.apps/frtb-labs-admfront-86848d6bb6             1         1         1       4h10m
default       replicaset.apps/frtb-labs-appback-79d4b94497              1         1         1       23d
default       replicaset.apps/frtb-labs-appfront-59dcc8f46              0         0         0       23d
default       replicaset.apps/frtb-labs-appfront-5b94f9fcdc             1         1         1       4h11m
kube-system   replicaset.apps/aws-load-balancer-controller-57c7b89bf4   2         2         2       38d
kube-system   replicaset.apps/coredns-6bc4667bcc                        2         2         2       73d

Gov
asked 7 months ago · 336 views
1 Answer

Hello, Greetings!

  1. To troubleshoot the health check issues with the load balancer in your Amazon EKS cluster, consider the pointers below; the health check on the load balancer can fail for any one, or several, of these reasons (a few of the checks are sketched as commands after this list):

Check the status of the pod

Check the pod and service label selectors

Check for missing endpoints

Check the service traffic policy and cluster security groups for Application Load Balancers

Verify that your EKS is configured for targetPort

Verify that your AWS Load Balancer Controller has the correct permissions

Check the ingress annotations for issues with Application Load Balancers

Check the Kubernetes Service annotations for issues with Network Load Balancers

Manually test a health check

Check the networking

Restart the kube-proxy
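For reference, several of these checks map to commands like the following. This is only a rough sketch: the service name frtb-labs-appfront and node port 31593 are taken from the question, while the /health path and the node IP are placeholders you should replace with your own values.

    # Compare the Service selector with the labels on the pods it should match
    $ kubectl get svc frtb-labs-appfront -o jsonpath='{.spec.selector}{"\n"}'
    $ kubectl get pods --show-labels

    # A Service with no endpoints will never pass a target group health check
    $ kubectl get endpoints frtb-labs-appfront

    # Manually test the health check against the node port from inside the VPC
    $ curl -v http://<node-private-ip>:31593/health

    # Restart kube-proxy if its rules appear stale
    $ kubectl rollout restart daemonset kube-proxy -n kube-system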

  2. The detailed resolution steps for the pointers above can be found in this document [1].

  3. Also, unhealthy targets / failed health checks in Application Load Balancer target groups happen for two reasons: either the service traffic policy, spec.externalTrafficPolicy, is set to Local instead of Cluster, or the node groups in the cluster have different cluster security groups associated with them, so traffic cannot flow freely between the node groups.

    > Verify that the traffic policy is correctly configured:
       $ kubectl get svc SERVICE_NAME -n YOUR_NAMESPACE -o=jsonpath='{.spec.externalTrafficPolicy}{"\n"}'
    
       Example output:
       Local
    
    > Change the setting to Cluster:
       $ kubectl edit svc SERVICE_NAME -n YOUR_NAMESPACE
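
    > If the traffic policy is already set to Cluster, check whether the node groups share the cluster security group. The commands below are only a sketch: my-cluster is a placeholder for your cluster name, and the eks:cluster-name tag is the one added to instances by managed node groups.
       # Cluster security group that should allow traffic between the nodes
       $ aws eks describe-cluster --name my-cluster \
           --query 'cluster.resourcesVpcConfig.clusterSecurityGroupId' --output text

       # Security groups actually attached to the worker node instances
       $ aws ec2 describe-instances \
           --filters "Name=tag:eks:cluster-name,Values=my-cluster" \
           --query 'Reservations[].Instances[].SecurityGroups[].GroupId' --output text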
    

In most cases, performing the above checks helps to identify the issue.

You can also follow the steps in this documentation [1] to troubleshoot the above-mentioned pointers.

Thank you!

References: [1] https://repost.aws/knowledge-center/eks-resolve-failed-health-check-alb-nlb

AWS
answered 7 months ago
