Unable to reach an application hosted on an EKS worker node via a secondary IP from a Windows server in another VPC


We have two VPCs, VPC A and VPC B, connected with a VPC peering connection (PCX).

In VPC A there is an EKS cluster, and I have a K8s service with the following endpoint:

me$ kubectl get endpoints | grep my-app
my-app                           10.0.102.250:80                   2y200d

This service runs on an EKS worker node EC2 instance with 4 ENIs, so there are 4 private IPs; one of them is 10.0.100.200, for example.

In VPC A, when I curl from an EC2 instance to 10.0.102.250:80 it responds properly, and I can ping the worker node by its private IP 10.0.100.200.

However, from an EC2 instance in VPC B I can only ping 10.0.100.200 and get a correct response; I cannot reach the K8s service at 10.0.102.250:80. The K8s worker node's security group whitelists all traffic from VPC B's EC2 instance.

When I run tracert 10.0.102.250 from the VPC B EC2 instance, it shows the following:

C:\Users\Administrator>tracert 10.0.102.250
Tracing route to ip-10-0-102-240.us-west-2.compute.internal [10.0.102.250]

Could it be that the DNS information VPC A passes to VPC B via the PCX covers the EC2 instance's attached ENIs but excludes the associated secondary private IPs, leaving VPC B's EC2 instance unsure how to forward the request?

For the PCX, we route VPC A's CIDR (10.0.0.0/16) to the PCX, which should be correct.
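As a quick sanity check of that routing claim, you can confirm that both the pod endpoint IP and the node's private IP fall inside the CIDR advertised over the peering connection. This is an illustrative snippet, assuming VPC A's CIDR is 10.0.0.0/16 as stated:

```python
import ipaddress

# VPC A's CIDR, as routed over the peering connection (from the question).
vpc_a_cidr = ipaddress.ip_network("10.0.0.0/16")

# Both the pod/service endpoint IP and the worker node's private IP
# should be covered by the peered route for either to be reachable.
for ip in ("10.0.102.250", "10.0.100.200"):
    assert ipaddress.ip_address(ip) in vpc_a_cidr
print("both addresses are covered by the peered route")
```

If either address fell outside the routed CIDR, VPC B would have no route to it at all, so the ping to 10.0.100.200 succeeding already suggests the route itself is in place.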

asked 2 years ago · 302 views
1 Answer

It's possible that the issue is related to DNS resolution or routing. When you try to access the K8s service from an EC2 instance in VPC B using the endpoint 10.0.102.250:80, the request is sent to the IP address, but it never reaches the service.

Since you can ping the worker node's IP address 10.0.100.200 from the EC2 instance in VPC B, routing between the VPCs appears to be working. However, if you're still not able to access the K8s service, it could be due to a DNS resolution issue.

When you run tracert 10.0.102.250 from the EC2 instance in VPC B, it shows the hostname ip-10-0-102-240.us-west-2.compute.internal. This hostname is likely being resolved by a DNS server in VPC A, and it's possible that DNS resolution is not working correctly from VPC B.

To troubleshoot, try manually resolving the hostname ip-10-0-102-240.us-west-2.compute.internal to the IP address 10.0.102.250 from the EC2 instance in VPC B using the nslookup command. If DNS resolution is not working correctly, you could update the DNS configuration in VPC B to use the same DNS server as VPC A, or configure a DNS resolver that can resolve the hostname correctly.

Additionally, you could try to access the K8s service using the IP address 10.0.100.200 instead of the endpoint 10.0.102.250:80 to see if it works. If it does, the issue is likely related to DNS resolution, and you may need to update the DNS configuration in VPC B.
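To separate the two failure modes described above, a small probe can first do the DNS step and then the TCP connect step, so you can see which one fails. This is an illustrative, non-AWS-specific sketch; the function name `probe` is made up for this example:

```python
import socket

def probe(host, port, timeout=2):
    """Return which stage of a connection attempt fails, if any."""
    # Stage 1: DNS resolution (a no-op if host is already an IP literal).
    try:
        addr = socket.gethostbyname(host)
    except socket.gaierror:
        return "dns-failure"
    # Stage 2: TCP connection to the resolved address.
    try:
        with socket.create_connection((addr, port), timeout=timeout):
            return "connected"
    except OSError:
        return "tcp-failure"
```

Running `probe("10.0.102.250", 80)` versus `probe("ip-10-0-102-240.us-west-2.compute.internal", 80)` from the VPC B instance would show whether the failure happens at name resolution or at the TCP connection itself.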

Ref links:
https://docs.aws.amazon.com/vpc/latest/userguide/vpc-dns.html#vpc-dns-troubleshooting
https://docs.aws.amazon.com/eks/latest/userguide/troubleshooting.html#service-load-balancer-issues

AWS
answered 2 years ago
