New pods are configured with the wrong DNS server


Hi there,

When deploying new pods in my EKS cluster, the DNS server listed in /etc/resolv.conf is not the one configured in "service/kube-dns" in the kube-system namespace. See here:

$ kubectl -n kube-system get svc kube-dns
NAME       TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)         AGE
kube-dns   ClusterIP   172.20.0.10   <none>        53/UDP,53/TCP   42d

But this pod receives a different DNS server:

$ kubectl exec -it busybox -- cat /etc/resolv.conf
nameserver 10.100.0.10
search default.svc.cluster.local svc.cluster.local cluster.local eu-central-1.compute.internal
options ndots:5

I've tried to debug this issue but so far no luck. Any help or hints would be greatly appreciated!
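(For anyone hitting the same symptom: a pod's /etc/resolv.conf is written by the kubelet of the node the pod lands on, so a first step is to see which node that is. A minimal check, assuming the pod is named `busybox` as above:)

```shell
# Show which node the pod was scheduled to; the nameserver in its
# /etc/resolv.conf comes from that node's kubelet clusterDNS setting.
kubectl get pod busybox -o wide
```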

Regards,
Khaled

Asked 3 years ago · 1076 views

2 Answers

I had three worker nodes in my EKS cluster; one of them had been started by the AWS auto scaler. However, its kubelet was started with the parameter

...
"clusterDNS": [
"10.100.0.10"
],
...

when it should have been started with the parameter

...
"clusterDNS": [
"172.20.0.10"
],
...

The worker node with the wrong parameter was also the node with the fewest pods scheduled on it, so every new pod was scheduled there.

To solve the issue, I shut down the node with the incorrect parameter. The AWS auto scaler then started a new worker node, which came up with the correct parameter, and the problem I observed was solved.
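To find which node carries the wrong setting without shutting anything down, each kubelet's effective configuration can be read through the API server's node proxy. A sketch, assuming `jq` is installed and your credentials allow `nodes/proxy` access:

```shell
# Print each node's effective kubelet clusterDNS via the configz endpoint,
# then compare the values against the kube-dns service's ClusterIP.
for node in $(kubectl get nodes -o jsonpath='{.items[*].metadata.name}'); do
  dns=$(kubectl get --raw "/api/v1/nodes/${node}/proxy/configz" \
        | jq -r '.kubeletconfig.clusterDNS[]')
  echo "${node}: ${dns}"
done
```

Any node whose printed address differs from the CLUSTER-IP of `service/kube-dns` (172.20.0.10 here) is the one handing out the wrong nameserver.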

Answered 3 years ago

