New pods are configured with wrong DNS server


Hi there,

When deploying new pods in my EKS cluster, the DNS server written to /etc/resolv.conf is not the one configured on "service/kube-dns" in the kube-system namespace. See here:

$ kubectl -n kube-system get svc kube-dns
NAME       TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)         AGE
kube-dns   ClusterIP   172.20.0.10   <none>        53/UDP,53/TCP   42d

But this pod receives a different DNS server:

$ kubectl exec -it busybox -- cat /etc/resolv.conf
nameserver 10.100.0.10
search default.svc.cluster.local svc.cluster.local cluster.local eu-central-1.compute.internal
options ndots:5
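For reference, whether each nameserver IP actually answers queries can be checked directly from the pod, independent of what /etc/resolv.conf says (a diagnostic sketch; it assumes the busybox image provides nslookup, which accepts an explicit server argument):

```shell
# Query the expected kube-dns ClusterIP directly from the pod:
kubectl exec -it busybox -- nslookup kubernetes.default.svc.cluster.local 172.20.0.10

# And the nameserver the pod was actually given:
kubectl exec -it busybox -- nslookup kubernetes.default.svc.cluster.local 10.100.0.10
```

If only one of the two responds, that narrows the problem to either the DNS service itself or to how the pod's resolver was configured.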

I've tried to debug this issue but so far no luck. Any help or hints would be greatly appreciated!

Regards,
Khaled

asked 3 years ago · 1076 views
2 answers

I had three worker nodes in my EKS cluster; one of them had been started by the AWS auto scaler. On that node, however, the kubelet was started with the parameter

...
"clusterDNS": [
    "10.100.0.10"
],
...

when it should have been started with the parameter

...
"clusterDNS": [
    "172.20.0.10"
],
...

The worker node with the wrong parameter also happened to be the node with the fewest pods scheduled to it, so every new pod was scheduled there.
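To find which node carries the wrong value, each kubelet's running configuration can be read through the API server's node proxy (a diagnostic sketch; it assumes jq is installed and that your credentials allow access to the nodes/proxy subresource):

```shell
# Print the clusterDNS setting of every node's kubelet.
for node in $(kubectl get nodes -o name | cut -d/ -f2); do
  echo -n "$node: "
  kubectl get --raw "/api/v1/nodes/$node/proxy/configz" \
    | jq -c '.kubeletconfig.clusterDNS'
done
```

Any node whose output differs from the kube-dns ClusterIP is a candidate for the problem described above.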

To solve the issue, I shut down the node with the incorrect parameter. The AWS auto scaler then started a replacement worker node, which came up with the correct parameter, and the problem was solved.
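Replacing the node only helps if the replacement comes up with the right setting. On EKS worker nodes built from the Amazon Linux EKS-optimized AMI, the value typically originates in the node's EC2 user data, where the bootstrap script's --dns-cluster-ip flag is written into the kubelet configuration. A sketch of the relevant user-data line (the cluster name "my-cluster" is a placeholder):

```shell
# Excerpt of EC2 user data for an EKS worker node.
# --dns-cluster-ip must match the kube-dns service ClusterIP;
# if omitted, the bootstrap script derives a default from the
# VPC CIDR, which can produce a mismatch like the one above.
/etc/eks/bootstrap.sh my-cluster --dns-cluster-ip 172.20.0.10
```

Checking the launch template or launch configuration of the node group for this flag can prevent the next scaled-up node from repeating the problem.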

answered 3 years ago
