New pods are configured with wrong DNS server


Hi there,

When deploying new pods in my EKS cluster, the DNS server listed in /etc/resolv.conf is not the one configured in "service/kube-dns" in the kube-system namespace. See here:

$ kubectl -n kube-system get svc kube-dns
NAME       TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)         AGE
kube-dns   ClusterIP   172.20.0.10   <none>        53/UDP,53/TCP   42d

But this pod receives a different DNS server:

$ kubectl exec -it busybox -- cat /etc/resolv.conf
nameserver 10.100.0.10
search default.svc.cluster.local svc.cluster.local cluster.local eu-central-1.compute.internal
options ndots:5
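
For what it's worth, where the pod's lookups actually go can also be checked by querying DNS from inside it, for example (the exact nslookup output depends on the busybox image version):

$ kubectl exec -it busybox -- nslookup kubernetes.default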

I've tried to debug this issue but so far no luck. Any help or hints would be greatly appreciated!

Regards,
Khaled

Asked 3 years ago · 1010 views
2 Answers

I had three worker nodes in my EKS cluster; one of them had been started by the AWS autoscaler. However, on that node the kubelet had been started with the parameter

...
  "clusterDNS": [
    "10.100.0.10"
  ],
...

when it should have been started with the parameter

...
  "clusterDNS": [
    "172.20.0.10"
  ],
...
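
If you want to verify which clusterDNS value a node's kubelet is actually running with, one option is the kubelet's configz endpoint (the node name below is a placeholder, and jq is only used to pick out the relevant field):

$ kubectl get --raw "/api/v1/nodes/<node-name>/proxy/configz" | jq '.kubeletconfig.clusterDNS'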

The worker node with the wrong parameter was also the node with the fewest pods scheduled on it, so every new pod ended up being placed there.
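
Which node each pod lands on is easy to confirm with the wide output format:

$ kubectl get pods -o wide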

To solve the issue, I shut down the node with the incorrect parameter. The AWS autoscaler then started a replacement worker node, which came up with the correct clusterDNS value, and the problem was gone.
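
In case it helps others: draining the node first and then terminating the underlying EC2 instance is one way to do the shutdown cleanly. The node name and instance ID below are just placeholders, and older kubectl versions use --delete-local-data instead of --delete-emptydir-data:

$ kubectl drain ip-10-0-1-23.eu-central-1.compute.internal --ignore-daemonsets --delete-emptydir-data
$ aws ec2 terminate-instances --instance-ids i-0123456789abcdef0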

Answered 3 years ago
