How do I install NodeLocalDNS in my EKS cluster and troubleshoot issues?
I want to install NodeLocalDNS in my Amazon Elastic Kubernetes Service (Amazon EKS) cluster and troubleshoot issues.
Resolution
Prerequisites:
- Verify that you configured kubectl to access your Amazon EKS cluster.
- Confirm that CoreDNS is running in your cluster.
Install NodeLocalDNS
NodeLocalDNS uses a link-local IP address to provide DNS caching on each node. Link-local addresses are IP addresses in the 169.254.0.0/16 range that are valid only within the network segment.
Note: The standard link-local IP address for NodeLocalDNS is 169.254.20.10. Don't change this value unless you have a specific conflict in your environment.
To install NodeLocalDNS in your Amazon EKS cluster, complete the following steps:
1. Run the following curl command to download the NodeLocalDNS manifest from the Kubernetes repository:

curl -Lo nodelocaldns.yaml.template https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/dns/nodelocaldns/nodelocaldns.yaml
2. To retrieve your cluster's kube-dns service IP address, run the following command:

kubectl get svc kube-dns -n kube-system -o jsonpath='{.spec.clusterIP}'

Note: Record the IP address from the output. You use it in a later step.
3. To retrieve your cluster's domain, run the following command:

kubectl get configmap coredns -n kube-system -o yaml | grep 'kubernetes'

The output shows your cluster domain.
4. To determine your kube-proxy mode, run the following command:

kubectl get configmap kube-proxy-config -n kube-system -o yaml | grep mode

The output shows either iptables or ipvs as the mode value.
5. Edit the NodeLocalDNS manifest based on your kube-proxy mode.

For iptables mode, edit the manifest with the following commands:

# Set environment variables for node-local-dns
kubedns=$(kubectl get svc kube-dns -n kube-system -o jsonpath='{.spec.clusterIP}')
domain=cluster.local
localdns=169.254.20.10

# Update the manifest with your cluster's specific values:
sed "s/__PILLAR__LOCAL__DNS__/$localdns/g; s/__PILLAR__DNS__DOMAIN__/$domain/g; s/__PILLAR__DNS__SERVER__/$kubedns/g" nodelocaldns.yaml.template > nodelocaldns.yaml

Note: Replace cluster.local with your cluster domain from step 3 if your cluster uses a different domain. Replace 169.254.20.10 with your cluster's link-local IP address if different.
For IPVS mode, edit the manifest with the following commands:

# Set environment variables for node-local-dns
kubedns=$(kubectl get svc kube-dns -n kube-system -o jsonpath='{.spec.clusterIP}')
domain=cluster.local
localdns=169.254.20.10

# Update the manifest with your cluster's specific values:
sed "s/__PILLAR__LOCAL__DNS__/$localdns/g; s/__PILLAR__DNS__DOMAIN__/$domain/g; s/,__PILLAR__DNS__SERVER__//g; s/__PILLAR__CLUSTER__DNS__/$kubedns/g" nodelocaldns.yaml.template > nodelocaldns.yaml

Note: Replace cluster.local with your cluster domain from step 3 if your cluster uses a different domain. Replace 169.254.20.10 with your cluster's link-local IP address if different. In IPVS mode, you must also configure pods to use NodeLocalDNS. See the "Configure pods to use NodeLocalDNS (IPVS mode only)" section.
6. To apply the NodeLocalDNS manifest, run the following command:

kubectl apply -f nodelocaldns.yaml
7. To verify that NodeLocalDNS pods are in a Running status, run the following command:

kubectl get pods -n kube-system -l k8s-app=node-local-dns
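Before you apply anything to a cluster, you can sanity-check the sed substitution shown above locally. The following sketch runs the same iptables-mode pattern against a minimal stand-in template (the file names and the kube-dns IP address 10.100.0.10 are illustrative, not values from your cluster):

```shell
#!/bin/sh
# Minimal stand-in for nodelocaldns.yaml.template (illustrative only)
cat > /tmp/sample.yaml.template <<'EOF'
bind __PILLAR__LOCAL__DNS__
forward . __PILLAR__DNS__SERVER__
zone: __PILLAR__DNS__DOMAIN__
EOF

# Values you would normally read from the cluster (assumed here)
kubedns=10.100.0.10
domain=cluster.local
localdns=169.254.20.10

# Same substitution as the iptables-mode command above
sed "s/__PILLAR__LOCAL__DNS__/$localdns/g; s/__PILLAR__DNS__DOMAIN__/$domain/g; s/__PILLAR__DNS__SERVER__/$kubedns/g" \
  /tmp/sample.yaml.template > /tmp/sample.yaml

cat /tmp/sample.yaml
```

If every `__PILLAR__` placeholder in the output is replaced with a concrete value, the same command is safe to run against the real template.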
Configure pods to use NodeLocalDNS (IPVS mode only)
Important: This section applies only if your cluster uses IPVS mode for kube-proxy. If your cluster uses iptables mode, then DNS traffic is automatically redirected to NodeLocalDNS.
For IPVS mode clusters, you must manually configure pods to use NodeLocalDNS. To direct your pods to NodeLocalDNS, you can configure individual pods or configure clusters with kubelet.
Configure individual pods
To configure individual pods, add the following configuration to your pod specification:
spec:
  dnsPolicy: "None"
  dnsConfig:
    nameservers:
      - 169.254.20.10
    searches:
      - default.svc.cluster.local
      - svc.cluster.local
      - cluster.local
    options:
      - name: ndots
        value: "5"
Note: Replace 169.254.20.10 with your cluster's link-local IP address. Replace cluster.local with your cluster domain if your cluster uses a different domain.
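As a worked example, the pod-level settings above can be placed in a complete manifest. The pod name and image below are illustrative, a minimal sketch rather than a required configuration; the dnsConfig block is the part that matters:

```shell
# Write a complete example pod manifest that pins DNS to NodeLocalDNS.
# Pod name and image are placeholders; adjust the nameserver and search
# domains for your cluster as described in the note above.
cat > /tmp/dns-test-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dns-test
spec:
  containers:
    - name: test
      image: busybox:1.28
      command: ["sleep", "3600"]
  dnsPolicy: "None"
  dnsConfig:
    nameservers:
      - 169.254.20.10
    searches:
      - default.svc.cluster.local
      - svc.cluster.local
      - cluster.local
    options:
      - name: ndots
        value: "5"
EOF

# Apply it in your cluster when ready:
# kubectl apply -f /tmp/dns-test-pod.yaml
```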
Configure clusters with kubelet
In IPVS mode, NodeLocalDNS only listens on the link-local address 169.254.20.10. You must modify the kubelet configuration to point to this link-local address.
To configure all pods in your cluster to use NodeLocalDNS and modify the kubelet configuration on each node, complete the following steps:
1. Edit the kubelet configuration file. Add the following configuration:

{
  "clusterDNS": ["169.254.20.10"],
  "clusterDomain": "cluster.local"
}

Note: Replace 169.254.20.10 with your cluster's link-local IP address. Replace cluster.local with your cluster domain if your cluster uses a different domain.
2. To restart the kubelet service on each node, run the following sudo command:

sudo systemctl restart kubelet
3. To verify the kubelet configuration, run the following command:

cat /etc/kubernetes/kubelet/config.json | grep clusterDNS

Note: The output shows 169.254.20.10 as the NodeLocalDNS link-local IP address.
Note: For automated deployment, you can modify kubelet configuration in the userdata section of your launch template. For more information, see How do I use custom user data with AL2023 Amazon EKS nodes?
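For reference, the following is a hedged sketch of what that user data might look like for AL2023 nodes, which use the nodeadm NodeConfig format. Every cluster value below (name, endpoint, certificate authority, CIDR) is a placeholder; confirm the exact schema against the linked article before you use it:

```shell
# Sketch of AL2023 launch template user data that sets clusterDNS through
# nodeadm's NodeConfig. All cluster values below are placeholders.
cat > /tmp/userdata.txt <<'EOF'
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="BOUNDARY"

--BOUNDARY
Content-Type: application/node.eks.aws

apiVersion: node.eks.aws/v1alpha1
kind: NodeConfig
spec:
  cluster:
    name: my-cluster
    apiServerEndpoint: https://EXAMPLE.gr7.us-east-1.eks.amazonaws.com
    certificateAuthority: BASE64-ENCODED-CA
    cidr: 10.100.0.0/16
  kubelet:
    config:
      clusterDNS:
        - 169.254.20.10
--BOUNDARY--
EOF
```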
Verify that you correctly installed NodeLocalDNS
To verify that NodeLocalDNS is correctly working, complete the following steps:
1. To create a test pod, run the following command:

kubectl run test-dns --image=busybox:1.28 --restart=Never --rm -it -- nslookup kubernetes.default

2. Review the output to confirm that the DNS query was correctly resolved.
3. To verify that DNS queries are using NodeLocalDNS, run the following command:

kubectl logs -n kube-system -l k8s-app=node-local-dns --tail=50

The logs show DNS queries being processed by NodeLocalDNS.
Note: By default, logging isn't activated in NodeLocalDNS. To view the log, activate logging in the node-local-dns ConfigMap.
# kubectl edit configmaps -n kube-system node-local-dns

apiVersion: v1
kind: ConfigMap
metadata:
  name: node-local-dns
  namespace: kube-system
data:
  Corefile: |
    cluster.local:53 {
        log    # Enable logging
        errors
        cache 30
        ...
    }
Troubleshoot NodeLocalDNS issues
Note: If you receive errors when you run AWS Command Line Interface (AWS CLI) commands, then see Troubleshooting errors for the AWS CLI. Also, make sure that you're using the most recent AWS CLI version.
Resolve CrashLoopBackOff errors caused by port conflicts
NodeLocalDNS pods can enter a CrashLoopBackOff status with port 53 errors because Amazon EKS auto mode nodes reserve port 53. To resolve this issue, choose one of the following methods:
Method 1: Add node affinity rules to exclude auto mode nodes
Complete the following steps:
1. Edit the NodeLocalDNS daemonset:

kubectl edit daemonset node-local-dns -n kube-system
2. Add the following affinity configuration under spec.template.spec:

affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: eks.amazonaws.com/compute-type
              operator: NotIn
              values:
                - auto

3. Save the changes, and then verify that the pods restart successfully.
Method 2: Modify the NodeLocalDNS ConfigMap to change the health check port
Complete the following steps:
1. To view the current NodeLocalDNS ConfigMap, run the following command:

kubectl get configmap node-local-dns -n kube-system -o yaml
2. To edit the ConfigMap to change the health check port, run the following command:

kubectl edit configmap node-local-dns -n kube-system
3. In the ConfigMap, locate the health plugin configuration and modify the port. The ConfigMap structure looks like this:

apiVersion: v1
kind: ConfigMap
metadata:
  name: node-local-dns
  namespace: kube-system
data:
  Corefile: |
    cluster.local:53 {
        errors
        cache {
            success 9984 30
            denial 9984 5
        }
        reload
        loop
        bind IP_ADDRESS
        forward . __PILLAR__CLUSTER__DNS__ {
            force_tcp
        }
        prometheus :9253
        health 169.254.20.10:8081
    }

Note: Change the port in the health line from 8081 to your desired port. Replace 169.254.20.10 with your cluster's link-local IP address if different.
4. Save the changes.
5. To restart the NodeLocalDNS pods, run the following command:

kubectl rollout restart daemonset node-local-dns -n kube-system
6. To update the daemonset health check configuration to match the new port, run the following command:

kubectl edit daemonset node-local-dns -n kube-system
7. Locate the livenessProbe and readinessProbe sections, and then update the port to match your new health check port.
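The probe update is a one-line port change. This sketch shows the equivalent edit over a local stand-in for the probe section, assuming a hypothetical new health check port of 8082 (the snippet is illustrative, not the full daemonset spec):

```shell
# Local stand-in for one probe section of the daemonset (illustrative).
cat > /tmp/probe-snippet.yaml <<'EOF'
livenessProbe:
  httpGet:
    host: 169.254.20.10
    path: /health
    port: 8081
EOF

# Assuming the new health check port is 8082, update the probe to match
# the port you set in the health line of the ConfigMap:
sed -i.bak 's/port: 8081/port: 8082/' /tmp/probe-snippet.yaml
cat /tmp/probe-snippet.yaml
```

Make the same change to both the livenessProbe and readinessProbe sections when you edit the daemonset.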
Resolve DNS query timeout errors
If your pods experience DNS query timeouts, verify that your security groups and network ACL allow TCP and UDP traffic on port 53. This traffic is required for pod-to-pod communication.
Complete the following steps:
1. To identify the security groups attached to your nodes, run the following describe-instances AWS CLI command:

aws ec2 describe-instances --filters "Name=tag:eks:cluster-name,Values=YOUR-CLUSTER-NAME" --query "Reservations[*].Instances[*].SecurityGroups[*].[GroupId,GroupName]" --output table

Note: Replace YOUR-CLUSTER-NAME with your Amazon EKS cluster name.
2. Open the Amazon EC2 console.

3. In the navigation pane, choose Security Groups.

4. Select the security group identified in step 1 of this section.

5. Choose the Inbound rules tab.

6. Verify that rules exist to allow TCP and UDP traffic on port 53 from the pod CIDR range.

Note: If the rules don't exist, then choose Edit inbound rules. Then, add rules for TCP and UDP on port 53 with the source set to your pod CIDR range.
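If you prefer the AWS CLI over the console, the missing rules can be added with authorize-security-group-ingress. The security group ID and pod CIDR range below are placeholders; substitute the values from your own cluster:

```shell
# Allow DNS over TCP from the pod CIDR (sg-EXAMPLE and the CIDR are placeholders)
aws ec2 authorize-security-group-ingress \
  --group-id sg-EXAMPLE \
  --protocol tcp --port 53 \
  --cidr 192.168.0.0/16

# Allow DNS over UDP from the same range
aws ec2 authorize-security-group-ingress \
  --group-id sg-EXAMPLE \
  --protocol udp --port 53 \
  --cidr 192.168.0.0/16
```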
Resolve "Connection refused" error messages
If you get "Connection refused" error messages when pods attempt to use NodeLocalDNS, then verify that at least one CoreDNS pod is running.
To check CoreDNS pod status, run the following command:
kubectl get pods -n kube-system -l k8s-app=kube-dns
If no CoreDNS pods are running, or if the pods are in a failed state, then restart the CoreDNS deployment. To restart the deployment, run the following command:
kubectl rollout restart deployment coredns -n kube-system
To verify that CoreDNS pods are running, run the following command:
kubectl get pods -n kube-system -l k8s-app=kube-dns -w
After CoreDNS pods are running, to restart the NodeLocalDNS daemonset, run the following command:
kubectl rollout restart daemonset node-local-dns -n kube-system
Related information
Using CoreDNS for Service Discovery on the Kubernetes website
Run kube-proxy in IPVS Mode on the Kubernetes website