How do I troubleshoot DNS failures with Amazon EKS?
The applications or pods that are using CoreDNS in my Amazon Elastic Kubernetes Service (Amazon EKS) cluster fail internal or external DNS name resolutions.
Short description
Pods that run inside the Amazon EKS cluster use the CoreDNS service's cluster IP as the default name server for querying internal and external DNS records. If there are issues with the CoreDNS pods, service configuration, or connectivity, then applications can fail DNS resolutions.
The CoreDNS pods are abstracted by a service object called kube-dns. To troubleshoot issues with your CoreDNS pods, verify that all the components of the kube-dns service are working. These components include, but aren't limited to, the service endpoints and the iptables rules that kube-proxy creates.
Resolution
The following resolution uses the CoreDNS ClusterIP 10.100.0.10 as an example. Replace it with the ClusterIP of the kube-dns service in your cluster.
1. Get the ClusterIP of your CoreDNS service:
kubectl get service kube-dns -n kube-system
2. Verify that the DNS endpoints are exposed and pointing to CoreDNS pods:
kubectl -n kube-system get endpoints kube-dns
Output:
NAME       ENDPOINTS                                                         AGE
kube-dns   192.168.2.218:53,192.168.3.117:53,192.168.2.218:53 + 1 more...   90d
Note: If the endpoint list is empty, then check the pod status of the CoreDNS pods.
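For example, the following commands give a quick view of CoreDNS pod health. This is a general sketch; it assumes that your CoreDNS pods use the standard k8s-app=kube-dns label:

# List the CoreDNS pods and their status (Running, Pending, CrashLoopBackOff, and so on)
kubectl get pods -n kube-system -l k8s-app=kube-dns

# If a pod isn't Running, describe it to see recent events
kubectl describe pod -n kube-system -l k8s-app=kube-dns

# Check the CoreDNS logs for startup or configuration errors
kubectl logs -n kube-system -l k8s-app=kube-dns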
3. Verify that a security group or network access control list (network ACL) aren't blocking the pods when they communicate with CoreDNS.
For more information, see Why won't my pods connect to other pods in Amazon EKS?
Verify that the kube-proxy pod is working
Check the kube-proxy logs for timeout errors to the control plane to verify that the kube-proxy pod can reach the API servers for your cluster. Also, check for any HTTP 403 errors.
Get the kube-proxy logs:
kubectl logs -n kube-system --selector 'k8s-app=kube-proxy'
Note: The kube-proxy gets the endpoints from the control plane and creates the iptables rules on every node.
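To confirm that kube-proxy created these rules, you can list the NAT rules for the kube-dns service on a worker node. This is a sketch that assumes the default iptables proxy mode; the exact chain names can differ:

# On the worker node, show the NAT rules that kube-proxy created for the kube-dns service
sudo iptables-save -t nat | grep -i kube-dns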
Connect to the application pod to troubleshoot the DNS issue
1. To run commands inside your application pods, run the following command to access a shell inside the running pod:
$ kubectl exec -it your-pod-name -- sh
If the application pod doesn't have an available shell binary, then you receive an error similar to the following:
OCI runtime exec failed: exec failed: container_linux.go:348: starting container process caused "exec: \"sh\": executable file not found in $PATH": unknown command terminated with exit code 126
To debug, replace the image that's used in your manifest file with another image that includes a shell, such as the busybox image (from the Docker website).
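As an alternative to changing your application image, you can launch a separate pod with a shell and test DNS from there. A minimal sketch, assuming that the public busybox image is reachable from your cluster (the pod name dns-debug is only an example):

# Launch a temporary pod that includes a shell, then open a shell inside it
kubectl run dns-debug --image=busybox:1.36 --restart=Never -- sleep 3600
kubectl exec -it dns-debug -- sh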
2. To verify that the cluster IP of the kube-dns service is in your pod's /etc/resolv.conf, run the following command in the shell inside of the pod:
cat /etc/resolv.conf
The following example resolv.conf shows a pod that's configured to point to 10.100.0.10 for DNS requests. The IP must match the ClusterIP of your kube-dns service:
nameserver 10.100.0.10
search default.svc.cluster.local svc.cluster.local cluster.local ec2.internal
options ndots:5
Note: You can manage your pod's DNS configuration with the dnsPolicy field in the pod specification. If this field isn't populated, then the ClusterFirst DNS policy is used by default.
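For reference, the following is a minimal sketch of a pod specification that sets dnsPolicy explicitly. The pod and image names are only examples:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: dns-policy-example
spec:
  dnsPolicy: ClusterFirst   # Default when the field is omitted
  containers:
  - name: app
    image: busybox:1.36
    command: ["sleep", "3600"]
EOF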
3. To verify that your pod can use the default ClusterIP to resolve an internal domain, run the following command in the shell inside the pod:
nslookup kubernetes.default 10.100.0.10
Output:
Server:    10.100.0.10
Address:   10.100.0.10#53

Name:      kubernetes.default.svc.cluster.local
Address:   10.100.0.1
4. To verify that your pod can use the default ClusterIP to resolve an external domain, run the following command in the shell inside the pod:
nslookup amazon.com 10.100.0.10
Output:
Server:    10.100.0.10
Address:   10.100.0.10#53

Non-authoritative answer:
Name:    amazon.com
Address: 176.32.98.166
Name:    amazon.com
Address: 205.251.242.103
Name:    amazon.com
Address: 176.32.103.205
5. To verify that your pod can use the IP address of the CoreDNS pod to resolve directly, run the following commands in the shell inside the pod:
nslookup kubernetes COREDNS_POD_IP
nslookup amazon.com COREDNS_POD_IP
Note: Replace COREDNS_POD_IP with one of the endpoint IPs from the kubectl get endpoints output that you retrieved earlier.
Get more detailed logs from CoreDNS pods for debugging
1. Turn on the debug log of CoreDNS pods and add the log plugin to the CoreDNS ConfigMap:
kubectl -n kube-system edit configmap coredns
2. In the editor screen that appears in the output, add the log string. For example:
kind: ConfigMap
apiVersion: v1
data:
  Corefile: |
    .:53 {
        log    # Enabling CoreDNS Logging
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          upstream
          fallthrough in-addr.arpa ip6.arpa
        }
        ...
    ...
Note: It can take several minutes for CoreDNS to reload the configuration. To apply the changes immediately, you can restart the pods one by one.
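For example, you can trigger a rolling restart of the CoreDNS pods. This sketch assumes the default deployment name coredns:

kubectl -n kube-system rollout restart deployment coredns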
3. Check the CoreDNS logs to see whether queries from the application pod reach CoreDNS and whether they fail:
kubectl logs --follow -n kube-system --selector 'k8s-app=kube-dns'
Update the ndots value
DNS uses the nameserver entry for name resolution; in a pod, this is usually the ClusterIP of the kube-dns service. DNS uses the search entries to complete a query name into a fully qualified domain name. The ndots value is the minimum number of dots that a name must contain for the resolver to query it as an absolute name before it tries the search domains.
For example, with the default ndots value of 5, an external domain such as amazon.com isn't fully qualified and has fewer than five dots. Therefore, the resolver appends each of the search domains to the name and queries those first, even though the domain doesn't belong to the internal cluster.local domain.
See the following example with the /etc/resolv.conf setting of the application pod:
nameserver 10.100.0.10
search default.svc.cluster.local svc.cluster.local cluster.local ec2.internal
options ndots:5
The pod's resolver looks for five dots in the domain that's queried. If the pod makes a DNS resolution call for amazon.com, then the CoreDNS logs look similar to the following:
[INFO] 192.168.3.71:33238 - 36534 "A IN amazon.com.default.svc.cluster.local. udp 54 false 512" NXDOMAIN qr,aa,rd 147 0.000473434s
[INFO] 192.168.3.71:57098 - 43241 "A IN amazon.com.svc.cluster.local. udp 46 false 512" NXDOMAIN qr,aa,rd 139 0.000066171s
[INFO] 192.168.3.71:51937 - 15588 "A IN amazon.com.cluster.local. udp 42 false 512" NXDOMAIN qr,aa,rd 135 0.000137489s
[INFO] 192.168.3.71:52618 - 14916 "A IN amazon.com.ec2.internal. udp 41 false 512" NXDOMAIN qr,rd,ra 41 0.001248388s
[INFO] 192.168.3.71:51298 - 65181 "A IN amazon.com. udp 28 false 512" NOERROR qr,rd,ra 106 0.001711104s
Note: NXDOMAIN means that the domain record wasn't found, and NOERROR means that the domain record was found.
Each search domain is appended to amazon.com before the resolver makes the final query for the absolute domain. The absolute query appends a dot (.) to the end of the name, which makes it a fully qualified domain name. As a result, every external domain name query can generate four or five additional queries, which can overwhelm the CoreDNS pods.
To resolve this issue, either change ndots to 1, so that the resolver looks for only a single dot, or append a dot (.) to the end of the domain that you query or use. For example:
nslookup example.com.
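To change ndots for a specific pod, you can set it through the pod's dnsConfig. The following is a minimal sketch of that approach; the pod name, image, and other values are only examples:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: ndots-example
spec:
  containers:
  - name: app
    image: busybox:1.36
    command: ["sleep", "3600"]
  dnsConfig:
    options:
    - name: ndots
      value: "1"   # Query external names as absolute names first
EOF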
Consider VPC resolver (AmazonProvidedDNS) limits
The Amazon Virtual Private Cloud (Amazon VPC) resolver enforces a hard limit of 1024 packets per second per network interface. If more than one CoreDNS pod runs on the same node, then the chances of hitting this limit are higher for external domain queries.
To use PodAntiAffinity (from the Kubernetes website) rules to schedule CoreDNS pods on separate instances, add the following options to the CoreDNS deployment:
podAntiAffinity:
  preferredDuringSchedulingIgnoredDuringExecution:
  - podAffinityTerm:
      labelSelector:
        matchExpressions:
        - key: k8s-app
          operator: In
          values:
          - kube-dns
      topologyKey: kubernetes.io/hostname
    weight: 100
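One way to apply these settings is to edit the CoreDNS deployment directly and add the block under spec.template.spec.affinity. This sketch assumes the default deployment name coredns:

kubectl -n kube-system edit deployment coredns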
Use tcpdump to capture CoreDNS packets from Amazon EKS worker nodes
Use the tcpdump tool to perform a packet capture to help diagnose DNS resolution issues. This tool helps validate whether network traffic for DNS requests is reaching your CoreDNS pods and if there are any underlying network connectivity issues. To use tcpdump, complete the following steps.
1. Locate a worker node where a CoreDNS pod is running:
kubectl get pod -n kube-system -l k8s-app=kube-dns -o wide
2. Use SSH to connect to the worker node where a CoreDNS pod is running and install the tcpdump tool:
sudo yum install tcpdump -y
3. Locate the CoreDNS pod process ID on the worker node:
ps ax | grep coredns
4. From the worker node, perform a packet capture on the CoreDNS pod network to monitor network traffic on UDP port 53:
sudo nsenter -n -t PID tcpdump udp port 53
5. From a separate terminal, get the CoreDNS service and pod IPs:
kubectl describe svc kube-dns -n kube-system
Note: Note the service IP that's located in the 'IP' field and the pod IPs that are located in the 'Endpoints' field.
6. Launch a pod that you'll test the DNS service from. The following example uses an Ubuntu container image:
kubectl run ubuntu --image=ubuntu -- sleep 1d
kubectl exec -it ubuntu -- sh
7. Use the nslookup tool to perform a DNS query to a domain, such as amazon.com:
nslookup amazon.com
Perform the same query explicitly against the CoreDNS service IP from step 5:
nslookup amazon.com COREDNS_SERVICE_IP
Perform the query against each of the CoreDNS pod IPs from step 5:
nslookup amazon.com COREDNS_POD_IP
Note: If you have multiple CoreDNS pods running, perform multiple queries so that at least one query is sent to the pod that you are capturing traffic from.
8. Review the packet capture results.
If you experience DNS query timeouts to the CoreDNS pod that you're monitoring and don't see the query in the packet capture, then you might have a network connectivity issue. Make sure to check the network reachability between worker nodes.
If you observe a DNS query timeout against a pod IP that you're not capturing from, then follow steps 2-4 to perform another packet capture on the related worker node.
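As a rough connectivity check, you can send a query directly to a CoreDNS pod IP with a short timeout from the test pod. This sketch assumes the Ubuntu test pod from step 6, where you can install the dig tool; replace COREDNS_POD_IP with the pod IP that you're capturing from:

# Install dig inside the Ubuntu test pod
apt-get update && apt-get install -y dnsutils

# Query the CoreDNS pod directly with a short timeout and a single attempt
dig @COREDNS_POD_IP kubernetes.default.svc.cluster.local +time=2 +tries=1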
To save the results of a packet capture for later reference, add the -w FILE_NAME flag to the tcpdump command. The following example writes the results to a file named capture.pcap:
tcpdump -w capture.pcap udp port 53
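To analyze the capture on your local machine, you can copy the file off the worker node, for example with scp. This sketch assumes SSH access as ec2-user and uses NODE_IP as a placeholder for the worker node's IP address; adjust the user and path for your environment:

scp ec2-user@NODE_IP:/home/ec2-user/capture.pcap .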