After about 2 hours I got this message: Amazon EKS or one or more of your managed nodes is unable to communicate with your Kubernetes cluster API server. This can happen if there are network disruptions or if API servers are timing out processing requests.
Hello,
I would recommend following the table of the latest Amazon EKS add-on version for each Kubernetes version here: https://docs.aws.amazon.com/eks/latest/userguide/managing-coredns.html
Check the CoreDNS logs:
kubectl logs -n kube-system <coredns-pod-name>
Verify the health of the Kubernetes API server. Check its logs for any errors or indications of performance issues:
kubectl logs -n kube-system -l component=kube-apiserver
Check network connectivity to the API server:
Confirm there are no network disruptions between your managed EC2 nodes and the Kubernetes API server. Check security groups, network ACLs, and any firewalls that might be blocking communication.
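To illustrate that connectivity check, here is a minimal sketch. The cluster name (`my-cluster`) is a placeholder, and it assumes the cluster still allows anonymous access to the unauthenticated `/healthz` path (the Kubernetes default):

```shell
# Look up the cluster's API server endpoint (replace my-cluster with your cluster name)
API_ENDPOINT=$(aws eks describe-cluster --name my-cluster \
  --query 'cluster.endpoint' --output text)

# From a worker node or a debug pod, confirm the endpoint is reachable over TLS.
# -k skips certificate verification; --max-time catches hangs caused by
# security groups or NACLs silently dropping traffic.
curl -sk --max-time 5 "${API_ENDPOINT}/healthz"
```

If this times out rather than returning a response, the problem is almost certainly a security group, NACL, or firewall rule between the node and the API server endpoint.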
Reference:
Updating the Amazon EKS coredns add-on: https://docs.aws.amazon.com/eks/latest/userguide/managing-coredns.html#coredns-add-on-update
When I check the pod logs, I don't get anything, just this:
.:53
[INFO] plugin/reload: Running configuration SHA512 = 8fa7sds5f91s26s7f1s104abs049c
CoreDNS-1.10.1
linux/amd64, go1.20.4, a5539902
and for resources with the kube-apiserver label, I've got nothing:
root@DESKTOP:~# kubectl logs -n kube-system -l component=kube-apiserver
No resources found in kube-system namespace.
I've been facing this issue for a long time on a cluster with Kong Ingress installed.
If Kong exists on the cluster and I try to deploy CoreDNS or update it via the EKS API, it gets stuck during the process and ends with this error: ClusterUnreachable | Amazon EKS or one or more of your managed nodes is unable to communicate with your Kubernetes cluster API server. This can happen if there are network disruptions or if API servers are timing out processing requests.
Finally, I got it working without needing to delete Kong, by temporarily scaling down the kong-controller deployment while keeping the kong-gateway deployment up. This way Kong keeps serving traffic while CoreDNS is updated. Then scale kong-controller back up.
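The steps above can be sketched as follows. The deployment names, the `kong` namespace, the cluster name, and the add-on version are all assumptions; check yours with `kubectl get deploy -n kong` and `aws eks describe-addon-versions --addon-name coredns`:

```shell
# 1. Temporarily scale down the Kong ingress controller
#    (the gateway deployment keeps serving existing routes)
kubectl scale deployment kong-controller -n kong --replicas=0

# 2. Trigger the CoreDNS add-on update (placeholder cluster name and version)
aws eks update-addon --cluster-name my-cluster \
  --addon-name coredns --addon-version v1.10.1-eksbuild.7

# 3. Wait for the add-on to become active, then restore the controller
aws eks wait addon-active --cluster-name my-cluster --addon-name coredns
kubectl scale deployment kong-controller -n kong --replicas=1
```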
I had the exact same issue on our EKS cluster with Kong Ingress Controller. The CoreDNS addon update was failing with the "ClusterUnreachable" error.
Root Cause: Kong's admission webhooks were timing out (10s timeout), causing the Kubernetes API server to appear slow/unresponsive during the CoreDNS update process.
Solution: Create a NetworkPolicy to allow webhook traffic on port 8080
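A minimal sketch of such a NetworkPolicy is below. The `kong` namespace and the `app.kubernetes.io/name: kong` pod label are assumptions; match them to the labels of your Kong controller pods (an ingress rule with no `from` clause admits traffic from any source, which covers the API server calling the webhook from outside the pod network):

```shell
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-kong-webhook
  namespace: kong
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: kong
  policyTypes:
    - Ingress
  ingress:
    - ports:
        - protocol: TCP
          port: 8080
EOF
```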

Facing the same problem, and I have Kong installed. What's the fix?