Hello there, thank you for providing the details.
There can be several reasons why a worker node fails to join an EKS cluster. For example:
- In the DHCP options set for your cluster's VPC, the domain-name-servers parameter should be set to AmazonProvidedDNS.
- If you are using public subnets, the subnets' "Auto-assign public IP" setting must be enabled.
- The worker node security group must be configured to allow traffic to and from the control plane.
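Assuming the AWS CLI is configured, the first two settings can be spot-checked with commands like the following; the VPC and subnet IDs are placeholders for your own:

```shell
# Inspect the DHCP options set attached to the cluster VPC
# (expects domain-name-servers = AmazonProvidedDNS).
aws ec2 describe-dhcp-options \
  --dhcp-options-ids "$(aws ec2 describe-vpcs --vpc-ids vpc-0123456789abcdef0 \
      --query 'Vpcs[0].DhcpOptionsId' --output text)" \
  --query 'DhcpOptions[0].DhcpConfigurations'

# Check whether a public subnet auto-assigns public IPs.
aws ec2 describe-subnets --subnet-ids subnet-0123456789abcdef0 \
  --query 'Subnets[0].MapPublicIpOnLaunch'
```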
I would encourage you to check this document - https://aws.amazon.com/premiumsupport/knowledge-center/eks-worker-nodes-cluster/
If you still have issues, please reach out to AWS Premium Support. Thank you.
Thank you for your response! I'll contact AWS premium support for further diagnosis.
-
I've used this script to troubleshoot the issue: https://docs.aws.amazon.com/systems-manager-automation-runbooks/latest/userguide/automation-awssupport-troubleshooteksworkernode.html
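In case it helps others, the runbook can also be started from the CLI; a sketch, assuming ClusterName and WorkerID are the runbook's parameters (the values below are placeholders):

```shell
# Start the AWSSupport-TroubleshootEKSWorkerNode automation runbook
# for one worker instance (cluster name and instance ID are placeholders).
aws ssm start-automation-execution \
  --document-name "AWSSupport-TroubleshootEKSWorkerNode" \
  --parameters "ClusterName=my-cluster,WorkerID=i-0123456789abcdef0"

# Follow the execution afterwards with:
#   aws ssm describe-automation-executions \
#     --filters "Key=DocumentNamePrefix,Values=AWSSupport-TroubleshootEKSWorkerNode"
```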
-
I got one error: the security group policies applied to the cluster were highly restrictive and did not allow traffic to flow from the worker nodes to the cluster. This was the only error; all the other tests passed.
-
I modified the security group to allow all inbound traffic from everywhere, re-ran the script, and the error was fixed. I then redeployed my worker node group, but the nodes still did not join the cluster.
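For reference, a less permissive fix than opening all inbound traffic is to allow traffic only between the node and cluster security groups. A sketch, where sg-0aaa... stands in for the node security group and sg-0bbb... for the cluster security group:

```shell
# Allow all traffic from the cluster security group to the nodes
# (both group IDs are placeholders).
aws ec2 authorize-security-group-ingress \
  --group-id sg-0aaaaaaaaaaaaaaaa \
  --protocol -1 \
  --source-group sg-0bbbbbbbbbbbbbbbb

# Allow HTTPS from the nodes to the cluster API server (port 443).
aws ec2 authorize-security-group-ingress \
  --group-id sg-0bbbbbbbbbbbbbbbb \
  --protocol tcp --port 443 \
  --source-group sg-0aaaaaaaaaaaaaaaa
```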
-
I used Reachability Analyzer in Amazon VPC to test 3 paths:
a. user_plane worker node as the source, control_plane worker node as the destination
b. control_plane worker node as the source, bastion host as the destination
c. user_plane worker node as the source, bastion host as the destination
All 3 paths are functional.
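For anyone repeating this, the same path tests can be run with Reachability Analyzer from the CLI; the instance IDs and port below are placeholders:

```shell
# Define a network path from a worker node to the destination instance
# and start the analysis (source/destination IDs are placeholders).
PATH_ID=$(aws ec2 create-network-insights-path \
  --source i-0123456789abcdef0 \
  --destination i-0fedcba9876543210 \
  --protocol tcp --destination-port 443 \
  --query 'NetworkInsightsPath.NetworkInsightsPathId' --output text)

aws ec2 start-network-insights-analysis \
  --network-insights-path-id "$PATH_ID"
```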
-
I checked the logs of the Lambda function responsible for joining the worker nodes to the cluster but didn't find any errors!
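That log check can be reproduced with CloudWatch Logs from the CLI; the log group name below is a placeholder for the actual function's group:

```shell
# Search the last hour of the Lambda function's logs for errors
# (/aws/lambda/join-worker-nodes is a placeholder log group name).
aws logs filter-log-events \
  --log-group-name /aws/lambda/join-worker-nodes \
  --start-time "$(($(date +%s) - 3600))000" \
  --filter-pattern "ERROR"
```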