Instances failed to join the kubernetes cluster


I am attempting to set up an EKS cluster and have followed the documentation as closely as possible. The cluster endpoint is both private and public, and my worker nodes are in a private subnet.

I also have a public subnet with a jumphost that I can use to connect to the worker nodes in the private subnet for debugging.

When I attempt to create a node group, the instance boots but fails to join, and the UI shows "Instances failed to join the kubernetes cluster" with no further information anywhere. So I logged into the worker node from the jumphost, and this is what I see:

Jul 14 10:06:31 ip-10-0-60-142 kubelet: F0714 10:06:31.010038 4491 server.go:273] failed to run Kubelet: could not init cloud provider "aws": error finding instance i-0e50417a226393598: "error listing AWS instances: "RequestError: send request failed\ncaused by: Post dial tcp i/o timeout""
Jul 14 10:06:31 ip-10-0-60-142 systemd: kubelet.service: main process exited, code=exited, status=255/n/a
Jul 14 10:06:31 ip-10-0-60-142 systemd: Unit kubelet.service entered failed state.
Jul 14 10:06:31 ip-10-0-60-142 systemd: kubelet.service failed.
Jul 14 10:06:36 ip-10-0-60-142 systemd: kubelet.service holdoff time over, scheduling restart.

From the message it looks like the kubelet is not able to connect to what seems to be a public IP address for the API endpoint. Why would it connect to a public IP at all when I have enabled private access? What else is going wrong here? Can somebody from AWS help?
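A diagnostic sketch, not a definitive fix: the kubelet's "aws" cloud provider calls the regional EC2 API (not the EKS cluster endpoint), so enabling private cluster endpoint access alone does not cover this call path. The region value below is an assumption; substitute your own.

```shell
# REGION is a placeholder assumption; replace it with your cluster's region.
REGION="us-east-1"

# Regional service endpoints a node in a private subnet must be able to
# reach, either through a NAT gateway or through VPC interface endpoints:
for svc in ec2 sts; do
  echo "${svc}.${REGION}.amazonaws.com"
done

# From the worker node (via the jumphost) you could then probe each one:
#   nslookup ec2.us-east-1.amazonaws.com
#   curl -sv --max-time 5 https://ec2.us-east-1.amazonaws.com
```

If DNS resolution fails or the TCP connection times out, the private subnet likely lacks a NAT gateway or the corresponding VPC interface endpoints (com.amazonaws.&lt;region&gt;.ec2 and .sts, plus ECR and S3 endpoints for pulling images), which would explain the `dial tcp i/o timeout` in the kubelet log.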

asked 2 years ago · 372 views
1 Answer

Changing the global STS setting so that session tokens are valid in all AWS Regions seems to work.
The above can be done at
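Assuming this refers to the IAM account-level "global endpoint token version" setting (the exact console location was cut off above), a sketch of checking and changing it with the AWS CLI against your own account:

```shell
# Show the current global STS endpoint token version (1 or 2):
aws iam get-account-summary \
    --query 'SummaryMap.GlobalEndpointTokenVersion'

# Version 2 tokens issued by the global STS endpoint are valid in all
# AWS Regions:
aws iam set-security-token-service-preferences \
    --global-endpoint-token-version v2Token
```

These commands require IAM permissions on the account and do not produce output that can be verified offline; treat them as a configuration fragment to adapt, not a script to run as-is.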

answered 2 years ago
