Instances failed to join the kubernetes cluster

0

I am attempting to set up an EKS cluster and have followed the documentation as closely as possible. The cluster endpoint is both private and public, and my worker nodes are in a private subnet.
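To double-check how the control-plane endpoint is actually exposed, the EKS API can be queried directly. This is only a sketch using boto3; the cluster name below is a placeholder, and the region is taken from the logs further down.

import boto3

# Placeholder cluster name and region -- substitute your own values.
eks = boto3.client("eks", region_name="us-west-2")
cfg = eks.describe_cluster(name="my-cluster")["cluster"]["resourcesVpcConfig"]

print("endpointPublicAccess: ", cfg["endpointPublicAccess"])
print("endpointPrivateAccess:", cfg["endpointPrivateAccess"])
print("publicAccessCidrs:    ", cfg.get("publicAccessCidrs"))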

I also have a public subnet with a jumphost, so I can connect to the worker nodes in the private subnet for debugging if needed.

When I attempt to create a node group, the instance boots but fails to join, and the console only shows the message "Instances failed to join the kubernetes cluster". There is no further information anywhere, so I logged into the worker node from the jumphost, and this is what I see:

Jul 14 10:06:31 ip-10-0-60-142 kubelet: F0714 10:06:31.010038 4491 server.go:273] failed to run Kubelet: could not init cloud provider "aws": error finding instance i-0e50417a226393598: "error listing AWS instances: \"RequestError: send request failed\ncaused by: Post https://ec2.us-west-2.amazonaws.com/: dial tcp 54.240.249.157:443: i/o timeout\""
Jul 14 10:06:31 ip-10-0-60-142 systemd: kubelet.service: main process exited, code=exited, status=255/n/a
Jul 14 10:06:31 ip-10-0-60-142 systemd: Unit kubelet.service entered failed state.
Jul 14 10:06:31 ip-10-0-60-142 systemd: kubelet.service failed.
Jul 14 10:06:36 ip-10-0-60-142 systemd: kubelet.service holdoff time over, scheduling restart.

From the message it looks like the kubelet is not able to connect to what seems to be a public IP address for the API endpoint. Why should it connect to a public IP at all when I have enabled private access? What else is going wrong here? Can somebody from AWS help?
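As a basic reachability test, something like the following can be run on the worker node (via the jumphost). It only checks whether the node can open a TCP connection on port 443 to the service endpoints involved; the hostnames are taken from the kubelet error and the region in the logs, so adjust them if your setup differs.

import socket

ENDPOINTS = [
    "ec2.us-west-2.amazonaws.com",  # EC2 API the kubelet is timing out on
    "sts.us-west-2.amazonaws.com",  # regional STS endpoint
    "sts.amazonaws.com",            # global STS endpoint
]

for host in ENDPOINTS:
    try:
        # Plain TCP connect on 443 -- enough to show whether any outbound path exists.
        with socket.create_connection((host, 443), timeout=5):
            print(f"{host}: reachable")
    except OSError as exc:
        print(f"{host}: NOT reachable ({exc})")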

tlx
asked 4 years ago · 1066 views
1 Answer
0

Changing the global STS setting so that session tokens are valid in all AWS Regions seems to fix this.
This can be done at https://console.aws.amazon.com/iam/home#/account_settings
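If you prefer to change this programmatically rather than through the console, the same account setting is exposed by the IAM API. A minimal sketch with boto3, assuming credentials that are allowed to call iam:SetSecurityTokenServicePreferences:

import boto3

iam = boto3.client("iam")

# "v2Token" corresponds to the console option that makes STS session tokens
# valid in all AWS Regions.
iam.set_security_token_service_preferences(GlobalEndpointTokenVersion="v2Token")

# Verify the change via the account summary (1 = v1Token, 2 = v2Token).
summary = iam.get_account_summary()["SummaryMap"]
print("GlobalEndpointTokenVersion:", summary.get("GlobalEndpointTokenVersion"))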

sanjit
answered 3 years ago
