I have found the solution. Since I am working behind a proxy and have no outbound internet access, I need to pass the user data using overrideBootstrapCommand.
For nodegroups that have no outbound internet access, you'll need to supply --apiserver-endpoint and --b64-cluster-ca to the bootstrap script, as follows:
overrideBootstrapCommand: |
  #!/bin/bash
  source /var/lib/cloud/scripts/eksctl/bootstrap.helper.sh
  # Note: "--node-labels=${NODE_LABELS}" needs the helper above to be sourced; otherwise the labels have to be defined manually.
  /etc/eks/bootstrap.sh ${CLUSTER_NAME} --container-runtime containerd \
    --kubelet-extra-args "--node-labels=${NODE_LABELS}" \
    --apiserver-endpoint ${API_SERVER_URL} --b64-cluster-ca ${B64_CLUSTER_CA}
Because the bootstrap.helper.sh script is sourced first, it defines the variables referenced in the script above (such as CLUSTER_NAME, NODE_LABELS, API_SERVER_URL, and B64_CLUSTER_CA), so we don't need to set them manually.
Note the --node-labels setting. If it is not defined, the node will join the cluster, but eksctl will ultimately time out on the last step while waiting for the nodes to become Ready: it performs a Kubernetes lookup for nodes carrying the label alpha.eksctl.io/nodegroup-name=<nodegroup-name>. This applies only to unmanaged nodegroups.
If you have deployed a NAT gateway (or any other kind of gateway that gives the nodes outbound internet access), the minimum you must preserve when overriding the bootstrap command is the labels — eksctl relies on a specific set of labels being on the node so it can find them. In that case there is no need to provide --apiserver-endpoint and --b64-cluster-ca.
For more details, check this reference
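For context, overrideBootstrapCommand lives under a nodegroup entry in the eksctl ClusterConfig. A minimal sketch (cluster name, region, nodegroup name, and instance type below are placeholders, not values from this thread):

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster        # placeholder
  region: us-east-1       # placeholder
nodeGroups:               # unmanaged nodegroups
  - name: ng-private      # placeholder
    instanceType: m5.large
    privateNetworking: true
    overrideBootstrapCommand: |
      #!/bin/bash
      source /var/lib/cloud/scripts/eksctl/bootstrap.helper.sh
      /etc/eks/bootstrap.sh ${CLUSTER_NAME} --container-runtime containerd \
        --kubelet-extra-args "--node-labels=${NODE_LABELS}" \
        --apiserver-endpoint ${API_SERVER_URL} --b64-cluster-ca ${B64_CLUSTER_CA}
```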
Hello,
It can happen for multiple reasons. To understand what is causing the issue, SSH into one of the Ubuntu instances that failed to join the cluster and check the kubelet status by running systemctl status kubelet
. If kubelet is in the active state, check the kubelet logs for errors by running journalctl -u kubelet
This troubleshooting article explains various things to check when your nodes fail to join the cluster. Please check it out.
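The two checks above can be sketched as a small script to run on the failed node after SSHing in (this assumes a systemd-based AMI, which is what EKS-optimized and Ubuntu EKS images use; it is guarded so it degrades gracefully elsewhere):

```shell
#!/usr/bin/env bash
# Run on the node that failed to join the cluster (after SSH).
if command -v systemctl >/dev/null 2>&1; then
  systemctl status kubelet --no-pager || true       # is kubelet active, failed, or missing?
  journalctl -u kubelet --no-pager -n 100 || true   # last 100 kubelet log lines
else
  echo "systemctl not found; run this directly on the node"
fi
```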
Thanks for the answer. I did SSH into the instance and ran the command below:
systemctl status
Looks like you ran "systemctl status" which gives status of all the systemd services. Run "systemctl status kubelet" to check for kubelet status alone. If kubelet isn't started, it could be a problem with your UserData.
You are right that there is a problem with my UserData. As I am working behind a proxy and don't have a NAT or any other kind of gateway, I need to export no_proxy in my .bashrc file.
export no_proxy=cluster_API_Endpoints
Could you please let me know how I can export this no_proxy in my .bashrc file for an eksctl cluster? I have tried multiple guides but have not been successful. This is how I am trying to add no_proxy to the .bashrc file:
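One way this could be sketched is below. Everything here is an assumption, not a confirmed fix: the cluster endpoint is a placeholder (in practice you would fetch it with aws eks describe-cluster --name <cluster> --query cluster.endpoint --output text), and note that kubelet runs under systemd, which does not read .bashrc — so for the node itself the exports typically need to go into the kubelet/containerd systemd drop-ins or an eksctl preBootstrapCommands block instead:

```shell
#!/usr/bin/env bash
# Sketch: build a no_proxy value for a node behind an HTTP proxy.
# CLUSTER_ENDPOINT below is a placeholder, not a real endpoint.
CLUSTER_ENDPOINT="https://EXAMPLE1234.gr7.us-east-1.eks.amazonaws.com"
ENDPOINT_HOST="${CLUSTER_ENDPOINT#https://}"   # no_proxy entries are bare hosts, no scheme
# 169.254.169.254 is the EC2 instance metadata service; .internal covers VPC-internal DNS.
no_proxy_value="169.254.169.254,.internal,${ENDPOINT_HOST}"
echo "export no_proxy=${no_proxy_value}" >> "${HOME}/.bashrc"
echo "${no_proxy_value}"
```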