Are you familiar with Private cluster requirements?
You can communicate with the K8s API by deploying an EC2 instance inside that VPC and pointing your kubectl at the EKS K8s API endpoint.
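As a rough sketch of what "pointing your kubectl at the EKS K8s API endpoint" can look like from an instance inside the VPC (the cluster name and region below are placeholders, not taken from this thread; the instance needs the AWS CLI, kubectl, and IAM credentials that are mapped into the cluster):

```bash
# Run from an EC2 instance/bastion inside the cluster's VPC.

# Write the cluster's private API endpoint and CA certificate into ~/.kube/config
aws eks update-kubeconfig --name my-private-cluster --region eu-west-1

# Verify connectivity to the private API endpoint
kubectl get svc
kubectl get nodes
```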
It looks like eksctl currently needs the K8s API to be public while creating managed node groups so that it can verify the deployment; after that, eksctl turns the cluster API endpoint to private only. (I originally wrote that the nodes need to be able to talk to the internet while being created.) As Venkat notes in the other answer, eksctl temporarily keeps the API public so that the tool itself can communicate with the K8s API.
So when you use the template above with existing VPCs, the managed node group expects access to the public endpoint instead of the private one.
When I used your template and removed the managed node group, the cluster was created successfully. After that I was able to use the EKS console (or CLI) to create a managed node group (using AmazonLinux2 instances) whose nodes were able to join this fully private cluster.
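If you go the CLI route, a minimal sketch of creating such a node group might look like this (every name, subnet ID, and the role ARN below is a placeholder):

```bash
# Create a managed node group in the existing private subnets (placeholder values)
aws eks create-nodegroup \
  --cluster-name my-private-cluster \
  --nodegroup-name private-ng \
  --ami-type AL2_x86_64 \
  --instance-types t3.large \
  --scaling-config minSize=1,maxSize=3,desiredSize=2 \
  --subnets subnet-0aaa1111bbbb2222c subnet-0ddd3333eeee4444f \
  --node-role arn:aws:iam::111122223333:role/eksNodeRole

# Wait for the node group to become ACTIVE
aws eks describe-nodegroup \
  --cluster-name my-private-cluster \
  --nodegroup-name private-ng \
  --query nodegroup.status
```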
We do have a Terraform example of how to create a completely private cluster that isn't exposed publicly at any point in its life cycle. You can find the workshop guiding you through this deployment here.
I would suggest creating an issue on the eksctl GitHub repo about the missing functionality during cluster creation.
Hello,
When you create a fully private cluster, eksctl will initially set the API server endpoint to "public" to let the eksctl CLI communicate with the API server to check node status, create required Kubernetes objects, and set up other components. Once the required steps are completed, it flips the API server endpoint access to "private only" as the last step of the creation process.
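For reference, the config that requests this fully private behaviour from eksctl looks roughly like the sketch below (cluster name and region are placeholders; see the eksctl fully-private cluster docs for all related options):

```bash
# Minimal eksctl ClusterConfig sketch for a fully private cluster (placeholder values)
cat > private-cluster.yaml <<'EOF'
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-private-cluster
  region: eu-west-1
privateCluster:
  enabled: true   # endpoint is flipped to private-only at the end of creation
EOF

eksctl create cluster -f private-cluster.yaml
```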
Once the cluster is created, you will not be able to perform kubectl commands from outside the VPC since you won't have network access to the API server.
If you have a bastion host running within the VPC, you can run your kubectl commands on that bastion host to communicate with the cluster.
Please be advised that the nodes need to pull VPC-CNI and kube-proxy images from ECR during the node bootstrapping process. For this, you'll need to enable VPC Endpoints as mentioned in this doc for a fully private cluster.
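As a hedged sketch of that step (the authoritative endpoint list is in the linked doc; all IDs below are placeholders), creating the endpoints with the AWS CLI could look like this:

```bash
REGION=eu-west-1
VPC_ID=vpc-0123456789abcdef0

# Interface endpoints the nodes typically need in a fully private cluster
for SVC in ecr.api ecr.dkr ec2 sts; do
  aws ec2 create-vpc-endpoint \
    --vpc-id "$VPC_ID" \
    --vpc-endpoint-type Interface \
    --service-name "com.amazonaws.${REGION}.${SVC}" \
    --subnet-ids subnet-0aaa1111bbbb2222c subnet-0ddd3333eeee4444f \
    --security-group-ids sg-0123456789abcdef0 \
    --private-dns-enabled
done

# Gateway endpoint for S3 (ECR stores image layers in S3)
aws ec2 create-vpc-endpoint \
  --vpc-id "$VPC_ID" \
  --vpc-endpoint-type Gateway \
  --service-name "com.amazonaws.${REGION}.s3" \
  --route-table-ids rtb-0123456789abcdef0
```

The security group attached to the interface endpoints must allow inbound HTTPS (443) from the node subnets, otherwise image pulls will still time out.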
To find out why your nodes are unable to join the cluster, log in to one of the failed nodes and run the journalctl -u kubelet command to get the kubelet logs. This will help you identify whether the issue is related to networking, authentication, or something else.
For further troubleshooting, please run the eks-log-collector script on your failed node to collect all the logs required to identify the problem.
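A minimal sketch of running it (the download URL is the location usually referenced for the script in the awslabs/amazon-eks-ami repo; please verify it against the official doc before running):

```bash
# Run on the failed node
curl -O https://raw.githubusercontent.com/awslabs/amazon-eks-ami/master/log-collector-script/linux/eks-log-collector.sh
sudo bash eks-log-collector.sh
# The script writes a tarball under /var/log (eks_*.tar.gz) that you can attach to a support case
```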
If you are unable to find out the reason, please feel free to open a support case and provide the above mentioned logs and an AWS engineer can investigate your issue further.
I hope this helps!
Thanks @Venkat Penmetsa for the answer. You mentioned that "Once the cluster is created, you will not be able to perform kubectl commands from outside the VPC since you won't have network access to the API server". So I created an instance in a private subnet of the private VPC and then deployed my cluster from that instance. I have only created the cluster, not any managed or self-managed nodes. The problem is that even though I am inside the VPC, I still can't run kubectl commands; I get:
Unable to connect to the server: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
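That timeout usually points to a network-level problem (security groups, routing, or DNS for the private endpoint) rather than authentication. A couple of checks worth running from that same instance, with the cluster name and region below as placeholders:

```bash
# Confirm the endpoint access settings and note the cluster security group
aws eks describe-cluster --name my-private-cluster --region eu-west-1 \
  --query 'cluster.resourcesVpcConfig.{private:endpointPrivateAccess,public:endpointPublicAccess,sg:clusterSecurityGroupId}'

# Check raw connectivity to the API endpoint on port 443:
# any HTTP response (even 401/403) means the network path is fine;
# a timeout means the instance is not allowed by the cluster security group
# or cannot route to / resolve the private endpoint.
ENDPOINT=$(aws eks describe-cluster --name my-private-cluster --region eu-west-1 \
  --query 'cluster.endpoint' --output text)
curl -k --connect-timeout 5 "$ENDPOINT/version"
```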
Just to add, it looks like a fully private cluster with eksctl works as long as I use AmazonLinux2 images and not the Ubuntu2004 image.

Thanks @Toni_S for your very informative comment. Your answer is very clear, but due to my lack of knowledge I don't understand how you managed to do "pointing your kubectl at the EKS K8s API endpoint". Is there any link or guide that I could follow to achieve this? Thanks again.