
Fully private EKS cluster


Hi, I have a fully private VPC named HSCN, with no internet access, containing 2 public and 2 private subnets. This VPC is peered with another VPC, let's say internet-vpc. I want to deploy my fully private EKS cluster in the private subnets of the HSCN VPC. I have followed the private cluster requirements.

I am not deploying any pods yet, so I don't need the repository for now. As for the 2nd and 3rd requirements, eksctl takes care of them by itself.

The problem is that when I deploy the cluster, my node instances fail to join. Secondly, my kubectl and eksctl commands time out, which means I am not able to get cluster or node information.

Below is my cluster config:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: test-cluster
  region: eu-west-2
  version: "1.23"

privateCluster:
  enabled: true
  additionalEndpointServices:
  - "autoscaling"

vpc:
  id: vpc-id
  subnets:
    private:
      hscn-1-subnet:
        id: subnet-id
      hscn-2-subnet:
        id: subnet-id

managedNodeGroups:
  - name: serv-test-1
    instanceType: m5.xlarge
    desiredCapacity: 1
    volumeType: gp2
    volumeSize: 50
    privateNetworking: true
    amiFamily: Ubuntu2004
    subnets:
      - hscn-2-subnet
    ssh:
      allow: true
    labels:
      role: role
    tags:
      nodegroup-role: testing

It is clear that my nodes and kubectl commands are not able to communicate with the Kubernetes API server endpoint.

Is there even a way to deploy a cluster in a setup like the one mentioned above? If yes, could someone please guide me on how to deploy a fully functional cluster in this setup?

Thanks

2 Answers

Are you familiar with the private cluster requirements? You can communicate with the K8s API by deploying an EC2 instance inside that VPC and pointing your kubectl at the EKS K8s API endpoint.
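For example, from an EC2 instance inside the VPC (assuming the AWS CLI is installed there and its credentials are mapped in the cluster's aws-auth ConfigMap), you could point kubectl at the cluster like this, using the cluster name and region from your config:

# Write the private cluster endpoint and credentials into ~/.kube/config
aws eks update-kubeconfig --name test-cluster --region eu-west-2

# Quick smoke test against the API server
kubectl get svc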

It looks like eksctl currently needs the K8s API to be public while creating managed node groups so that it can verify the deployment; after that, eksctl turns the cluster API endpoint to private only. As Venkat notes in the other answer, eksctl temporarily keeps the API public so that the tool itself can communicate with the K8s API.

So when you use the template above with existing VPCs, the managed node group expects to have access to the public endpoint instead of the private one.

When I used your template and removed the managed node group, the cluster was created successfully. After that, I was able to use the EKS console (or CLI) to create a managed node group (using AmazonLinux2 instances) whose nodes were able to join this fully private cluster.
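As a rough sketch of that second step, the managed node group can be created afterwards with the AWS CLI along these lines; the node role ARN and subnet ID below are placeholders you would replace with your own values:

# Create a managed node group against the already-private cluster
aws eks create-nodegroup \
  --cluster-name test-cluster \
  --nodegroup-name serv-test-1 \
  --ami-type AL2_x86_64 \
  --instance-types m5.xlarge \
  --scaling-config minSize=1,maxSize=1,desiredSize=1 \
  --subnets subnet-xxxxxxxx \
  --node-role arn:aws:iam::111122223333:role/eksNodeRole \
  --region eu-west-2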

We do have a Terraform example of how to create a completely private cluster that isn't publicly exposed at any point in its life cycle. You can find the workshop guiding you through this deployment here.

I would suggest creating an issue on the eksctl GitHub repo about the features missing during cluster creation.

AWS
Expert
answered 2 years ago
  • Just to add: it looks like a fully private cluster with eksctl works as long as I use AmazonLinux2 images and not the Ubuntu2004 image.

  • Thanks @Toni_S for your very informative comment. Your answer is very clear, but due to my lack of knowledge I don't understand how you managed to do "defining the EKS K8s API to your kubectl". Is there any link or guide that I could follow to achieve this? Thanks again.


Hello,

When you create a fully private cluster, eksctl initially sets the API server endpoint to "public" to let the eksctl CLI communicate with the API server to check node status, create the required Kubernetes objects, and set up other components. Once those steps are completed, it flips the API server endpoint access to "private only" as the last step of the creation process.
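If you want to verify or change the endpoint access yourself, you can do it with the AWS CLI; for example:

# Check the current endpoint access settings
aws eks describe-cluster --name test-cluster --region eu-west-2 \
  --query 'cluster.resourcesVpcConfig.{public:endpointPublicAccess,private:endpointPrivateAccess}'

# Flip the API server endpoint to private-only access
aws eks update-cluster-config --name test-cluster --region eu-west-2 \
  --resources-vpc-config endpointPublicAccess=false,endpointPrivateAccess=true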

Once the cluster is created, you will not be able to run kubectl commands from outside the VPC, since you won't have network access to the API server.

If you have a bastion host running within the VPC, you can run your kubectl commands on that bastion host to communicate with the cluster.
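A quick way to confirm the bastion actually has a network path to the private endpoint is a sketch like the following (assuming bash, the AWS CLI, and nc are available on the bastion):

# Look up the cluster's API server endpoint
ENDPOINT=$(aws eks describe-cluster --name test-cluster --region eu-west-2 \
  --query 'cluster.endpoint' --output text)
HOST=${ENDPOINT#https://}

# From inside the VPC this should resolve to private IPs and connect on 443
nslookup "$HOST"
nc -zv "$HOST" 443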

Please be advised that the nodes need to pull the VPC CNI and kube-proxy images from ECR during the node bootstrapping process. For this, you'll need to enable VPC endpoints, as mentioned in this doc for a fully private cluster.
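For reference, eksctl's privateCluster mode creates these endpoints for you, but if you manage the VPC yourself, a minimal sketch of the required endpoints in eu-west-2 looks like this (the subnet, security group, and route table IDs are placeholders):

# Interface endpoints the nodes need in a fully private cluster
for SVC in ec2 ecr.api ecr.dkr sts logs; do
  aws ec2 create-vpc-endpoint \
    --vpc-id vpc-id \
    --vpc-endpoint-type Interface \
    --service-name com.amazonaws.eu-west-2.$SVC \
    --subnet-ids subnet-xxxxxxxx \
    --security-group-ids sg-xxxxxxxx \
    --private-dns-enabled
done

# Gateway endpoint for S3, since ECR stores image layers in S3
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-id \
  --vpc-endpoint-type Gateway \
  --service-name com.amazonaws.eu-west-2.s3 \
  --route-table-ids rtb-xxxxxxxx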

To find out why your nodes are unable to join the cluster, log in to one of the failed nodes and run the journalctl -u kubelet command to get the kubelet logs. This will let you identify whether the issue is related to networking, authentication, or something else.
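For example, on the failed node:

# Dump the most recent kubelet logs
sudo journalctl -u kubelet --no-pager | tail -n 100

# Narrow down to likely failure causes
sudo journalctl -u kubelet --no-pager | grep -iE 'error|unauthorized|timeout'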

For further troubleshooting, please run the eks-log-collector script on your failed node to collect all the logs required to identify the problem.
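The script lives in the awslabs/amazon-eks-ami GitHub repository; as of the time of writing, you can fetch and run it like this:

# Download and run the EKS log collector (bundles node logs into a tarball)
curl -O https://raw.githubusercontent.com/awslabs/amazon-eks-ami/master/log-collector-script/linux/eks-log-collector.sh
sudo bash eks-log-collector.sh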

If you are unable to find out the reason, please feel free to open a support case and provide the logs mentioned above, and an AWS engineer can investigate your issue further.

I hope this helps!

AWS
Support Engineer
answered 2 years ago
Expert
reviewed 1 year ago
  • Thanks @Venkat Penmetsa for the answer. You mentioned that "once the cluster is created, you will not be able to perform kubectl commands from outside the VPC since you won't have network access to the API server". So I created an instance in the private subnet of the private VPC and then deployed my cluster from that instance. I have only created the cluster, not any managed or self-managed nodes. The problem is that even though I am now inside the VPC, why can I still not run kubectl commands? Unable to connect to the server: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
