Trying to create a Private Managed node EKS cluster


I am trying to learn how to create a private-networking managed node cluster in EKS but keep running into missing pieces. In my latest iteration of the code below, I now have a cluster with a managed node group (no nodes show up in the EKS GUI, but they do show up in EC2). Also, when I now try to run kubectl commands I get the following:

E0508 14:10:01.658189 2481850 memcache.go:265] couldn't get current server API group list: Get "https://AFD90ASFU0890W4ERW.gr7.us-east-1.eks.amazonaws.com/api?timeout=32s": dial tcp: lookup AFD90ASFU0890W4ERW.gr7.us-east-1.eks.amazonaws.com on 10.102.74.21:53: no such host
E0508 14:10:01.661965 2481850 memcache.go:265] couldn't get current server API group list: Get "https://AFD90ASFU0890W4ERW.gr7.us-east-1.eks.amazonaws.com/api?timeout=32s": dial tcp: lookup AFD90ASFU0890W4ERW.gr7.us-east-1.eks.amazonaws.com on 10.102.74.21:53: no such host
E0508 14:10:01.665651 2481850 memcache.go:265] couldn't get current server API group list: Get "https://AFD90ASFU0890W4ERW.gr7.us-east-1.eks.amazonaws.com/api?timeout=32s": dial tcp: lookup AFD90ASFU0890W4ERW.gr7.us-east-1.eks.amazonaws.com on 10.102.74.21:53: no such host
E0508 14:10:01.669504 2481850 memcache.go:265] couldn't get current server API group list: Get "https://AFD90ASFU0890W4ERW.gr7.us-east-1.eks.amazonaws.com/api?timeout=32s": dial tcp: lookup AFD90ASFU0890W4ERW.gr7.us-east-1.eks.amazonaws.com on 10.102.74.21:53: no such host
E0508 14:10:01.672827 2481850 memcache.go:265] couldn't get current server API group list: Get "https://AFD90ASFU0890W4ERW.gr7.us-east-1.eks.amazonaws.com/api?timeout=32s": dial tcp: lookup AFD90ASFU0890W4ERW.gr7.us-east-1.eks.amazonaws.com on 10.102.74.21:53: no such host
Unable to connect to the server: dial tcp: lookup AFD90ASFU0890W4ERW.gr7.us-east-1.eks.amazonaws.com on 10.102.74.21:53: no such host

I get no other errors, either on the command line from the eksctl commands or in CloudFormation. Can someone help?

eksctl create cluster --dumpLogs --verbose 5 --auto-kubeconfig --timeout 60m -f 01-tt-dev-us1-eks-pipeline-prod.yaml

cat 01-tt-dev-us1-eks-pipeline-prod.yaml
---
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: 'ttdev-us1-pipe-prod'
  region: 'us-east-1'
  version: '1.26'
  tags:
    CostCenter: 'DEV'
    Environment: 'Production'
    NMS-Management: 'Enabled'
    Security: 'DEV'

vpc:
  id: "vpc-00000000000000"
  cidr: "172.00.00.0/20"
  subnets:
    private:
      frontend-a:
        id: "subnet-11111111111"
      frontend-b:
        id: "subnet-22222222222"
  clusterEndpoints:
    privateAccess: true
    publicAccess: false
  securityGroup: "sg-333333333333"

cloudWatch:
  clusterLogging:
    enableTypes: [ 'api', 'audit', 'authenticator', 'controllerManager', 'scheduler' ]

secretsEncryption:
  keyARN: 'arn:aws:kms:us-east-1:444444444444444:key/98709ef0-000000-44bd-000000-e1c4ab753a6f'

eksctl create nodegroup --dumpLogs --verbose 5 --timeout 60m -f 05-tt-dev-us1-eks-pipeline-prod-nodegroup.yaml

cat 05-tt-dev-us1-eks-pipeline-prod-nodegroup.yaml
---
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: 'ttdev-us1-pipe-prod'
  region: 'us-east-1'
  version: '1.26'
  tags:
    CostCenter: 'DEV'
    Environment: 'Production'
    NMS-Management: 'Enabled'
    Security: 'DEV'

vpc:
  id: "vpc-00000000000000"
  cidr: "172.00.00.0/20"
  subnets:
    private:
      frontend-a:
        id: "subnet-11111111111"
      frontend-b:
        id: "subnet-22222222222"
  clusterEndpoints:
    privateAccess: true
    publicAccess: false
  securityGroup: "sg-333333333333"

managedNodeGroups:
  - name: jenkins-pipeline-mng-1
    desiredCapacity: 2
    privateNetworking: true
    instanceType: t3.small
    subnets:
      - 'frontend-a'
      - 'frontend-b'
    tags:
      Name: ttdev-us1-eks-tmci-pipeline
      cluster-name: ttdev-us1-eks-tmci-prod
      nodegroup-name: jenkins-pipe-mng-1
    ssh:
      allow: true
      publicKeyName: 'DEV US-East-1 VPC Instance'
    iam:
      withAddonPolicies:
        cloudWatch: true
        externalDNS: true
    volumeSize: 80
    volumeType: gp3
    volumeEncrypted: true
    volumeKmsKeyID: 'arn:aws:kms:us-east-1:444444444444444:key/e9713f2f-000000-4caf-000000-17d27d2b1a91'


cloudWatch:
  clusterLogging:
    enableTypes: [ 'api', 'audit', 'authenticator', 'controllerManager', 'scheduler' ]

secretsEncryption:
  keyARN: 'arn:aws:kms:us-east-1:444444444444444:key/98709ef0-000000-44bd-000000-e1c4ab753a6f'

eksctl utils associate-iam-oidc-provider --region=us-east-1 --cluster=ttdev-us1-pipe-prod --approve

eksctl create addon --force --dumpLogs --verbose 5 --timeout 60m -f 20-tt-dev-us1-eks-pipeline-prod-addons.yaml

cat 20-tt-dev-us1-eks-pipeline-prod-addons.yaml
---
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: 'ttdev-us1-pipe-prod'
  region: 'us-east-1'
  version: '1.26'
  tags:
    CostCenter: 'DEV'
    Environment: 'Production'
    NMS-Management: 'Enabled'
    Security: 'DEV'

iam:
  withOIDC: true

addons:
- name: vpc-cni
  attachPolicyARNs:
    - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
  version: latest
- name: coredns
  version: latest
- name: kube-proxy
  version: latest
- name: aws-ebs-csi-driver
  wellKnownPolicies:      # add IAM and service account
    ebsCSIController: true

asked a year ago · 463 views
1 Answer

Hello,

When creating a fully private EKS cluster using the eksctl CLI, you will need to specify the setting below, as described in the eksctl CLI docs:

privateCluster:
  enabled: true

I don't see this setting in the ClusterConfig file you posted. Please add it, retry the operation, and see if it helps.
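For reference, a minimal sketch of where that key sits in the ClusterConfig (metadata values taken from the question). Per the eksctl schema, `privateCluster` is a top-level key, a sibling of `vpc` and `metadata`:

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: 'ttdev-us1-pipe-prod'
  region: 'us-east-1'

# Fully private cluster: eksctl provisions only a private API endpoint
# and, per the eksctl docs, creates the VPC endpoints the cluster needs.
privateCluster:
  enabled: true
```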

Based on the error message, it looks like there is no network connectivity between the system where you are running the kubectl commands and the EKS API server.

Note: If your EKS cluster allows only private endpoint access, you can run kubectl commands only from within the EKS VPC or from a network connected to that VPC. For more info, please refer to https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html
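As a quick check, a sketch like the following (endpoint hostname taken from the error output above) shows whether the current host can resolve the private endpoint at all; `getent` queries the same system resolver that kubectl uses:

```shell
# Sketch: test whether the cluster's private API endpoint resolves from
# this host. A private-only endpoint resolves only from inside the VPC
# or a network connected to it, which is exactly what the "no such host"
# errors above say is missing here.
check_resolves() {
  # strip an https:// scheme if present, then query the system resolver
  host="${1#https://}"
  getent hosts "$host" >/dev/null 2>&1
}

# Hostname from the error message in the question:
if check_resolves "https://AFD90ASFU0890W4ERW.gr7.us-east-1.eks.amazonaws.com"; then
  echo "endpoint resolves from this host"
else
  echo "endpoint does not resolve: run kubectl from inside the VPC or a connected network"
fi
```

If the lookup fails from your workstation but succeeds from a bastion in the VPC, the cluster itself is fine and only your network path needs fixing.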

Please add a comment if you are still facing issues or need additional information. Thank you!

AWS SUPPORT ENGINEER · answered a year ago
  • So when I add:

        privateCluster:
          enabled: true

    two things happen:

    1. I am told that in 01-tt-dev-us1-eks-pipeline-prod.yaml I cannot have both privateCluster and:

           clusterEndpoints:
             privateAccess: true
             publicAccess: false

       So I commented out the clusterEndpoints lines.

    2. CloudFormation errors out with:

           Resource handler returned message: "route table rtb-0aca387cb0000000 already has a route with destination-prefix-list-id pl-6000000 (Service: Ec2, Status Code: 400, Request ID: c4d7f570-5a5e-40a2-aa02-d369f75a36da)" (RequestToken: fdd4d1d5-7905-9a9a-dacc-a5d3b4a1bb22, HandlerErrorCode: GeneralServiceException)

    The route table above belongs to the VPC being used and provides routes to the transit gateways, among other things. Not sure why eksctl is trying to modify it.
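One possible way out of the route-table conflict: when the VPC already has the required VPC endpoints and routes (as a transit-gateway-connected VPC often does), the eksctl docs describe a `skipEndpointCreation` flag that tells eksctl not to create them. Treat this as a sketch to verify against the docs for your eksctl version:

```yaml
# Sketch, assuming the VPC already provides the endpoints and routes
# that eksctl would otherwise create (the route-table conflict above
# suggests overlapping routes already exist):
privateCluster:
  enabled: true
  skipEndpointCreation: true   # do not create VPC endpoints or modify route tables
```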
