
Give cluster admin access to EKS worker nodes.


We have an EKS cluster running version 1.21. We want to give admin access to the worker nodes, so we modified the aws-auth ConfigMap and added "system:masters" to the groups for the EKS worker node role. Below is the snippet of the modified ConfigMap.

data:
  mapRoles: |
    - groups:
      - system:nodes
      - system:bootstrappers
      - system:masters
      rolearn: arn:aws:iam::686143527223:role/terraform-eks-worker-node-role
      username: system:node:{{EC2PrivateDNSName}}
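
For reference, one way to apply and verify a change like this, assuming kubectl is already configured against the cluster, is to edit the ConfigMap in the kube-system namespace directly and then read it back:

kubectl edit -n kube-system configmap/aws-auth
# confirm the mapRoles entry looks as intended
kubectl get -n kube-system configmap/aws-auth -o yaml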

After adding this section, the EKS worker nodes got admin access to the cluster successfully. However, in the EKS dashboard the node groups are in a degraded state, showing the error below in the Health issues section.

Your worker nodes do not have access to the cluster. Verify if the node instance role is present and correctly configured in the aws-auth ConfigMap.

3 Answers

Hi @abhinav,

Confirm that your control plane's security group and worker node security group are configured with the recommended settings for inbound and outbound traffic. Also, confirm that your custom network ACL rules are configured to allow traffic to and from "0.0.0.0/0" for ports 80, 443, and 1025-65535.

Refer: https://aws.amazon.com/premiumsupport/knowledge-center/eks-worker-nodes-cluster/
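
For example, the current inbound and outbound rules on the control plane and worker node security groups can be reviewed with the AWS CLI; something like the following could be used (the security group IDs are placeholders):

aws ec2 describe-security-groups \
  --group-ids <CONTROL_PLANE_SG_ID> <WORKER_NODE_SG_ID> \
  --query 'SecurityGroups[].{Id:GroupId,Inbound:IpPermissions,Outbound:IpPermissionsEgress}'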


If the answer is helpful, please click Accept Answer and upvote; this can be beneficial to other community members.

answered 21 days ago
  • It's all fine. Whenever I remove the system:masters line, everything works fine. Something is wrong with that line.

  • I removed system:masters from the EKS worker node role and instead gave the Jenkins agent pods access via a service account bound to the cluster-admin role through a ClusterRoleBinding, and the error was gone (a rough sketch of that setup is below).
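
For anyone taking the same route, a minimal sketch of that ClusterRoleBinding setup might look like the following (the namespace and service account name are illustrative, not from the original environment):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins-agent    # illustrative name
  namespace: jenkins     # illustrative namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: jenkins-agent-cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin    # built-in admin ClusterRole
subjects:
- kind: ServiceAccount
  name: jenkins-agent
  namespace: jenkins

The Jenkins agent pod spec would then reference this service account via serviceAccountName.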


I've tested this with EKS 1.22 and 1.23 and was able to reproduce it. Aside from the node group health issues surfaced in the console and via the CLI command below, I saw no actual degradation of the nodes.

aws eks describe-nodegroup --cluster-name <CLUSTER_NAME> --nodegroup-name <NG_NAME>
...
        "health": {
            "issues": [
                {
                    "code": "AccessDenied",
                    "message": "Your worker nodes do not have access to the cluster. Verify if the node instance role is present and correctly configured in the aws-auth ConfigMap.",
                    "resourceIds": [
                        "eksctl-<CLUSTER_NAME>-nodegroup-<NG_NAME>-<NODE_INSTANCE_ROLE>"
                    ]
                }
            ]
        },
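
If you only want the health section, the same call can be filtered with the CLI's --query option, for example:

aws eks describe-nodegroup \
  --cluster-name <CLUSTER_NAME> \
  --nodegroup-name <NG_NAME> \
  --query 'nodegroup.health.issues'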

I think this error message is benign. While the console reports an unhealthy node group, the individual nodes show as healthy.

kubectl get nodes -oyaml | grep conditions -A 30
answered 16 days ago
  • Yeah, everything is working fine; it's just the EKS dashboard that shows unhealthy. Because of that we were not able to update the cluster and also could not run our Terraform code, since it errored out saying the cluster is unhealthy. I removed system:masters from the EKS worker node role and gave the Jenkins agent pods access via a service account bound to the cluster-admin role through a ClusterRoleBinding; things worked fine and these errors were gone.

  • How were you able to reproduce this error?


I reproduced this with your settings and am looking deeper into it.

answered 7 days ago
