Hi there,
I've got an issue with an EKS managed node group that I cannot delete.
I provisioned the EKS cluster with Terraform and configured aws-auth following the documentation below:
https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html
I have admin access to the cluster, deployments are running, and worker nodes are connected. My aws-auth ConfigMap is shown below:
$ kubectl describe configmap -n kube-system aws-auth
Name:         aws-auth
Namespace:    kube-system
Labels:       <none>
Annotations:  <none>

Data
====
mapRoles:
----
- rolearn: arn:aws:iam::000000000000:role/cluster-role
  username: system:node:{{EC2PrivateDNSName}}
  groups:
    - system:bootstrappers
    - system:nodes
- rolearn: arn:aws:iam::000000000000:role/node-group-role
  username: system:node:{{EC2PrivateDNSName}}
  groups:
    - system:bootstrappers
    - system:nodes
- rolearn: arn:aws:iam::000000000000:role/aws-reserved/sso.amazonaws.com/eu-west-2/AWSReservedSSO_AdministratorAccess
  username: {{SessionName}}
  groups:
    - system:masters
Events: <none>
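Since kubectl describe flattens the stored YAML, here is how I'm pulling the raw mapRoles data to double-check quoting and indentation (standard kubectl only, nothing cluster-specific assumed):

```shell
# Dump the ConfigMap exactly as stored, rather than the
# flattened `describe` rendering above.
kubectl get configmap aws-auth -n kube-system -o yaml

# Extract just the mapRoles document for a closer look at
# indentation and quoting.
kubectl get configmap aws-auth -n kube-system \
  -o jsonpath='{.data.mapRoles}'
```

The raw output matches what I pasted above as far as I can tell.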
When I attempt to delete the node group, either via Terraform or the AWS Console, the deletion fails with the following error listed under Health Issues:
"AccessDenied The aws-auth ConfigMap in your cluster is invalid."
I did not get this error when I created the node group, and I can't work out what exactly is wrong with the ConfigMap.
Any suggestions?