I am using Terraform to deploy AWS infrastructure, and I am stuck with an EKS node group in Deleting status. The node group health shows that the IAM role is not found. I recreated that role and tried to update the node group from the CLI, and it says:
"An error occurred (ResourceInUseException) when calling the UpdateNodegroupVersion operation: Nodegroup cannot be updated as it is currently not in Active State"
I am unable to delete the cluster or the node group, either from the CLI or from Terraform. In my Terraform code the node group's instance type is SPOT t3.medium, but when I checked the node group from the CLI I saw on-demand t3.large instances. There is no such config in my Terraform code.
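For reference, this is roughly how I am checking the node group from the CLI (the --query selection is just to narrow the output; the cluster and region names are my real ones). This is where I see the DELETING status, the IAM-role health issue, and the on-demand t3.large instances:

# Show the node group status, health issues, capacity type and instance types
aws eks describe-nodegroup \
  --cluster-name eks-gd360 \
  --nodegroup-name private-nodes \
  --region eu-west-1 \
  --query 'nodegroup.{status:status,health:health,capacityType:capacityType,instanceTypes:instanceTypes}'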
main.tf:
module "vpc" {
source = "./modules/vpc"
cluster_name = var.cluster_name
}
module "eks" {
source = "./modules/eks"
cluster_name = var.cluster_name
private_subnet_ids = module.vpc.private_subnet_ids
subnet_ids = module.vpc.subnet_ids
}
module "karpenter" {
source = "./modules/karpenter"
eks-nodeName = module.eks.eks-data
eks-connect-provider-url = module.eks.aws_iam_openid_connect_provider
eks-connect-provider-arn = module.eks.aws_iam_openid_connect_provider-arn
cluster_name = module.eks.cluster_name
private-nodes = module.eks.aws_eks_node_group-private-nodes
}
nodes.tf:
resource "aws_eks_node_group" "private-nodes" {
# count = var.delete_nodegroup ? 1 : 0
cluster_name = aws_eks_cluster.cluster.name
node_group_name = "private-nodes"
node_role_arn = aws_iam_role.nodes.arn
# subnet_ids = [
# aws_subnet.private-eu-west-1a.id,
# aws_subnet.private-eu-west-1b.id
# ]
subnet_ids = var.private_subnet_ids
capacity_type = "SPOT"
instance_types = ["t3.medium"]
scaling_config {
desired_size = 1
max_size = 6
min_size = 1
}
update_config {
max_unavailable = 1
}
labels = {
role = "general"
}
depends_on = [
aws_iam_role_policy_attachment.nodes-AmazonEKSWorkerNodePolicy,
aws_iam_role_policy_attachment.nodes-AmazonEKS_CNI_Policy,
aws_iam_role_policy_attachment.nodes-AmazonEC2ContainerRegistryReadOnly,
]
# Allow external changes without Terraform plan difference
lifecycle {
ignore_changes = [scaling_config[0].desired_size]
}
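On the Terraform side, the delete attempt is essentially a targeted destroy of just the node group; the exact resource address below is my assumption based on how the modules are wired above:

# Targeted destroy of the managed node group only
# (address assumes the resource is defined inside the "eks" module)
terraform destroy -target='module.eks.aws_eks_node_group.private-nodes'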
From the CLI, I used this command to delete the node group:

aws eks delete-nodegroup --cluster-name eks-gd360 --nodegroup-name private-nodes --region eu-west-1
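After issuing the delete, this is roughly how I am watching it; the node group never leaves DELETING:

# Poll the node group status
aws eks describe-nodegroup \
  --cluster-name eks-gd360 \
  --nodegroup-name private-nodes \
  --region eu-west-1 \
  --query 'nodegroup.status' --output text

# Or block until the node group is actually gone
aws eks wait nodegroup-deleted \
  --cluster-name eks-gd360 \
  --nodegroup-name private-nodes \
  --region eu-west-1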
I have followed these docs:

Deleting a managed node group: https://docs.aws.amazon.com/eks/latest/userguide/delete-managed-node-group.html
Deleting an EKS cluster: https://docs.aws.amazon.com/eks/latest/userguide/delete-cluster.html#w237aac13c27b9b3
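For completeness, the cluster delete I expect to run once the node group is actually gone (it currently fails, which I believe is because the node group still exists) is just:

# Cluster delete only succeeds after all node groups have been removed
aws eks delete-cluster --name eks-gd360 --region eu-west-1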