EKS Network Load Balancer Service


Hello,

I have an EKS cluster (Terraform code below) and followed the guide to set up the AWS Load Balancer Controller (https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html). But when I deploy the service (Terraform code below) and try to expose it via type "LoadBalancer", it stays in a pending state and no external address is assigned. The Load Balancer Controller logs the following error:

Error from kubectl logs pod/aws-load-balancer-controller-5b57cdc6cc-dtjbg -n kube-system:

{"level":"error","ts":1640857282.2362676,"logger":"controller-runtime.manager.controller.service","msg":"Reconciler error","name":"terraform-example","namespace":"default","error":"AccessDenied: User: arn:aws:sts::009661972061:assumed-role/my-cluster2021123008214425030000000b/i-0a40de3c4e8541004 is not authorized to perform: elasticloadbalancing:CreateTargetGroup on resource: arn:aws:elasticloadbalancing:eu-central-1:009661972061:targetgroup/k8s-default-terrafor-630f67813d/* because no identity-based policy allows the elasticloadbalancing:CreateTargetGroup action\n\tstatus code: 403, request id: 2491099a-a6fd-4e6f-bab8-3c758eda0d0b"}

If I manually attach the AWSLoadBalancerControllerIAMPolicy to the my-cluster2021123008214425030000000b role, it works. But as far as I understand the documentation, the AWSLoadBalancerControllerIAMPolicy is meant for the controller in the kube-system namespace, not for the worker nodes.

Is there anything missing from the documentation? Or what is the intended way of solving this?

best regards rene

Terraform EKS:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.27"
    }
  }

  required_version = ">= 0.14.9"
}

provider "aws" {
  profile = "default"
  region  = "eu-central-1"
}

data "aws_eks_cluster" "eks" {
  name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "eks" {
  name = module.eks.cluster_id
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.eks.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.eks.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.eks.token
}

module "eks" {

  source          = "terraform-aws-modules/eks/aws"

  cluster_version = "1.21"
  cluster_name    = "my-cluster"
  vpc_id          = "vpc-xx"
  subnets         = ["subnet-xx", "subnet-xx", "subnet-xx"]

  worker_groups = [
    {
      instance_type = "t3.medium"
      asg_max_size  = 5
      role_arn = "arn:aws:iam::xxx:role/worker-node-example"
    }
  ]
}

Terraform service:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.27"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.0.1"
    }
  }

  required_version = ">= 0.14.9"
}

provider "kubernetes" {
  host                   = "xxx"
  cluster_ca_certificate = base64decode("xxx")
  exec {
    api_version = "client.authentication.k8s.io/v1alpha1"
    command     = "aws"
    args = [
      "eks",
      "get-token",
      "--cluster-name",
      "my-cluster"
    ]
  }
}
provider "aws" {
  profile = "default"
  region  = "eu-central-1"
}


resource "aws_sqs_queue" "gdpr_queue" {
  name                        = "terraform-example-queue.fifo"
  fifo_queue                  = true
  content_based_deduplication = true
  sqs_managed_sse_enabled = true
}

resource "aws_sqs_queue" "private_data_queue" {
  name                        = "terraform-example-private-data-queue.fifo"
  fifo_queue                  = true
  content_based_deduplication = true
  sqs_managed_sse_enabled = true
}


resource "aws_db_instance" "database" {
  allocated_storage    = 10
  engine               = "postgres"
  engine_version       = "13.3"
  instance_class       = "db.t3.micro"
  name                 = "mydb"
  username             = "foo"
  password             = "foobarbaz"
  skip_final_snapshot  = true
  vpc_security_group_ids = [aws_security_group.basic_security_group.id]

}
resource "aws_security_group" "basic_security_group" {
  name        = "allow rds connection"
  description = "Allow rds traffic"
  vpc_id      = "vpc-xxx"

  ingress {
    description      = "postgres"
    from_port        = 5432
    to_port          = 5432
    protocol         = "tcp"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

}

resource "kubernetes_service" "gdpr-hub-service" {
  metadata {
    name = "terraform-example"
    annotations = {
      "service.beta.kubernetes.io/aws-load-balancer-type" = "external"
      "service.beta.kubernetes.io/aws-load-balancer-nlb-target-type" = "ip"
      "service.beta.kubernetes.io/aws-load-balancer-scheme" : "internet-facing"
    }
  }

  spec {
    selector = {
      App = kubernetes_deployment.gdpr-hub-service-deployment.spec.0.template.0.metadata.0.labels.App
    }
    session_affinity = "ClientIP"
    port {
      port        = 80
      target_port = 8080
    }

    type = "LoadBalancer"
  }
}

resource "kubernetes_deployment" "gdpr-hub-service-deployment" {


  depends_on = [
    aws_db_instance.database,
    aws_sqs_queue.gdpr_queue,
    aws_sqs_queue.private_data_queue
  ]

  metadata {
    name = "gdpr-hub-service"

    labels = {
      App = "gdpr-hub-service"
    }
  }

  spec {
    replicas = 2
    selector {
      match_labels = {
        App = "gdpr-hub-service"
      }
    }
    template {
      metadata {
        labels = {
          App = "gdpr-hub-service"
        }
      }
      spec {
        container {
          image = "xxxx"
          name  = "gdpr-hub-service"

          port {
            container_port = 8080
          }

          resources {
            limits = {
              cpu    = "2"
              memory = "1024Mi"
            }
            requests = {
              cpu    = "250m"
              memory = "50Mi"
            }
          }
        }
      }
    }
  }
}

renes
asked 2 years ago · 1961 views
2 Answers

Hello there, EKS has a concept called IAM Roles for Service Accounts (IRSA). With IRSA, an IAM role can be assigned to a Service Account object in Kubernetes, and that Service Account is assigned to a pod.

When the AWS SDK (part of the aws-load-balancer-controller pod) calls AWS APIs, it detects the IRSA configuration and uses the IAM role configured for the pod (via environment variables).

The worker node credentials act as a fallback, so the node role is only used when no IRSA configuration is applied.

From the error message above, it looks like the IRSA configuration is missing. Please check Step 3 of the installation guide you linked.
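
Since the rest of your setup is in Terraform, here is a rough sketch of what that step provisions on the AWS side: an IAM OIDC provider for the cluster and an IAM role whose trust policy is scoped to the controller's service account. The resource names are assumptions, the policy ARN is a placeholder, and it reuses the aws_eks_cluster data source from your EKS configuration; treat it as a starting point, not a drop-in replacement for the eksctl command.

# Sketch (assumed names): IRSA plumbing for the aws-load-balancer-controller.
# Requires the hashicorp/tls provider for the OIDC thumbprint lookup.

data "tls_certificate" "oidc" {
  url = data.aws_eks_cluster.eks.identity[0].oidc[0].issuer
}

# IAM OIDC provider for the cluster (eksctl normally creates this for you).
resource "aws_iam_openid_connect_provider" "eks" {
  url             = data.aws_eks_cluster.eks.identity[0].oidc[0].issuer
  client_id_list  = ["sts.amazonaws.com"]
  thumbprint_list = [data.tls_certificate.oidc.certificates[0].sha1_fingerprint]
}

# Trust policy: only the controller's service account in kube-system may
# assume the role via web identity.
data "aws_iam_policy_document" "lb_controller_assume" {
  statement {
    actions = ["sts:AssumeRoleWithWebIdentity"]

    principals {
      type        = "Federated"
      identifiers = [aws_iam_openid_connect_provider.eks.arn]
    }

    condition {
      test     = "StringEquals"
      variable = "${replace(data.aws_eks_cluster.eks.identity[0].oidc[0].issuer, "https://", "")}:sub"
      values   = ["system:serviceaccount:kube-system:aws-load-balancer-controller"]
    }
  }
}

resource "aws_iam_role" "lb_controller" {
  name               = "aws-load-balancer-controller" # assumed name
  assume_role_policy = data.aws_iam_policy_document.lb_controller_assume.json
}

resource "aws_iam_role_policy_attachment" "lb_controller" {
  role       = aws_iam_role.lb_controller.name
  policy_arn = "arn:aws:iam::111122223333:policy/AWSLoadBalancerControllerIAMPolicy" # placeholder ARN
}

The eksctl create iamserviceaccount command from Step 3 does the same thing in one shot and additionally creates (or annotates) the aws-load-balancer-controller service account in kube-system.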

If you have created the IAM role and still do not see the configuration, validate it with the following commands:

1. Check the annotations on the aws-load-balancer-controller Service Account:
kubectl describe serviceaccount <aws-load-balancer-controller-sa-name-here> -n kube-system
2. Describe the aws-load-balancer-controller pod and check for the IRSA environment variables:
kubectl describe pod <aws-load-balancer-controller-pod-id-here> -n kube-system | grep -i "AWS_ROLE_ARN"
kubectl describe pod <aws-load-balancer-controller-pod-id-here> -n kube-system | grep -i "AWS_WEB_IDENTITY_TOKEN_FILE"
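
What those checks look for is the IRSA annotation eks.amazonaws.com/role-arn on the service account and the corresponding environment variables injected into the pod. If you prefer to manage that service account with Terraform instead of eksctl, a minimal sketch (referencing the role from the sketch above; the names remain assumptions) would be:

# Sketch: controller service account carrying the IRSA role annotation.
# This is roughly the object that "eksctl create iamserviceaccount" creates.
resource "kubernetes_service_account" "aws_load_balancer_controller" {
  metadata {
    name      = "aws-load-balancer-controller"
    namespace = "kube-system"

    annotations = {
      "eks.amazonaws.com/role-arn" = aws_iam_role.lb_controller.arn
    }
  }
}

If the controller itself is installed via the Helm chart, the chart typically needs to be pointed at this existing service account (serviceAccount.create=false, serviceAccount.name=aws-load-balancer-controller) so it does not create its own unannotated one.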
AWS
sai
answered 2 years ago

Hello,

Thanks for your answer. My output is:

➜ cluster git:(infrastructure-playground) ✗ kubectl describe serviceaccount aws-load-balancer-controller -n kube-system
Name:                aws-load-balancer-controller
Namespace:           kube-system
Labels:              app.kubernetes.io/component=controller
                     app.kubernetes.io/name=aws-load-balancer-controller
Annotations:         <none>
Image pull secrets:  <none>
Mountable secrets:   aws-load-balancer-controller-token-fdxw5
Tokens:              aws-load-balancer-controller-token-fdxw5
Events:              <none>

So there are no annotations. Step 3 only says:

eksctl create iamserviceaccount \
  --cluster=my_cluster \
  --namespace=kube-system \
  --name=aws-load-balancer-controller \
  --attach-policy-arn=arn:aws:iam::111122223333:policy/AWSLoadBalancerControllerIAMPolicy \
  --override-existing-serviceaccounts \
  --approve

So, should this create any annotations? If so, which one?

BR rene

answered 2 years ago
