Use HTTP between backend k8s services and AWS LoadBalancer Controller, and HTTPS between client and AWS LoadBalancer Controller


I want to use HTTP between backend k8s services and AWS LoadBalancer Controller, and HTTPS between client and AWS LoadBalancer Controller.

With my current setup below (ingress.yaml, main.tf, service.yaml) I use HTTPS at every communication level. The Argo CD website at argo.goldendevops.com works fine over HTTPS, but the website goldendevops.com, which uses an Nginx Deployment and Service, doesn't work, because with this setup I haven't set up certificates for Nginx. I get the error below in the Nginx pod (my container simply isn't prepared to receive HTTPS):

/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: can not modify /etc/nginx/conf.d/default.conf (read-only file system?)
/docker-entrypoint.sh: Sourcing /docker-entrypoint.d/15-local-resolvers.envsh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2023/07/13 17:21:29 [notice] 1#1: using the "epoll" event method
2023/07/13 17:21:29 [notice] 1#1: nginx/1.25.1
2023/07/13 17:21:29 [notice] 1#1: built by gcc 12.2.0 (Debian 12.2.0-14)
2023/07/13 17:21:29 [notice] 1#1: OS: Linux 5.10.184
2023/07/13 17:21:29 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 65536:1048576
2023/07/13 17:21:29 [notice] 1#1: start worker processes
2023/07/13 17:21:29 [notice] 1#1: start worker process 21
2023/07/13 17:21:29 [notice] 1#1: start worker process 22
10.0.102.221 - - [13/Jul/2023:17:22:15 +0000] "\x16\x03\x01\x00\x90\x01\x00\x00\x8C\x03\x03\xF3l\xD0\xB5\xCF\x08F\xB6n\xE0Dq\x92o\xE5\x84\xE7_sT\xFD\xB986B\xDE\xAA\xA4\xCE\x1F\xE5\xDC\x00\x00&\xC0+\xC0/\xC0#\xC0'\xC0\x09\xC0\x13\xC0,\xC00\xC0$\xC0(\xC0\x14\xC0" 400 157 "-" "-" "-"
10.0.102.221 - - [13/Jul/2023:17:22:23 +0000] "\x16\x03\x01\x00\x90\x01\x00\x00\x8C\x03\x03\x9DB\xBF\xBF\xAAD0\xB4\xF8\x1C\xA3s\x06\xC8\xEC!\xC2K\xC0\xF6[\x12\xA2\x03q\xE7\xDCR\x93\xB6M\xE3\x00\x00&\xC0+\xC0/\xC0#\xC0'\xC0\x09\xC0\x13\xC0,\xC00\xC0$\xC0(\xC0\x14\xC0" 400 157 "-" "-" "-"

Currently I also use HTTPS for internal communication, which I want to change to HTTP only. When I try to switch the internal communication to HTTP myself with the three steps below, the Argo CD website starts returning "ERR_TOO_MANY_REDIRECTS" and the Nginx container logs "http: TLS Handshake error":

  • Change "alb.ingress.kubernetes.io/backend-protocol: HTTPS" to "HTTP"
  • Change the service ports in ingress.yaml and service.yaml to 80
  • Delete "alb.ingress.kubernetes.io/backend-protocol-version: HTTP2" (because it is only supported with HTTPS)

ingress.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/group.name: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/backend-protocol: HTTPS
    # Use this annotation (which must match a service name) to route traffic to HTTP2 backends.
    alb.ingress.kubernetes.io/conditions.argogrpc: |
      [{"field":"http-header","httpHeaderConfig":{"httpHeaderName": "Content-Type", "values":["application/grpc"]}}]
    alb.ingress.kubernetes.io/conditions.golden-devops-helm-release: |
      [{"field":"http-header","httpHeaderConfig":{"httpHeaderName": "Content-Type", "values":["application/grpc"]}}]
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:11111111111
    alb.ingress.kubernetes.io/ssl-redirect: '443'
  name: argocd
  namespace: argocd
spec:
  rules:
  - host: argo.goldendevops.com
    http:
      paths:
      - path: /
        backend:
          service:
            name: argogrpc
            port:
              number: 443
            namespace: argocd
        pathType: Prefix
      - path: /
        backend:
          service:
            name: argocd-server
            port:
              number: 443
            namespace: argocd
        pathType: Prefix
  - host: goldendevops.com
    http:
      paths:
      - path: /
        backend:
          service:
            name: golden-devops-helm-release
            port:
              number: 443
            namespace: argocd
        pathType: Prefix
      - path: /
        backend:
          service:
            name: golden-devops-helm-release
            port:
              number: 443
            namespace: argocd
        pathType: Prefix
  tls:
  - hosts:
    - argo.goldendevops.com
    - goldendevops.com

Service for Argo CD (the one for Nginx is very similar):

apiVersion: v1
kind: Service
metadata:
  annotations:
    alb.ingress.kubernetes.io/backend-protocol-version: HTTP2 #This tells AWS to send traffic from the ALB using HTTP2. Can use GRPC as well if you want to leverage GRPC specific features
  labels:
    app: argogrpc
  name: argogrpc
  namespace: argocd
spec:
  ports:
  - name: "443"
    port: 443
    protocol: TCP
    targetPort: 8080
  selector:
    app.kubernetes.io/name: argocd-server
  sessionAffinity: None
  type: NodePort

main.tf:

locals {
  account_id = data.aws_caller_identity.current.account_id
  tags = {
    Name      = "${var.cluster_name}"
    Project   = "eks-demo"
    ManagedBy = "terraform"
  }
}

module "eks" {
  source                          = "terraform-aws-modules/eks/aws"
  version                         = "18.29.1"
  cluster_name                    = var.cluster_name
  cluster_version                 = var.cluster_version
  cluster_endpoint_private_access = true
  cluster_endpoint_public_access  = true
  enable_irsa                     = true

  cluster_addons = {
    coredns = {
      resolve_conflicts = "OVERWRITE"
    }
    kube-proxy = {}
    vpc-cni = {
      resolve_conflicts = "OVERWRITE"
    }
  }

  vpc_id     = data.terraform_remote_state.vpc.outputs.vpc_id
  subnet_ids = data.terraform_remote_state.vpc.outputs.public_subnets

  manage_aws_auth_configmap = true

  aws_auth_users = [
    {
      userarn  = "arn:aws:iam::${local.account_id}:user/jakubszuber-admin"
      username = "cluster-admin"
      groups   = ["system:masters"]
    },
  ]

  # Extend cluster security group rules
  cluster_security_group_additional_rules = {
    egress_nodes_ephemeral_ports_tcp = {
      description                = "To node 1025-65535"
      protocol                   = "tcp"
      from_port                  = 1025
      to_port                    = 65535
      type                       = "egress"
      source_node_security_group = true
    }
  }

  node_security_group_additional_rules = {
    ingress_allow_access_from_control_plane = {
      type                          = "ingress"
      protocol                      = "tcp"
      from_port                     = 9443
      to_port                       = 9443
      source_cluster_security_group = true
      description                   = "Allow access from control plane to webhook port of AWS load balancer controller"
    }
    ingress_self_all = {
      description = "Node to node all ports/protocols"
      protocol    = "-1"
      from_port   = 0
      to_port     = 0
      type        = "ingress"
      self        = true
    }
    ingress_all = {
      description      = "Node all ingress"
      protocol         = "-1"
      from_port        = 0
      to_port          = 0
      type             = "ingress"
      cidr_blocks      = ["0.0.0.0/0"]
      ipv6_cidr_blocks = ["::/0"]
    }
    egress_all = {
      description      = "Node all egress"
      protocol         = "-1"
      from_port        = 0
      to_port          = 0
      type             = "egress"
      cidr_blocks      = ["0.0.0.0/0"]
      ipv6_cidr_blocks = ["::/0"]
    }
  }

  eks_managed_node_groups = {
    bottlerocket_nodes = {
      ami_type      = "BOTTLEROCKET_x86_64"
      platform      = "bottlerocket"
      min_size      = 1
      max_size      = 2
      desired_size  = 1      # TODO change those 3 numbers
      capacity_type = "SPOT"

      # this will get added to what AWS provides
      bootstrap_extra_args = <<-EOT
      # extra args added
      [settings.kernel]
      lockdown = "integrity"
      EOT
    }
  }
}

I would be so thankful for any help. I have been working on this problem for many hours, and with my level of experience (I am 17) it is quite difficult for me. I would really appreciate any help from you guys!

Btw, here are the article and repository I was using for my project: https://fewmorewords.com/eks-with-argocd-using-terraform and https://github.com/jayanath/k8-eks-argocd-terraform/

2 Answers
Accepted Answer

You can follow steps like these:

Step 1: Create a ClusterIssuer. Create a file named cluster-issuer.yaml with the following content:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: your-email@example.com
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            class: nginx

In this example, we are creating a ClusterIssuer named letsencrypt-prod that uses the HTTP-01 challenge type for domain validation. Adjust the email field to your email address.

Apply the ClusterIssuer configuration by running the following command:

kubectl apply -f cluster-issuer.yaml

Step 2: Create a Certificate resource. Create a file named certificate.yaml with the following content:

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: nginx-certificate
  namespace: your-namespace
spec:
  secretName: nginx-tls-secret
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  commonName: example.com
  dnsNames:
    - example.com

In this example, we are creating a Certificate resource named nginx-certificate in the specified namespace. Adjust the commonName and dnsNames fields to match your domain name(s).

Apply the Certificate configuration by running the following command:

kubectl apply -f certificate.yaml

Step 3: Wait for the certificate to be issued. You can monitor the status of the certificate issuance by running the following command:

kubectl describe certificate nginx-certificate -n your-namespace

Wait until the certificate is issued, and the status shows that it is ready.

Step 4: Use the Secret. After the certificate is issued, cert-manager will automatically create a Secret containing the SSL certificates.
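For reference, the Secret cert-manager creates is a standard kubernetes.io/tls Secret. The tls.crt and tls.key key names are fixed by that Secret type; the data values below are placeholders filled in by cert-manager:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: nginx-tls-secret
  namespace: your-namespace
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate chain>  # written by cert-manager
  tls.key: <base64-encoded private key>        # written by cert-manager
```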

To use the certificates in your Nginx deployment, you need to mount the Secret as a volume. Modify your existing Nginx deployment YAML file (nginx-deployment.yaml) to add the following volume and volume mount configurations under spec.template.spec:

spec:
  volumes:
    - name: nginx-tls-secret
      secret:
        secretName: nginx-tls-secret
  containers:
    - name: nginx
      image: nginx:latest
      ports:
        - containerPort: 80
          name: http
        - containerPort: 443
          name: https
      volumeMounts:
        - name: nginx-tls-secret
          mountPath: /etc/nginx/ssl
          readOnly: true

Replace the nginx-deployment.yaml file with the updated configuration and apply it to create or update your Nginx deployment:

kubectl apply -f nginx-deployment.yaml

With these steps, cert-manager will handle the certificate provisioning and automatic renewal. Your Nginx deployment will use the SSL certificates mounted from the Secret, enabling HTTPS access securely.
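Note that mounting the Secret alone is not enough: Nginx also has to be configured to serve TLS from those files. A minimal sketch of a server block, assuming the /etc/nginx/ssl mount path from the Deployment above and the standard kubernetes.io/tls key names; in practice you would supply this via a ConfigMap mounted into /etc/nginx/conf.d/ (a hypothetical setup, not shown in the question):

```nginx
server {
    listen 443 ssl;
    server_name goldendevops.com;

    # Paths follow the volumeMount above; tls.crt and tls.key are the
    # standard key names in a kubernetes.io/tls Secret.
    ssl_certificate     /etc/nginx/ssl/tls.crt;
    ssl_certificate_key /etc/nginx/ssl/tls.key;

    location / {
        root  /usr/share/nginx/html;
        index index.html;
    }
}
```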

answered 10 months ago

To use HTTP between your backend Kubernetes services and the AWS LoadBalancer Controller, and HTTPS between the client and the AWS LoadBalancer Controller, you need to make the following changes:

Change the alb.ingress.kubernetes.io/backend-protocol annotation in your ingress.yaml file to HTTP instead of HTTPS:

metadata:
  annotations:
    ...
    alb.ingress.kubernetes.io/backend-protocol: HTTP
    ...

Update the service ports in your ingress.yaml and service.yaml files to use port 80 instead of 443. This change will reflect the use of HTTP:

spec:
  ports:
  - name: "80"
    port: 80
    protocol: TCP
    targetPort: 8080

You can remove the alb.ingress.kubernetes.io/backend-protocol-version annotation, because it is only applicable when using HTTPS. After making these changes, apply the updated configuration to your cluster. Once the changes are applied, communication between the backend Kubernetes services and the AWS LoadBalancer Controller will use HTTP, and communication between the client and the AWS LoadBalancer Controller will use HTTPS.
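For completeness, the matching change in ingress.yaml is to point the backends at port 80. Using the service name from the question's manifest, one rule would look roughly like:

```yaml
- host: goldendevops.com
  http:
    paths:
    - path: /
      pathType: Prefix
      backend:
        service:
          name: golden-devops-helm-release
          port:
            number: 80
```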

However, please note that if you want HTTPS all the way to the Nginx deployment itself, you will need to handle TLS termination in Nginx and obtain an SSL certificate for it. Alternatively, you can use an ingress controller like Nginx Ingress or Traefik to handle TLS termination for you. These ingress controllers can automatically provision SSL certificates using services like Let's Encrypt.
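As a sketch of that alternative, an Ingress handled by the NGINX Ingress Controller can request a certificate from cert-manager with a single annotation. This assumes a ClusterIssuer named letsencrypt-prod (as in the accepted answer); the Ingress and Secret names here are illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: goldendevops
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod  # assumed ClusterIssuer name
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - goldendevops.com
    secretName: goldendevops-tls  # cert-manager stores the issued certificate here
  rules:
  - host: goldendevops.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: golden-devops-helm-release
            port:
              number: 80
```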

answered 10 months ago
  • I access the Nginx deployment directly using HTTP, so I just have to create k8s resources like cert-manager, a ClusterIssuer, a Certificate, a Secret, etc., right?
