How can I automate the configuration of HTTP proxy for Amazon EKS containerd nodes?


I want to automate the configuration of HTTP proxy for Amazon Elastic Kubernetes Service (Amazon EKS) nodes with containerd runtime.

Short description

For managed node groups created in Amazon EKS version 1.23 or earlier, the default container runtime is Docker. If this applies to you, then be sure to follow all steps in the resolution to specify a containerd runtime. For managed node groups created in Amazon EKS version 1.24 or later, the default container runtime is containerd.

To use containerd in your managed node group instead of dockerd, you must specify a containerd runtime in userdata.

After you switch your managed node group to a containerd runtime, create a custom launch template with your AMI ID. You can then configure the settings for your HTTP proxy and the environment values of your cluster.

Note: The following resolution applies only to nodes where the underlying runtime is containerd, and doesn't apply to nodes with Docker runtime. For nodes with Docker runtime, see How can I automate the configuration of HTTP proxy for Amazon EKS worker nodes with Docker?


Create a custom launch template

  1. Specify containerd as the runtime in your managed node group. In userdata, pass the --container-runtime containerd option to the bootstrap script (/etc/eks/bootstrap.sh).
  2. Create a custom launch template with the AMI ID. Otherwise, the managed node group merges userdata automatically when the AMI ID isn't specified.
  3. Set the proxy configuration for containerd, sandbox-image, and the kubelet. The sandbox-image service unit pulls the sandbox image for containerd. To set this configuration, see the sandbox-image.service and scripts on GitHub.
  4. You can now describe your userdata with the following fields:
    Note: Replace XXXXXXX:3128, YOUR_CLUSTER_CA, API_SERVER_ENDPOINT, and EKS_CLUSTER_NAME with your relevant proxy, cluster CA, server endpoint, and cluster name. You can add AWS service endpoints to NO_PROXY and no_proxy after you create their VPC endpoints.
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="==BOUNDARY=="

--==BOUNDARY==
Content-Type: text/cloud-boothook; charset="us-ascii"

#Set the proxy hostname and port
PROXY="XXXXXXX:3128"

#Fetch an IMDSv2 token, then look up the node's MAC and the VPC CIDR blocks
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
MAC=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/mac/)
VPC_CIDR=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/network/interfaces/macs/$MAC/vpc-ipv4-cidr-blocks | xargs | tr ' ' ',')

#Create the containerd and sandbox-image systemd directory
mkdir -p /etc/systemd/system/containerd.service.d
mkdir -p /etc/systemd/system/sandbox-image.service.d

#[Option] Configure yum to use the proxy
cloud-init-per instance yum_proxy_config cat << EOF >> /etc/yum.conf
proxy=http://$PROXY
EOF

#Set the proxy for future processes, and use as an include file
cloud-init-per instance proxy_config cat << EOF >> /etc/environment
http_proxy=http://$PROXY
https_proxy=http://$PROXY
HTTP_PROXY=http://$PROXY
HTTPS_PROXY=http://$PROXY
no_proxy=$VPC_CIDR,localhost,127.0.0.1,169.254.169.254,.internal
NO_PROXY=$VPC_CIDR,localhost,127.0.0.1,169.254.169.254,.internal
EOF

#Configure containerd with the proxy
cloud-init-per instance containerd_proxy_config tee <<EOF /etc/systemd/system/containerd.service.d/http-proxy.conf >/dev/null
[Service]
EnvironmentFile=/etc/environment
EOF

#Configure sandbox-image with the proxy
cloud-init-per instance sandbox-image_proxy_config tee <<EOF /etc/systemd/system/sandbox-image.service.d/http-proxy.conf >/dev/null
[Service]
EnvironmentFile=/etc/environment
EOF

#Configure the kubelet with the proxy
cloud-init-per instance kubelet_proxy_config tee <<EOF /etc/systemd/system/kubelet.service.d/proxy.conf >/dev/null
[Service]
EnvironmentFile=/etc/environment
EOF

cloud-init-per instance reload_daemon systemctl daemon-reload 

--==BOUNDARY==
Content-Type: text/x-shellscript; charset="us-ascii"

#!/bin/bash

set -o xtrace

#Set the proxy variables before running the script
set -a
source /etc/environment

#Run the bootstrap script
/etc/eks/bootstrap.sh EKS_CLUSTER_NAME --b64-cluster-ca YOUR_CLUSTER_CA --apiserver-endpoint API_SERVER_ENDPOINT --container-runtime containerd

--==BOUNDARY==--
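The VPC_CIDR line in the boothook joins the newline-separated CIDR blocks that the instance metadata service returns into the comma-separated list that no_proxy expects. You can check that transformation locally; the CIDR values here are illustrative:

```shell
# IMDS returns one VPC CIDR block per line; these values are illustrative.
CIDR_BLOCKS=$'192.168.0.0/16\n10.1.0.0/24'

# xargs collapses the lines into one space-separated string,
# and tr turns the spaces into the commas that no_proxy expects.
VPC_CIDR=$(printf '%s\n' "$CIDR_BLOCKS" | xargs | tr ' ' ',')
echo "$VPC_CIDR"    # 192.168.0.0/16,10.1.0.0/24
```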


Configure the proxy setting for aws-node and kube-proxy

Create a ConfigMap to configure the environment values. Then, apply it in your cluster. Use the following manifest as an example for your ConfigMap.

Note: Replace KUBERNETES_SERVICE_CIDR_RANGE and VPC_CIDR_RANGE with the relevant values for your CIDR ranges. You can add AWS service endpoints to NO_PROXY and no_proxy after you create their VPC endpoints.

apiVersion: v1
kind: ConfigMap
metadata:
   name: proxy-environment-variables
   namespace: kube-system
data:
   HTTP_PROXY: http://XXXXXXX:3128
   HTTPS_PROXY: http://XXXXXXX:3128
   NO_PROXY: KUBERNETES_SERVICE_CIDR_RANGE,VPC_CIDR_RANGE,localhost,127.0.0.1,169.254.169.254,.internal

Then, set your HTTP proxy configuration to aws-node and kube-proxy:

$ kubectl patch -n kube-system -p '{ "spec": {"template":{ "spec": { "containers": [ { "name": "aws-node", "envFrom": [ { "configMapRef": {"name": "proxy-environment-variables"} } ] } ] } } } }' daemonset aws-node
$ kubectl patch -n kube-system -p '{ "spec": {"template":{ "spec": { "containers": [ { "name": "kube-proxy", "envFrom": [ { "configMapRef": {"name": "proxy-environment-variables"} } ] } ] } } } }' daemonset kube-proxy 
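If a patch is rejected, a malformed payload is a common cause. Before patching, you can sanity-check that the JSON payload is well-formed; a minimal local sketch (the scratch-file path is arbitrary):

```shell
# Write the aws-node patch payload (the same JSON as in the kubectl patch
# command above) to a scratch file; the path is arbitrary.
cat > /tmp/proxy-patch.json <<'EOF'
{ "spec": {"template":{ "spec": { "containers": [ { "name": "aws-node", "envFrom": [ { "configMapRef": {"name": "proxy-environment-variables"} } ] } ] } } } }
EOF

# Validate the payload with Python's stdlib JSON parser before applying it.
python3 -m json.tool < /tmp/proxy-patch.json > /dev/null && echo "patch JSON OK"
```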

Create a managed node group

Create a new managed node group that uses the custom launch template that you previously created. Follow the steps in Creating a managed node group.
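If you use eksctl instead of the console, the nodegroup that consumes the custom launch template can be declared in a ClusterConfig file. This is a sketch under the assumption that you use eksctl; the launch template ID, nodegroup name, and region are illustrative placeholders:

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: EKS_CLUSTER_NAME          # your cluster name
  region: ap-northeast-1          # illustrative region
managedNodeGroups:
  - name: proxy-nodegroup         # illustrative nodegroup name
    launchTemplate:
      id: lt-0123456789abcdef0    # illustrative: ID of your custom launch template
      version: "1"                # launch template version to use
```

Create the nodegroup with eksctl create nodegroup --config-file followed by the path to this file.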

Test your proxy

To check the status of your nodes, run the following commands:

$ kubectl get nodes
$ kubectl run test-pod --image=amazonlinux:2 --restart=Never -- sleep 300
$ kubectl get pods -A

You receive an output similar to the following example:

$ kubectl get nodes -o wide
NAME                                                 STATUS   ROLES    AGE     VERSION                INTERNAL-IP       EXTERNAL-IP   OS-IMAGE         KERNEL-VERSION                 CONTAINER-RUNTIME
ip-192-168-100-114.ap-northeast-1.compute.internal   Ready    <none>   2m27s   v1.23.13-eks-fb459a0   192.168.100.114   <none>        Amazon Linux 2   5.4.219-126.411.amzn2.x86_64   containerd://1.6.6

$ kubectl run test-pod --image=amazonlinux:2 --restart=Never -- sleep 300
pod/test-pod created

$ kubectl get pods -A
NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE
default       test-pod                   1/1     Running   0          14s
kube-system   aws-node-cpjcl             1/1     Running   0          3m34s
kube-system   coredns-69cfddc4b4-c7rpd   1/1     Running   0          26m
kube-system   coredns-69cfddc4b4-z5jxq   1/1     Running   0          26m
kube-system   kube-proxy-g2f4g           1/1     Running   0          3m34s

Check your proxy log for additional information on your nodes' connectivity:

TCP_TUNNEL/200 6230 CONNECT - HIER_DIRECT/XX.XX.XX.XX -
TCP_TUNNEL/200 10359 CONNECT - HIER_DIRECT/XX.XX.XX.XX -
TCP_TUNNEL/200 6633 CONNECT - HIER_DIRECT/XX.XX.XX.XX -
TCP_TUNNEL/200 10353 CONNECT - HIER_DIRECT/XX.XX.XX.XX -
TCP_TUNNEL/200 8767 CONNECT - HIER_DIRECT/XX.XX.XX.XX -
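The entries above are squid-style access-log lines. To count how many CONNECT requests tunneled successfully through the proxy, you can grep for TCP_TUNNEL/200; a minimal sketch against a sample log (the log path, timestamps, hosts, and IPs are illustrative):

```shell
# Sample squid-style access-log lines; all values are illustrative.
cat > /tmp/access.log <<'EOF'
1679990000.001    120 192.168.100.114 TCP_TUNNEL/200 6230 CONNECT eks.ap-northeast-1.amazonaws.com:443 - HIER_DIRECT/XX.XX.XX.XX -
1679990000.002    110 192.168.100.114 TCP_MISS/403 0 CONNECT blocked.example.com:443 - HIER_NONE/- -
EOF

# Count successful CONNECT tunnels; only the first sample line matches.
grep -c 'TCP_TUNNEL/200' /tmp/access.log    # 1
```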

Related information

How do I provide access to other AWS Identity and Access Management (IAM) users and roles after cluster creation in Amazon EKS?

AWS OFFICIAL | Updated 8 days ago
