How do I use multiple CIDR ranges with Amazon EKS?
I want to use multiple CIDR ranges with Amazon Elastic Kubernetes Service (Amazon EKS) to address issues with my pods. For example, how do I run pods with different CIDR ranges added to my Amazon Virtual Private Cloud (Amazon VPC)? Also, how can I add more IP addresses to my subnet when my subnet runs out of IP addresses? Finally, how can I be sure that pods running on worker nodes have different IP ranges?
Short description
Before you complete the steps in the Resolution section, confirm that you have the following:
- A running Amazon EKS cluster
- AWS Command Line Interface (AWS CLI) version 1.16.284 or later
- AWS Identity and Access Management (IAM) permissions to manage an Amazon VPC
- kubectl configured with permissions to create custom resources and edit the DaemonSet
- jq installed on your system (available from the jq website)
- A Unix-based system with a Bash shell
Keep in mind:
- You can associate private (RFC 1918) and public (non-RFC 1918) CIDR blocks to your VPC before or after you create your cluster.
- In scenarios with carrier-grade network address translation (NAT), 100.64.0.0/10 is a private network range. This range is used in shared address space for communications between a service provider and its subscribers.
- You must have a NAT gateway configured at the route table so that pods can communicate with the internet.
- DaemonSets aren't supported on AWS Fargate clusters. To add secondary CIDR ranges to Fargate profiles, use subnets from your VPC's secondary CIDR blocks. Then, tag your new subnets before adding the subnets to your Fargate profile.
Important: In certain circumstances, Amazon EKS can't communicate with nodes launched in subnets from additional CIDR blocks added to a VPC after a cluster is created. An updated range caused by adding CIDR blocks to an existing cluster can take as long as five hours to appear.
Resolution
Note: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent AWS CLI version.
In the following resolution, first set up your VPC. Then, configure the CNI plugin to use a new CIDR range.
Add additional CIDR ranges to expand your VPC network
1. Find your VPCs.
If your VPCs have a tag, then run the following command to find your VPC:
VPC_ID=$(aws ec2 describe-vpcs --filters Name=tag:Name,Values=yourVPCName | jq -r '.Vpcs[].VpcId')
If your VPCs don't have a tag, then run the following command to list all the VPCs in your AWS Region:
aws ec2 describe-vpcs --filters | jq -r '.Vpcs[].VpcId'
2. To store the ID of the VPC that you want to use in the VPC_ID variable, run the following command:
export VPC_ID=vpc-xxxxxxxxxxxx
3. To associate an additional CIDR block with the range 100.64.0.0/16 to the VPC, run the following command:
aws ec2 associate-vpc-cidr-block --vpc-id $VPC_ID --cidr-block 100.64.0.0/16
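(Optional) To confirm that the new CIDR block is associated with the VPC, you can run a check similar to the following. This is a minimal sketch that uses the same jq-based approach as the earlier commands:
aws ec2 describe-vpcs --vpc-ids $VPC_ID | jq -r '.Vpcs[].CidrBlockAssociationSet[] | .CidrBlock + " " + .CidrBlockState.State'
The 100.64.0.0/16 block should be listed with a state of associated.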
Create subnets with a new CIDR range
1. To list all the Availability Zones in your AWS Region, run the following command:
aws ec2 describe-availability-zones --region us-east-1 --query 'AvailabilityZones[*].ZoneName'
Note: Replace us-east-1 with your AWS Region.
2. Choose the Availability Zones where you want to add the subnets, and then assign those Availability Zones to variables. For example:
export AZ1=us-east-1a
export AZ2=us-east-1b
export AZ3=us-east-1c
Note: You can add more Availability Zones by creating more variables.
3. To create new subnets under the VPC with the new CIDR range, run the following commands:
CUST_SNET1=$(aws ec2 create-subnet --cidr-block 100.64.0.0/19 --vpc-id $VPC_ID --availability-zone $AZ1 | jq -r .Subnet.SubnetId)
CUST_SNET2=$(aws ec2 create-subnet --cidr-block 100.64.32.0/19 --vpc-id $VPC_ID --availability-zone $AZ2 | jq -r .Subnet.SubnetId)
CUST_SNET3=$(aws ec2 create-subnet --cidr-block 100.64.64.0/19 --vpc-id $VPC_ID --availability-zone $AZ3 | jq -r .Subnet.SubnetId)
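(Optional) To confirm that the subnets were created in the expected Availability Zones with the new CIDR ranges, you can run a check similar to the following sketch:
aws ec2 describe-subnets --subnet-ids $CUST_SNET1 $CUST_SNET2 $CUST_SNET3 | jq -r '.Subnets[] | .SubnetId + " " + .CidrBlock + " " + .AvailabilityZone'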
Tag the new subnets
For clusters running on Kubernetes 1.18 and earlier, you must tag all subnets so that Amazon EKS can discover the subnets.
Note: Amazon EKS supports the automatic discovery of subnets without any kubernetes.io tags starting at Kubernetes version 1.19. For more information, see the changelog on the Kubernetes GitHub site.
1. (Optional) Add a name tag for your subnets by setting a key-value pair. For example:
aws ec2 create-tags --resources $CUST_SNET1 --tags Key=Name,Value=SubnetA
aws ec2 create-tags --resources $CUST_SNET2 --tags Key=Name,Value=SubnetB
aws ec2 create-tags --resources $CUST_SNET3 --tags Key=Name,Value=SubnetC
2. For clusters running on Kubernetes 1.18 and earlier, tag the subnets for discovery by Amazon EKS. For example:
aws ec2 create-tags --resources $CUST_SNET1 --tags Key=kubernetes.io/cluster/yourClusterName,Value=shared
aws ec2 create-tags --resources $CUST_SNET2 --tags Key=kubernetes.io/cluster/yourClusterName,Value=shared
aws ec2 create-tags --resources $CUST_SNET3 --tags Key=kubernetes.io/cluster/yourClusterName,Value=shared
Note: Replace yourClusterName with the name of your Amazon EKS cluster.
If you're planning to use Elastic Load Balancing, then consider adding additional tags.
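For example, subnets that host internal load balancers are typically tagged with kubernetes.io/role/internal-elb, and subnets that host internet-facing load balancers with kubernetes.io/role/elb. The following is a sketch that assumes the new subnets are private and will host internal load balancers:
aws ec2 create-tags --resources $CUST_SNET1 $CUST_SNET2 $CUST_SNET3 --tags Key=kubernetes.io/role/internal-elb,Value=1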
Associate your new subnet to a route table
1. To list all the route tables in the VPC, run the following command:
aws ec2 describe-route-tables --filters Name=vpc-id,Values=$VPC_ID | jq -r '.RouteTables[].RouteTableId'
2. Choose the route table that you want to associate with your subnets. Then, run the following command to export it to a variable, replacing rtb-xxxxxxxxx with one of the route table IDs from step 1:
export RTASSOC_ID=rtb-xxxxxxxxx
3. Associate the route table to all new subnets. For example:
aws ec2 associate-route-table --route-table-id $RTASSOC_ID --subnet-id $CUST_SNET1
aws ec2 associate-route-table --route-table-id $RTASSOC_ID --subnet-id $CUST_SNET2
aws ec2 associate-route-table --route-table-id $RTASSOC_ID --subnet-id $CUST_SNET3
For more information, see Routing.
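As noted in the Short description, pods in the 100.64.0.0/16 range need a NAT gateway route to reach the internet. If the route table that you associated doesn't already include a default route through a NAT gateway, you can add one with a command similar to the following sketch, where nat-xxxxxxxxxxxx is a placeholder for your NAT gateway ID:
aws ec2 create-route --route-table-id $RTASSOC_ID --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-xxxxxxxxxxxx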
Configure the CNI plugin to use the new CIDR range
1. Make sure that the latest recommended version of the vpc-cni plugin is running in the cluster.
To verify the version that's running in the cluster, run the following command:
kubectl describe daemonset aws-node --namespace kube-system | grep Image | cut -d "/" -f 2
To check the latest recommended version of vpc-cni, and update the plugin if needed, see Updating the Amazon VPC CNI plugin for Kubernetes add-on.
2. To turn on custom network configuration for the CNI plugin, run the following command:
kubectl set env daemonset aws-node -n kube-system AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG=true
3. To add the ENIConfig label for identifying your worker nodes, run the following command:
kubectl set env daemonset aws-node -n kube-system ENI_CONFIG_LABEL_DEF=failure-domain.beta.kubernetes.io/zone
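(Optional) To confirm that both environment variables are now set on the aws-node DaemonSet, you can list them with a command similar to the following:
kubectl set env daemonset aws-node -n kube-system --list | grep -E 'AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG|ENI_CONFIG_LABEL_DEF'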
4. To create an ENIConfig custom resource for all subnets and Availability Zones, run the following commands:
cat <<EOF | kubectl apply -f -
apiVersion: crd.k8s.amazonaws.com/v1alpha1
kind: ENIConfig
metadata:
  name: $AZ1
spec:
  subnet: $CUST_SNET1
EOF

cat <<EOF | kubectl apply -f -
apiVersion: crd.k8s.amazonaws.com/v1alpha1
kind: ENIConfig
metadata:
  name: $AZ2
spec:
  subnet: $CUST_SNET2
EOF

cat <<EOF | kubectl apply -f -
apiVersion: crd.k8s.amazonaws.com/v1alpha1
kind: ENIConfig
metadata:
  name: $AZ3
spec:
  subnet: $CUST_SNET3
EOF
Note: The ENIConfig must match the Availability Zone of your worker nodes.
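For example, to check that the ENIConfig names line up with the zone labels on your worker nodes, you can compare the output of the following commands:
kubectl get eniconfigs.crd.k8s.amazonaws.com
kubectl get nodes -L failure-domain.beta.kubernetes.io/zone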
5. Launch the new worker nodes.
Note: This step allows the CNI plugin (ipamd) to allocate IP addresses from the new CIDR range on the new worker nodes.
When using custom networking, the primary network interface isn't used for pod placement. In this case, you must first update max-pods using the following formula:
maxPods = (number of interfaces - 1) * (max IPv4 addresses per interface - 1) + 2
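For example, assuming an m5.large worker node, which supports 3 network interfaces with 10 private IPv4 addresses each, the formula works out as follows:
maxPods = (3 - 1) * (10 - 1) + 2 = 20
This matches the --max-pods=20 value used in the examples that follow.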
- For a self-managed node group: Deploy the node group using the instructions in Launching self-managed Amazon Linux nodes. Don’t specify the subnets that you used in the ENIConfig resources that you deployed. Instead, specify the following text for the BootstrapArguments parameter:
--use-max-pods false --kubelet-extra-args '--max-pods=<20>'
- For a managed node group without a launch template or with a launch template that doesn't specify an AMI ID: Managed node groups automatically calculate the Amazon EKS recommended max pods value. Follow the steps in Creating a managed node group. Or, use the AWS CLI to create the managed node group:
aws eks create-nodegroup --cluster-name <sample-cluster-name> --nodegroup-name <sample-nodegroup-name> --subnets <subnet-123 subnet-456> --node-role <arn:aws:iam::123456789012:role/SampleNodeRole>
Note: For the --subnets option, don't specify the subnets that you specified in the ENIConfig resources. You can specify more options as needed.
- For a managed node group with a launch template that has a specified AMI ID: Provide the --max-pods=<value> extra argument as user data in the launch template. In your launch template, specify an Amazon EKS optimized AMI ID, or a custom AMI built off the Amazon EKS optimized AMI. Then, deploy the node group using a launch template, and provide the following user data in the launch template:
#!/bin/bash
/etc/eks/bootstrap.sh <my-cluster-name> --kubelet-extra-args <'--max-pods=20'>
6. After creating your node group, note the security group for the subnet and apply the security group to the associated ENIConfig.
In the following example, replace sg-xxxxxxxxxxxx with your security group:
cat <<EOF | kubectl apply -f -
apiVersion: crd.k8s.amazonaws.com/v1alpha1
kind: ENIConfig
metadata:
  name: $AZ1
spec:
  securityGroups:
    - sg-xxxxxxxxxxxx
  subnet: $CUST_SNET1
EOF

cat <<EOF | kubectl apply -f -
apiVersion: crd.k8s.amazonaws.com/v1alpha1
kind: ENIConfig
metadata:
  name: $AZ2
spec:
  securityGroups:
    - sg-xxxxxxxxxxxx
  subnet: $CUST_SNET2
EOF

cat <<EOF | kubectl apply -f -
apiVersion: crd.k8s.amazonaws.com/v1alpha1
kind: ENIConfig
metadata:
  name: $AZ3
spec:
  securityGroups:
    - sg-xxxxxxxxxxxx
  subnet: $CUST_SNET3
EOF
7. Terminate the old worker nodes. Then, test the configuration by launching a new deployment. Ten new pods are added, and they're scheduled on the new worker nodes with IP addresses from the new CIDR range:
kubectl create deployment nginx-test --image=nginx --replicas=10
kubectl get pods -o wide --selector=app=nginx-test
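To confirm that the new pods received IP addresses from the 100.64.0.0/16 range, you can also print the pod IPs directly, for example:
kubectl get pods --selector=app=nginx-test -o jsonpath='{.items[*].status.podIP}'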
