How do I use multiple CIDR ranges with Amazon EKS?
I want to use multiple CIDR ranges with Amazon Elastic Kubernetes Service (Amazon EKS) to resolve issues with my pods.
Short description
Before you complete the steps in the Resolution section, confirm that you have the following:
- A running Amazon EKS cluster
- The latest version of the AWS Command Line Interface (AWS CLI)
- AWS Identity and Access Management (IAM) permissions to manage an Amazon Virtual Private Cloud (Amazon VPC)
- kubectl with permissions to create custom resources and edit the DaemonSet
- An installed version of jq on your system
Note: To download and install jq, see Download jq on the jq website.
- A Unix-based system with a Bash shell
- A VPC that's already configured
Note:
- You can associate private (RFC 1918) and public (non-RFC 1918) CIDR blocks to your VPC before or after you create your cluster.
- In scenarios with carrier-grade network address translation (NAT), 100.64.0.0/10 is a private network range. The private network range is used in shared address space for communications between a service provider and its subscribers. For pods to communicate with the internet, you must have a NAT gateway configured at the route table.
- AWS Fargate clusters don't support DaemonSets. To add secondary CIDR ranges to Fargate profiles, use subnets from your VPC's secondary CIDR blocks. Then, tag your new subnets before you add the subnets to your Fargate profile.
Important: In some situations, Amazon EKS can't communicate with nodes that you launch in subnets from CIDR blocks that you add to a VPC after you create a cluster. When you add CIDR blocks to an existing cluster, the updated range can take up to 5 hours to appear.
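For example, to confirm that a route table in your VPC already sends internet-bound traffic through a NAT gateway, you can run a check similar to the following sketch. The vpc-xxxxxxxxxxxx value is a placeholder; substitute your own VPC ID:
aws ec2 describe-route-tables --filters Name=vpc-id,Values=vpc-xxxxxxxxxxxx | jq -r '.RouteTables[].Routes[] | select(.NatGatewayId != null) | "\(.DestinationCidrBlock) -> \(.NatGatewayId)"'
If the command returns no output, then no route in the VPC targets a NAT gateway.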
Resolution
Note: If you receive errors when you run AWS Command Line Interface (AWS CLI) commands, then see Troubleshoot AWS CLI errors. Also, make sure that you're using the most recent AWS CLI version.
In the following resolution, first set up your VPC. Then, configure the CNI plugin to use a new CIDR range.
Add more CIDR ranges to expand your VPC network
Complete the following steps:
- Find your VPCs.
If your VPCs have a tag, then run the following command to find your VPC:
VPC_ID=$(aws ec2 describe-vpcs --filters Name=tag:Name,Values=yourVPCName | jq -r '.Vpcs[].VpcId')
If your VPCs don't have a tag, then run the following command to list all the VPCs in your AWS Region:
aws ec2 describe-vpcs --filters | jq -r '.Vpcs[].VpcId'
- To assign your VPC ID to the VPC_ID variable, run the following command:
export VPC_ID=vpc-xxxxxxxxxxxx
- To associate another CIDR block with the range 100.64.0.0/16 to the VPC, run the following command:
aws ec2 associate-vpc-cidr-block --vpc-id $VPC_ID --cidr-block 100.64.0.0/16
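The association completes asynchronously. Before you create subnets, you can confirm that the new CIDR block is in the associated state with a check like the following, which reuses the jq pattern from the previous steps:
aws ec2 describe-vpcs --vpc-ids $VPC_ID | jq -r '.Vpcs[].CidrBlockAssociationSet[] | "\(.CidrBlock) \(.CidrBlockState.State)"'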
Create subnets with a new CIDR range
Complete the following steps:
- To list all the Availability Zones in your Region, run the following command:
aws ec2 describe-availability-zones --region us-east-1 --query 'AvailabilityZones[*].ZoneName'
Note: Replace us-east-1 with your Region.
- Choose the Availability Zones where you want to add the subnets, and then assign each Availability Zone to a variable. For example:
export AZ1=us-east-1a
export AZ2=us-east-1b
export AZ3=us-east-1c
Note: To add more Availability Zones, create additional variables.
- To create new subnets under the VPC with the new CIDR range, run the following commands:
CUST_SNET1=$(aws ec2 create-subnet --cidr-block 100.64.0.0/19 --vpc-id $VPC_ID --availability-zone $AZ1 | jq -r .Subnet.SubnetId)
CUST_SNET2=$(aws ec2 create-subnet --cidr-block 100.64.32.0/19 --vpc-id $VPC_ID --availability-zone $AZ2 | jq -r .Subnet.SubnetId)
CUST_SNET3=$(aws ec2 create-subnet --cidr-block 100.64.64.0/19 --vpc-id $VPC_ID --availability-zone $AZ3 | jq -r .Subnet.SubnetId)
- (Optional) Set a key-value pair to add a name tag for your subnets. For example:
aws ec2 create-tags --resources $CUST_SNET1 --tags Key=Name,Value=SubnetA
aws ec2 create-tags --resources $CUST_SNET2 --tags Key=Name,Value=SubnetB
aws ec2 create-tags --resources $CUST_SNET3 --tags Key=Name,Value=SubnetC
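To confirm that the three subnets exist with the expected CIDR blocks and Availability Zones, you can run a quick check such as the following sketch:
aws ec2 describe-subnets --subnet-ids $CUST_SNET1 $CUST_SNET2 $CUST_SNET3 | jq -r '.Subnets[] | "\(.SubnetId) \(.CidrBlock) \(.AvailabilityZone)"'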
Associate your new subnet to a route table
Complete the following steps:
- To list the route tables under the VPC, run the following command:
aws ec2 describe-route-tables --filters Name=vpc-id,Values=$VPC_ID | jq -r '.RouteTables[].RouteTableId'
- To export the route table ID to a variable, run the following command:
export RTASSOC_ID=rtb-abcabcabc
Note: Replace rtb-abcabcabc with the route table ID from the previous step.
- Associate the route table with all the new subnets. For example:
aws ec2 associate-route-table --route-table-id $RTASSOC_ID --subnet-id $CUST_SNET1
aws ec2 associate-route-table --route-table-id $RTASSOC_ID --subnet-id $CUST_SNET2
aws ec2 associate-route-table --route-table-id $RTASSOC_ID --subnet-id $CUST_SNET3
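To confirm that all three subnets are now associated with the route table, you can list the subnet associations. Note that if this is the VPC's main route table, its implicit association has no subnet ID and appears as null:
aws ec2 describe-route-tables --route-table-ids $RTASSOC_ID | jq -r '.RouteTables[].Associations[].SubnetId'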
For more information, see the Routing section in Example: VPC with servers in private subnets and NAT.
Configure the CNI plugin to use the new CIDR range
Complete the following steps:
- Add the latest version of the vpc-cni plugin to the cluster. To verify the version in the cluster, run the following command:
kubectl describe daemonset aws-node --namespace kube-system | grep Image | cut -d "/" -f 2
- To turn on the custom network configuration for the CNI plugin, run the following command:
kubectl set env daemonset aws-node -n kube-system AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG=true
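To confirm that the environment variable is set on the DaemonSet, you can list its environment with the --list flag:
kubectl set env daemonset aws-node -n kube-system --list | grep AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG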
- To add the ENIConfig label to identify your worker nodes, run the following command:
kubectl set env daemonset aws-node -n kube-system ENI_CONFIG_LABEL_DEF=failure-domain.beta.kubernetes.io/zone
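This setting assumes that your worker nodes carry the failure-domain.beta.kubernetes.io/zone label (newer Kubernetes versions use topology.kubernetes.io/zone instead). To check the label value on each node, you can run:
kubectl get nodes -L failure-domain.beta.kubernetes.io/zone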
- To create an ENIConfig custom resource for all subnets and Availability Zones, run the following commands:
cat <<EOF | kubectl apply -f -
apiVersion: crd.k8s.amazonaws.com/v1alpha1
kind: ENIConfig
metadata:
  name: $AZ1
spec:
  subnet: $CUST_SNET1
EOF
cat <<EOF | kubectl apply -f -
apiVersion: crd.k8s.amazonaws.com/v1alpha1
kind: ENIConfig
metadata:
  name: $AZ2
spec:
  subnet: $CUST_SNET2
EOF
cat <<EOF | kubectl apply -f -
apiVersion: crd.k8s.amazonaws.com/v1alpha1
kind: ENIConfig
metadata:
  name: $AZ3
spec:
  subnet: $CUST_SNET3
EOF
Note: The ENIConfig must match the Availability Zone of your worker nodes.
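To confirm that the custom resources were created and reference the expected subnets, you can inspect them. This sketch uses the full resource name of the CRD:
kubectl get eniconfigs.crd.k8s.amazonaws.com
kubectl get eniconfigs.crd.k8s.amazonaws.com $AZ1 -o yaml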
- Launch the worker nodes so that the CNI plugin (ipamd) can allocate IP addresses from the new CIDR range to the new worker nodes.
If you use custom networking, then the primary network interface isn't used for pod placement. In this case, you must first update max-pods with the following formula (for a worked example, see the calculation after these options):
maxPods = (number of interfaces - 1) * (max IPv4 addresses per interface - 1) + 2
If you use a self-managed node group, then follow the steps in Launching self-managed Amazon Linux nodes. Don't specify the subnets that you used in the ENIConfig resources. Instead, specify the following for the BootstrapArguments parameter:
--use-max-pods false --kubelet-extra-args '--max-pods=<20>'
If you use a managed node group without a launch template or a specified Amazon Machine Image (AMI) ID, then managed node groups automatically calculate the max pods value. Follow the steps in Creating a managed node group. Or, use the AWS CLI to create the managed node group:
aws eks create-nodegroup --cluster-name <sample-cluster-name> --nodegroup-name <sample-nodegroup-name> --subnets <subnet-123 subnet-456> --node-role <arn:aws:iam::123456789012:role/SampleNodeRole>
If you use a managed node group launch template with a specified AMI ID, then specify an Amazon EKS optimized AMI ID in your launch template. Or, use a custom AMI based on the Amazon EKS optimized AMI. Then, use a launch template to deploy the node group, and provide the following user data in the launch template:
#!/bin/bash
/etc/eks/bootstrap.sh <my-cluster-name> --kubelet-extra-args <'--max-pods=20'>
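As a worked example of the preceding formula, consider a t3.medium instance, which supports 3 network interfaces with 6 IPv4 addresses per interface:
maxPods = (3 - 1) * (6 - 1) + 2 = 12
In that case, you would pass --max-pods=12 in the kubelet extra arguments instead of the placeholder value 20.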
- Note the security group for the subnet, and apply the security group to the associated ENIConfig:
cat <<EOF | kubectl apply -f -
apiVersion: crd.k8s.amazonaws.com/v1alpha1
kind: ENIConfig
metadata:
  name: $AZ1
spec:
  securityGroups:
    - sg-xxxxxxxxxxxx
  subnet: $CUST_SNET1
EOF
cat <<EOF | kubectl apply -f -
apiVersion: crd.k8s.amazonaws.com/v1alpha1
kind: ENIConfig
metadata:
  name: $AZ2
spec:
  securityGroups:
    - sg-xxxxxxxxxxxx
  subnet: $CUST_SNET2
EOF
cat <<EOF | kubectl apply -f -
apiVersion: crd.k8s.amazonaws.com/v1alpha1
kind: ENIConfig
metadata:
  name: $AZ3
spec:
  securityGroups:
    - sg-xxxxxxxxxxxx
  subnet: $CUST_SNET3
EOF
Note: Replace sg-xxxxxxxxxxxx with your security group ID.
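To confirm that the security group was applied, you can print the spec of one of the updated resources:
kubectl get eniconfigs.crd.k8s.amazonaws.com $AZ1 -o jsonpath='{.spec.securityGroups}'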
- Launch a new deployment to test the configuration:
kubectl create deployment nginx-test --image=nginx --replicas=10
kubectl get pods -o wide --selector=app=nginx-test
Note: The preceding test deployment adds ten new pods, and the pods that are scheduled on the new worker nodes receive IP addresses from the new CIDR range.
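To confirm the IP addresses directly, you can print each pod's name and IP; with this configuration, the addresses should fall within 100.64.0.0/16:
kubectl get pods --selector=app=nginx-test -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.podIP}{"\n"}{end}'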