Thanks for the additional context! I assume you are using different node pools for each of your "backend, frontend, etc." In that situation you can use the CNI's ENI_CONFIG_LABEL_DEF configuration value to have the ENIConfig name read from a node label instead of an annotation. From there, you can use the EKS Optimized AMI's bootstrap.sh script to pass in node labels via the --kubelet-extra-args option. This could be done in the launch template user data for the Managed Node Group.
https://github.com/aws/amazon-vpc-cni-k8s#eni_config_label_def
https://docs.aws.amazon.com/eks/latest/userguide/launch-templates.html#launch-template-user-data
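A sketch of what that user data could look like. The cluster name, label value, and node group are placeholders for illustration; the label key shown is the CNI's default for ENI_CONFIG_LABEL_DEF:

```shell
#!/bin/bash
# Hypothetical launch template user data for a Managed Node Group.
# "my-cluster" and the label value "pods-backend-eu-central-1a" are
# placeholders -- substitute your cluster name and the ENIConfig this
# node group should use.
/etc/eks/bootstrap.sh my-cluster \
  --kubelet-extra-args '--node-labels=k8s.amazonaws.com/eniConfig=pods-backend-eu-central-1a'
```

Each node in the group then comes up labeled with the ENIConfig it should use, and the CNI picks it up without any per-node annotation step.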
When using custom networking with the Amazon VPC CNI you are able to update your aws-node DaemonSet to automatically apply the ENIConfig for an Availability Zone to any new Amazon EC2 nodes created in your cluster. This is possible when you name your ENIConfigs with the same name as your Availability Zones, which it seems like you did.
The command to do this is:
kubectl set env daemonset aws-node -n kube-system ENI_CONFIG_LABEL_DEF=topology.kubernetes.io/zone
Here is also a link to the documentation describing this. Look at step 5 under the "Configure Kubernetes resources" section. https://docs.aws.amazon.com/eks/latest/userguide/cni-custom-network.html#custom-networking-configure-kubernetes
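For reference, a minimal ENIConfig named after its Availability Zone could be applied like this. The subnet and security group IDs are placeholders, not real values:

```shell
# Sketch: an ENIConfig whose name matches the AZ, so the
# ENI_CONFIG_LABEL_DEF=topology.kubernetes.io/zone setting above
# resolves it automatically. IDs below are placeholders.
cat <<'EOF' | kubectl apply -f -
apiVersion: crd.k8s.amazonaws.com/v1alpha1
kind: ENIConfig
metadata:
  name: eu-central-1a
spec:
  subnet: subnet-0123456789abcdef0
  securityGroups:
    - sg-0123456789abcdef0
EOF
```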
Hope that helps
Hey Ryan, thanks for your answer.
As we use a multi-zone (Frontend, Backend, Infrastructure, Data) and multi-Availability-Zone architecture, we have more than three ENIConfigs, so my example wasn't that accurate. We actually use three subnets per zone, so we needed to find a unique name per ENIConfig:
pods-backend-eu-central-1a 6h28m
pods-backend-eu-central-1b 6h28m
pods-backend-eu-central-1c 6h28m
pods-frontend-eu-central-1a 6h28m
pods-frontend-eu-central-1b 6h28m
pods-frontend-eu-central-1c 6h28m
pods-infrastructure-eu-central-1a 6h28m
pods-infrastructure-eu-central-1b 6h28m
pods-infrastructure-eu-central-1c 6h28m
From what I understand, a subnet is always bound to exactly one Availability Zone, right? That means we either have to run all pods of a zone in exactly one Availability Zone and subnet, OR we can only have three subnets shared by all pods, and our zone architecture will not work.
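One way out of this, under the label-based approach from the answer above, is to point ENI_CONFIG_LABEL_DEF at a label that carries the full per-zone, per-AZ ENIConfig name rather than just the AZ. A sketch, where the label key is the CNI's default and the node name is a placeholder:

```shell
# Sketch: switch the CNI to label-based ENIConfig lookup using its
# default label key, then give each node group the label for the
# ENIConfig it should use (normally set via launch template user data;
# shown here with kubectl for a single node as a quick test).
kubectl set env daemonset aws-node -n kube-system \
  ENI_CONFIG_LABEL_DEF=k8s.amazonaws.com/eniConfig

# Placeholder node name -- each backend node in eu-central-1a would get:
kubectl label node ip-10-0-1-23.eu-central-1.compute.internal \
  k8s.amazonaws.com/eniConfig=pods-backend-eu-central-1a
```

With one node group per zone/AZ combination, each group can be labeled with its own ENIConfig name (e.g. pods-backend-eu-central-1a), so more than one ENIConfig per AZ is no longer a problem.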
Hey Ryan, that was the resolving fact I missed.
I read in several guides and tutorials that this is only possible with annotations. Also, ENI_CONFIG_LABEL_DEF was not available until I realized I was not using the most recent version of the VPC CNI.