Hello,
In order to take advantage of higher pod density along with network policies, we removed the AWS VPC CNI add-on (aws-node) and now rely on the Calico CNI add-on (calico-node) for pod network management. After removing the aws-node DaemonSet and terminating the EKS worker nodes, new nodes were created by the ASG and all pods were scheduled.
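For completeness, removing aws-node was just the standard step from the Calico install docs, i.e. roughly:

    # remove the AWS VPC CNI DaemonSet so Calico handles pod networking
    kubectl delete daemonset -n kube-system aws-node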
My question is about IP assignment: while most pods scheduled on the new nodes have been assigned an IP address from the Calico IP pool, as expected, a number of pods, mostly DaemonSet pods, have been assigned the same IP as the node itself.
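For reference, the overlap is easy to see by comparing the pod IP to the host IP in the pod status, e.g.:

    # rows where POD_IP equals HOST_IP are the pods that got the node address instead of a Calico pool IP
    kubectl get pods -A -o custom-columns='NAMESPACE:.metadata.namespace,NAME:.metadata.name,POD_IP:.status.podIP,HOST_IP:.status.hostIP'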
Any help with this will be highly appreciated.
JS
Jason_S, thanks for that. Indeed, the hostNetwork spec is set to true in the pods that have the node IP.
Obviously I'm new to AWS EKS and CNI in general. I guess hostNetwork is set to true on purpose; for example, calico-node pods require direct access to the host network?
This is due to a limitation of EKS (Calico cannot be deployed to the control plane nodes); you can refer to https://projectcalico.docs.tigera.io/getting-started/kubernetes/managed-public-cloud/eks. Generally speaking, hostNetwork is a bad idea from a security point of view, and only trusted pods should have it enabled (even that is not recommended).
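If you want to audit which pods request host networking, a read-only check along these lines should do it (hostNetwork is a standard field in the pod spec):

    # pods showing "true" share the node's network namespace and therefore the node IP
    kubectl get pods -A -o custom-columns='NAMESPACE:.metadata.namespace,NAME:.metadata.name,HOSTNETWORK:.spec.hostNetwork'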
Additionally, I'm not sure what pod density you are concerned about. From a performance and reliability perspective, we strongly discourage you from exceeding the per-instance-type limits listed here: https://github.com/awslabs/amazon-eks-ami/blob/master/files/eni-max-pods.txt. However, if your concern is the ENI-imposed limit (i.e. the number of ENIs attached to the instance), you can refer to the following blog post: https://aws.amazon.com/blogs/containers/amazon-vpc-cni-increases-pods-per-node-limits/
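For context, the numbers in that file follow the VPC CNI formula: max pods = (number of ENIs) x (IPv4 addresses per ENI - 1) + 2, where the +2 covers host-networked pods such as aws-node and kube-proxy. As a rough worked example for m5.large (3 ENIs, 10 IPv4 addresses per ENI):

    # 3 ENIs x 9 usable IPv4 addresses each, plus 2, gives 29, matching the m5.large entry in eni-max-pods.txt
    echo $(( 3 * (10 - 1) + 2 ))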
Jason_S, many thanks for the excellent answer. That helps a lot. Much appreciated.