It is now possible to have greater control over how many ENIs are grabbed by the CNI.
On the other hand, Kubernetes itself has a supported limit of 100 pods per node, which makes some of the larger instances with lots of available addresses less attractive. While the pod-per-node limit IS configurable, I would not increase it without a very good reason. This implies that the best instance sizes are between 2xlarge and 4xlarge, at least in terms of address allocation. Larger sizes may be better in terms of performance, but you will not get any more usable addresses.
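To make that tradeoff concrete, here is a rough sketch. The per-ENI limits below are illustrative values for a few m4 sizes taken from memory of the AWS ENI limits table; verify them against the AWS documentation for your instance type before relying on them.

```python
# Effective pod capacity is capped both by the VPC CNI's address supply
# and by the ~100 pods/node Kubernetes guidance.
# The ENI/IP limits below are illustrative; check the AWS ENI docs.
INSTANCE_LIMITS = {
    # instance type: (max ENIs, IPv4 addresses per ENI)
    "m4.xlarge": (4, 15),
    "m4.2xlarge": (4, 15),
    "m4.4xlarge": (8, 30),
    "m4.10xlarge": (8, 30),
}

K8S_PODS_PER_NODE_LIMIT = 100  # default supported limit


def effective_pod_capacity(instance_type: str) -> int:
    enis, ips_per_eni = INSTANCE_LIMITS[instance_type]
    # One address per ENI is the primary and is not handed to pods (see Q3).
    usable_addresses = enis * (ips_per_eni - 1)
    return min(usable_addresses, K8S_PODS_PER_NODE_LIMIT)


for itype in INSTANCE_LIMITS:
    print(itype, effective_pod_capacity(itype))
```

Note how every size above 4xlarge hits the 100-pod ceiling before it runs out of addresses, which is why the extra addresses on the biggest instances buy you nothing.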
https://kubernetes.io/docs/setup/cluster-large/ ("no more than 100 pods per node")
Q1) An m4.4xlarge node can have up to 8 ENIs, and each ENI can have up to 30 IP addresses ( https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html ). That is 240 addresses maximum for pods. The warm pool is controlled by "WARM_ENI_TARGET", which defaults to 1. This implies 30 addresses in the warm pool (the number available to one ENI). However, at maximum address usage, there will be no warm pool left. The warm pool target is configurable by an ENV variable, and the algorithm has been tweaked recently, so I would test this out to verify the actual numbers if there is concern.
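A back-of-the-envelope model of the warm-pool behaviour described above. This is a simplification that assumes the CNI allocates at whole-ENI granularity and ignores the per-ENI primary address (covered in Q3); the real aws-vpc-cni allocator is more nuanced, so treat this as a sketch, not its actual algorithm.

```python
import math

# Simplified model: the CNI tries to keep WARM_ENI_TARGET spare ENIs'
# worth of addresses free, but can never exceed the instance ENI limit.
MAX_ENIS = 8          # m4.4xlarge (per the AWS ENI docs)
IPS_PER_ENI = 30
WARM_ENI_TARGET = 1   # the default


def enis_attached(pods_running: int) -> int:
    """ENIs needed to cover running pods, plus the warm-ENI target."""
    needed_for_pods = math.ceil(pods_running / IPS_PER_ENI)
    return min(MAX_ENIS, needed_for_pods + WARM_ENI_TARGET)


def warm_addresses(pods_running: int) -> int:
    """Spare addresses left after every running pod has one."""
    return enis_attached(pods_running) * IPS_PER_ENI - pods_running


print(warm_addresses(0))    # empty node: one warm ENI, 30 spare addresses
print(warm_addresses(240))  # fully packed node: no warm pool left
```

This reproduces both claims above: an idle node keeps roughly 30 addresses warm, and a node at maximum addresses has no warm pool at all.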
Q2) Nothing is done automatically; you can run out of addresses in a Kubernetes cluster. You may want to look into "cluster autoscaling", which will launch new hosts upon address exhaustion (or, more accurately, upon pod placement failure).
Q3) One address allocated to each ENI is considered the primary, and it is used for routing traffic out of the worker node. This is why you lose one address per ENI.
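Putting Q1 and Q3 together, the usable pod-address count works out like this (the 8 ENIs / 30 addresses figures for m4.4xlarge come from the ENI documentation linked in Q1):

```python
def usable_pod_addresses(max_enis: int, ips_per_eni: int) -> int:
    # Each ENI's primary address routes the worker node's own traffic,
    # so pods only ever receive the secondary addresses.
    return max_enis * (ips_per_eni - 1)


# m4.4xlarge: 8 ENIs x 30 addresses = 240 total, but 8 primaries are
# reserved, leaving 232 secondary addresses for pods.
print(usable_pod_addresses(8, 30))  # → 232
```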