EKS update creates nodes in different availability zones


We are running an EKS cluster in the region ca-central-1. The cluster is deployed with the Terraform EKS module (version = "18.24.1"). We have a managed node group with two EC2 instances, which were running in the availability zones ca-central-1a and ca-central-1b. We then updated the EKS cluster to version 1.25. According to this documentation, https://docs.aws.amazon.com/eks/latest/userguide/managed-node-update-behavior.html, I would expect the new nodes to be created in the same availability zones. But when the update completed, we had one node in ca-central-1a and one node in ca-central-1d. That is a problem, because we have a persistent volume (EBS) that was created in ca-central-1b and a StatefulSet that uses it. After the update, the kube-scheduler could not schedule the StatefulSet's pod, because the pod and the persistent volume must be in the same availability zone.
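For illustration, this is roughly what the node affinity on an EBS-backed persistent volume looks like (the names, capacity, and volume ID below are hypothetical, not taken from the cluster in question). The PV is pinned to ca-central-1b, so its pod can only be scheduled onto a node in that zone:

```
# Hypothetical EBS-backed PV as provisioned in ca-central-1b.
# "kubectl get pv <name> -o yaml" shows a nodeAffinity block like this,
# which is why the StatefulSet pod becomes unschedulable once no node
# exists in that zone ("volume node affinity conflict").
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-my-statefulset-0          # hypothetical name
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  csi:
    driver: ebs.csi.aws.com
    volumeHandle: vol-0123456789abcdef0  # hypothetical volume ID
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: topology.kubernetes.io/zone   # may be topology.ebs.csi.aws.com/zone with the EBS CSI driver
              operator: In
              values:
                - ca-central-1b
```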

thomas
asked 6 months ago · 377 views
1 answer

Hi,

This is a common problem in Kubernetes and relates to PV topology-aware scheduling. On clusters that use the Cluster Autoscaler (CAS), CAS does not have a way to provision nodes in a particular zone (there is a GitHub issue about this) if the ASGs are spread across AZs. Karpenter doesn't have this shortcoming.
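As a rough sketch of the Karpenter approach (assuming Karpenter's v1beta1 NodePool API; the NodePool and EC2NodeClass names below are hypothetical), Karpenter looks at the pending pod, including the zone requirement on its persistent volume, and launches a node in the matching zone, as long as that zone is allowed by the NodePool:

```
# Minimal sketch, assuming Karpenter's v1beta1 NodePool API.
# Karpenter reads the pending pod's volume topology and brings up
# a node in the zone the PV requires (e.g. ca-central-1b).
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: default                      # hypothetical name
spec:
  template:
    spec:
      requirements:
        - key: topology.kubernetes.io/zone
          operator: In
          values: ["ca-central-1a", "ca-central-1b"]   # zones your subnets cover
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["on-demand"]
      nodeClassRef:
        apiVersion: karpenter.k8s.aws/v1beta1
        kind: EC2NodeClass
        name: default                # hypothetical EC2NodeClass
```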

The easiest fix would be to manually scale the ASGs until you get a node in the AZ you want.

Thanks.

AWS
answered 6 months ago
