EKS update creates nodes in different availability zones

We are running an EKS cluster in the ca-central-1 region. The cluster is deployed with the Terraform EKS module (version = "18.24.1"). We have a managed node group with two EC2 instances, which were running in availability zones ca-central-1a and ca-central-1b. We recently updated the EKS cluster to version 1.25. Based on the documentation at https://docs.aws.amazon.com/eks/latest/userguide/managed-node-update-behavior.html, I expected the new nodes to be created in the same availability zones. But when the update completed, we had one node in ca-central-1a and one node in ca-central-1d. That is a problem, because we have a persistent volume (EBS) that was created in ca-central-1b and a StatefulSet that uses it. After the update, the kube-scheduler could not schedule the StatefulSet's pod, because the pod and the persistent volume must be in the same availability zone.
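
For reference, our setup looks roughly like the sketch below (cluster name, VPC and subnet IDs are made up; only the module version matches our configuration). Because the node group is given subnets in several availability zones, its underlying Auto Scaling group is free to launch replacement nodes in any of them:

```hcl
# Rough sketch of our configuration; names and IDs are placeholders.
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "18.24.1"

  cluster_name    = "example-cluster"
  cluster_version = "1.25"
  vpc_id          = "vpc-0123456789abcdef0"

  # Subnets in ca-central-1a, ca-central-1b and ca-central-1d: the managed
  # node group's ASG may place nodes in any of these zones.
  subnet_ids = ["subnet-aaaa1111", "subnet-bbbb2222", "subnet-dddd4444"]

  eks_managed_node_groups = {
    default = {
      min_size       = 2
      max_size       = 2
      desired_size   = 2
      instance_types = ["t3.large"]
    }
  }
}
```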

thomas
Asked 6 months ago · 377 views
1 Answer

Hi,

This is a common problem in Kubernetes and relates to PV topology-aware scheduling. On clusters that use the Cluster Autoscaler, CAS has no way to provision nodes in a particular zone (see the related GitHub issue) if an ASG is spread across multiple AZs. Karpenter doesn't have this shortcoming.

The easiest fix would be to manually scale the ASG up and down until you get a node in the AZ you want.
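
If you want the replacement node to land deterministically in the AZ where the EBS volume lives, one option (a sketch with the same placeholder subnet IDs as the question, not an official recommendation) is to dedicate a node group to that zone by overriding its subnet_ids inside the module "eks" block:

```hcl
# Sketch: a node group restricted to the ca-central-1b subnet, so its nodes
# are always co-located with the EBS-backed persistent volume.
eks_managed_node_groups = {
  stateful_1b = {
    subnet_ids     = ["subnet-bbbb2222"] # placeholder for the ca-central-1b subnet
    min_size       = 1
    max_size       = 1
    desired_size   = 1
    instance_types = ["t3.large"]
  }
}
```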

Thanks.

AWS
Answered 6 months ago
