EKS update creates nodes in different availability zones

We are running an EKS cluster in the region ca-central-1. The cluster is deployed with the Terraform EKS module (version = "18.24.1"). We have a managed node group with two EC2 instances, which were running in the availability zones ca-central-1a and ca-central-1b. We recently updated the EKS cluster to version 1.25. According to the documentation at https://docs.aws.amazon.com/eks/latest/userguide/managed-node-update-behavior.html, I would expect the new nodes to be created in the same availability zones. But when the update completed, we had one node in ca-central-1a and one node in ca-central-1d. That is a problem, because we have a persistent volume (EBS) that was created in ca-central-1b and a StatefulSet that uses it. After the update, the kube-scheduler could not schedule the StatefulSet's pod, because the pod must run in the same availability zone as the persistent volume.
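For reference, the cluster and node group are created roughly like the sketch below (a minimal sketch; the cluster name, VPC/subnet references, instance type and sizes are placeholders, not our exact configuration):

```hcl
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "18.24.1"

  cluster_name    = "example-cluster"        # placeholder name
  cluster_version = "1.25"

  vpc_id     = module.vpc.vpc_id             # placeholder VPC module
  subnet_ids = module.vpc.private_subnets    # private subnets spread across several AZs

  eks_managed_node_groups = {
    default = {
      min_size       = 2
      max_size       = 2
      desired_size   = 2
      instance_types = ["t3.medium"]         # placeholder instance type
    }
  }
}
```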

thomas
Asked 1 year ago · 649 views
1 Answer

Hi,

This is a common problem in Kubernetes and relates to PV topology-aware scheduling. On clusters that use the Cluster Autoscaler (CA), the autoscaler has no way to provision nodes in a particular zone (there is an open GitHub issue about this) if the ASGs are spread across multiple AZs. Karpenter doesn't have this shortcoming.

The easiest fix would be to manually scale the ASGs until you get a node in the AZ you want.
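For a more durable setup, one common workaround (a minimal sketch, reusing the terraform-aws-modules/eks module from the question; the cluster name and subnet references are placeholders) is to define one managed node group per availability zone. Each underlying ASG then only contains subnets from a single AZ, so scaling it (manually or via the Cluster Autoscaler) always produces a node in a predictable zone:

```hcl
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "18.24.1"

  cluster_name    = "example-cluster"                  # placeholder name
  cluster_version = "1.25"
  vpc_id          = module.vpc.vpc_id                  # placeholder VPC module
  subnet_ids      = module.vpc.private_subnets         # placeholder subnet list

  eks_managed_node_groups = {
    # One node group (and therefore one ASG) per availability zone
    workers_1a = {
      subnet_ids   = [module.vpc.private_subnets[0]]   # placeholder: subnet in ca-central-1a
      min_size     = 1
      max_size     = 3
      desired_size = 1
    }
    workers_1b = {
      subnet_ids   = [module.vpc.private_subnets[1]]   # placeholder: subnet in ca-central-1b
      min_size     = 1
      max_size     = 3
      desired_size = 1
    }
  }
}
```

With one ASG per AZ, you can always bring up capacity in the zone where the EBS-backed persistent volume lives, so the StatefulSet pod can be scheduled next to its volume.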

Thanks.

AWS
Answered 1 year ago
