EKS-Update creates Nodes in different availability zones


We are running an EKS cluster in the region ca-central-1. The cluster is deployed with the Terraform EKS module (version = "18.24.1"). We have a managed node group with two EC2 instances, which were running in the availability zones ca-central-1a and ca-central-1b. We recently updated the EKS cluster to version 1.25. According to the documentation at https://docs.aws.amazon.com/eks/latest/userguide/managed-node-update-behavior.html, I would expect the new nodes to be created in the same availability zones. But when the update completed, we had one node in ca-central-1a and one node in ca-central-1d. That is a problem, because we had a persistent volume (EBS) that was created in ca-central-1b, and a StatefulSet that uses it. After the update, the kube-scheduler could not schedule the StatefulSet's pod, because the pod and the persistent volume must be in the same availability zone.
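(Editor's note: for anyone debugging a similar mismatch, the availability zones of the nodes and of an EBS-backed persistent volume can be compared with kubectl. The PV name `my-pv` below is a placeholder, not from the original thread.)

```shell
# List each node together with the AZ it runs in (standard topology label)
kubectl get nodes -L topology.kubernetes.io/zone

# Inspect the PV; its "Node Affinity" section records the AZ the EBS
# volume lives in ("my-pv" is a placeholder name)
kubectl describe pv my-pv
```

If the zone in the PV's node affinity does not match any node's zone label, the scheduler cannot place the pod, which is the symptom described above.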

thomas
Asked 1 year ago · 662 views

1 Answer

Hi,

This is a common problem in Kubernetes and is called PV topology-aware scheduling. On clusters that use the Cluster Autoscaler (CAS), CAS has no way to provision a node in a particular zone (see the linked GitHub issue) if the ASGs are spread across AZs. Karpenter does not have this shortcoming.
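(Editor's note: whether a node group's ASG spans multiple AZs can be checked with the AWS CLI; a multi-AZ ASG is exactly the case where Cluster Autoscaler cannot target a specific zone. The ASG name below is a placeholder for the one backing the managed node group.)

```shell
# Show the AZs an ASG spans; more than one AZ means new instances can
# land in any of them. "my-nodegroup-asg" is a placeholder name.
aws autoscaling describe-auto-scaling-groups \
  --auto-scaling-group-names my-nodegroup-asg \
  --query 'AutoScalingGroups[0].AvailabilityZones'
```

The commonly recommended alternative is one node group (and thus one ASG) per AZ, so the autoscaler can scale the group in the zone the pending pod needs.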

The easiest fix would be to manually scale the ASG until you get a node in the AZ you want.
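(Editor's note: one way to sketch this manual fix with the AWS CLI, assuming a single ASG behind the node group, is to temporarily restrict the ASG to the subnet in the desired AZ and then raise the desired capacity. The ASG name and subnet ID are placeholders; note that for EKS managed node groups, changes made directly to the ASG may be reconciled away, so treat this as a temporary workaround rather than a permanent configuration.)

```shell
# Temporarily pin the ASG to the subnet in the AZ where the EBS volume
# lives, so the next instance launches there.
# "my-nodegroup-asg" and "subnet-0abc123" are placeholders.
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name my-nodegroup-asg \
  --vpc-zone-identifier subnet-0abc123

# Scale up so a new node is launched in that AZ.
aws autoscaling set-desired-capacity \
  --auto-scaling-group-name my-nodegroup-asg \
  --desired-capacity 2
```

Once a node exists in the right AZ and the StatefulSet's pod is scheduled, the original subnet list can be restored.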

Thanks.

AWS
Answered 1 year ago
