EKS update creates nodes in different availability zones


We are running an EKS cluster in the region ca-central-1. The cluster is deployed with the Terraform EKS module (version = "18.24.1") and has a managed node group with two EC2 instances, which were running in the availability zones ca-central-1a and ca-central-1b. We recently updated the cluster to version 1.25.

According to the documentation at https://docs.aws.amazon.com/eks/latest/userguide/managed-node-update-behavior.html, I would expect the new nodes to be created in the same availability zones. However, when the update completed, we had one node in ca-central-1a and one node in ca-central-1d. That is a problem, because we have a persistent volume (EBS) that was created in ca-central-1b and a StatefulSet that uses it. After the update, the kube-scheduler could not schedule the StatefulSet's pod, because the pod and the persistent volume must be in the same availability zone.
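For context, a minimal sketch of what such a setup typically looks like with version 18.x of the module (the names and subnet variables are placeholders, not our exact configuration). When a managed node group is not given its own subnet_ids, it uses the cluster-wide subnets, and its underlying auto scaling group may launch a replacement node in any of those zones during an upgrade:

```hcl
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "18.24.1"

  cluster_name    = "my-cluster" # placeholder
  cluster_version = "1.25"

  vpc_id = var.vpc_id
  # Private subnets spanning ca-central-1a, ca-central-1b and ca-central-1d
  subnet_ids = var.private_subnet_ids

  eks_managed_node_groups = {
    default = {
      min_size     = 2
      max_size     = 2
      desired_size = 2

      # No subnet_ids set here, so the node group falls back to the cluster
      # subnets above and replacement nodes can land in any of the three AZs.
    }
  }
}
```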

thomas
Asked 6 months ago · 378 views
1 Answer

Hi,

This is a common problem in Kubernetes, related to PV topology-aware scheduling. On clusters that use the Cluster Autoscaler (CAS), CAS has no way to provision nodes in a particular zone when the ASGs are spread across AZs (see the related GitHub issue). Karpenter doesn't have this shortcoming.
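For newly provisioned volumes, a StorageClass with volumeBindingMode set to WaitForFirstConsumer lets the EBS CSI driver create each volume in the zone where its pod is actually scheduled. This won't move an already-bound PV out of ca-central-1b, but it avoids surprises for future workloads. A minimal sketch using the Terraform kubernetes provider (resource and class names are just examples):

```hcl
# Topology-aware StorageClass for the EBS CSI driver: the volume is only
# created after a pod using it has been scheduled, so it ends up in that
# pod's availability zone.
resource "kubernetes_storage_class_v1" "gp3_topology_aware" {
  metadata {
    name = "gp3-topology-aware" # example name
  }

  storage_provisioner = "ebs.csi.aws.com"
  volume_binding_mode = "WaitForFirstConsumer"
  reclaim_policy      = "Delete"

  parameters = {
    type = "gp3"
  }
}
```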

The easiest immediate fix would be to manually scale the ASGs until you get a node in the AZ you want (ca-central-1b in your case).
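A more durable option is to give stateful workloads one node group (and therefore one ASG) per availability zone, each pinned to a single subnet, so that both managed node group updates and CAS scale-ups stay in a predictable zone. A rough sketch using the eks_managed_node_groups input of the Terraform EKS module from your question (the subnet variables are placeholders):

```hcl
eks_managed_node_groups = {
  # One node group per AZ; each ASG can only launch into a single subnet,
  # so replacement nodes always come back in the same availability zone.
  "stateful-1a" = {
    min_size     = 1
    max_size     = 2
    desired_size = 1
    subnet_ids   = [var.private_subnet_id_1a] # placeholder
  }

  "stateful-1b" = {
    min_size     = 1
    max_size     = 2
    desired_size = 1
    subnet_ids   = [var.private_subnet_id_1b] # placeholder
  }
}
```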

Thanks.

AWS
Answered 6 months ago
