We are running an EKS cluster in the region ca-central-1. The cluster is deployed with the Terraform EKS module (version = "18.24.1").
We have a managed node group with two EC2 instances, which were running in the availability zones ca-central-1a and ca-central-1b.
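For context, the relevant part of our Terraform configuration looks roughly like the sketch below (simplified; cluster name, variable names, and instance type are placeholders, not our actual values):

```hcl
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "18.24.1"

  cluster_name    = "my-cluster"   # placeholder
  cluster_version = "1.25"

  vpc_id = var.vpc_id
  # Subnets spread over several availability zones in ca-central-1
  subnet_ids = var.private_subnet_ids

  eks_managed_node_groups = {
    default = {
      min_size       = 2
      max_size       = 2
      desired_size   = 2
      instance_types = ["t3.medium"]   # placeholder
      # No per-node-group subnet_ids set, so the group uses the
      # cluster-level subnet_ids above
    }
  }
}
```

Since one of the replacement nodes came up in ca-central-1d, the subnets attached to the node group presumably include a subnet in that zone as well.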
We have now updated the cluster to version 1.25.
According to this documentation, https://docs.aws.amazon.com/eks/latest/userguide/managed-node-update-behavior.html, I would expect the new nodes to be created in the same availability zones.
However, when the update completed, we had one node in ca-central-1a and one node in ca-central-1d.
That is a problem, because we have a StatefulSet with an EBS-backed persistent volume that was created in ca-central-1b. After the update, the kube-scheduler could no longer schedule the StatefulSet's pod: EBS volumes are zonal, so the pod must run in the same availability zone as the persistent volume, and there was no longer a node in ca-central-1b.