Distributing pods evenly between worker nodes in EKS after scaling to and from 0


I am using AWS EKS 1.22. Our cluster has 2 worker nodes, and we want to scale our node group to 0 at night and back to 2 during working hours to save money. However, the first time I did so, the scheduler placed 74 of 81 pods on the same node, causing it to shut down after some time (the node went into an "Unknown" status). How can I make sure that EKS distributes my workload evenly between the worker nodes the next time I scale to and from 0? Thanks in advance.

  • Hi, I'm facing the same issue. Can you share how you solved the problem?

1 answer

Use pod anti-affinity rules to spread pods across nodes: they discourage the scheduler from placing pods of the same application onto the same node, and with "preferredDuringSchedulingIgnoredDuringExecution" this is a soft constraint rather than a hard requirement. You can also set pod topology spread constraints to distribute pods matching certain labels evenly across nodes, which enforces the distribution more directly.
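As a rough sketch of how both settings look in a manifest (the Deployment name `my-app`, the `app: my-app` label and the `nginx` image are placeholders, not taken from the question), something like this should spread the replicas across both nodes once they come back up:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 4
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      # Spread pods carrying app=my-app evenly across nodes (hostname topology).
      # maxSkew: 1 allows per-node pod counts to differ by at most one;
      # ScheduleAnyway keeps this a soft rule so pods are never left Pending.
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: kubernetes.io/hostname
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              app: my-app
      # Soft anti-affinity: prefer nodes that do not already run a pod of this
      # app, but still schedule if no such node exists.
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                topologyKey: kubernetes.io/hostname
                labelSelector:
                  matchLabels:
                    app: my-app
      containers:
        - name: my-app
          image: nginx:1.23
```

Keep in mind that these rules only take effect at scheduling time; pods that already ended up unevenly placed are not moved afterwards.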

https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/
https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/
https://www.youtube.com/watch?v=hv8lHqRZFJA&ab_channel=LukondeMwila

AWS
dov
answered 2 months ago
