Distributing pods evenly between worker nodes in EKS after scaling to and from 0


I am using AWS EKS 1.22. Our cluster has 2 worker nodes, and we want to scale our node group to 0 at night and back to 2 during working hours to save money. However, the first time I did this, the scheduler placed 74 of 81 pods on the same node, and that node eventually went into an "Unknown" status and shut down. How can I make sure that EKS distributes my workload evenly between the worker nodes the next time I scale to and from 0? Thanks in advance.

  • Hi, I'm facing the same issue. Can you share how you solved the problem?

1 Answer

Use pod anti-affinity rules to spread pods across nodes: they tell the scheduler to avoid placing pods of the same application on the same node. Choosing `preferredDuringSchedulingIgnoredDuringExecution` makes the rule a soft constraint, so pods can still be scheduled when it cannot be satisfied (for example, while only one node is back up). You can also set pod topology spread constraints to distribute pods matching certain labels evenly across nodes, which enforces the distribution more directly. A sketch of both approaches follows.
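As a minimal sketch, here is a hypothetical Deployment that combines a soft pod anti-affinity rule with a topology spread constraint (the name `my-app`, its labels, the replica count, and the image are placeholders, not something from the original question):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 4
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      affinity:
        podAntiAffinity:
          # Soft rule: prefer not to place two pods of this app on the same node
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app: my-app
                topologyKey: kubernetes.io/hostname
      topologySpreadConstraints:
        # Keep the per-node pod count for this app within 1 of the other nodes
        - maxSkew: 1
          topologyKey: kubernetes.io/hostname
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              app: my-app
      containers:
        - name: app
          image: nginx:1.23
          ports:
            - containerPort: 80
```

With `whenUnsatisfiable: ScheduleAnyway` the spread constraint stays a soft preference; switching it to `DoNotSchedule` makes it a hard requirement, which can leave pods Pending while the node group is still scaling back up from 0.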

https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/
https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/
https://www.youtube.com/watch?v=hv8lHqRZFJA&ab_channel=LukondeMwila

AWS
dov
Answered 2 months ago
