Distributing pods evenly between worker nodes in EKS after scaling to and from 0


I am using AWS EKS 1.22. Our cluster has 2 worker nodes, and we want to scale our node group to 0 at night and back to 2 during working hours to save money. However, the first time I did this, the scheduler placed 74 of 81 pods on the same node, which caused that node to fail after some time (it went into an "Unknown" status). How can I make sure that EKS distributes my workload evenly between my worker nodes the next time I scale to and from 0? Thanks in advance.

  • Hi, I'm facing the same issue. Can you share how you solved the problem?

1 Answer

Use pod anti-affinity rules to spread pods across nodes: they tell the scheduler not to place pods from the same application onto the same node, and choosing "preferredDuringSchedulingIgnoredDuringExecution" makes this a soft constraint rather than a hard requirement. You can also set pod topology spread constraints to distribute pods matching certain labels evenly across nodes, which lets you enforce the distribution explicitly.
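As a rough illustration, a Deployment could combine both ideas. This is a minimal sketch, assuming a workload named "my-app"; the name, labels, replica count, and image are placeholders, not anything from the question:

```yaml
# Sketch of a Deployment that spreads its pods across nodes.
# "my-app", the replica count, and the image are hypothetical placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 4
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      # Topology spread constraint: keep the per-node count of "my-app"
      # pods within a skew of 1.
      topologySpreadConstraints:
        - maxSkew: 1                          # at most 1 pod difference between nodes
          topologyKey: kubernetes.io/hostname # treat each node as a topology domain
          whenUnsatisfiable: ScheduleAnyway   # soft; use DoNotSchedule for a hard rule
          labelSelector:
            matchLabels:
              app: my-app
      # Soft pod anti-affinity: prefer not to co-locate pods of the same app.
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                topologyKey: kubernetes.io/hostname
                labelSelector:
                  matchLabels:
                    app: my-app
      containers:
        - name: my-app
          image: nginx:1.25                   # placeholder image
```

Note that soft settings (ScheduleAnyway, preferred anti-affinity) only express a preference: if only one node is Ready at scheduling time, for example right after scaling up from 0, pods can still pile onto it, so a hard constraint or waiting for both nodes to be Ready may be needed.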

https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/
https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/
https://www.youtube.com/watch?v=hv8lHqRZFJA&ab_channel=LukondeMwila

AWS
dov
answered 2 months ago
