Distributing pods evenly between worker nodes in EKS after scaling to and from 0


I am using AWS EKS 1.22. Our cluster has 2 worker nodes, and we want to scale our node group to 0 at night and back to 2 during working hours to save money. However, the first time I did this, the scheduler placed 74 of our 81 pods on the same node, and after a while that node failed (it went into an "Unknown" status). How can I make sure that EKS distributes my workload evenly between my worker nodes the next time I scale to and from 0? Thanks in advance.

  • Hi, I'm facing the same issue. Can you share how you solved the problem?

1 Answer

Use pod anti-affinity rules to spread pods from the same application across nodes; this discourages the scheduler from placing multiple replicas on the same node. With "preferredDuringSchedulingIgnoredDuringExecution" the rule is a soft constraint, so pods can still be scheduled when only one node is available. You can also set pod topology spread constraints to distribute pods matching certain labels evenly across nodes, which enforces a more even distribution.
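As a minimal sketch of how these two settings look together (the name `my-app`, the replica count, and the image are placeholders, not taken from the question):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                # hypothetical application name
spec:
  replicas: 4
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      # Soft anti-affinity: prefer not to co-locate replicas of this app
      # on the same node, but still schedule them if only one node exists.
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app: my-app
                topologyKey: kubernetes.io/hostname
      # Topology spread constraint: keep the per-node count of these pods
      # within maxSkew of each other; ScheduleAnyway keeps it a soft rule.
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: kubernetes.io/hostname
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              app: my-app
      containers:
        - name: my-app
          image: nginx:1.25   # placeholder image
```

Because both rules are soft, pods can still start while only the first node is ready during scale-up from 0; once the second node joins, newly created or restarted pods will be spread according to these constraints.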

https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/
https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/
https://www.youtube.com/watch?v=hv8lHqRZFJA&ab_channel=LukondeMwila

dov (AWS), answered 2 months ago
