Distributing pods evenly between worker nodes in EKS after scaling to and from 0


I am using AWS EKS 1.22. Our cluster has 2 worker nodes, and we want to scale our nodegroup to 0 at night and back to 2 during working hours to save money. However, the first time I did this, the scheduler placed 74 of 81 pods on the same node, causing that node to go into an "Unknown" status and shut down after some time. How can I make sure that EKS distributes my workload evenly between my worker nodes the next time I scale to and from 0? Thanks in advance
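For reference, this is roughly the kind of setup being described: a managed nodegroup that is allowed to scale down to 0 and back up to 2. This is only a sketch of an eksctl ClusterConfig; the cluster name, region, nodegroup name, and instance type below are placeholders, not values from the question.

```yaml
# Hypothetical eksctl config: a managed nodegroup that can be scaled to 0 at night
# and back to 2 during working hours.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster        # placeholder cluster name
  region: eu-west-1       # placeholder region
managedNodeGroups:
  - name: workers         # placeholder nodegroup name
    instanceType: m5.large
    minSize: 0            # allow scaling the group down to 0
    maxSize: 2
    desiredCapacity: 2    # working-hours capacity
```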

  • Hi, I'm facing the same issue. Can you share how you solved the problem?

1 Answer

Use pod anti-affinity rules to spread pods across nodes: they discourage scheduling pods of the same application onto the same node. Using "preferredDuringSchedulingIgnoredDuringExecution" makes this a soft constraint, so pods can still be scheduled if it cannot be satisfied. You can also set pod topology spread constraints to distribute pods matching certain labels evenly across nodes, which enforces the distribution you want. A sketch of both options is shown below.
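As a minimal sketch (the Deployment name, label `app: my-app`, replica count, and image are placeholders, not taken from the question), both techniques look roughly like this on a 1.22 cluster:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                 # placeholder name
spec:
  replicas: 4
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      # Topology spread constraint: keep the pod count per node within maxSkew.
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: kubernetes.io/hostname
          whenUnsatisfiable: ScheduleAnyway   # soft; DoNotSchedule makes it a hard constraint
          labelSelector:
            matchLabels:
              app: my-app
      # Soft pod anti-affinity: prefer not to co-locate pods of the same app on one node.
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                topologyKey: kubernetes.io/hostname
                labelSelector:
                  matchLabels:
                    app: my-app
      containers:
        - name: my-app
          image: nginx:1.23                   # placeholder image
```

Note that with the soft settings above, pods that become Pending while only the first node is Ready may still land on that node; using `whenUnsatisfiable: DoNotSchedule` keeps them Pending until a second node can satisfy the spread.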

https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/
https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/
https://www.youtube.com/watch?v=hv8lHqRZFJA&ab_channel=LukondeMwila

AWS
dov
answered 2 months ago
