
How to Protect Specific Node from Termination in EKS Managed Node Group Scale-In


I have an EKS managed node group with 3 nodes. One of the nodes is currently running a critical DB pod (a stateful workload). I need to reduce the node group's desired size from 3 to 2, but I want to make sure that the node running the DB pod is not terminated during this scale-in operation.

Question: What's the best way to safeguard the node hosting the DB pod from being removed during the node group's scale-down?

I'm considering these steps:

1. Identify the node running the DB pod.
2. Manually cordon and taint the other two nodes to make them preferred for termination.

Is this the right approach? Is there a better or more deterministic way to influence which node is removed when EKS scales down a managed node group?
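The two steps I have in mind could be sketched roughly like this with kubectl (the pod name db-0, namespace default, and node group label value my-nodegroup are placeholders, not real names from my cluster; whether the managed node group actually honors cordons/taints when choosing a victim is exactly what I'm unsure about):

```shell
# 1. Find which node is hosting the DB pod
#    ("db-0" and "default" are hypothetical names)
DB_NODE=$(kubectl get pod db-0 -n default -o jsonpath='{.spec.nodeName}')
echo "DB pod is on: ${DB_NODE}"

# 2. Cordon and taint every other node in the group so no new
#    pods land on them and they look like better removal candidates
#    ("my-nodegroup" is a placeholder for the node group name)
for node in $(kubectl get nodes -l eks.amazonaws.com/nodegroup=my-nodegroup \
    -o jsonpath='{.items[*].metadata.name}'); do
  if [ "$node" != "$DB_NODE" ]; then
    kubectl cordon "$node"
    kubectl taint nodes "$node" scale-in=candidate:NoSchedule
  fi
done
```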

  • I would argue that the correct option is to not run this pod in this cluster. Having a database inside a cluster that can scale seems inherently dangerous. At the very least, make sure you have very frequent snapshots in case it dies.
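  • For the snapshot suggestion, one in-cluster option is a VolumeSnapshot object. This is only a sketch: it assumes the EBS CSI driver and the external snapshot controller are installed, and the class name csi-aws-vsc and PVC name data-db-0 are hypothetical:

    ```yaml
    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:
      name: db-snapshot-1
    spec:
      volumeSnapshotClassName: csi-aws-vsc   # hypothetical snapshot class backed by ebs.csi.aws.com
      source:
        persistentVolumeClaimName: data-db-0 # hypothetical PVC name for the DB pod's volume
    ```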

1 Answer

Hello Sahil,

I suppose the right approach might be to run the database as a StatefulSet in Kubernetes. That way, if you configure the minimum number of replicas to be greater than 1, the main writer pod is never the only copy when a node is scaled down.

However, I suppose you would also need to provision a PersistentVolume and PersistentVolumeClaim backed by an EBS volume of a suitable type (io1/io2 would probably be a good fit for database workloads), so the data survives pod rescheduling.
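As a rough sketch of that idea (all names, the image, the size, and the IOPS value are illustrative, and it assumes the EBS CSI driver is installed in the cluster):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: io2-db              # hypothetical name
provisioner: ebs.csi.aws.com
parameters:
  type: io2
  iops: "4000"              # illustrative value
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db                  # hypothetical name
spec:
  serviceName: db
  replicas: 2               # more than one pod, as suggested above
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: db
          image: postgres:16   # illustrative image
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:      # one EBS-backed PVC per pod
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: io2-db
        resources:
          requests:
            storage: 100Gi   # illustrative size
```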

Please correct me if I am missing something!

answered 8 months ago
