AWS SSM Patch Manager and EKS node groups


Hello,

I've been digging into the documentation but I cannot find an answer to my questions. I've tried to test this myself, but so far none of my nodes has needed a reboot.

I have an EKS cluster with two node groups. I'm using Patch Manager (not Quick Setup). So far, I've set up my maintenance window to run weekly, patch one node at a time, and reboot if needed.

So my question is: when a node from the node group needs to be patched and restarted, are the pods moved to another node? What if my pod is busy? Is the node cordoned?

Thanks for your help.

JosRiv
Asked 3 months ago · 337 views

1 Answer

Before rebooting an instance, AWS Systems Manager attempts to gracefully drain the node. This involves cordoning the node, so that no new pods are scheduled onto it, and evicting the existing pods so they can be rescheduled onto other nodes in the cluster. During the drain, Kubernetes tries to maintain the desired number of replicas for each Deployment, so evicted pods are generally rescheduled to other healthy nodes. How smoothly this goes depends on your application's setup: if you have multiple nodes and your pods have the appropriate tolerations and affinity rules, Kubernetes will reschedule the evicted pods onto other healthy nodes. You can also use PodDisruptionBudgets to limit how much voluntary disruption (such as draining nodes during maintenance) your workloads experience at once.
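As a sketch of what the draining steps above mean in kubectl terms (the node name and the `app: my-app` label here are hypothetical placeholders, not values from the cluster in question):

```shell
# Cordoning marks the node unschedulable, so no new pods land on it.
kubectl cordon ip-10-0-1-23.ec2.internal

# Draining then evicts the existing pods so they get rescheduled elsewhere.
# --ignore-daemonsets is usually required, since DaemonSet pods are node-bound.
kubectl drain ip-10-0-1-23.ec2.internal \
  --ignore-daemonsets --delete-emptydir-data

# A PodDisruptionBudget limits voluntary disruptions: with minAvailable: 1,
# the eviction API will refuse to take down the last running replica of
# pods matching the selector, and the drain waits until it is safe.
kubectl apply -f - <<'EOF'
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: my-app
EOF
```

With a PDB in place, a busy pod is not killed immediately: the drain blocks until the eviction can proceed without violating the budget (deployments with multiple replicas on other nodes keep serving traffic throughout).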

Hope this clarifies things, and if it does, I would appreciate the answer being accepted so the community can benefit from the clarity. Thanks ;)

EXPERT
answered 3 months ago
EXPERT
Kallu
reviewed 3 months ago
