AWS SSM Patch Manager and EKS node groups


Hello,

I've been digging into the documentation, but I cannot find an answer to my doubts. I've been trying to test it, but so far none of my nodes has needed a reboot.

I have an EKS cluster with two node groups. I'm using Patch Manager (no quick setup). So far, I've set up my maintenance window to run weekly, patch one node at a time, and reboot if needed.

So my doubt is: when a node from the node group needs to be patched and restarted, are the pods moved to another node? What if my pod is busy? Is the node cordoned?

Thanks for your help.

JosRiv
Asked 3 months ago · 337 views
1 Answer

Before stopping an instance, AWS Systems Manager attempts to gracefully drain it. This involves cordoning the node so no new pods are scheduled onto it, then evicting the existing pods to other nodes in the cluster. Pods are generally rescheduled to other healthy nodes during the draining process, since Kubernetes tries to maintain the desired number of replicas for each Deployment. How smoothly the draining and rescheduling go also depends on your application's setup: if you have multiple nodes and your pods have the appropriate tolerations and affinity rules, Kubernetes will reschedule the evicted pods onto other healthy nodes. You can also use features like PodDisruptionBudgets to control the impact of voluntary disruptions, such as draining nodes during maintenance.
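As an illustration, a minimal PodDisruptionBudget could look like the sketch below; the name `my-app-pdb` and the `app: my-app` label are placeholders you would replace with your own workload's labels:

```yaml
# Hypothetical example: keep at least one pod matching "app: my-app"
# available during voluntary disruptions such as a node drain.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  minAvailable: 1            # alternatively, set maxUnavailable instead
  selector:
    matchLabels:
      app: my-app            # must match your Deployment's pod labels
```

With a budget like this in place, a drain during the maintenance window will not evict a pod if doing so would drop the workload below one available replica; the eviction is retried until the budget allows it.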

Hope this clarifies things, and if it does, I would appreciate the answer being accepted so that the community can benefit from the clarity. Thanks ;)

Expert
Answered 3 months ago
Reviewed by Kallu (Expert) 3 months ago
