EKS Managed Node Groups - PodEvictionFailure

I receive an error when attempting to upgrade a managed node group. I don't see anything in the documentation highlighting this specific error.

Currently the error reads:

PodEvictionFailure

Reached max retries while trying to evict pods from node ip-10-50-20-101.us-east-2.compute.internal in node group GrowOps-NodeGroup-1

istio-telemetry-55c8559456-86klx

Thanks for any assistance.

Regards


Asked 4 years ago · 3,923 views
1 Answer
Hi Joseph, I know it has been a long time already, but in case you still want to know the root cause: by default, the Auto Scaling groups that Amazon EKS creates for managed node groups use a rolling update strategy when upgrading cluster nodes. This is done to make sure your workloads are not interrupted while the upgrade happens behind the scenes.

If you have deployed a PodDisruptionBudget that causes a pod to reject the eviction request, you will get this error, because that pod then has to be forcibly evicted. You can either evict the pod yourself with kubectl, or select the "force update" strategy when you upgrade your EKS cluster nodes. Force update evicts the pods regardless, since the node is going away anyway, and does not respect PodDisruptionBudgets. Before doing so, be aware that this will cause disruption to your workload, i.e. potential downtime.
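In case it helps, here is a minimal sketch of both options. The node group name comes from your error message; the cluster name is a placeholder, and the istio-system namespace is an assumption based on where istio-telemetry pods usually run, so adjust both to match your setup.

# Find the PodDisruptionBudget that is blocking the eviction
kubectl get pdb --all-namespaces

# Option 1: delete the blocking pod yourself, then retry the upgrade
# (istio-system is assumed; check the pod's actual namespace first)
kubectl delete pod istio-telemetry-55c8559456-86klx -n istio-system

# Option 2: rerun the node group upgrade with the force flag, which
# evicts pods even when a PodDisruptionBudget would otherwise block it
aws eks update-nodegroup-version \
  --cluster-name <your-cluster-name> \
  --nodegroup-name GrowOps-NodeGroup-1 \
  --force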

Answered 3 years ago
