EKS Managed Node Groups - PodEvictionFailure


I receive an error when attempting to upgrade a managed node group. I don't see anything in the documentation highlighting this specific error.

Currently the error reads:

PodEvictionFailure

Reached max retries while trying to evict pods from node ip-10-50-20-101.us-east-2.compute.internal in node group GrowOps-NodeGroup-1

istio-telemetry-55c8559456-86klx

Thanks for any assistance.

Regards


Asked 4 years ago · Viewed 4046 times
1 Answer

Hi Joseph, I know it has been a long time already, but in case you still want to know the root cause: by default, the Auto Scaling groups that Amazon EKS creates for managed node groups use a rolling update strategy when upgrading the cluster nodes. This is done so that your workloads are not interrupted while the upgrade happens behind the scenes.

If you have deployed a PodDisruptionBudget that causes a pod to reject the eviction request, you will get this error, because that pod then has to be forcibly evicted. You can do that with kubectl, or you can select the "Force update" strategy when upgrading your EKS cluster nodes; this forces the pods off the node, since the node is going away anyway, and does not honor the PodDisruptionBudget. Before doing so, be aware that this can disrupt your workload, i.e. cause potential downtime. A rough sketch of both approaches follows.
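A minimal sketch of both options. The pod and node group names come from the error in the question; the istio-system namespace, cluster name, and region are assumptions you would need to adjust:

# Find the PodDisruptionBudget that is blocking the eviction
kubectl get pdb --all-namespaces

# Option A: delete the blocking pod directly with kubectl
# (unlike the eviction API, a plain delete does not honor PodDisruptionBudgets;
# istio-system is an assumed namespace for the istio-telemetry pod)
kubectl delete pod istio-telemetry-55c8559456-86klx -n istio-system

# Option B: re-run the managed node group upgrade with --force, which
# proceeds even when pods cannot be drained due to a PodDisruptionBudget
aws eks update-nodegroup-version \
    --cluster-name my-cluster \
    --nodegroup-name GrowOps-NodeGroup-1 \
    --region us-east-2 \
    --force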


Answered 3 years ago
