EKS Managed Node Groups - PodEvictionFailure

I receive an error when attempting to upgrade a managed node group. I don't see anything in the documentation highlighting this specific error.

Currently the error reads:

PodEvictionFailure

Reached max retries while trying to evict pods from node ip-10-50-20-101.us-east-2.compute.internal in node group GrowOps-NodeGroup-1

istio-telemetry-55c8559456-86klx

Thanks for any assistance.

Regards

Asked 4 years ago · 4005 views

1 Answer

Hi Joseph, I know it has been a long time already, but in case you still want to know the root cause: by default, Amazon EKS upgrades managed node groups using the "Rolling update" strategy. This is done to make sure you don't have any interruptions in your workload while the upgrade happens behind the scenes; each node is drained, i.e. its pods are evicted, before the node is replaced.

If you have deployed a PodDisruptionBudget that makes a pod reject the eviction request, you will get this error, because in that case the pod has to be evicted by force. You can do that with kubectl, or, when you upgrade your EKS cluster nodes, simply select the "Force update" strategy. This forces the pods to be evicted, because the node will go away anyway, and it will not respect PodDisruptionBudgets for you. Before doing so, please be aware that this will cause disruption to your workload, i.e. potential downtime.
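For example, you could first check whether a PodDisruptionBudget is blocking the eviction, and then either force-delete the pod or rerun the upgrade with the force option. A minimal sketch using kubectl and the AWS CLI; the pod and node group names are taken from the error above, while the istio-system namespace and <your-cluster> are assumptions you should adjust to your environment:

    # List all PodDisruptionBudgets; an ALLOWED DISRUPTIONS value of 0
    # means evictions covered by that budget will be rejected.
    kubectl get pdb --all-namespaces

    # Option 1: force-delete the blocking pod yourself. This bypasses
    # the eviction API, and therefore the PodDisruptionBudget.
    # (istio-system is an assumed namespace; check where the pod runs.)
    kubectl delete pod istio-telemetry-55c8559456-86klx -n istio-system --force --grace-period=0

    # Option 2: rerun the managed node group upgrade with the
    # "Force update" strategy. (<your-cluster> is a placeholder.)
    aws eks update-nodegroup-version --cluster-name <your-cluster> --nodegroup-name GrowOps-NodeGroup-1 --force

Either way the pod is terminated without the PodDisruptionBudget being honored, so run this during a window where a brief interruption of that workload is acceptable.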

Answered 3 years ago
