EKS Managed Node Groups - PodEvictionFailure

I receive an error when attempting to upgrade a managed node group. I don't see anything in the documentation highlighting this specific error.

Currently the error reads:

PodEvictionFailure

Reached max retries while trying to evict pods from node ip-10-50-20-101.us-east-2.compute.internal in node group GrowOps-NodeGroup-1

istio-telemetry-55c8559456-86klx

Thanks for any assistance.

Regards

asked 4 years ago · 4,052 views
1 Answer

Hi Joseph, I know it has been a long time already, but in case you still want to know the root cause: by default, Amazon EKS managed node groups use the "Rolling update" strategy when upgrading cluster nodes. This is done so that your workloads are not interrupted while the upgrade happens behind the scenes: EKS drains each node by sending eviction requests to its pods before replacing it.

If you have deployed a PodDisruptionBudget that causes a pod to reject the eviction request, the upgrade eventually hits its retry limit and reports PodEvictionFailure, because that pod can only be removed by force. You can force the eviction yourself with kubectl, or select the "Force update" strategy when upgrading your node group; that strategy evicts the pods regardless of any PodDisruptionBudget, since the node is going away anyway. Before doing so, be aware that this can disrupt your workload, i.e., cause potential downtime.
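
In case it helps, here is a minimal sketch of both options. The pod and node group names are taken from the question; the istio-system namespace and the cluster name placeholder are assumptions, so adjust them for your setup. The --force flag on aws eks update-nodegroup-version is the CLI equivalent of choosing "Force update" in the console.

# List PodDisruptionBudgets across all namespaces; a PDB showing
# 0 allowed disruptions is what makes the eviction request fail.
kubectl get pdb --all-namespaces

# Option 1: delete the stuck pod directly. Deletion bypasses the
# eviction API, so the PDB cannot block it, and the pod's controller
# reschedules it on another node. (The istio-system namespace is an
# assumption here.)
kubectl delete pod istio-telemetry-55c8559456-86klx -n istio-system

# Option 2: retry the node group upgrade with the force strategy,
# which evicts pods even when a PDB would reject the eviction.
aws eks update-nodegroup-version \
    --cluster-name <your-cluster-name> \
    --nodegroup-name GrowOps-NodeGroup-1 \
    --force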

answered 3 years ago
