EKS Upgrade Failed from 1.13 to 1.14


We had a cluster created on version 1.12. We managed to upgrade it to 1.13 successfully, and we also upgraded the nodes.
It ran for two weeks, and today we decided to upgrade it to 1.14.
The cluster upgrade from 1.13 to 1.14 was triggered from the AWS EKS console. It sat in the 'Updating' state for more than an hour before being marked as failed. We checked the errors section; it showed none.
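In case it helps with debugging, the update history can also be inspected from the AWS CLI, which sometimes carries error details the console does not surface. A minimal sketch, with `my-cluster` as a placeholder for our actual cluster name:

```bash
# List the update IDs recorded against the cluster.
aws eks list-updates --name my-cluster

# Describe one update; its "errors" array may explain the failure.
aws eks describe-update --name my-cluster --update-id <update-id>
```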

When I check the actual cluster version using the kubectl version command, it shows v1.14.9-eks-f459c0.
The AWS console still shows 1.13, and when I try to upgrade again, it fails. We have CoreDNS, the CNI plugin, and kube-proxy all at the expected versions listed in https://docs.aws.amazon.com/eks/latest/userguide/update-cluster.html
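For completeness, this is roughly how the two versions compare from the command line; a minimal sketch, with `my-cluster` again standing in for our cluster name:

```bash
# Version reported by the API server itself (this is where we see v1.14.9-eks-f459c0).
kubectl version

# Version recorded on the EKS control plane, which is what the console displays.
aws eks describe-cluster --name my-cluster \
  --query 'cluster.version' --output text
```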

Any pointers would be very much appreciated, as this is a production environment.
Thanks,
Abhishek

Asked 4 years ago · 307 views
1 Answer

Well, we contacted AWS support. They debugged it and got back to us saying the cause was that the 'security groups per ENI' limit on our account was set to 1. They increased it to 5, and the upgrade then succeeded.
Neither party is sure why the limit had been set to 1.
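For anyone hitting the same thing: the value support adjusted should correspond to the VPC quota for security groups per network interface. A rough sketch of checking it with the Service Quotas CLI; the exact quota name used in the filter is my assumption, so adjust it if it differs in your account or region:

```bash
# Look up the current 'security groups per network interface' quota
# (quota name assumed; tweak the contains() filter if needed).
aws service-quotas list-service-quotas --service-code vpc \
  --query "Quotas[?contains(QuotaName, 'Security groups per network interface')].[QuotaName,Value]" \
  --output table
```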

Answered 4 years ago
