EKS Upgrade Failed from 1.13 to 1.14

We had a cluster created on version 1.12. We managed to upgrade it to 1.13 successfully, and we also upgraded the nodes.
It ran for two weeks, and today we decided to upgrade it to 1.14.
The cluster upgrade from 1.13 to 1.14 was triggered from the AWS EKS console. It sat in the 'updating' state for more than an hour before being marked as failed. We checked the errors section; it showed none.
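In case it is useful to anyone debugging the same thing, the update history and error details can also be pulled through the AWS CLI rather than the console (the cluster name below is a placeholder):

    # list recent update IDs for the cluster
    aws eks list-updates --name my-cluster

    # show the status, parameters, and errors for one update
    aws eks describe-update --name my-cluster --update-id <update-id>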

When I check the actual cluster version with the kubectl version command, it shows v1.14.9-eks-f459c0.
The AWS console still shows 1.13, and when I try the upgrade again, it fails. We have CoreDNS, the VPC CNI plugin, and kube-proxy all at the expected versions listed in https://docs.aws.amazon.com/eks/latest/userguide/update-cluster.html
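Concretely, these are the sorts of checks we ran (plain kubectl; the deployment/daemonset names are the EKS defaults in kube-system):

    # control plane version as the API server reports it
    kubectl version

    # add-on image versions
    kubectl describe deployment coredns --namespace kube-system | grep Image
    kubectl describe daemonset aws-node --namespace kube-system | grep Image
    kubectl describe daemonset kube-proxy --namespace kube-system | grep Image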

Any pointers would be very much appreciated as this is a production environment.
Thanks,
Abhishek

Asked 4 years ago · 312 views
1 Answer

Well, we contacted AWS support. They debugged it and got back to us saying it was because the security-groups-per-ENI limit on our account was set to 1. They increased it to 5, and the upgrade then succeeded.
Neither party is sure why the limit was set to 1.
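For anyone else hitting this: the limit shows up in Service Quotas as "Security groups per network interface" under the vpc service, so you should be able to inspect and raise it yourself. The quota code below is the one I believe applies; verify it in your own account first:

    # current value of the security-groups-per-ENI limit
    aws service-quotas get-service-quota --service-code vpc --quota-code L-2AFB9258

    # request an increase to 5
    aws service-quotas request-service-quota-increase --service-code vpc --quota-code L-2AFB9258 --desired-value 5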

Answered 4 years ago
