Lost node/pod/container access from CLI, nodes show Unknown status in console, EKSClusterRoleLatest missing


Overnight, I lost access to my pods/containers via k9s. As far as I know, no changes were made. I can still see the resources - nodes, pods, containers - listed in my namespace, but I can no longer view logs or shell into the containers. In the EKS console, the nodes for this node group show Unknown status.
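For reference, the same symptoms can be cross-checked from the CLI; the node and pod names below are placeholders:

    kubectl get nodes -o wide                          # node STATUS column shows NotReady/Unknown
    kubectl describe node ip-10-0-1-23.ec2.internal    # Conditions section gives the reason
    kubectl logs my-pod -n my-namespace                # fails if the API server cannot reach the kubelet
    kubectl exec -it my-pod -n my-namespace -- sh      # same failure mode as logs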

I tried updating the Kubernetes version in the console and got an error (screenshot omitted).

When I try to check the cluster's IAM access role, EKSClusterRoleLatest fails to resolve in IAM (screenshots omitted).
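One way to confirm the mismatch from the CLI is to compare the role ARN the cluster is configured with against what actually exists in IAM; the cluster name below is a placeholder:

    aws eks describe-cluster --name my-cluster --query "cluster.roleArn" --output text
    aws iam get-role --role-name EKSClusterRoleLatest    # returns NoSuchEntity if the role is gone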

My user's permissions appear to be fine for kubectl commands:

    ➜ ~ kubectl auth can-i get pods/exec
    yes
    ➜ ~ kubectl auth can-i create pods/exec
    yes

It seems some of the service accounts are having problems.

  • Update: creating my own EKSClusterRoleLatest with the standard EKS cluster role permissions restored access (a rough CLI sketch is below). I believe this role is normally standard/populated by AWS rather than user-created. I'm still unsure how that role was lost and why my cluster was still pointed at it.
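A minimal sketch of recreating that role from the CLI, assuming the standard EKS service trust policy and the AWS-managed AmazonEKSClusterPolicy (the trust policy file name is a placeholder):

    cat > eks-trust.json <<'EOF'
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": { "Service": "eks.amazonaws.com" },
          "Action": "sts:AssumeRole"
        }
      ]
    }
    EOF
    # Create the role with the EKS trust policy, then attach the managed cluster policy.
    aws iam create-role --role-name EKSClusterRoleLatest \
      --assume-role-policy-document file://eks-trust.json
    aws iam attach-role-policy --role-name EKSClusterRoleLatest \
      --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy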
