Unable to access EKS cluster from EC2 instance, despite being able to access other clusters. "couldn't get current server API group list: the server has asked for the client to provide credentials"

Hi,

I have 2 EKS clusters: EKS_accessible and EKS_not_accessible. I am attempting to access both of these from 2 different environments: my local machine, and an EC2 instance.

  • On my local machine, I call aws sts get-caller-identity and, let's say, I am assuming IAM role local.
  • On the EC2 instance, let's say I am assuming IAM role remote.
  • I have allow-listed both my local machine's public IP and the EC2 instance's public IP on both EKS clusters. As a result, I am able to reach each cluster's endpoint from both machines (kubeconfig setup sketched after this list).
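
For context, kubectl on each machine is pointed at the clusters in the usual way, roughly as below (the region is a placeholder; running aws sts get-caller-identity first is how I confirm which role is in effect):

    # confirm which IAM principal this machine is currently using
    aws sts get-caller-identity

    # write/refresh the kubeconfig entry for each cluster
    aws eks update-kubeconfig --name EKS_accessible --region <region>
    aws eks update-kubeconfig --name EKS_not_accessible --region <region>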

From both my local machine and the EC2 instance, when I run a command like kubectl get pods -A against cluster EKS_accessible I obtain a result without a problem.

However,

  • From my local machine, when I run the command kubectl get pods -A against cluster EKS_not_accessible, I obtain a result without a problem.
  • From the EC2 instance, when I run the command kubectl get pods -A against cluster EKS_not_accessible, I get an error similar to the one in this stackoverflow post:
E0804 14:18:49.784346    9986 memcache.go:265] couldn't get current server API group list: the server has asked for the client to provide credentials
E0804 14:18:49.785400    9986 memcache.go:265] couldn't get current server API group list: the server has asked for the client to provide credentials
E0804 14:18:49.786149    9986 memcache.go:265] couldn't get current server API group list: the server has asked for the client to provide credentials
E0804 14:18:49.787951    9986 memcache.go:265] couldn't get current server API group list: the server has asked for the client to provide credentials
E0804 14:18:49.789820    9986 memcache.go:265] couldn't get current server API group list: the server has asked for the client to provide credentials
error: You must be logged in to the server (the server has asked for the client to provide credentials)

I have done extensive checking to verify that the route tables and security groups are configured to allow access to cluster EKS_not_accessible from the EC2 instance, and I don't believe the problem lies there. I think this is an RBAC/IAM issue.
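
As a sanity check on the identity side, from my local machine (which can already reach EKS_not_accessible) I can dump the aws-auth ConfigMap to see which IAM roles are mapped, and on the EC2 instance confirm which role the CLI is actually resolving (the context name below is a placeholder):

    # from the local machine: which IAM roles/users does the cluster trust?
    kubectl --context <EKS_not_accessible-context> -n kube-system get configmap aws-auth -o yaml

    # from the EC2 instance: which role is actually being assumed?
    aws sts get-caller-identity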

I found the following article that supports this claim, and I want to explore it further, but I don't know where to go from here. Checking the CloudWatch logs, I see an error similar to the one mentioned in the article:

 time="2022-12-26T20:46:48Z" level=warning msg="access denied" client="127.0.0.1:43440" error="sts getCallerIdentity failed: error from AWS (expected 200, got 403). Body: {"Error":{"Code":"InvalidClientTokenId","Message":"The security token included in the request is invalid.","Type":"Sender"},"RequestId":"a9068247-f1ab-47ef-b1b1-cda46a27be0e"}" method=POST path=/authenticate

The article mentions:

If the issue is caused by using the incorrect IAM entity for kubectl, then review the kubectl kubeconfig and AWS CLI configuration. Make sure that you're using the correct IAM entity. For example, suppose that the logs look similar to the following. This output means that the IAM entity used by kubectl can't be validated. Be sure that the IAM entity used by kubectl exists in IAM and the entity's programmatic access is turned on.

This is why I believe it is an RBAC/IAM issue, though perhaps it could also be a security group problem at either the cluster level or the node group level.
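
To see exactly which credentials kubectl presents from the EC2 instance, I can also inspect the exec section of the kubeconfig and generate the token by hand; the token embeds a presigned sts get-caller-identity call for whatever role the CLI resolves at that moment:

    # which exec plugin and arguments does kubectl use for this cluster?
    kubectl config view --minify -o jsonpath='{.users[0].user.exec}'

    # produce the same bearer token kubectl would send to the API server
    aws eks get-token --cluster-name EKS_not_accessible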

How do I solve this problem with the given information? Any help is appreciated, thank you.

EDIT: I just added the role remote (the role assumed within the EC2 instance, referenced above) to the aws-auth ConfigMap of EKS_not_accessible, and all of a sudden I am able to list the pods and access the cluster from within the EC2 instance.
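
For reference, the entry I added maps the role into the cluster along these lines; the eksctl command below is an equivalent way of making the same change (account ID, region, and the Kubernetes group are placeholders):

    # edit the ConfigMap by hand ...
    kubectl -n kube-system edit configmap aws-auth

    # ... or add the mapping with eksctl
    eksctl create iamidentitymapping \
      --cluster EKS_not_accessible \
      --region <region> \
      --arn arn:aws:iam::<account-id>:role/remote \
      --username remote \
      --group <kubernetes-group>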

However, this role is not present in the aws-auth ConfigMap of cluster EKS_accessible, so how am I even able to access that cluster from the EC2 instance? Is there some other configuration at play?

1 Answer

Is it possible that remote is the role you used to create the cluster EKS_accessible? If so, it would have cluster administrator access, which is granted outside of the aws-auth ConfigMap.

As a side note, if that is the case, a best practice would be to remove those cluster-admin privileges from the cluster creator, as described in the EKS Best Practices Guide.
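
One way to check is to look up who made the CreateCluster call; assuming it is still within CloudTrail's 90-day event history, something like this shows the identity that created each cluster (region is a placeholder):

    # find the CreateCluster event and the IAM identity that made the call
    aws cloudtrail lookup-events \
      --region <region> \
      --lookup-attributes AttributeKey=EventName,AttributeValue=CreateCluster \
      --query 'Events[].{Time:EventTime,User:Username}'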

AWS
answered 8 days ago
