Configuring EKS Cross-Account Access Entries using IAM IdentityCenter (IDC)


I've been using the guide provided here on re:Post to configure access to my EKS cluster.

I have the following account structure:

  • mycompany-management (management/billing account)
    • mycompany-development
    • mycompany-production

In the management account, which is where IAM IDC is configured, I have IAM IDC (IdentityStore) users that belong to IdentityStore groups, and IAM IDC permission sets are assigned to users via those groups. At the moment, this works exactly as expected: I have a group global-infra-admin, a user naftuli.kay (me) who is in that group, and a permission set infra-admin that is assigned to members of that group across all of my AWS accounts. All of these resources are created in Terraform; everything I am doing exists as code.
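For context, a rough sketch of how that group-to-permission-set assignment can be expressed in Terraform follows; the resource names, the group lookup, and the target account ID are illustrative assumptions rather than my actual code:

data "aws_ssoadmin_instances" "this" {}

data "aws_identitystore_group" "infra_admins" {
  identity_store_id = tolist(data.aws_ssoadmin_instances.this.identity_store_ids)[0]

  alternate_identifier {
    unique_attribute {
      attribute_path  = "DisplayName"
      attribute_value = "global-infra-admin"
    }
  }
}

resource "aws_ssoadmin_permission_set" "infra_admin" {
  name         = "infra-admin"
  instance_arn = tolist(data.aws_ssoadmin_instances.this.arns)[0]
}

# Assign the permission set to the group in one member account (example account ID).
resource "aws_ssoadmin_account_assignment" "infra_admin_production" {
  instance_arn       = tolist(data.aws_ssoadmin_instances.this.arns)[0]
  permission_set_arn = aws_ssoadmin_permission_set.infra_admin.arn
  principal_id       = data.aws_identitystore_group.infra_admins.group_id
  principal_type     = "GROUP"
  target_id          = "123456789012"
  target_type        = "AWS_ACCOUNT"
}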

I have an EKS cluster in my VPC, again managed via Terraform, and all indications are that it is healthy and working as expected. Since all of my infrastructure is managed by Terraform, there is an IAM role that Terraform assumes in order to provision resources. The access config's authentication mode is API_AND_CONFIG_MAP. At this point, I cannot see inside of my cluster or authenticate into it because there is no correlation between my user and the Terraform IAM role used to create the cluster.
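For reference, the relevant part of the cluster definition looks roughly like this (a trimmed sketch; the IAM role and subnet references are placeholders, not my actual configuration):

resource "aws_eks_cluster" "default" {
  name     = "default"
  role_arn = aws_iam_role.eks_cluster.arn # hypothetical cluster service role

  access_config {
    # Allow both EKS access entries (API) and the legacy aws-auth ConfigMap.
    authentication_mode = "API_AND_CONFIG_MAP"
  }

  vpc_config {
    subnet_ids = var.private_subnet_ids # hypothetical variable
  }
}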

To grant access into the cluster, I'm following the guide linked above and creating a Terraform aws_eks_access_entry to bind an SSO identity into the cluster.

In the production account, when I log in and get the caller identity, I get an ARN looking like this:

arn:aws:sts::123456789012:assumed-role/AWSReservedSSO_infra-admin_deadbeefcafe0123/naftuli.kay

I have transformed this into the format that is specified in the linked post:

arn:aws:iam::123456789012:role/AWSReservedSSO_infra-admin_deadbeefcafe0123

My Terraform looks like this to simply bind this principal to the system:masters group:

resource "aws_eks_access_entry" "default" {
  cluster_name      = aws_eks_cluster.default.name
  principal_arn     = var.principal_arn
  kubernetes_groups = ["system:masters"]
  type              = "STANDARD"
}

When I run this, I get an error that the group name cannot begin with system:, and the docs suggest this is not allowed, so I suppose this error is expected.

I want to be able to grant this IAM principal access to full administrative rights within the EKS cluster. How can I go about granting this level of access using the aws_eks_access_entry API/Terraform resource? If there is another Kubernetes group which grants this level of access, what is it named? I cannot see anything inside of my EKS cluster due to my user not being the same principal that created the cluster.

1 Answer

After a ton of experimentation and search crawling, I was able to arrive at the solution, though it absolutely wasn't clear from the documentation or the linked re:Post doc.

Available EKS IAM Access Policies

This doc within the EKS documentation lists the available EKS access policies; you'll need these ARNs depending on what you want to do:

  • arn:aws:eks::aws:cluster-access-policy/AmazonEKSAdminPolicy: admin permissions but not cluster admin
  • arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy: essentially wildcard permissions on the entire cluster
  • arn:aws:eks::aws:cluster-access-policy/AmazonEKSAdminViewPolicy: view-only permissions for the entire cluster
  • arn:aws:eks::aws:cluster-access-policy/AmazonEKSEditPolicy: edit most Kubernetes resources
  • arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy: view most Kubernetes resources

Additionally, you can apply these access policies across the entire cluster or within certain namespaces. Namespaces can include wildcards, so if you want to grant access to every namespace starting with dev-, use dev-*.
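For example, a namespace-scoped association might look like this (a sketch, assuming the principal ARN we construct in the next section and the AmazonEKSEditPolicy from the list above):

resource "aws_eks_access_policy_association" "dev_edit" {
  cluster_name  = aws_eks_cluster.default.name
  policy_arn    = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSEditPolicy"
  principal_arn = local.formatted_arn

  access_scope {
    type       = "namespace"
    namespaces = ["dev-*"] # every namespace starting with dev-
  }
}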

Creating an ARN for IAM IdentityCenter (IDC) Cross-Account Access

This works the same whether you are accessing things across accounts or within the same account, but there is one notable detail: if you are cross-account, build your ARN with the account you are currently in. If your IAM IDC is configured in an account named management but your cluster is in an account named production, use the account ID from production, not the account ID from management.

Now, we need to form our ARN to be able to use it in our configuration.

We'll do this with a Terraform code example:

locals {
  # Set this to the account ID you are currently in (the cluster's account).
  current_account_id = "123456789012"
  # Set this to the region in which IAM IdentityCenter is configured in the source account.
  source_idc_region = "us-west-2"
  # Set this to the name of the permission set.
  perm_set_name = "infra-admin"
  # Get the ARN of your permission set in the account where it is defined. It should
  # end in `ps-abcdef01234`; remove the `ps-` prefix and keep the hexadecimal slug.
  perm_set_slug = "abcdef01234"

  # Now, form the ARN.
  formatted_arn = "arn:aws:iam::${local.current_account_id}:role/aws-reserved/sso.amazonaws.com/${local.source_idc_region}/AWSReservedSSO_${local.perm_set_name}_${local.perm_set_slug}"
}
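Alternatively, instead of hand-assembling the ARN, you may be able to look up the provisioned role directly with the aws_iam_roles data source. This is a sketch run from the cluster's account, and the regex assumes the permission set name used above:

data "aws_iam_roles" "infra_admin_sso" {
  name_regex  = "AWSReservedSSO_infra-admin_.*"
  path_prefix = "/aws-reserved/sso.amazonaws.com/"
}

locals {
  # There should be exactly one matching role per permission set in each account.
  looked_up_arn = one(data.aws_iam_roles.infra_admin_sso.arns)
}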

Creating the Access Entry and the Policy Association

An aws_eks_access_entry represents "authentication" (identifying who a user is), and an aws_eks_access_policy_association represents "authorization" (identifying what a user can do), so we need both of these to grant access. An aws_eks_access_entry without an aws_eks_access_policy_association grants a user the ability to connect, but gives them no permissions whatsoever to do anything within the cluster.

Figuring out the principal_arn (principalArn in the error messages) was very difficult; you need the long ARN we just created in both places.

My intention is to grant wildcard cluster admin permissions to any user utilizing this permission set, so I will be using arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy as the policy, and specifying access_scope.type = "cluster" to make these cluster-wide permissions.

Here we go:

resource "aws_eks_access_entry" "admin" {
  type          = "STANDARD"
  cluster_name  = aws_eks_cluster.default.name
  principal_arn = local.formatted_arn
}

resource "aws_eks_access_policy_association" "admin" {
  cluster_name  = aws_eks_cluster.default.name
  policy_arn    = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy"
  principal_arn = local.formatted_arn

  access_scope {
    type = "cluster"
  }

  # force the creation of the entry before the creation of the policy association
  depends_on = [aws_eks_access_entry.admin]
}

You can also add kubernetes_groups to aws_eks_access_entry to place the principal in specific groups when they access the cluster, and you can change the access_scope to type namespace and specify namespaces or namespace wildcards if you want finer-grained permissions than going cluster-wide.
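For instance, here is a sketch of an entry that places the principal in a custom (non-system:) group; the group name is an assumption for illustration, and it only grants anything once an RBAC binding references it:

resource "aws_eks_access_entry" "dev_team" {
  cluster_name      = aws_eks_cluster.default.name
  principal_arn     = local.formatted_arn
  type              = "STANDARD"
  kubernetes_groups = ["dev-team"] # must be referenced by a RoleBinding/ClusterRoleBinding to grant permissions
}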

The big challenges that I faced here were:

  1. the format of the principal ARN for an AWS IAM IdentityCenter permission set, especially the long-form version and which account ID to use
  2. that the principal_arn/principalArn must be the same value for both resources. I thought for a while that the policy association would reference the access entry's ARN, but this is not the case: both resources are identified by the principal ARN.

Hopefully this helps someone in the future.

answered a year ago
