How do I install Karpenter in my Amazon EKS cluster?

I want to use Karpenter to scale the worker nodes within my Amazon Elastic Kubernetes Service (Amazon EKS) cluster.

Short description

Karpenter is an open-source node provisioning project built for Kubernetes. Adding Karpenter to a Kubernetes cluster can dramatically improve the efficiency and cost of running workloads on that cluster. For more information, see Karpenter documentation.

The following steps show you how to deploy Karpenter in an Amazon EKS cluster.

Resolution

Prerequisites

Before you begin, complete the following:

  • Install the Helm client, version 3.11.0 or later. For installation instructions, see the Helm documentation.
  • Install eksctl. For installation instructions, see the eksctl user guide.
  • Create these environment variables:
export CLUSTER_NAME=your_cluster_name
export KARPENTER_VERSION=your_required_version
export CLUSTER_ENDPOINT="$(aws eks describe-cluster --name ${CLUSTER_NAME} --query "cluster.endpoint" --output text)"
export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query 'Account' --output text)

Note: Replace your_cluster_name with your EKS cluster name and your_required_version with your required Karpenter version number. See karpenter/releases on GitHub for available Karpenter versions.

Create IAM roles for Karpenter and the Karpenter controller

1.    Create the AWS Identity and Access Management (IAM) role for the nodes provisioned with Karpenter. The Karpenter node role (KarpenterInstanceNodeRole) is similar to the Amazon EKS node IAM role. See Amazon EKS node IAM role to create the KarpenterInstanceNodeRole using the AWS Management Console or the AWS Command Line Interface (AWS CLI).

Note: If you get errors when you run AWS CLI commands, make sure that you're using the most recent version of the AWS CLI. See Troubleshooting AWS CLI errors - AWS Command Line Interface.

2.    Attach these IAM policies to the KarpenterInstanceNodeRole IAM role that you created. If you prefer the AWS CLI, a sketch follows the list.

AmazonEKSWorkerNodePolicy
AmazonEKS_CNI_Policy
AmazonEC2ContainerRegistryReadOnly
AmazonSSMManagedInstanceCore
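
The linked documentation covers the console steps. If you prefer the AWS CLI, the following is a minimal sketch that creates the role with an Amazon EC2 trust policy and attaches the four managed policies listed above. The role name KarpenterInstanceNodeRole matches the rest of this article; the file name node-trust-policy.json is only an example.

cat <<EOF > node-trust-policy.json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": { "Service": "ec2.amazonaws.com" },
            "Action": "sts:AssumeRole"
        }
    ]
}
EOF

# Create the node role with the EC2 trust policy
aws iam create-role --role-name KarpenterInstanceNodeRole \
    --assume-role-policy-document file://node-trust-policy.json

# Attach the four managed policies listed above
for POLICY in AmazonEKSWorkerNodePolicy AmazonEKS_CNI_Policy \
    AmazonEC2ContainerRegistryReadOnly AmazonSSMManagedInstanceCore; do
    aws iam attach-role-policy --role-name KarpenterInstanceNodeRole \
        --policy-arn "arn:aws:iam::aws:policy/${POLICY}"
done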

Configure the IAM role for the Karpenter controller

Create the KarpenterControllerRole IAM role for the Karpenter controller. The Karpenter controller uses IAM roles for service accounts (IRSA).

1.    Create a controller-policy.json document with the following permissions:

echo '{
    "Statement": [
        {
            "Action": [
                "ssm:GetParameter",
                "iam:PassRole",
                "ec2:DescribeImages",
                "ec2:RunInstances",
                "ec2:DescribeSubnets",
                "ec2:DescribeSecurityGroups",
                "ec2:DescribeLaunchTemplates",
                "ec2:DescribeInstances",
                "ec2:DescribeInstanceTypes",
                "ec2:DescribeInstanceTypeOfferings",
                "ec2:DescribeAvailabilityZones",
                "ec2:DeleteLaunchTemplate",
                "ec2:CreateTags",
                "ec2:CreateLaunchTemplate",
                "ec2:CreateFleet",
                "ec2:DescribeSpotPriceHistory",
                "pricing:GetProducts"
            ],
            "Effect": "Allow",
            "Resource": "*",
            "Sid": "Karpenter"
        },
        {
            "Action": "ec2:TerminateInstances",
            "Condition": {
                "StringLike": {
                    "ec2:ResourceTag/Name": "*karpenter*"
                }
            },
            "Effect": "Allow",
            "Resource": "*",
            "Sid": "ConditionalEC2Termination"
        }
    ],
    "Version": "2012-10-17"
}' > controller-policy.json

2.    Create an IAM policy using this controller-policy.json document.

aws iam create-policy --policy-name KarpenterControllerPolicy-${CLUSTER_NAME} --policy-document file://controller-policy.json

3.    Create an IAM OIDC identity provider for your cluster using the following eksctl command:

eksctl utils associate-iam-oidc-provider --cluster ${CLUSTER_NAME} --approve

Note: Make sure that your eksctl version is 0.32.0 or later.
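
To optionally confirm the association, you can compare the cluster's OIDC issuer with the providers that are registered in IAM:

aws eks describe-cluster --name ${CLUSTER_NAME} --query "cluster.identity.oidc.issuer" --output text
aws iam list-open-id-connect-providers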

4.    Create the IAM role for the Karpenter controller using the following eksctl command, which associates the Kubernetes service account and the IAM role using IRSA.

eksctl create iamserviceaccount \
  --cluster "${CLUSTER_NAME}" --name karpenter --namespace karpenter \
  --role-name "$KarpenterControllerRole-${CLUSTER_NAME}" \
  --attach-policy-arn "arn:aws:iam::${AWS_ACCOUNT_ID}:policy/KarpenterControllerPolicy-${CLUSTER_NAME}" \
  --role-only \
  --approve
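
Optionally, verify that the role exists and that its trust policy references your cluster's OIDC provider and the karpenter service account (a verification sketch, not part of the original procedure):

aws iam get-role --role-name "KarpenterControllerRole-${CLUSTER_NAME}" \
    --query 'Role.AssumeRolePolicyDocument'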

Add tags to subnets and security groups

1.    Add tags to the node group subnets so that Karpenter knows which subnets to use.

for NODEGROUP in $(aws eks list-nodegroups --cluster-name ${CLUSTER_NAME} \
    --query 'nodegroups' --output text); do aws ec2 create-tags \
        --tags "Key=karpenter.sh/discovery,Value=${CLUSTER_NAME}" \
        --resources $(aws eks describe-nodegroup --cluster-name ${CLUSTER_NAME} \
        --nodegroup-name $NODEGROUP --query 'nodegroup.subnets' --output text )
done
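
To optionally confirm that the tags were applied, list the subnets that carry the discovery tag:

aws ec2 describe-subnets \
    --filters "Name=tag:karpenter.sh/discovery,Values=${CLUSTER_NAME}" \
    --query 'Subnets[].SubnetId' --output text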

2.    Add tags to security groups.

Note: The following commands add tags only to the security groups of the first node group. If you have multiple node groups or multiple security groups, you must decide which security groups Karpenter will use.

NODEGROUP=$(aws eks list-nodegroups --cluster-name ${CLUSTER_NAME} \
    --query 'nodegroups[0]' --output text)
 
LAUNCH_TEMPLATE=$(aws eks describe-nodegroup --cluster-name ${CLUSTER_NAME} \
    --nodegroup-name ${NODEGROUP} --query 'nodegroup.launchTemplate.{id:id,version:version}' \
    --output text | tr -s "\t" ",")
 
# If your EKS setup is configured to use only the cluster security group, then run:
 
SECURITY_GROUPS=$(aws eks describe-cluster \
    --name ${CLUSTER_NAME} --query cluster.resourcesVpcConfig.clusterSecurityGroupId | tr -d '"')
 
# If your setup uses the security groups in the launch template of a managed node group, then run:
 
SECURITY_GROUPS=$(aws ec2 describe-launch-template-versions \
    --launch-template-id ${LAUNCH_TEMPLATE%,*} --versions ${LAUNCH_TEMPLATE#*,} \
    --query 'LaunchTemplateVersions[0].LaunchTemplateData.[NetworkInterfaces[0].Groups||SecurityGroupIds]' \
    --output text)
 
aws ec2 create-tags \
    --tags "Key=karpenter.sh/discovery,Value=${CLUSTER_NAME}" \
    --resources ${SECURITY_GROUPS}
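
Similarly, you can optionally confirm which security groups now carry the discovery tag:

aws ec2 describe-security-groups \
    --filters "Name=tag:karpenter.sh/discovery,Values=${CLUSTER_NAME}" \
    --query 'SecurityGroups[].GroupId' --output text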

Update aws-auth ConfigMap

1.    Update the aws-auth ConfigMap in the cluster to allow the nodes that use the KarpenterInstanceNodeRole IAM role to join the cluster. Run the following command:

kubectl edit configmap aws-auth -n kube-system

2.    Add a section to mapRoles that looks similar to this example:

Note: Replace the ${AWS_ACCOUNT_ID} variable with your AWS account ID, but don't replace {{EC2PrivateDNSName}}.

- groups:
  - system:bootstrappers
  - system:nodes
  rolearn: arn:aws:iam::${AWS_ACCOUNT_ID}:role/KarpenterInstanceNodeRole
  username: system:node:{{EC2PrivateDNSName}}

The full aws-auth ConfigMap now has two role mappings: one for your Karpenter node role and one for your existing node group.
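
If you want to double-check the result, print the ConfigMap and confirm that both role mappings are present:

kubectl get configmap aws-auth -n kube-system -o yaml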

Deploy Karpenter

1.    Verify the version of the Karpenter release that you want to deploy using this command:

echo $KARPENTER_VERSION

If you don't see any output or see a different version than desired, then run:

export KARPENTER_VERSION=your_required_version

Note: Replace your_required_version with the desired version number in this example. See aws/karpenter for the Karpenter versions on the GitHub website.

2.    Generate a full Karpenter deployment YAML file from the Helm chart. Before you begin, make sure that the Helm client version is 3.11.0 or later.

helm template karpenter oci://public.ecr.aws/karpenter/karpenter --version ${KARPENTER_VERSION} --namespace karpenter \
    --set settings.aws.defaultInstanceProfile=KarpenterInstanceProfile \
    --set settings.aws.clusterEndpoint="${CLUSTER_ENDPOINT}" \
    --set settings.aws.clusterName=${CLUSTER_NAME} \
    --set serviceAccount.annotations."eks\.amazonaws\.com/role-arn"="arn:aws:iam::${AWS_ACCOUNT_ID}:role/KarpenterControllerRole-${CLUSTER_NAME}" > karpenter.yaml
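
Note: The defaultInstanceProfile value set above must refer to an existing instance profile that contains the KarpenterInstanceNodeRole. The console creates an instance profile with the same name as the role, but the AWS CLI doesn't. If you don't already have an instance profile named KarpenterInstanceProfile, a sketch of creating and binding it:

aws iam create-instance-profile --instance-profile-name KarpenterInstanceProfile
aws iam add-role-to-instance-profile --instance-profile-name KarpenterInstanceProfile \
    --role-name KarpenterInstanceNodeRole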

3.    Set the affinity so that Karpenter runs on one of the existing node group nodes. Find the deployment affinity rule, and then modify it in the karpenter.yaml file that you just created:

affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: karpenter.sh/provisioner-name
          operator: DoesNotExist
      - matchExpressions:
        - key: eks.amazonaws.com/nodegroup
          operator: In
          values:
          - ${NODEGROUP}

Create the Karpenter namespace

Create the required Karpenter namespace and the Provisioner and AWSNodeTemplate custom resource definitions (CRDs). Then, deploy the rest of Karpenter's resources.

kubectl create namespace karpenter
kubectl create -f https://raw.githubusercontent.com/aws/karpenter/${KARPENTER_VERSION}/pkg/apis/crds/karpenter.sh_provisioners.yaml
kubectl create -f https://raw.githubusercontent.com/aws/karpenter/${KARPENTER_VERSION}/pkg/apis/crds/karpenter.k8s.aws_awsnodetemplates.yaml
kubectl apply -f karpenter.yaml
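
After you apply karpenter.yaml, you can optionally confirm that the controller pods are running on your existing node group. The deployment name karpenter assumes the Helm release name used above:

kubectl get pods -n karpenter -o wide
kubectl get deployment -n karpenter karpenter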

Create a default Provisioner

Create a default Provisioner so that Karpenter knows the types of nodes that you want for unscheduled workloads. For more information on specific examples, see aws/karpenter on the GitHub website.

cat <<EOF | kubectl apply -f -
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default
spec:
  requirements:
    - key: karpenter.k8s.aws/instance-category
      operator: In
      values: [c, m, r]
    - key: karpenter.k8s.aws/instance-generation
      operator: Gt
      values: ["2"]
  providerRef:
    name: default
  ttlSecondsAfterEmpty: 30
---
apiVersion: karpenter.k8s.aws/v1alpha1
kind: AWSNodeTemplate
metadata:
  name: default
spec:
  subnetSelector:
    karpenter.sh/discovery: "${CLUSTER_NAME}"
  securityGroupSelector:
    karpenter.sh/discovery: "${CLUSTER_NAME}"
EOF
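
You can optionally confirm that both resources were created:

kubectl get provisioners
kubectl get awsnodetemplates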

Scale and verify Karpenter

Use the following steps to scale your node group to a minimum size of at least two nodes to support Karpenter and other critical services.

1.    Configure scaling:

aws eks update-nodegroup-config --cluster-name ${CLUSTER_NAME} \
    --nodegroup-name ${NODEGROUP} \
    --scaling-config "minSize=2,maxSize=2,desiredSize=2"

2.    Scale your workloads, and then verify that Karpenter is creating the new nodes to provision your workloads:

kubectl logs -f -n karpenter -c controller -l app.kubernetes.io/name=karpenter
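
If you don't have a workload handy, one common way to test provisioning is to create a small deployment of pause containers and scale it up so that pods become unschedulable. The inflate name and the image tag below are only an example:

kubectl create deployment inflate \
    --image=public.ecr.aws/eks-distro/kubernetes/pause:3.7 --replicas=0
kubectl scale deployment inflate --replicas=5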

Note: If you notice any webhook.DefaultingWebhook Reconcile error in the controller logs, restart your Karpenter pods to fix it.

3.    Run the following command to check the status of the nodes:

kubectl get nodes

Comments

I think there is a typo in the section "Configure the IAM role for the Karpenter controller", point 4:

--role-name "$KarpenterControllerRole-${CLUSTER_NAME}" 

should be

--role-name "KarpenterControllerRole-${CLUSTER_NAME}"
jm
replied 2 months ago

Thank you for your comment. We'll review and update the Knowledge Center article as needed.

AWS MODERATOR
replied 2 months ago

When creating an IAM role using the console, by default, the instance-profile is the same as the role name. So in 'Deploy Karpenter' under Step 2, shouldn't the helm template command use:

--set settings.aws.defaultInstanceProfile=KarpenterInstanceNodeRole
instead of
--set settings.aws.defaultInstanceProfile=KarpenterInstanceProfile

replied 2 months ago

Thank you for your comment. We'll review and update the Knowledge Center article as needed.

AWS MODERATOR
replied 2 months ago

In the "Configure the IAM role for the Karpenter controller" section, when creating the IAM role using the below command, remove the $from the --role-name, since it is not set.

Change from this

eksctl create iamserviceaccount \
  --cluster "${CLUSTER_NAME}" --name karpenter --namespace karpenter \
  --role-name "$KarpenterControllerRole-${CLUSTER_NAME}" \
  --attach-policy-arn "arn:aws:iam::${AWS_ACCOUNT_ID}:policy/KarpenterControllerPolicy-${CLUSTER_NAME}" \
  --role-only \
  --approve

TO

eksctl create iamserviceaccount \
  --cluster "${CLUSTER_NAME}" --name karpenter --namespace karpenter \
  --role-name "KarpenterControllerRole-${CLUSTER_NAME}" \
  --attach-policy-arn "arn:aws:iam::${AWS_ACCOUNT_ID}:policy/KarpenterControllerPolicy-${CLUSTER_NAME}" \
  --role-only \
  --approve
AWS
replied 2 months ago

Thank you for your comment. We'll review and update the Knowledge Center article as needed.

AWS MODERATOR
replied 2 months ago

You also need to create an instance profile and bind the instance profile to the role name:

aws iam create-instance-profile --instance-profile-name KarpenterInstanceProfile
aws iam add-role-to-instance-profile --instance-profile-name KarpenterInstanceProfile --role-name KarpenterInstanceNodeRole
aws iam get-instance-profile --instance-profile-name KarpenterInstanceProfile
igoz
replied a month ago

Thank you for your comment. We'll review and update the Knowledge Center article as needed.

AWS MODERATOR
replied a month ago