How can I check, scale, delete, or drain my worker nodes in Amazon EKS?


I launched my Amazon Elastic Kubernetes Service (Amazon EKS) worker nodes using eksctl or the AWS Management Console. Now I want to check, scale, drain, or delete my worker nodes.

Short description

Complete the steps in the appropriate section based on your needs:

  • Check your worker nodes
  • Scale your worker nodes
  • Drain your worker nodes
  • Delete your worker nodes

Resolution

Check your worker nodes

To list the worker nodes registered to the Amazon EKS control plane, run the following command:

kubectl get nodes -o wide

The output returns the name, Kubernetes version, operating system, and IP address of the worker nodes.
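To list only the worker nodes in a specific node group, you can filter on the node labels. For example, the following command assumes a managed node group, which Amazon EKS labels with the eks.amazonaws.com/nodegroup key:

kubectl get nodes -l eks.amazonaws.com/nodegroup=nodegroupName -o wide

Note: Replace nodegroupName with your value.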

To get additional information on a single worker node, run the following command:

kubectl describe node/node_name

Note: Replace node_name with your value. For example: ip-XX-XX-XX-XX.us-east-1.compute.internal

The output shows more information about the worker node, including labels, taints, system information, and status.
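If you need only a single field from that output, you can query it with a jsonpath expression instead. For example, the following command prints whether the node reports the Ready condition:

kubectl get node node_name -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'

Note: Replace node_name with your value.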

Scale your worker nodes

Note: If your node groups appear in the Amazon EKS console, then they're managed node groups and you can use the managed node group options. Otherwise, they're unmanaged node groups.

(Option 1) To scale your managed or unmanaged worker nodes using eksctl, run the following command:

eksctl scale nodegroup --cluster=clusterName --nodes=desiredCount --name=nodegroupName

Note: Replace clusterName, desiredCount, and nodegroupName with your values.
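For example, the following command scales a node group named ng-1 in a cluster named my-cluster to three nodes. If the new desired count falls outside the node group's current bounds, eksctl also accepts --nodes-min and --nodes-max to adjust them. The cluster name, node group name, and counts are example values:

eksctl scale nodegroup --cluster=my-cluster --name=ng-1 --nodes=3 --nodes-min=1 --nodes-max=4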

-or-

(Option 2) To scale your managed worker nodes without eksctl, complete the steps in the "To edit a node group configuration" section of Updating a managed node group.
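You can also update the scaling configuration of a managed node group with the AWS CLI. For example, the following command uses placeholder names and sizes:

aws eks update-nodegroup-config \
    --cluster-name my-cluster \
    --nodegroup-name ng-1 \
    --scaling-config minSize=1,maxSize=4,desiredSize=3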

-or-

(Option 3) To scale your unmanaged worker nodes using AWS CloudFormation, complete the following steps:

1.    Use a CloudFormation template to launch your worker nodes for Windows or Linux.

2.    Modify the NodeAutoScalingGroupDesiredCapacity, NodeAutoScalingGroupMinSize, or NodeAutoScalingGroupMaxSize parameters in your CloudFormation stack.
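If you prefer the AWS CLI for step 2, an update along the following lines changes only the scaling parameters and keeps the existing template. The stack name and sizes are example values. Depending on your template, you might also need to list the stack's other parameters with UsePreviousValue=true so that their current values are kept:

aws cloudformation update-stack \
    --stack-name my-eks-nodegroup-stack \
    --use-previous-template \
    --parameters ParameterKey=NodeAutoScalingGroupDesiredCapacity,ParameterValue=3 \
                 ParameterKey=NodeAutoScalingGroupMinSize,ParameterValue=1 \
                 ParameterKey=NodeAutoScalingGroupMaxSize,ParameterValue=4 \
    --capabilities CAPABILITY_IAM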

Drain your worker nodes

Important: The drain action cordons the worker node and tells Kubernetes to stop scheduling any new pods on it. Pods that are running on the target node are then evicted, which means that they're stopped. Consider the effect this can have on your production environment.

You can drain either an entire node group or a single worker node. Choose the appropriate option.

(Option 1) Drain the entire node group:

If you're using eksctl to launch your worker nodes, then run the following command:

eksctl drain nodegroup --cluster=clusterName --name=nodegroupName

Note: Replace clusterName and nodegroupName with your values.

To uncordon the node group, run the following command:

eksctl drain nodegroup --cluster=clusterName --name=nodegroupName --undo

Note: Replace clusterName and nodegroupName with your values.

If you're not using eksctl to launch your worker nodes, then identify and drain all the nodes of a particular Kubernetes version. For example:

#!/bin/bash
# Drain every worker node that runs the specified kubelet version.
K8S_VERSION=1.18.8-eks-7c9bda
nodes=$(kubectl get nodes -o jsonpath="{.items[?(@.status.nodeInfo.kubeletVersion==\"v$K8S_VERSION\")].metadata.name}")
for node in $nodes
do
    echo "Draining $node"
    # On kubectl 1.20 and later, use --delete-emptydir-data instead of --delete-local-data.
    kubectl drain "$node" --ignore-daemonsets --delete-local-data
done

To identify and uncordon all the nodes of a particular Kubernetes version, use the following code:

#!/bin/bash
# Uncordon every worker node that runs the specified kubelet version.
K8S_VERSION=1.18.8-eks-7c9bda
nodes=$(kubectl get nodes -o jsonpath="{.items[?(@.status.nodeInfo.kubeletVersion==\"v$K8S_VERSION\")].metadata.name}")
for node in $nodes
do
    echo "Uncordoning $node"
    kubectl uncordon "$node"
done

Note: To get the version of your worker node, run the following command:

$ kubectl get nodes
NAME                                      STATUS   ROLES    AGE     VERSION
ip-XXX-XXX-XX-XXX.ec2.internal            Ready    <none>   6d4h    v1.18.8-eks-7c9bda
ip-XXX-XXX-XX-XXX.ec2.internal            Ready    <none>   6d4h    v1.18.8-eks-7c9bda

Note: The version number appears in the VERSION column.
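If you're not sure which version string to use for K8S_VERSION, the following command lists the distinct kubelet versions that are currently registered. Drop the leading "v" when you set K8S_VERSION, because the scripts add it back:

kubectl get nodes -o jsonpath='{range .items[*]}{.status.nodeInfo.kubeletVersion}{"\n"}{end}' | sort -u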

(Option 2) Drain a single worker node:

If you're not using eksctl to launch your worker nodes or you want to drain only a specific node, then gracefully isolate your worker node:

kubectl drain node_name --ignore-daemonsets

Note: Replace node_name with your value.
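To verify that the drain evicted your workloads, you can list the pods that are still scheduled on the node. After a successful drain, only DaemonSet pods should remain:

kubectl get pods --all-namespaces --field-selector spec.nodeName=node_name -o wide

Note: Replace node_name with your value.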

To undo the isolation, run the following command:

kubectl uncordon node_name

Note: Replace node_name with your value.

To migrate your existing applications to a new worker node group, see Migrating to a new node group.

Delete your worker nodes

Important: The delete action is unrecoverable. Consider the impact this can have on your production environment.

If you're using eksctl, then run the following command:

eksctl delete nodegroup --cluster=clusterName --name=nodegroupName

Note: Replace clusterName and nodegroupName with your values.

If you have a managed node group, then complete the steps in Deleting a managed node group.

If you have an unmanaged node group and you launched your worker nodes with a CloudFormation template, then delete the CloudFormation stack that you created for your Windows or Linux node group.
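For example, you can delete the stack with the AWS CLI. The stack name is an example value:

aws cloudformation delete-stack --stack-name my-eks-nodegroup-stack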

If you have an unmanaged node group and didn't use a CloudFormation template to launch your worker nodes, then delete the Auto Scaling group for your worker nodes. Or, terminate the instance directly if you didn't use an Auto Scaling group.
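For example, the following commands show both paths with the AWS CLI. The Auto Scaling group name and instance ID are example values:

aws autoscaling delete-auto-scaling-group --auto-scaling-group-name my-eks-worker-asg --force-delete

-or-

aws ec2 terminate-instances --instance-ids i-0123456789abcdef0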

