How do I plan an upgrade strategy for an Amazon EKS cluster?
When I upgrade my Amazon Elastic Kubernetes Service (Amazon EKS) cluster, I want to make sure that I follow best practices.
New versions of Kubernetes introduce significant changes to your Amazon EKS cluster. After you upgrade a cluster, you can’t downgrade it. Therefore, to transition successfully to a newer version of Kubernetes, follow the best practices that are outlined in this upgrade plan.
When you upgrade to a newer Kubernetes version, you can migrate to new clusters instead of performing in-place cluster upgrades. In this case, cluster backup and restore tools like VMware’s Velero can help you migrate to a new cluster. For more information, see Velero on GitHub.
To see current and past versions of Kubernetes that are available for Amazon EKS, see the Amazon EKS Kubernetes release calendar.
Preparing for an upgrade
Before you begin your cluster upgrade, note the following requirements:
- Amazon EKS requires up to five free IP addresses from the subnets that you specified when you created your cluster.
- Make sure that the cluster's AWS Identity and Access Management (IAM) role and security group exist in your AWS account.
- If you activate secrets encryption, then make sure that the cluster IAM role has permission to use the AWS Key Management Service (AWS KMS) key.
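One way to verify the prerequisites above is with the AWS CLI. The following sketch composes, but doesn't run, the relevant calls; the cluster name and subnet IDs are placeholders:

```shell
#!/bin/sh
# Sketch: compose the AWS CLI calls that verify the upgrade prerequisites.
# CLUSTER_NAME and the subnet IDs below are placeholders, not real values.
CLUSTER_NAME="my-cluster"

# Look up the subnets and the cluster IAM role that the cluster was created with.
CHECK_CLUSTER="aws eks describe-cluster --name $CLUSTER_NAME --query 'cluster.{subnets: resourcesVpcConfig.subnetIds, role: roleArn}'"

# Check how many free IP addresses each of those subnets still has.
CHECK_SUBNETS="aws ec2 describe-subnets --subnet-ids subnet-aaaa subnet-bbbb --query 'Subnets[].{id: SubnetId, freeIps: AvailableIpAddressCount}'"

echo "$CHECK_CLUSTER"
echo "$CHECK_SUBNETS"
```

Run the printed commands against your own account to confirm that the subnets together have at least five free IP addresses and that the cluster IAM role still exists.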
Review major updates for Amazon EKS and Kubernetes
Review all documented changes for the version that you’re upgrading to, and note any required upgrade steps. Also, note any requirements or procedures that are specific to Amazon EKS managed clusters.
Refer to the following resources for any major updates to Amazon EKS cluster platform versions and Kubernetes versions:
- Updating an Amazon EKS cluster Kubernetes version
- Amazon EKS Kubernetes versions
- Amazon EKS platform versions
For more information on upstream Kubernetes versions and major updates, see the release notes and version documentation on the Kubernetes website and GitHub.
Understand the Kubernetes deprecation policy
When an API is upgraded, the earlier API is deprecated and eventually removed. To understand how APIs might be deprecated in a newer version of Kubernetes, read the deprecation policy on the Kubernetes website.
To check whether you use any deprecated API versions in your cluster, use the Kube No Trouble (kubent) tool on GitHub. If you do use deprecated API versions, then upgrade your workloads before you upgrade your Kubernetes cluster.
To convert Kubernetes manifest files between different API versions, use the kubectl convert plugin. For more information, see Install kubectl convert plugin on the Kubernetes website.
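The two checks above can be sketched as CLI calls. The following snippet composes, but doesn't run, the commands; the target version and the manifest path are illustrative:

```shell
#!/bin/sh
# Sketch: compose the commands for the deprecation checks described above.
# The target version and the manifest file name are illustrative assumptions.

# Scan the current cluster (via your kubeconfig) for APIs that are
# deprecated or removed in the target Kubernetes version.
SCAN_CMD="kubent --target-version 1.25"

# Rewrite a manifest that still uses an old API version to a newer one.
CONVERT_CMD="kubectl convert -f ./ingress-old.yaml --output-version networking.k8s.io/v1"

echo "$SCAN_CMD"
echo "$CONVERT_CMD"
```

Both commands require cluster access or a local manifest, so run them from a workstation that's configured with the cluster's kubeconfig.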
What to expect during a Kubernetes upgrade
When you upgrade your cluster, Amazon EKS launches new API server nodes with the upgraded Kubernetes version to replace the existing nodes. Amazon EKS also runs standard infrastructure and readiness health checks on these new nodes to verify that they work as expected. If any of these checks fail, then Amazon EKS reverts the infrastructure deployment, and your cluster remains on the previous Kubernetes version. This rollback doesn’t affect any applications that are running, and you can recover the cluster, if needed. However, during the upgrade process, you might experience minor service interruptions.
Upgrading the control plane and data plane
Upgrading an Amazon EKS cluster requires updating two main components: the control plane (master nodes) and the data plane (worker nodes). When you upgrade these components, keep the following considerations in mind.
In-place Amazon EKS cluster upgrades
For in-place upgrades, you can upgrade only to the next highest Kubernetes minor version. If there are multiple versions between your current cluster version and the target version, then you must upgrade to each version sequentially. For each in-place Kubernetes cluster upgrade, you must complete the following tasks:
- Upgrade the cluster control plane.
- Upgrade the nodes in your cluster.
- Update your Kubernetes add-ons and custom controllers, as required.
- Update your Kubernetes manifests, as required.
For more information, see Planning Kubernetes upgrades with Amazon EKS on the AWS Containers Blog.
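The sequential-upgrade rule above can be sketched as a small helper that lists every intermediate minor version an in-place upgrade must pass through. The version numbers here are illustrative, not an Amazon EKS support matrix:

```shell
#!/bin/sh
# Sketch: list every minor version that an in-place upgrade must pass
# through, one version at a time. Versions are illustrative examples.
upgrade_path() {
  current_minor="${1#1.}"   # e.g. "1.24" -> "24"
  target_minor="${2#1.}"    # e.g. "1.27" -> "27"
  minor=$((current_minor + 1))
  while [ "$minor" -le "$target_minor" ]; do
    echo "1.$minor"
    minor=$((minor + 1))
  done
}

# A cluster on 1.24 that targets 1.27 needs three sequential upgrades:
upgrade_path 1.24 1.27
```

Each version that the helper prints requires its own full upgrade pass: control plane, nodes, add-ons, and manifests.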
Blue/green or canary Amazon EKS clusters migration
A blue/green or canary upgrade strategy is more complex, but it allows upgrades with easy rollback capability and no downtime. For a blue/green or canary upgrade, see Blue/green or canary Amazon EKS clusters migration for stateless ArgoCD workloads.
Upgrading Amazon EKS managed node groups
Important: A node’s kubelet can’t be newer than kube-apiserver. Also, it can’t be more than two minor versions earlier than kube-apiserver. For example, suppose that kube-apiserver is at version 1.24. In this case, a kubelet is supported only at versions 1.24, 1.23, and 1.22.
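The version-skew rule above can be sketched as a small check. The function name and version strings are illustrative, and the two-minor-version window matches the 1.24 example in this article:

```shell
#!/bin/sh
# Sketch: check the kubelet/kube-apiserver version-skew rule. A kubelet
# must not be newer than kube-apiserver and must not be more than two
# minor versions older. Versions are "1.<minor>" strings.
kubelet_supported() {
  api_minor="${1#1.}"
  kubelet_minor="${2#1.}"
  if [ "$kubelet_minor" -gt "$api_minor" ]; then
    echo "unsupported: kubelet is newer than kube-apiserver"
    return 1
  fi
  if [ $((api_minor - kubelet_minor)) -gt 2 ]; then
    echo "unsupported: kubelet is more than two minor versions behind"
    return 1
  fi
  echo "supported"
}

kubelet_supported 1.24 1.22            # prints "supported"
kubelet_supported 1.24 1.21 || true    # prints the "unsupported" reason
```

If the check reports an unsupported skew, upgrade your node groups before you move the control plane any further ahead.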
To completely upgrade your managed node groups, follow these steps:
1. Upgrade your Amazon EKS cluster control plane components to the latest version.
2. Update your worker nodes in the managed node group.
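The two steps above map to two AWS CLI calls. The following sketch composes, but doesn't run, those calls; the cluster name, node group name, and target version are placeholders:

```shell
#!/bin/sh
# Sketch: compose the AWS CLI calls for the two upgrade steps above.
# CLUSTER_NAME, NODEGROUP_NAME, and TARGET_VERSION are placeholders.
CLUSTER_NAME="my-cluster"
NODEGROUP_NAME="my-nodegroup"
TARGET_VERSION="1.25"

# Step 1: upgrade the cluster control plane.
STEP1="aws eks update-cluster-version --name $CLUSTER_NAME --kubernetes-version $TARGET_VERSION"

# Step 2: after the cluster update completes, upgrade the managed node group.
STEP2="aws eks update-nodegroup-version --cluster-name $CLUSTER_NAME --nodegroup-name $NODEGROUP_NAME"

echo "$STEP1"
echo "$STEP2"
```

Wait for the control plane update to reach Active status before you run the node group update.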
Migrating to Amazon EKS managed node groups
If you use self-managed node groups, then you can migrate your workload to Amazon EKS managed node groups with no downtime. For more information, see Seamlessly migrate workloads from EKS self-managed node group to EKS-managed node groups.
Identifying and upgrading downstream dependencies (add-ons)
Clusters often contain many outside products such as ingress controllers, continuous delivery systems, monitoring tools, and other workflows. When you update your Amazon EKS cluster, you must also update your add-ons and third-party tools. Be sure to understand how add-ons work with your cluster and how they’re updated.
Note: It’s a best practice to use managed add-ons instead of self-managed add-ons.
See the following examples of common add-ons and their relevant upgrade documentation:
- Amazon VPC CNI: For the best version of the Amazon VPC CNI add-on to use for each cluster version, see Updating the Amazon VPC CNI plugin for Kubernetes self-managed add-on. Also, see Update the aws-node daemonset to use IAM roles for service accounts in the Amazon EKS best practices guide on GitHub.
- kube-proxy self-managed add-on: Be sure to update to the latest available kube-proxy container image version for each Amazon EKS cluster version. For more information, see Updating the Kubernetes kube-proxy self-managed add-on.
- CoreDNS: Be sure to update to the latest available CoreDNS container image version for each Amazon EKS cluster version. For more information, see Updating the CoreDNS self-managed add-on.
- AWS Load Balancer Controller: Versions 2.4.0 or later of AWS Load Balancer Controller require Kubernetes versions 1.19 or later. For more information, see AWS Load Balancer Controller releases on GitHub. For installation information, see Installing the AWS Load Balancer Controller add-on.
- Amazon Elastic Block Store (Amazon EBS) Container Storage Interface (CSI) driver: Versions 1.1.0 or later of the Amazon EBS CSI driver require Kubernetes versions 1.18 or later. For more information, see Amazon EBS CSI driver releases on GitHub. For installation and upgrade information, see Managing the Amazon EBS CSI driver as an Amazon EKS add-on.
- Amazon Elastic File System (Amazon EFS) Container Storage Interface (CSI) driver: Versions 1.3.x or later of the Amazon EFS CSI driver require Kubernetes versions 1.17 or later. For more information, see Amazon EFS CSI driver releases on GitHub. For installation and upgrade information, see Amazon EFS CSI driver.
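For managed add-ons, the update flow can be sketched with the AWS CLI. The following snippet composes, but doesn't run, the calls; the cluster name, target version, and add-on version are placeholders:

```shell
#!/bin/sh
# Sketch: compose the AWS CLI calls that list compatible add-on versions
# and apply an update. All names and versions below are placeholders.
CLUSTER_NAME="my-cluster"
TARGET_VERSION="1.25"

# List the vpc-cni add-on versions that are compatible with the target
# cluster version.
LIST_CMD="aws eks describe-addon-versions --kubernetes-version $TARGET_VERSION --addon-name vpc-cni"

# Update the managed add-on to one of the compatible versions.
UPDATE_CMD="aws eks update-addon --cluster-name $CLUSTER_NAME --addon-name vpc-cni --addon-version v1.12.6-eksbuild.2"

echo "$LIST_CMD"
echo "$UPDATE_CMD"
```

Repeat the same describe-then-update pattern for each managed add-on, such as kube-proxy and coredns.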
Upgrading AWS Fargate nodes
To update a Fargate node, delete the pod that the node represents. Then, after you update your control plane, redeploy the pod. Any new pods that you launch on Fargate have a kubelet version that matches your cluster version. Existing Fargate pods aren't changed.
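For pods that a Deployment manages, the delete-and-redeploy step above can be sketched as a rolling restart. The namespace and deployment names are placeholders:

```shell
#!/bin/sh
# Sketch: compose the kubectl call that redeploys a Fargate workload after
# the control plane update. NAMESPACE and DEPLOYMENT are placeholders.
NAMESPACE="my-fargate-namespace"
DEPLOYMENT="my-app"

# Restarting the deployment recreates its pods, so each new Fargate pod
# gets a kubelet version that matches the upgraded cluster version.
RESTART_CMD="kubectl rollout restart deployment $DEPLOYMENT -n $NAMESPACE"

echo "$RESTART_CMD"
```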
Note: To keep Fargate pods secure, Amazon EKS must periodically patch them. Amazon EKS tries to update the pods in a way that reduces impact. However, if pods can't be successfully evicted, then Amazon EKS deletes them. To minimize disruption, see Fargate pod patching.
Upgrading groupless nodes that are created by Karpenter
When you set a value for ttlSecondsUntilExpired, this activates node expiry. After nodes reach the defined age in seconds, Karpenter deletes them, even if they’re in use. This allows you to replace nodes with newly provisioned instances, and therefore upgrade them. When a node is replaced, Karpenter uses the latest Amazon EKS optimized AMIs. For more information, see Deprovisioning on the Karpenter website.
The following example shows a node that’s deprovisioned with ttlSecondsUntilExpired, and therefore replaced with an upgraded instance:
```yaml
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default
spec:
  requirements:
    - key: karpenter.sh/capacity-type   # optional, set to on-demand by default, spot if both are listed
      operator: In
      values: ["spot"]
  limits:
    resources:
      cpu: 1000        # optional, recommended to limit total provisioned CPUs
      memory: 1000Gi
  ttlSecondsAfterEmpty: 30          # optional, but never scales down if not set
  ttlSecondsUntilExpired: 2592000   # optional, nodes are recycled after 30 days but never expire if not set
  provider:
    subnetSelector:
      karpenter.sh/discovery/CLUSTER_NAME: '*'
    securityGroupSelector:
      kubernetes.io/cluster/CLUSTER_NAME: '*'
```
Note: Karpenter doesn’t automatically add jitter to this value. If you create multiple instances in a short amount of time, then they expire near the same time. To prevent excessive workload disruption, define a pod disruption budget, as shown in Kubernetes documentation.
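A pod disruption budget like the following limits how many replicas can be evicted at once while expired nodes are recycled. The workload name and label are illustrative, and minAvailable should match your own availability requirements:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb        # illustrative name
spec:
  minAvailable: 2         # keep at least two replicas running during node replacement
  selector:
    matchLabels:
      app: my-app         # illustrative label; match your workload's labels
```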