I understand one of your EKS nodes is scheduled for maintenance. Currently you have a second node group that runs the "worker" pods and has 3 nodes. You want to scale the group in to 2 nodes, with the node scheduled for maintenance removed as part of the scale-in. The workload has already been moved to the other 2 nodes.
Below are the steps I have tested in my test environment to achieve this.
As you may already be aware, the node group is backed by an Auto Scaling group, which takes care of launching/terminating nodes and keeps the node count in line with the group's desired capacity.
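As a toy illustration of why this matters (made-up counts, no real AWS calls): if you simply terminate the EC2 instance, the desired capacity stays at 3 and the Auto Scaling group launches a replacement; terminating through the ASG with the decrement flag lowers both together:

```shell
DESIRED=3
RUNNING=3

# Plain `aws ec2 terminate-instances`: desired capacity is unchanged,
# so the ASG notices the shortfall and launches a replacement node.
RUNNING=$((RUNNING - 1))
if [ "$RUNNING" -lt "$DESIRED" ]; then RUNNING=$((RUNNING + 1)); fi
echo "plain terminate:     desired=$DESIRED running=$RUNNING"

# terminate-instance-in-auto-scaling-group --should-decrement-desired-capacity:
# desired capacity drops with the instance, so no replacement appears.
DESIRED=$((DESIRED - 1))
RUNNING=$((RUNNING - 1))
echo "decrement terminate: desired=$DESIRED running=$RUNNING"
```

This is why the terminate step below goes through the Auto Scaling API rather than plain EC2.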
- Drain the node on EKS:
$ kubectl get nodes
$ kubectl cordon <node name>
$ kubectl drain <node name> --ignore-daemonsets
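The terminate command in the next step takes an EC2 instance ID, not the Kubernetes node name. On EKS you can read it from the node's providerID; the sample value below is a placeholder:

```shell
# On a live cluster, fetch the value with:
#   kubectl get node <node name> -o jsonpath='{.spec.providerID}'
# It has the form aws:///<availability-zone>/<instance-id>.
PROVIDER_ID="aws:///us-east-1a/i-0abc1234567890def"   # placeholder value

# Strip everything up to the last '/' to get the instance ID.
INSTANCE_ID="${PROVIDER_ID##*/}"
echo "$INSTANCE_ID"   # → i-0abc1234567890def
```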
To terminate the instance and also decrement the desired capacity of the Auto Scaling group:
$ aws autoscaling terminate-instance-in-auto-scaling-group --instance-id <INSTANCE_ID> --should-decrement-desired-capacity --region <REGION>
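The steps above can be wrapped into one small script. The node name and region are placeholders, and by default it only prints the commands (DRY_RUN=1), so you can inspect it before pointing it at a real cluster:

```shell
#!/bin/sh
set -eu

NODE_NAME="${NODE_NAME:-ip-10-0-1-23.ec2.internal}"   # placeholder node name
REGION="${REGION:-us-east-1}"                         # placeholder region
DRY_RUN="${DRY_RUN:-1}"                               # 1 = print commands only

run() {
  if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi
}

# 1. Stop new pods landing on the node, then evict the existing ones.
run kubectl cordon "$NODE_NAME"
run kubectl drain "$NODE_NAME" --ignore-daemonsets

# 2. Terminate the instance and decrement the ASG desired capacity.
if [ "$DRY_RUN" = "1" ]; then
  INSTANCE_ID="i-0abc1234567890def"                   # placeholder in dry-run mode
else
  # providerID has the form aws:///<availability-zone>/<instance-id>.
  INSTANCE_ID="$(kubectl get node "$NODE_NAME" -o jsonpath='{.spec.providerID}')"
  INSTANCE_ID="${INSTANCE_ID##*/}"
fi
run aws autoscaling terminate-instance-in-auto-scaling-group \
  --instance-id "$INSTANCE_ID" \
  --should-decrement-desired-capacity \
  --region "$REGION"
```

Keeping the drain before the terminate matters: it gives the pods a graceful eviction before the instance disappears.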