1 Answer
Before starting the migration, scale up the new node groups so they have enough capacity to accommodate all the pods from the old node groups.
If possible, temporarily relax the PDB constraints to allow more simultaneous disruptions; this speeds up the migration. For example:
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-pdb
spec:
  minAvailable: 50%
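If you prefer not to edit the manifest, the PDB can also be relaxed in place and restored afterwards. A minimal sketch, assuming the PDB is named my-pdb and the values shown are placeholders for your own thresholds:

```shell
# Temporarily lower the disruption threshold so more pods may be evicted at once
kubectl patch pdb my-pdb -p '{"spec":{"minAvailable":"30%"}}'

# ... perform the node group migration ...

# Restore the original constraint once the migration is complete
kubectl patch pdb my-pdb -p '{"spec":{"minAvailable":"50%"}}'
```

Remember to restore the original value; leaving the PDB relaxed weakens your availability guarantees for future maintenance events.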
Use a gradual approach to cordon and drain nodes. This minimizes disruptions by respecting the PDB constraints.
kubectl cordon <old-node-name>
kubectl drain <old-node-name> --ignore-daemonsets --delete-emptydir-data --force
Note that --force is only needed for pods not managed by a controller (bare pods); drain will delete such pods without rescheduling them, so omit the flag unless you are sure you need it.
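The cordon-and-drain step above can be scripted to process one node at a time, waiting for workloads to settle between evictions. A sketch, assuming the old nodes carry the EKS managed node group label eks.amazonaws.com/nodegroup with the hypothetical value old-ng (adjust the label selector and namespace to your setup):

```shell
# Drain nodes from the old node group one at a time to respect the PDB
for node in $(kubectl get nodes -l eks.amazonaws.com/nodegroup=old-ng -o name); do
  kubectl cordon "$node"
  kubectl drain "$node" --ignore-daemonsets --delete-emptydir-data --timeout=300s

  # Give rescheduled pods time to become Ready before the next eviction wave
  kubectl wait --for=condition=Ready pods --all -n my-namespace --timeout=300s
  sleep 30
done
```

Draining sequentially means at most one node's worth of pods is being rescheduled at any moment, which keeps you within the PDB's allowed disruptions.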
Continuously monitor pod status and confirm that evicted pods are successfully scheduled on the new nodes. Use kubectl get pods and kubectl describe pod <pod-name> to check pod status and events.
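A few concrete monitoring commands for the step above (placeholders follow the same <pod-name> convention used earlier):

```shell
# Watch pod placement live; the NODE column shows where each pod landed
kubectl get pods -o wide --watch

# List any pods stuck in Pending across all namespaces
kubectl get pods --all-namespaces --field-selector=status.phase=Pending

# For a stuck pod, the Events section at the bottom explains why scheduling failed
kubectl describe pod <pod-name>
```

A pod stuck in Pending during a migration usually means the new node group lacks capacity or the pod's node selector / affinity does not match the new nodes.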
Thanks for your response. I am actually following the same approach you mentioned, but during eviction the pod is created on the new node group at the same time as the pod on the old node is deleted, so some downtime occurs. I need to do this with zero downtime for my prod environment. In my case I have, say, a single pod running and a PDB with minAvailable set at the same time; please let me know what approach I can follow in this case.
After creating the new node group, you can scale out your deployment (add replicas), make sure the new replicas are scheduled on the new nodes, and then drain the old nodes.
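The suggestion above can be sketched as follows, assuming a hypothetical deployment named my-app with an app=my-app label. With extra replicas running on the new nodes, the PDB's minAvailable is satisfied throughout the drain, so the single original pod can be evicted without downtime:

```shell
# Add replicas; since old nodes are cordoned, new pods land on the new node group
kubectl scale deployment my-app --replicas=3

# Verify the new replicas are Running on new nodes (check the NODE column)
kubectl get pods -l app=my-app -o wide

# Now drain the old node; the PDB allows it because replicas remain available
kubectl drain <old-node-name> --ignore-daemonsets --delete-emptydir-data

# Optionally scale back down once the migration is complete
kubectl scale deployment my-app --replicas=1
```

Cordoning the old nodes before scaling out is what guarantees the new replicas are placed on the new node group rather than back on the nodes you are about to drain.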
Ok, thank you very much, I will test this approach.