Why can’t I delete my Amazon EKS cluster resources?
I can’t delete a resource from my Amazon Elastic Kubernetes Service (Amazon EKS) cluster.
Resolution
Note: If you receive errors when you run AWS Command Line Interface (AWS CLI) commands, then see Troubleshooting errors for the AWS CLI. Also, make sure that you're using the most recent AWS CLI version.
Take the following troubleshooting actions based on the Amazon EKS resource that's stuck in the Terminating state. Finalizers tell Kubernetes to wait for controllers to complete cleanup tasks before a resource is deleted, and they help prevent accidental deletion. If your resource is stuck in the Terminating state, then you must remove its finalizers before you can delete it. For more information about finalizers, see Finalizers on the Kubernetes website.
Troubleshoot namespace deletion issues
To check whether there are resources in the namespace stuck in the Terminating state, run the following command:
kubectl get all -n namespace-name | grep Terminating
Note: Replace namespace-name with the namespace.
To manually delete the resource, run the following command:
kubectl delete resource resource-name -n namespace-name
Note: Replace resource with the resource type, resource-name with the resource name, and namespace-name with the namespace.
To check for finalizer or API service errors, run the following command:
kubectl get ns namespace-name -o json
Note: Replace namespace-name with the namespace.
If you receive errors in the command output, then see How do I troubleshoot namespaces in a terminated state in my Amazon EKS cluster?
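If the namespace itself is stuck in the Terminating state because of entries in spec.finalizers, one common approach is to clear them through the namespace's finalize subresource. The following is a minimal sketch, not a definitive procedure; it assumes that jq is installed and that stuck-namespace is a placeholder for your namespace name:

```shell
# Placeholder namespace name; replace with your own.
NS=stuck-namespace

# Dump the namespace object, clear spec.finalizers, and submit the result
# to the namespace's finalize subresource.
kubectl get ns "$NS" -o json \
  | jq '.spec.finalizers = []' \
  | kubectl replace --raw "/api/v1/namespaces/$NS/finalize" -f -
```

Use this only after you confirm that the resources in the namespace were cleaned up, because it bypasses the controllers that the finalizers wait for.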
Troubleshoot ingress deletion issues
Delete the load balancers or target groups that are associated with the ingress resource.
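To identify which load balancer the ingress created, you can read the load balancer hostname from the ingress status. A minimal sketch, assuming jq is installed and that ingress-name and namespace-name are placeholders:

```shell
# Placeholder names; replace with your ingress and namespace.
# Prints the DNS name of the load balancer behind the ingress.
kubectl get ingress ingress-name -n namespace-name -o json \
  | jq -r '.status.loadBalancer.ingress[].hostname'
```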
To remove the finalizers from an ingress resource, run the following command:
kubectl patch ingress ingress-name -n namespace-name -p '{"metadata":{"finalizers":[]}}' --type=merge
Note: Replace ingress-name with the ingress name and namespace-name with the namespace.
Then, run the following command to delete the ingress resource:
kubectl delete ingress ingress-name -n namespace-name
Note: Replace ingress-name with the ingress name and namespace-name with the namespace.
Troubleshoot service deletion issues
To remove the finalizers from the service resource, run the following command:
kubectl patch svc service-name -n namespace-name -p '{"metadata":{"finalizers":[]}}' --type=merge
Note: Replace service-name with your service name and namespace-name with the namespace.
Use the Amazon EC2 console to turn off deletion protection on the load balancer that's associated with the service. Or, use the service.beta.kubernetes.io/aws-load-balancer-attributes annotation to turn off deletion protection. For more information, see Resource attributes on the Kubernetes website.
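As a sketch of the annotation approach: the deletion_protection.enabled load balancer attribute can be set through the annotation with kubectl. The service-name and namespace-name values are placeholders:

```shell
# Placeholder names; replace with your service and namespace.
# Sets the load balancer attribute that controls deletion protection.
kubectl annotate svc service-name -n namespace-name \
  'service.beta.kubernetes.io/aws-load-balancer-attributes=deletion_protection.enabled=false' \
  --overwrite
```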
Then, run the following command to delete the service:
kubectl delete svc service-name -n namespace-name
Note: Replace service-name with your service name and namespace-name with the namespace.
Troubleshoot PV and PVC deletion issues
If you can't delete the PersistentVolume (PV) or the PersistentVolumeClaim (PVC), then check for the following issues:
- You deleted the PV before you removed the bound PVC
- You removed the PVC while a pod was still running and attached to it
You deleted the PV
To troubleshoot this issue, delete the PVC bound to the PV.
To identify the PVC associated with the PV stuck in the Terminating state, run the following command to describe the PV:
kubectl get pv pv-name
Note: Replace pv-name with the PV name. The command's output shows the bound PVC in the CLAIM column, in the namespace/pvc-name format.
Example output:
default/ebs-claim
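You can also read the bound claim directly from the PV's spec.claimRef field. A minimal sketch, assuming jq is installed and that pv-name is a placeholder:

```shell
# Placeholder PV name; replace with your own.
# Prints the bound claim in the namespace/pvc-name format.
kubectl get pv pv-name -o json \
  | jq -r '.spec.claimRef | "\(.namespace)/\(.name)"'
```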
To delete the PVC, run the following command:
kubectl delete pvc -n namespace-name pvc-name
Note: Replace namespace-name with the PVC namespace and pvc-name with the PVC name.
If you still can't delete the PV, then run the following command to remove its finalizers:
kubectl patch pv -p '{"metadata":{"finalizers":null}}' pv-name
Note: Replace pv-name with the PV name.
You removed the PVC
To resolve this issue, delete the pod that's attached to the PVC.
To identify the pods that are associated with the PVC, run the following command to describe the PVC:
kubectl describe pvc -n namespace-name pvc-name
Note: Replace namespace-name with the PVC namespace and pvc-name with the PVC name. In the output, check the Used by attribute.
Example output:
Name:          ebs-claim
Namespace:     default
StorageClass:  ebs-sc
Status:        Bound
Volume:        pvc-3402cc47-c4d7-42c3-8965-f9e1e08f8b95
Labels:        &lt;none&gt;
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: ebs.csi.aws.com
               volume.kubernetes.io/selected-node: ip-192-168-33-43.ec2.internal
               volume.kubernetes.io/storage-provisioner: ebs.csi.aws.com
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      4Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Used By:       ebs-app-59c74d8d45-z65kj
Events:        &lt;none&gt;
In the preceding example, the ebs-app-59c74d8d45-z65kj pod uses the PVC.
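To list every pod in a namespace that mounts a given PVC, you can filter the pod list on spec.volumes. A minimal sketch, assuming jq is installed and that namespace-name and pvc-name are placeholders:

```shell
# Placeholder namespace and claim name; replace with your own.
# Prints the names of pods whose volumes reference the PVC.
kubectl get pod -n namespace-name -o json \
  | jq -r '.items[]
      | select(any(.spec.volumes[]?; .persistentVolumeClaim.claimName == "pvc-name"))
      | .metadata.name'
```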
To delete the pod, run the following command:
kubectl delete pod -n namespace-name pod-name
Note: Replace namespace-name with the PVC namespace and pod-name with the pod name.
If you still can't delete the PVC, then run the following command to remove its finalizers:
kubectl patch pvc -p '{"metadata":{"finalizers":null}}' -n namespace-name pvc-name
Note: Replace namespace-name with the PVC namespace and pvc-name with the PVC name.
Troubleshoot pod deletion issues
If you can't delete the pod, then check for the following issues:
- The pod fails to respond to termination signals
- The pod has an associated finalizer that hasn't completed
Pod fails to respond to termination signals
Important: The following command immediately deletes the pod without confirming that its processes have stopped. Before you force delete the pod, make sure that it isn't still running. Otherwise, the pod's processes might continue to run on the node indefinitely.
Pods are typically deleted after the default grace period of 30 seconds. If your pod isn't deleted after 30 seconds, then run the following command to force delete it:
kubectl delete pod --force --grace-period=0 -n namespace-name pod-name
Note: Replace namespace-name with the pod namespace and pod-name with the pod name. You must set the --grace-period flag to 0 to immediately remove the pod.
The pod has an associated finalizer that hasn't completed
To check whether the pod has finalizers, run the following command:
kubectl get pod -o yaml -n namespace-name pod-name
Note: Replace namespace-name with the pod namespace and pod-name with the pod name. In the output, check metadata.finalizers to identify finalizers.
Example output:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2024-09-16T19:32:01Z"
  finalizers:
  - kubernetes
  labels:
    app: nginx
    pod-template-hash: 7c79c4bf97
  namespace: default
spec:
  containers:
  - image: nginx:latest
    imagePullPolicy: Always
    name: nginx
    ports:
    - containerPort: 80
      protocol: TCP
To remove finalizers, run the following command:
kubectl patch pod -p '{"metadata":{"finalizers":null}}' -n namespace-name pod-name
Note: Replace namespace-name with the pod namespace and pod-name with the pod name.
Troubleshoot cluster deletion issues
Typically, cluster deletion issues occur because there are managed node groups attached to the cluster. To troubleshoot this issue, remove the managed node groups. For more information, see Why can't I delete my Amazon EKS cluster?
If a cluster has an associated Amazon Managed Service for Prometheus managed scraper, then you might also encounter deletion issues. In this scenario, you can't remove the virtual private cloud (VPC) or the elastic network interfaces that the scraper uses.
To identify the scraper ID, run the following list-scrapers AWS CLI command:
aws amp list-scrapers
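If the account has more than one scraper, you can filter the list for the one that targets your cluster. A minimal sketch, assuming jq is installed, that the list-scrapers response includes the source EKS cluster ARN, and that cluster-name is a placeholder:

```shell
# Placeholder cluster name; replace with your own.
# Prints the IDs of scrapers whose source cluster ARN ends with the cluster name.
aws amp list-scrapers --output json \
  | jq -r '.scrapers[]
      | select(.source.eksConfiguration.clusterArn | endswith("/cluster-name"))
      | .scraperId'
```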
Then, run the following delete-scraper command to delete the scraper:
aws amp delete-scraper --scraper-id scraper-example
Note: Replace scraper-example with the scraper ID.
Troubleshoot Amazon EKS managed node group deletion issues
If you can't delete the managed node group, then check for the following issues:
- The resource has a dependent object
- The Amazon EC2 Auto Scaling group Terminate process is suspended
- Your node group has health errors
- The pods are stuck in the nodes
The resource has a dependent object
The dependency issue occurs when a resource that the managed node group created is associated with another resource in the AWS account. Typically, the resource is a security group. To resolve this issue, identify the objects associated with the security group. Then, disassociate the security group from the resource. If you encounter issues, then see Why can't I delete a security group that's attached to my Amazon VPC?
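To identify the network interfaces that still use the security group, you can filter the describe-network-interfaces output. A minimal sketch, assuming jq is installed and that the security group ID shown is a placeholder:

```shell
# Placeholder security group ID; replace with your own.
SG_ID=sg-0123456789abcdef0

# Prints each attached network interface ID with its description.
aws ec2 describe-network-interfaces \
  --filters "Name=group-id,Values=$SG_ID" --output json \
  | jq -r '.NetworkInterfaces[] | "\(.NetworkInterfaceId)\t\(.Description)"'
```

The Description column often indicates which AWS service (for example, a load balancer) owns the interface, which tells you what to disassociate or delete first.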
The EC2 Auto Scaling group Terminate process is suspended
To troubleshoot this issue, resume the suspended Terminate process for the node group's Auto Scaling group.
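A sketch of resuming the process with the AWS CLI, where my-node-group-asg is a placeholder for the node group's Auto Scaling group name:

```shell
# Placeholder Auto Scaling group name; replace with your own.
aws autoscaling resume-processes \
  --auto-scaling-group-name my-node-group-asg \
  --scaling-processes Terminate
```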
Your node group has health errors
For information about the types of health errors that can occur in your managed node group, see Issue.
To troubleshoot health issues, see How do I resolve managed node group errors in an Amazon EKS cluster?
The pods are stuck in the nodes
When you delete the managed node group, you might receive the following error message:
"1 pods are unevictable from node ip-192-168-29-140.ec2.internal"
This issue can occur if a pod in the cluster is stuck in the Terminating state. To troubleshoot this issue, get kubectl access to the cluster, and then run the following command:
kubectl get pod -A
The "pods are unevictable" error might occur because the PodDisruptionBudget is misconfigured or doesn't currently allow enough disruptions for the pod to be evicted. To troubleshoot this issue, see How can I troubleshoot managed node group update issues for Amazon EKS?
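To spot a PodDisruptionBudget that currently allows no disruptions, you can inspect the status.disruptionsAllowed field. A minimal sketch, assuming jq is installed:

```shell
# Lists PodDisruptionBudgets across all namespaces that currently
# allow zero disruptions, in the namespace/name format.
kubectl get pdb -A -o json \
  | jq -r '.items[]
      | select(.status.disruptionsAllowed == 0)
      | "\(.metadata.namespace)/\(.metadata.name)"'
```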
