Unable to delete a Kubernetes cluster: "1 pods are unevictable"


I'm new to k8s and was following the AWS documentation. All was fine until it was time to sunset what I was doing; I hope an expert here can suggest a solution. I successfully deleted the sample namespace, but I cannot delete the cluster.

$ eksctl delete cluster --name learner-1
2024-03-04 20:17:18 [ℹ]  deleting EKS cluster "learner-1"
2024-03-04 20:17:19 [ℹ]  will drain 1 unmanaged nodegroup(s) in cluster "learner-1"
2024-03-04 20:17:19 [ℹ]  starting parallel draining, max in-flight of 1
2024-03-04 20:18:23 [!]  1 pods are unevictable from node ip-192-168-0-113.ec2.internal
... repeating...

$ kubectl get pods -o wide
NAME                       READY   STATUS    RESTARTS   AGE     IP              NODE                            NOMINATED NODE   READINESS GATES
aws-node-62hqh             2/2     Running   0          5h44m   192.168.52.43   ip-192-168-52-43.ec2.internal   <none>           <none>
aws-node-vctsk             2/2     Running   0          5h44m   192.168.0.113   ip-192-168-0-113.ec2.internal   <none>           <none>
coredns-54d6f577c6-7rxv8   1/1     Running   0          5h50m   192.168.8.153   ip-192-168-0-113.ec2.internal   <none>           <none>
coredns-54d6f577c6-zlqqd   0/1     Pending   0          67m     <none>          <none>                          <none>           <none>
kube-proxy-nbqfl           1/1     Running   0          5h44m   192.168.0.113   ip-192-168-0-113.ec2.internal   <none>           <none>
kube-proxy-s9wdr           1/1     Running   0          5h44m   192.168.52.43   ip-192-168-52-43.ec2.internal   <none>           <none>

$ kubectl describe pods coredns-54d6f577c6-zlqqd
Name:                 coredns-54d6f577c6-zlqqd
Namespace:            kube-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Service Account:      coredns
Node:                 <none>
Labels:               eks.amazonaws.com/component=coredns
                      k8s-app=kube-dns
                      pod-template-hash=54d6f577c6
Annotations:          <none>
Status:               Pending
IP:
IPs:                  <none>
Controlled By:        ReplicaSet/coredns-54d6f577c6
Containers:
  coredns:
    Image:       602401143452.dkr.ecr.us-east-1.amazonaws.com/eks/coredns:v1.11.1-eksbuild.4
    Ports:       53/UDP, 53/TCP, 9153/TCP
    Host Ports:  0/UDP, 0/TCP, 0/TCP
    Args:
      -conf
      /etc/coredns/Corefile
    Limits:
      memory:  170Mi
    Requests:
      cpu:      100m
      memory:   70Mi
    Liveness:     http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
    Readiness:    http-get http://:8181/ready delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /etc/coredns from config-volume (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-djd55 (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      coredns
    Optional:  false
  kube-api-access-djd55:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 CriticalAddonsOnly op=Exists
                             node-role.kubernetes.io/control-plane:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                   From               Message
  ----     ------            ----                  ----               -------
  Warning  FailedScheduling  2m38s (x14 over 67m)  default-scheduler  0/2 nodes are available: 2 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
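(For context, a hedged sketch not from the original post: the "node(s) were unschedulable" message suggests the drain has already cordoned both nodes, so the pending coredns replica can never be rescheduled. EKS also typically installs a PodDisruptionBudget for CoreDNS, which can make its last running pod "unevictable". The commands below, run against the still-live cluster, would confirm both conditions.)

# Cordoned nodes show SchedulingDisabled in the STATUS column.
kubectl get nodes

# A PDB with ALLOWED DISRUPTIONS 0 explains an unevictable pod.
kubectl get pdb -n kube-system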

I can't seem to find anything helpful by googling. I launched my cluster on two t2.micro instances. Perhaps they have insufficient resources, but nothing complained when I deployed a simple nginx service.

Thanks.

Henry
asked 2 months ago · 679 views
1 Answer
Accepted Answer

Update...

I noticed that eksctl has a --disable-nodegroup-eviction switch. When I added it, the cluster stack was deleted successfully:

eksctl delete cluster --name learner-1 --disable-nodegroup-eviction

Henry
answered 2 months ago
