Hello,
Please try the commands below.
Step 1: SSH into the problematic node and restart Docker
ssh -i /path/to/your-key.pem ec2-user@your-node-ip
sudo systemctl restart docker
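Before draining anything, a quick sanity check (assuming a systemd- and Docker-based node AMI) is to confirm the daemon and kubelet recovered:
# Verify Docker came back up and look for runtime errors in the kubelet logs
sudo systemctl status docker --no-pager
sudo journalctl -u kubelet -n 50 --no-pager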
Step 2: Drain and terminate the problematic nodes
kubectl get nodes
kubectl drain <node-name> --ignore-daemonsets --delete-local-data
aws ec2 terminate-instances --instance-ids <instance-id>
https://repost.aws/knowledge-center/eks-pod-status-troubleshooting
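If the node group is backed by an Auto Scaling group (the default for EKS managed node groups), the terminated instance should be replaced automatically. A rough way to confirm, with <node-name> being the NotReady node, is:
# Watch for the replacement node to register and become Ready
kubectl get nodes -o wide --watch
# Inspect why the old node was NotReady (see the Conditions section)
kubectl describe node <node-name>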
Hi,
Some people face the same problem due to incompatibilities between Kubernetes and recent versions of Docker after an upgrade: see https://github.com/moby/moby/issues/47215
Up to docker-ce v24.0.7, the Kubernetes image components are run successfully by kubelet.
After upgrading docker-ce to v25.0, kubelet cannot start the main k8s components, failing
with an error stating that the image ID and size are not set:
kubelet[21094]: E0125 07:46:51.645034 21094 remote_image.go:94] ImageStatus failed: Id or size of image "k8s.gcr.io/kube-proxy:v1.17.12" is not set
kubelet[21094]: E0125 07:46:51.645064 21094 kuberuntime_image.go:85] ImageStatus for image {"k8s.gcr.io/kube-proxy:v1.17.12"} failed: Id or size of image "k8s.gcr.io/kube-proxy:v1.17.12" is not set
E0125 07:46:51.645109 21094 kuberuntime_manager.go:809] container start failed: ImageInspectError: Failed to inspect image "k8s.gcr.io/kube-proxy:v1.17.12": Id or size of image "k8s.gcr.io/kube-proxy:v1.17.12" is not set
Error syncing pod 3ed55839-d24d-482a-a2ea-5fa52af9a07a ("kube-proxy-r6kxl_kube-system(3ed55839-d24d-482a-a2ea-5fa52af9a07a)"), skipping: failed to "StartContainer" for "kube-proxy" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-proxy:v1.17.12\": Id or size of image \"k8s.gcr.io/kube-proxy:v1.17.12\" is not set"
Maybe you are in a similar situation?
I suggest checking whether the build process of your container images has changed recently due to the upgrade of some core component (Docker or another low-level layer).
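As a rough check, assuming the node runs a Docker-based EKS AMI and you can reach it, you can compare the container runtime version between a healthy and an unhealthy node and search the kubelet logs for the error above:
# Print the Docker daemon version on the node
docker version --format '{{.Server.Version}}'
# Look for the ImageInspectError seen in the logs above
sudo journalctl -u kubelet --no-pager | grep -i ImageInspectError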
Best,
Didier
I'm not using anything with Docker, though... We haven't touched anything in EKS.
I'm on it; meanwhile I'm stuck trying to SSH into the nodes. Why is this happening? What is the problem?
The EKS EC2 instances don't have a public IP.
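If the instances have no public IP, one option, assuming the SSM agent is running on the node (it is included in recent EKS-optimized AMIs) and the node IAM role allows Systems Manager, is to use Session Manager instead of SSH:
# Open an interactive shell on an instance without a public IP or key pair
aws ssm start-session --target <instance-id>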
Actually, this happens with cluster update issues, and the latest versions need to be applied. After updating the cluster version, you also need to update the worker nodes so that pods can be created from your image.
=> Please mention the proper region and cluster name in the YAML file. => Once you update it with the proper details and versions, it will communicate with kube-proxy and the API servers to build your image or pod.
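For an EKS managed node group, a minimal sketch of the corresponding upgrade commands (cluster name, node group name, region, and target version below are placeholders) would be:
# Upgrade the control plane first, then the worker node group
aws eks update-cluster-version --name <cluster-name> --kubernetes-version 1.19 --region <region>
aws eks update-nodegroup-version --cluster-name <cluster-name> --nodegroup-name <nodegroup-name> --region <region>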
Are you using private subnets when creating the node groups...?
Go and check your network interfaces, allocate an Elastic IP, and associate it with your EC2 instance's network interface. By going through the steps above you can provide a static public IP for your EKS EC2 instances.
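If you do want a public IP on an individual node, a hedged example using an Elastic IP (the instance ID and allocation ID are placeholders) is:
# Allocate an Elastic IP and associate it with the instance
aws ec2 allocate-address --domain vpc
aws ec2 associate-address --instance-id <instance-id> --allocation-id <allocation-id>
Note that an Elastic IP attached this way stays with that specific instance and will not follow a replacement node launched by the Auto Scaling group.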
We have 2 machines with the same AMI, ami-0c71ddf0753e561aa (amazon-eks-node-1.18-v20201126). The first one (5 days old) is in Ready state and the second one (12 hours old) is in NotReady state. The Ready machine has Docker 20 installed, and the NotReady one has Docker 25 installed. How did this happen? It's the same AMI.
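One way to double-check that both instances really launched from the same AMI (the instance IDs below are placeholders) is:
# Compare the image ID and launch time of the two nodes
aws ec2 describe-instances --instance-ids <instance-id-1> <instance-id-2> --query 'Reservations[].Instances[].[InstanceId,ImageId,LaunchTime]' --output table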