EKS - Restarting or deleting a pod for a K8s Service kills the Service's NodePort on the worker group instance


I have a simple k8s NodePort Service linked to a k8s Deployment of a single pod hosting a hello-world Go program that uses cobra to spin up a fasthttp server. If the pod of that Deployment restarts or gets deleted (and a new one spins up), the whole Service goes down and never comes back up. The pod reports healthy and the Service reports healthy, but the load balancer gets no response. If I SSH onto the EC2 node and try to call the NodePort of the Service, I also get no response. The entire port just dies and stops responding on the instance. Restarting the node doesn't fix it, and deleting the instance and bringing up a new one doesn't fix it either. I basically have to move the whole Service to a new port for it to start working again.
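
For context, the setup is roughly equivalent to the manifests below. This is a minimal sketch; the names, labels, image, and port numbers are placeholders rather than the actual values:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-go            # placeholder name
spec:
  replicas: 1               # single pod, as described above
  selector:
    matchLabels:
      app: hello-go
  template:
    metadata:
      labels:
        app: hello-go
    spec:
      containers:
        - name: hello-go
          image: hello-go:latest       # placeholder image (the cobra/fasthttp hello-world program)
          ports:
            - containerPort: 8080      # placeholder port the fasthttp server listens on
---
apiVersion: v1
kind: Service
metadata:
  name: hello-go
spec:
  type: NodePort
  selector:
    app: hello-go
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080        # placeholder; this is the port that stops responding on the node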

This is with Kubernetes version 1.24.

Does anyone have any ideas why this might be happening? I've never encountered this issue hosting a container built any other way.

posted a year ago · 372 views
1 Answer

If you are running the service through a ReplicaSet with only one pod available, the service can become unavailable after that pod is deleted.

You may want to deploy a few pods as part of the ReplicaSet and see if you still experience the same issue.
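
As a rough sketch of that change, relative to the Deployment in the question (hello-go is a placeholder name), only the replica count needs to increase so the Service always has more than one backing pod:

spec:
  replicas: 3    # keep more than one pod behind the NodePort Service

The same effect can be tried ad hoc with kubectl scale deployment hello-go --replicas=3 and then deleting one pod to see whether the NodePort keeps responding.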

AWS
EXPERT
dariush
answered a year ago
