EKS - Restarting or deleting a pod for a K8s service kills the service NodePort on the worker group instance


I have a simple k8s NodePort Service linked to a k8s Deployment of a single pod hosting a hello-world Go program that uses cobra to spin up a fasthttp server. If the pod of that Deployment restarts or gets deleted (and a new one spins up), the whole service goes down and never comes back up. The pod reports healthy and the service reports healthy, but the load balancer reports no response. If I SSH onto the EC2 node and try to call the NodePort of the service, I also get no response. Basically the entire port just dies and stops responding on the instance. Restarting the node doesn't fix it, and deleting the instance and bringing up a new one doesn't fix it either. I basically need to move the entire service to an entirely new port for it to start working again.
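A minimal sketch of the kind of NodePort Service described above (the name, selector labels, and port numbers here are illustrative assumptions, not details from the actual setup):

# Illustrative only: name, labels, and ports are assumptions.
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  type: NodePort
  selector:
    app: hello-world          # matches the single-pod Deployment's labels
  ports:
    - port: 80                # cluster-internal port
      targetPort: 8080        # port the fasthttp server listens on
      nodePort: 30080         # the node port that stops responding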

This is with k8s version 1.24.

Does anyone have any ideas why this might be the case? I've never encountered this issue hosting a container built in any other way.

Asked a year ago, 373 views

1 Answer

If you are running the service through a ReplicaSet with only one pod available, the service can become unavailable after the pod is deleted.

You may want to deploy a few pods as part of the ReplicaSet and see if you still experience the same issue, for example by raising the replica count on the Deployment as sketched below.
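A sketch of a Deployment running more than one replica behind the Service, so deleting one pod still leaves ready endpoints (the name, image, and labels are assumptions and would need to match your own manifests):

# Illustrative only: name, image, and labels are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 3                  # more than one pod backing the Service
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world       # must match the Service selector
    spec:
      containers:
        - name: hello-world
          image: example.com/hello-world:latest   # hypothetical image
          ports:
            - containerPort: 8080                 # fasthttp listen port

You can also scale an existing Deployment in place with "kubectl scale deployment hello-world --replicas=3" and then run "kubectl get endpoints hello-world" to confirm the Service still has ready endpoints after a pod is deleted.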

AWS
EXPERT
dariush
answered a year ago


