EKS - Restarting or deleting a pod for a K8s Service kills the Service's NodePort on the worker group instance


I have a simple k8s NodePort Service linked to a k8s Deployment of a single pod hosting a hello world Go program that basically uses cobra to spin up a fasthttp server. If the pod of that deployment restarts or gets deleted (and a new one spins up), the whole service goes down and never comes back up. The pod reports healthy and the service reports healthy, but the load balancer reports no response. If I SSH onto the EC2 node and try to call the NodePort of the service, I also get no response. Basically the entire port just dies and stops responding on the instance. Restarting the node doesn't fix it, and neither does deleting the instance and bringing up a new one. I basically have to move the entire service to a new port for it to start working again.
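
For context, the question doesn't include the program itself. A minimal sketch of the kind of setup described (the command name, `--addr` flag, and port 8080 are all assumptions, not from the original question):

```go
package main

import (
	"log"

	"github.com/spf13/cobra"
	"github.com/valyala/fasthttp"
)

func main() {
	var addr string

	// Root cobra command that spins up a fasthttp hello-world server,
	// as described in the question.
	rootCmd := &cobra.Command{
		Use: "hello",
		RunE: func(cmd *cobra.Command, args []string) error {
			handler := func(ctx *fasthttp.RequestCtx) {
				ctx.WriteString("hello world")
			}
			log.Printf("listening on %s", addr)
			return fasthttp.ListenAndServe(addr, handler)
		},
	}
	// Hypothetical flag; the container port just needs to match the
	// Service's targetPort.
	rootCmd.Flags().StringVar(&addr, "addr", ":8080", "listen address")

	if err := rootCmd.Execute(); err != nil {
		log.Fatal(err)
	}
}
```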

This is with k8s version 1.24.

Does anyone have any ideas why this might be the case? I've never encountered this issue hosting a container built any other way.

Asked a year ago · 373 views
1 Answer

If you are running a service through a ReplicaSet with only one pod available, the service can become unavailable after the pod is deleted.

You may want to deploy a few pods as part of the ReplicaSet and see if you still experience the same issue; a sketch of scaling the Deployment is shown below.
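
The original answer stops at the suggestion. As an illustration only: scaling the existing Deployment up can be done with `kubectl scale deployment hello-world --replicas=3` (where `hello-world` is a placeholder name, since the question never gives one), or programmatically through client-go's scale subresource, as in this rough sketch:

```go
package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load kubeconfig from the default location (~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	// Fetch the Deployment's scale subresource, bump the replica count,
	// and write it back. "default" and "hello-world" are placeholders.
	deployments := clientset.AppsV1().Deployments("default")
	scale, err := deployments.GetScale(context.TODO(), "hello-world", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	scale.Spec.Replicas = 3
	if _, err := deployments.UpdateScale(context.TODO(), "hello-world", scale, metav1.UpdateOptions{}); err != nil {
		log.Fatal(err)
	}
	log.Println("scaled hello-world to 3 replicas")
}
```

With several replicas behind the Service, deleting one pod should leave the other endpoints serving, which helps isolate whether the dead-NodePort behavior is tied to the endpoint set dropping to zero.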

AWS
EXPERT
dariush
Answered a year ago
