EKS - Restarting or deleting a pod behind a K8s Service kills the Service's NodePort on the worker group instance


I have a simple K8s NodePort Service linked to a K8s Deployment with a single pod, which hosts a hello-world Go program that uses cobra to spin up a fasthttp server. If the pod of that Deployment restarts or gets deleted (and a new one spins up), the whole service goes down and never comes back up. The pod reports healthy and the Service reports healthy, but the load balancer reports no response. If I SSH onto the EC2 node and try to call the Service's NodePort, I also get no response. Basically, the entire port just dies and stops responding on the instance. Restarting the node doesn't fix it, and deleting the instance and bringing up a new one doesn't fix it either. I essentially have to move the entire Service to a new port for it to start working again.

This is with Kubernetes version 1.24.

Does anyone have any idea why this might be happening? I've never encountered this issue hosting a container built any other way.
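For reference, here is a minimal sketch of the kind of Deployment and NodePort Service involved; the names, image, labels, and port numbers below are placeholders, not my actual configuration:

```yaml
# Hypothetical manifests approximating the setup described above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 1                 # single pod, as described
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - name: hello-world
          image: example.com/hello-world:latest  # placeholder image (cobra + fasthttp binary)
          ports:
            - containerPort: 8080                # assumed application port
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  type: NodePort
  selector:
    app: hello-world
  ports:
    - port: 80
      targetPort: 8080        # assumed application port
      nodePort: 30080         # assumed NodePort; this is the port that stops responding
```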

Asked 1 year ago · 372 views
1 Answer

If you are running the service through a ReplicaSet but with only one pod available, the service can become unavailable after the pod is deleted.

You may want to deploy a few pods as part of the ReplicaSet and see if you still experience the same behavior.
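For example, assuming the Deployment is named hello-world as in the sketch in the question (a placeholder name), you could run `kubectl scale deployment hello-world --replicas=3`, or set the replica count in the manifest:

```yaml
# Hypothetical change to the Deployment sketched in the question: run three
# replicas so the Service keeps ready endpoints while any one pod is replaced.
spec:
  replicas: 3
```

With several pods behind the Service, deleting a single pod should leave the remaining endpoints serving the NodePort. If the port still stops responding, the single-pod rollover can be ruled out as the cause.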

AWS
EXPERT
dariush
Answered 1 year ago
