EKS exec liveness and readiness probes


EKS "version": "1.21"

Exec liveness and readiness probes do not restart the pod

Name:         clockwork-57d74f6544-dsm8m
Namespace:    sandbox-cl-hc
Priority:     0
Node:         ip-10-0-103-210.eu-west-1.compute.internal/10.0.103.210
Start Time:   Fri, 29 Jul 2022 12:56:32 +0100
Labels:       app=clockwork
              pod-template-hash=57d74f6544
              tier=backend
Annotations:  kubernetes.io/psp: eks.privileged
Status:       Running
Controlled By:  ReplicaSet/clockwork-57d74f6544
Containers:
  clockwork:
    Command:
      bin/clockwork
    Args:
      config/clockwork.rb
    State:          Running
      Started:      Fri, 29 Jul 2022 12:56:34 +0100
    Ready:          True
    Restart Count:  0
    
    Liveness:   exec [find /usr/src/app/tmp/alive] delay=30s timeout=1s period=15s #success=1 #failure=3
    Readiness:  exec [find /usr/src/app/tmp/alive] delay=30s timeout=1s period=15s #success=1 #failure=3

Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True

Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  17m                default-scheduler  Successfully assigned sandbox-cl-hc/clockwork-57d74f6544-dsm8m to ip-10-0-103-210.eu-west-1.compute.internal
  Normal   Pulling    17m                kubelet            Pulling image "817894877095.dkr.ecr.eu-west-1.amazonaws.com/ja-rails:5c6b0bb"
  Normal   Pulled     17m                kubelet            Successfully pulled image "817894877095.dkr.ecr.eu-west-1.amazonaws.com/ja-rails:5c6b0bb" in 126.117763ms
  Normal   Created    17m                kubelet            Created container clockwork
  Normal   Started    17m                kubelet            Started container clockwork
  Warning  Unhealthy  16m (x2 over 16m)  kubelet            Liveness probe failed: find: /usr/src/app/tmp/alive: No such file or directory
  Warning  Unhealthy  16m (x2 over 16m)  kubelet            Readiness probe failed: find: /usr/src/app/tmp/alive: No such file or directory

The probes correctly identify the pod as unhealthy, but the kubelet is not restarting the container.

restartPolicy is the default, Always.
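For reference, the probe settings in the `kubectl describe` output above would correspond to a container spec roughly like this (a sketch reconstructed from that output; names and paths are taken from the pod description):

```yaml
# Sketch of the probe configuration implied by the describe output:
# delay=30s timeout=1s period=15s #failure=3.
# `find` exits non-zero when the path does not exist, which is what
# makes it usable as a file-existence probe.
containers:
  - name: clockwork
    command: ["bin/clockwork"]
    args: ["config/clockwork.rb"]
    livenessProbe:
      exec:
        command: ["find", "/usr/src/app/tmp/alive"]
      initialDelaySeconds: 30
      timeoutSeconds: 1
      periodSeconds: 15
      failureThreshold: 3
    readinessProbe:
      exec:
        command: ["find", "/usr/src/app/tmp/alive"]
      initialDelaySeconds: 30
      timeoutSeconds: 1
      periodSeconds: 15
      failureThreshold: 3
```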

Does anyone have a clue as to why?

// Alexander

asked 2 years ago · 379 views
1 Answer

Hello,

The readiness probe is not responsible for restarting a pod; it only controls whether the pod is considered ready to receive traffic. Only the liveness probe can trigger a container restart.

According to the description of your pod, it is ready and running. The events show that the liveness probe failed only twice (x2). In that same description, the probe is configured with #failure=3, meaning the liveness probe must fail 3 consecutive times before the kubelet restarts the container — which is not what the events show.

Try adjusting failureThreshold and periodSeconds (see "Configure Liveness, Readiness and Startup Probes" in the Kubernetes documentation) to match your application, so the probes behave the way you expect.
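For example, a tighter liveness probe would make the kubelet react sooner (the values below are illustrative only and should be tuned to your application):

```yaml
# Illustrative values: probe every 5s and restart after 2 consecutive
# failures, instead of every 15s with a threshold of 3.
livenessProbe:
  exec:
    command: ["find", "/usr/src/app/tmp/alive"]
  initialDelaySeconds: 30
  timeoutSeconds: 1
  periodSeconds: 5
  failureThreshold: 2
```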

Best regards,

AWS
answered 2 years ago
