Mount failed error in EKS for DB pods


While deploying from the Helm chart, the 3 stateless pods are working, but the Postgres and Redis pods are unable to mount their volumes. The same chart deployed to another EKS cluster works as expected.

The error log is below:

Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/aws/eu-west-2b/vol-0b8a7301623f89c7f --scope -- mount -t ext4 -o defaults /dev/xvdbo /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/aws/eu-west-2b/vol-0b8a7301623f89c7f
Output: Running scope as unit run-21761.scope.
mount: /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/aws/eu-west-2b/vol-0b8a7301623f89c7f: wrong fs type, bad option, bad superblock on /dev/xvdbo, missing codepage or helper program, or other error.

Warning  FailedMount  2s  kubelet  MountVolume.MountDevice failed for volume "pvc-cec820c5-e53a-41c1-9a8b-c8b14218f990" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/aws/eu-west-2b/vol-0b8a7301623f89c7f --scope -- mount -t ext4 -o defaults /dev/xvdbo /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/aws/eu-west-2b/vol-0b8a7301623f89c7f
Output: Running scope as unit run-21972.scope.
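A "wrong fs type, bad option, bad superblock" failure with exit status 32 typically means the device either has no filesystem at all (never formatted) or carries something other than the ext4 that kubelet is trying to mount. A minimal sketch of how one might check, simulated locally with a sparse file standing in for the real /dev/xvdbo from the log (the fakevol.img name is hypothetical, not from the original post):

```shell
# Simulate an unformatted EBS volume with a sparse file (no root needed);
# on the node itself the same check would target /dev/xvdbo from the log.
truncate -s 16M fakevol.img

# The ext4 superblock magic (0xEF53) sits at byte offset 1080.
# An unformatted volume shows zeros there, which is exactly why
# `mount -t ext4` fails with "bad superblock" / exit status 32.
magic=$(od -An -tx2 -j1080 -N2 fakevol.img | tr -d ' ')
echo "magic=$magic"   # "ef53" on a formatted ext4 volume; "0000" here
```

If the check on the node shows no ext4 signature, the usual fix is to format the volume (e.g. mkfs.ext4) or let the storage provisioner do so; if it shows a different filesystem, the PV's fsType does not match what is actually on the disk.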

Pod status:

NAME                                 READY   STATUS              RESTARTS   AGE
saleor-588b7c95cd-j8k4k              0/1     Running             12         26m
saleor-dashboard-78bdcf6ff9-tlfgg    1/1     Running             0          26m
saleor-postgresql-0                  0/1     ContainerCreating   0          26m
saleor-redis-master-0                0/1     ContainerCreating   0          26m
saleor-redis-slave-0                 0/1     ContainerCreating   0          26m
saleor-storefront-84ff9f4967-l9l4m   1/1     Running             0          26m
saleor-worker-6b9887fc47-4rtch       1/1     Running             0          26m

Asked 2 years ago · 87 views
No Answers
