1 Answer
There are several possible causes for this issue:
- Your EFS file system might be mounted at a different path inside the pod than the one the application is trying to access.
- The EFS file system ID specified in the PV or the storage class might be incorrect, causing a different (empty) EFS file system or an incorrect path to be mounted.
- Misconfigured network or security group settings (for example, NFS port 2049 not allowed from the nodes) could prevent communication between EFS and the Kubernetes nodes.
- The pod might not have the IAM permissions needed to read from the EFS mount. An overly restrictive EFS file system policy can also limit read and write access.
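As a quick in-pod check for the first two causes, a small script can confirm what (if anything) is actually mounted at the path the application uses. This is a minimal sketch, assuming a Linux pod where `/proc/mounts` is readable; `/data/efs` is a placeholder for your application's mount path:

```python
def find_mount(path, mounts_file="/proc/mounts"):
    """Return the (device, fstype) mounted exactly at `path`, or None."""
    with open(mounts_file) as f:
        for line in f:
            device, mount_point, fstype = line.split()[:3]
            if mount_point == path:
                return device, fstype
    return None

# Placeholder path: replace with the mount path your application expects.
info = find_mount("/data/efs")
if info is None:
    print("Nothing mounted at /data/efs -- the app may be reading the node's local disk")
else:
    device, fstype = info
    # An EFS mount shows up as NFS (fstype nfs4) with the file system's DNS name
    print(f"Mounted at /data/efs: {device} ({fstype})")
```

If nothing is mounted there, or the device is not the expected EFS DNS name, the volume mount path in the pod spec or the PV configuration is the first thing to check.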
As a test, I would suggest mounting the EFS file system on an EC2 instance and checking whether you can access the files. For the steps, refer to this AWS guide:
https://docs.aws.amazon.com/efs/latest/ug/wt1-test.html
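The EC2 test from that guide boils down to something like the following. This is a sketch that assumes an Amazon Linux instance in the same VPC as the file system; `fs-0123456789abcdef0` is a placeholder file system ID, and the commands require live AWS resources to run:

```shell
# Install the EFS mount helper (Amazon Linux; use apt/package equivalents elsewhere)
sudo yum install -y amazon-efs-utils

# Mount the file system (placeholder ID) at a test directory
sudo mkdir -p /mnt/efs-test
sudo mount -t efs fs-0123456789abcdef0:/ /mnt/efs-test

# If the mount succeeds, check whether the expected files are actually there
ls -la /mnt/efs-test
```

If the mount hangs or times out here, that points to the security group or network configuration; if it succeeds but the directory is empty, you are likely looking at the wrong file system ID or a different path (for example, an access point rooted elsewhere).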
I would also recommend opening a support case with the AWS EFS team and providing the EFS file system ID, AWS Region, and mount command, along with the output of the log collector script, for better assistance:
https://github.com/kubernetes-sigs/aws-efs-csi-driver/blob/master/troubleshooting/README.md
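Before opening the case, the CSI driver's own logs and the Kubernetes events usually contain the underlying mount error. A rough sketch of what to gather, assuming the standard `aws-efs-csi-driver` install in `kube-system` (the label and container names below match the driver's default manifests but may differ in your install); `<your-pod>` and `<your-pvc>` are placeholders:

```shell
# Check that the node-side EFS CSI driver pods are healthy
kubectl get pods -n kube-system -l app=efs-csi-node

# Node-side driver logs often show the failing mount command and its error
kubectl logs -n kube-system -l app=efs-csi-node -c efs-plugin --tail=100

# Events on the pod and PVC frequently surface the mount failure directly
kubectl describe pod <your-pod>
kubectl describe pvc <your-pvc>
```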