How do I troubleshoot issues with my Amazon EFS volume mounts in Amazon EKS?
I want to troubleshoot errors when mounting Amazon Elastic File System (Amazon EFS) volumes in my Amazon Elastic Kubernetes Service (Amazon EKS) cluster.
Resolution
You might get one of the following errors in your pods when you mount your Amazon EFS volume in your Amazon EKS cluster:
- "Output: mount.nfs4: mounting fs-18xxxxxx.efs.us-east-1.amazonaws.com:/path-in-dir:/ failed, reason given by server: No such file or directory"
- "Output: Failed to resolve "fs-xxxxxx.efs.us-west-2.amazonaws.com" - check that your file system ID is correct"
- "mount.nfs4: access denied by server while mounting 127.0.0.1:/"
- "mount.nfs: Connection timed out"
- "Unable to attach or mount volumes: timed out waiting for the condition"
Before you begin the troubleshooting steps, verify that you have the following:
- An Amazon EFS file system created with a mount target in each of the worker node subnets.
- A valid EFS storage class definition (from the GitHub website) that uses the efs.csi.aws.com provisioner.
- A valid PersistentVolumeClaim (PVC) definition and PersistentVolume definition (see the example sketch after this list). The PersistentVolume definition isn't needed if you use dynamic provisioning (from the GitHub website).
- The Amazon EFS CSI driver installed in the cluster.
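For reference, a minimal static provisioning sketch that covers the storage class, PersistentVolume, and PVC might look similar to the following. The file system ID (fs-18xxxxxx), object names, and storage size are placeholders; adjust them to match your environment.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
---
# Statically provisioned PersistentVolume that points at the EFS file system
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity:
    storage: 5Gi        # Required by Kubernetes, but not enforced by Amazon EFS
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-18xxxxxx
---
# PVC that binds to the PersistentVolume above
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi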
Verify that the mount targets are configured correctly
Be sure to create the EFS mount targets in each Availability Zone where the EKS nodes are running. For example, if your worker nodes are spread across us-east-1a and us-east-1b, create mount targets in both Availability Zones for the EFS file system that you're trying to mount. If you don't correctly create the mount targets, then the pods that are mounting the EFS file system return an error similar to the following:
Output: Failed to resolve "fs-xxxxxx.efs.us-west-2.amazonaws.com" - check that your file system ID is correct.
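To check which Availability Zones already have mount targets and whether the targets are available, you can list them with the AWS CLI. The file system ID and Region are placeholders:
aws efs describe-mount-targets \
    --file-system-id fs-18xxxxxx \
    --region us-east-1 \
    --query "MountTargets[*].[MountTargetId,AvailabilityZoneName,LifeCycleState,SubnetId]" \
    --output table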
Verify that the security group associated with your EFS file system and worker nodes allows NFS traffic
The security group that's associated with your EFS file system must have an inbound rule that allows NFS traffic (port 2049) from the CIDR for your cluster's VPC.
The security group that's associated with your worker nodes where the pods are failing to mount the EFS volume must have an outbound rule that allows NFS traffic (port 2049) to the EFS file system.
If the security group for the EFS mount targets or worker nodes isn't allowing NFS traffic, then the pods mounting the EFS file system return the following errors:
"mount.nfs: Connection timed out"
"Unable to attach or mount volumes: timed out waiting for the condition"
Verify that the subdirectory is created in your EFS file system if you're mounting the pod to a subdirectory
When you add subpaths in persistent volumes, the EFS CSI driver doesn't create the subdirectory path in the EFS file system as part of the mount operation. The directories must already exist for the mount operation to succeed. If the subpath isn't present in the file system, then the pods fail with the following error:
Output: mount.nfs4: mounting fs-18xxxxxx.efs.us-east-1.amazonaws.com:/path-in-dir:/ failed, reason given by server: No such file or directory
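To create the missing subdirectory, you can temporarily mount the root of the file system from a host that has the amazon-efs-utils mount helper installed, create the path, and then unmount. The file system ID and directory name are placeholders:
sudo mkdir -p /mnt/efs-root
sudo mount -t efs -o tls fs-18xxxxxx:/ /mnt/efs-root
sudo mkdir -p /mnt/efs-root/path-in-dir
sudo umount /mnt/efs-root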
Confirm that the cluster's VPC uses the Amazon DNS server
When you use the EFS CSI driver to mount the EFS file system, the EFS mount helper in the driver requires that the cluster's VPC uses the Amazon-provided DNS server.
Note: Because of an AWS architectural limitation, only the Amazon-provided DNS server can resolve the EFS file system's DNS name.
Verify the DNS server by logging in to the worker node and running the following command:
nslookup fs-4fxxxxxx.efs.region.amazonaws.com amazon-provided-dns-server-ip
Note: Replace region with your AWS Region. Replace amazon-provided-dns-server-ip with the IP address of the Amazon-provided DNS server, which is the base of the VPC network range plus two (for example, 10.0.0.2 for a VPC with the CIDR 10.0.0.0/16).
If the cluster VPC is using a custom DNS server, then you must configure the custom DNS server to forward all *.amazonaws.com requests to the Amazon DNS server. If these requests aren't forwarded, then the pods fail with an error similar to the following:
Output: Failed to resolve "fs-4fxxxxxx.efs.us-west-2.amazonaws.com" - check that your file system ID is correct.
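Mounting by DNS name also requires that DNS resolution and DNS hostnames are turned on for the cluster VPC. You can check both attributes with the following commands; the VPC ID is a placeholder, and both commands must return a Value of true:
aws ec2 describe-vpc-attribute --vpc-id vpc-0123456789abcdef0 --attribute enableDnsSupport
aws ec2 describe-vpc-attribute --vpc-id vpc-0123456789abcdef0 --attribute enableDnsHostnames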
Verify that you have "iam" mount options in the persistent volume definition when using a restrictive file system policy
In some cases, the EFS file system policy is configured to restrict mount permissions to specific IAM roles. In that case, the EFS mount helper in the EFS CSI driver requires that the -o iam mount option be passed during the mount operation. Include the spec.mountOptions property so that the CSI driver can add the iam mount option (from the GitHub website).
Example PersistentVolume specification:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv1
spec:
  mountOptions:
    - iam
  ...
If you don't add the iam mount option when you use a restrictive file system policy, then the pods fail with an error similar to the following:
mount.nfs4: access denied by server while mounting 127.0.0.1:/
Verify that the Amazon EFS CSI driver controller service account is annotated with the correct IAM role and the IAM role has the required permissions
Run the following command to verify that the service account used by the efs-csi-controller pods has the correct annotation:
kubectl describe sa efs-csi-controller-sa -n kube-system
Verify that the following annotation is present:
eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/AmazonEKS_EFS_CSI_Driver_Policy
Verify the following:
- The IAM OIDC provider for the cluster was created.
- The IAM role associated with the efs-csi-controller-sa service account has the required permissions (from the GitHub website) to perform Amazon EFS API calls.
- The IAM role's trust policy trusts the efs-csi-controller-sa service account. The trust policy must look similar to the following:
{ "Version": "2012-10-17", "Statement": [{ "Effect": "Allow", "Principal": { "Federated": "arn:aws:iam::111122223333:oidc-provider/oidc.eks.region-code.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE" }, "Action": "sts:AssumeRoleWithWebIdentity", "Condition": { "StringEquals": { "oidc.eks.region-code.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:sub": "system:serviceaccount:kube-system:efs-csi-controller-sa" } } }] }
Verify that the EFS CSI driver pods are running
The EFS CSI driver is made up of controller pods that run as a Deployment and node pods that run as a DaemonSet. Run the following command to verify that these pods are running in your cluster:
kubectl get all -l app.kubernetes.io/name=aws-efs-csi-driver -n kube-system
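To see which worker node each driver pod is scheduled on, which helps when mounts fail only on specific nodes, you can also list the pods with wide output:
kubectl get pods -n kube-system -l app.kubernetes.io/name=aws-efs-csi-driver -o wide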
Verify the EFS mount operation from the EC2 worker node where the pod is failing to mount the file system
Log in to the Amazon EKS worker node where the pod is scheduled. Then, use the EFS mount helper to try to manually mount the EFS file system to the worker node. You can run the following command to test:
sudo mount -t efs -o tls file-system-dns-name efs-mount-point/
If the worker node can mount the file system, then review the efs-plugin logs from the CSI controller and CSI node pods.
Check the EFS CSI driver pod logs
Check the CSI driver pod logs to determine the cause of the mount failures. If the volume is failing to mount, then review the efs-plugin logs. Run the following commands to retrieve the efs-plugin container logs:
kubectl logs deployment/efs-csi-controller -n kube-system -c efs-plugin
kubectl logs daemonset/efs-csi-node -n kube-system -c efs-plugin
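To narrow the logs to the efs-csi-node pod that runs on the affected worker node, first find that pod, and then request its logs. The pod name is a placeholder:
kubectl get pods -n kube-system -o wide | grep efs-csi-node
kubectl logs efs-csi-node-xxxxx -n kube-system -c efs-plugin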
