Transfers between EFS file systems within the same account are supported without requiring an agent. Both the source and destination locations would be specified as the respective EFS file systems.
https://docs.aws.amazon.com/datasync/latest/userguide/how-datasync-works.html#transfering-files
If the EFS file system does not have a file system policy, you would not need an IAM role to connect to it; you would need a role if a file system policy was in place. However, your task would likely fail with a permission issue if that were the case. https://docs.aws.amazon.com/datasync/latest/userguide/create-efs-location.html#create-efs-location-iam
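For reference, here is a rough sketch of what the two agentless EFS locations and the task could look like with the AWS CLI. All ARNs below are placeholders, and it assumes both file systems are in the same account and Region, with mount targets reachable via the given subnet and security group:

```
# Source location: root of the source EFS file system (no agent needed).
aws datasync create-location-efs \
  --efs-filesystem-arn 'arn:aws:elasticfilesystem:us-east-1:111122223333:file-system/fs-SOURCE' \
  --subdirectory / \
  --ec2-config SecurityGroupArns='arn:aws:ec2:us-east-1:111122223333:security-group/sg-EXAMPLE',SubnetArn='arn:aws:ec2:us-east-1:111122223333:subnet/subnet-EXAMPLE'

# Destination location: root of the destination EFS file system.
aws datasync create-location-efs \
  --efs-filesystem-arn 'arn:aws:elasticfilesystem:us-east-1:111122223333:file-system/fs-DEST' \
  --subdirectory / \
  --ec2-config SecurityGroupArns='arn:aws:ec2:us-east-1:111122223333:security-group/sg-EXAMPLE',SubnetArn='arn:aws:ec2:us-east-1:111122223333:subnet/subnet-EXAMPLE'

# Task tying the two locations together.
aws datasync create-task \
  --name efs-to-efs-example \
  --source-location-arn 'arn:aws:datasync:us-east-1:111122223333:location/loc-SOURCE' \
  --destination-location-arn 'arn:aws:datasync:us-east-1:111122223333:location/loc-DEST'
```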
Can you verify, from a client where the EFS is attached, that the /home/user/subfolder/ directories used in your include filter are all located under the EFS mount path / that is used in the location?
Hi Darryl,
Thanks for your response and for confirming that I don't need an agent and that I should be configuring for EFS (source) to EFS (destination).
To confirm regarding the presence of a policy on the EFS file systems, I don't have a policy (atm) on either the source or the destination EFS file system.
I agree that the mount path (path-related configs) is the place to focus. The reason I set up both locations to use a path of / is that when I first attempted to configure the source location with /home/user/subfolder/, the task failed with:

Task failed to access location loc-... Could not mount subdirectory /home/user/subfolder/ on host .... Please verify that the subdirectory exists and is properly exported in /etc/exports
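As an aside, one way to check whether that subdirectory actually exists inside the file system itself (as opposed to just being the mount point on the client) would be something like the sketch below, assuming amazon-efs-utils is installed; fs-EXAMPLE is a placeholder for the file system ID:

```
# Sketch: mount the EFS *root* at a temporary path and look for the subdirectory
# inside the file system itself. fs-EXAMPLE is a placeholder file system ID.
sudo mkdir -p /mnt/efs-root
sudo mount -t efs -o tls fs-EXAMPLE:/ /mnt/efs-root
ls -la /mnt/efs-root          # does home/user/subfolder exist under the EFS root?
sudo umount /mnt/efs-root
```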
So then I used the config I described in the original post, where both locations use a path of / and the task then uses /home/user/subfolder/ as the include pattern.

Per your question, I can confirm that /home/user/subfolder/ is where the file system is mounted. When I connect to the EC2 instance that runs the application (vsftpd) that needs to access files stored in EFS, df -h shows:

127.0.0.1:/ 8.0E 1.4G 8.0E 1% /home/user/subfolder
On that same EC2 instance, I can also cat /var/log/amazon/efs/mount.log:

2022-11-30 21:39:54 UTC - INFO - version=1.34.1 options={'rw': None, 'tls': None, '_netdev': None}
2022-11-30 21:39:54 UTC - INFO - binding 20820
2022-11-30 21:39:55 UTC - INFO - Starting TLS tunnel: "/usr/bin/stunnel5 /var/run/efs/stunnel-config.fs-0c972a4a62d332b4b.home.user.subfolder.20820"
2022-11-30 21:39:55 UTC - INFO - Started TLS tunnel, pid: 2774
2022-11-30 21:39:55 UTC - INFO - Executing: "/sbin/mount.nfs4 127.0.0.1:/ /home/user/subfolder -o rw,_netdev,nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport,port=20820" with 15 sec time limit.
2022-11-30 21:39:55 UTC - INFO - Successfully mounted fs-0c972a4a62d332b4b.efs.us-east-1.amazonaws.com at /home/user/subfolder

df -T shows:

Filesystem Type 1K-blocks Used Available Use% Mounted on
127.0.0.1:/ nfs4 9007199254739968 1409024 9007199253330944 1% /home/user/subfolder

It looks like you have attached EFS to /home/user/subfolder, which is the mount relative to your client. DataSync takes the mount path / as a directory directly from EFS. Your include filter would be created to match what is in EFS as visible via your attached mount point:

ls /home/user/subfolder
Depending on the contents of the EFS file system, have you attempted to run this task without an include filter?
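If it helps, here is a rough sketch of both runs with the AWS CLI; the task ARN and filter value are placeholders, and the filter pattern is interpreted relative to the location's path (/), i.e. relative to the root of the EFS file system itself:

```
# Sketch: run the task once without any filter, then once with an include filter.
# The task ARN and the filter pattern shown are placeholders / examples only.
aws datasync start-task-execution \
  --task-arn 'arn:aws:datasync:us-east-1:111122223333:task/task-EXAMPLE'

aws datasync start-task-execution \
  --task-arn 'arn:aws:datasync:us-east-1:111122223333:task/task-EXAMPLE' \
  --includes FilterType=SIMPLE_PATTERN,Value='/home/user/subfolder*'
```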
Can you check the IAM role associated with DataSync and see that it has the right permissions?
Thanks Mike for your response and idea.
I don't have an IAM role associated with DataSync. Please note that I don't have an agent either (i.e. I think that since it's EFS-to-EFS in the same region/account, I can go with serverless and not have to set up an EC2 instance myself that runs the agent - let me know if I am mistaken on this point).
Back to IAM - do I need to create a role for DataSync?
I am not using in-transit encryption.
Do you know (with EFS-to-EFS in same region/account) whether I should be using NFS (source) to EFS (destination)?
Thanks, Ben
I'm in a similar situation: I'm transferring within the same account from our S3 bucket to our EFS, which is mounted by the EC2 instance at /efs.
The destination path is configured to be /efs/ and I'm getting this error:
Task failed to access location loc-yyyyyyyyyy: xyyyyyyyyy: Could not mount subdirectory /efs/ on host 10.0.0.32. Please verify that the subdirectory exists and is properly exported in /etc/exports
The IAM role for the DataSync task was autogenerated during creation of the task and has all the permissions required per https://docs.aws.amazon.com/datasync/latest/userguide/create-s3-location.html#awsui-tabs-:r4a:-amazon-s3-(source-location)-0

The EFS has no file system policy and no access point. The subnet and security group values were copied from the EC2 instance that mounts the EFS. I did not add an inbound rule for port 2049 (NFS) as suggested in https://docs.aws.amazon.com/datasync/latest/userguide/create-efs-location.html because I can't find any guidance about what the source address should be, and it's not mentioned in the third-party guide https://blog.searce.com/moving-data-between-s3-and-efs-using-datasync-e34fbf620430#fab6

I have also not been able to find where in the EFS console to look for an IAM policy, per "If you have an Amazon EFS file system that restricts access through an IAM policy..." at https://docs.aws.amazon.com/datasync/latest/userguide/create-efs-location.html

There is no exports file under /etc/ on the EC2 instance that mounts /efs.
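For what it's worth, here is a sketch of what that inbound rule might look like under the assumption that the source should be the security group given to the DataSync EFS location (rather than an IP address); both group IDs are placeholders:

```
# Sketch (assumption): allow NFS (TCP 2049) into the EFS mount target's security
# group from the security group used for the DataSync EFS location.
# sg-MOUNT-TARGET and sg-DATASYNC-LOCATION are placeholders.
aws ec2 authorize-security-group-ingress \
  --group-id sg-MOUNT-TARGET \
  --protocol tcp \
  --port 2049 \
  --source-group sg-DATASYNC-LOCATION
```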
Hi, it's likely your instance has mounted the root / of the EFS file system at a folder named efs. This is the standard mount command EFS provides as an example to mount the root of an EFS file system. Try to create your DataSync EFS location with a path of /, which is the root of the file system.
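A minimal sketch of that destination location with the AWS CLI, assuming the security group and subnet are the ones already used for the mount target; all ARNs below are placeholders:

```
# Sketch: EFS destination location pointing at the root (/) of the file system.
# All ARNs below are placeholders.
aws datasync create-location-efs \
  --efs-filesystem-arn 'arn:aws:elasticfilesystem:us-east-1:111122223333:file-system/fs-EXAMPLE' \
  --subdirectory / \
  --ec2-config SecurityGroupArns='arn:aws:ec2:us-east-1:111122223333:security-group/sg-EXAMPLE',SubnetArn='arn:aws:ec2:us-east-1:111122223333:subnet/subnet-EXAMPLE'
```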