Questions tagged with AWS DataSync
Hi Team,
I need to move and keep in sync a folder on an FSx file system in AWS Account 1 (Region A) to an FSx file system in AWS Account 2 (Region B). How should I set up DataSync for this transfer? If DataSync is not suitable for this task, what other AWS services can I use?
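For reference, here is roughly what I think the location definitions would look like through the API; every ARN, name, and credential below is a placeholder, and whether DataSync accepts this cross-account, cross-Region pairing is exactly what I'm unsure about:
```
import boto3

# Sketch only: all ARNs, names, and credentials are placeholders.
# Source location, created in Account 1 / Region A.
ds_a = boto3.client("datasync", region_name="us-east-1")
src = ds_a.create_location_fsx_windows(
    FsxFilesystemArn="arn:aws:fsx:us-east-1:111111111111:file-system/fs-source",
    SecurityGroupArns=["arn:aws:ec2:us-east-1:111111111111:security-group/sg-source"],
    User="datasync-user",
    Domain="example.local",
    Password="<placeholder>",
    Subdirectory="/share/folder-to-sync",
)
# The destination location would be created the same way in Account 2 /
# Region B, using credentials for that account.
```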
I want to query data from DynamoDB using a GSI and sort key through Amplify DataStore. Is this possible?
Do VPC endpoints offer any added security (compared with public endpoints) when using AWS DataSync to transfer data from on-premises to AWS storage (e.g., Amazon FSx for Windows File Server)? I believe data transfers between the on-premises DataSync agent and the AWS DataSync service are already TLS-encrypted (HTTPS)?
RoboCopy can do a full /L (list-only) comparison between a Windows UNC source and an Amazon FSx destination involving 75 million files, most of which already exist at both ends, in 131 minutes.
Why does AWS DataSync need 13-14 hours to do the same? Does it do a content comparison (byte-for-byte or checksum)? If so, how can we configure DataSync to do only a metadata comparison based on filename, date, and size, just like RoboCopy does?
```
-------------------------------------------------------------------------------
ROBOCOPY :: Robust File Copy for Windows
-------------------------------------------------------------------------------
Started : Monday, January 2, 2023 2:01:21 PM
Source : V:\FileServer\FileStore_Prod\NL\Tier3\ADM\
Dest : \\<IP number edited>\share\NL\Tier3\ADM\
Files : *.*
Options : *.* /TS /FP /NDL /L /S /E /DCOPY:DA /COPY:DAT /R:1000000 /W:30
...
------------------------------------------------------------------------------
                Total    Copied   Skipped  Mismatch    FAILED    Extras
     Dirs :     57934       281         0         0         0         0
    Files :  74729131    599103  74130028         0         0       298
    Bytes :  10.706 t  92.138 g  10.616 t         0         0   76.77 m
    Times :   2:11:03   0:00:00                       0:00:00   2:11:03
Ended : Monday, January 2, 2023 4:12:25 PM
```
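Looking at the task options in the DataSync API, I suspect the relevant knobs are TransferMode and VerifyMode; my understanding is that TransferMode=CHANGED decides what to copy from metadata only, while VerifyMode=POINT_IN_TIME_CONSISTENT checksums both ends after the transfer, which may explain the 13-14 hours. A boto3 sketch of what I'd try (the task ARN is a placeholder):
```
import boto3

ds = boto3.client("datasync")

# Placeholder task ARN.
ds.update_task(
    TaskArn="arn:aws:datasync:us-east-1:111122223333:task/task-0abc123",
    Options={
        # Decide what to copy from metadata (name, size, timestamps) only.
        "TransferMode": "CHANGED",
        # Checksum-verify only the files actually transferred, rather than
        # scanning the entire source and destination at the end
        # (which is what POINT_IN_TIME_CONSISTENT does).
        "VerifyMode": "ONLY_FILES_TRANSFERRED",
    },
)
```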
On-Premises to S3 DataSync task sometimes fails with "Deferred error: pwrite ... Input/output error"
We run a daily on-premises data upload to S3 over a Direct Connect network. The task has been running daily for several months, but in the last week we have seen it fail twice with the following execution status:
```
Transfer and verification completed. Selected files transferred except for files skipped due to errors. If no skipped files are listed in Cloud Watch Logs, please contact AWS Support for further assistance
```
The CloudWatch log had the following error:
```
[ERROR] Deferred error: pwrite(name="<snipped_path>", size=262144, offset=60030976): Input/output error
```
I can't find any mention of this error (or how to fix it) in the documentation, on AWS re:Post, or in a web search. How can I debug the underlying issue?
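For what it's worth, here is the boto3 sketch I'm using to pull the execution result and search the task's log group for these errors (the execution ARN and log group name are placeholders for mine):
```
import boto3

ds = boto3.client("datasync")
logs = boto3.client("logs")

# Placeholders for my actual execution ARN and the task's log group.
exec_arn = "arn:aws:datasync:us-east-1:111122223333:task/task-0abc/execution/exec-0def"

result = ds.describe_task_execution(TaskExecutionArn=exec_arn)
print(result["Status"], result.get("Result", {}))

# First page of matching events only; a full scan would need pagination.
events = logs.filter_log_events(
    logGroupName="/aws/datasync",
    filterPattern='"Deferred error"',
)
for event in events["events"]:
    print(event["message"])
```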
We need to take a backup of an on-premises server to AWS. There is 2 TB of data on the server. The services I have looked at so far (AWS DataSync, AWS Backup, and AWS Elastic Disaster Recovery) all require a VM for the installation of an agent. Can someone suggest the most cost-effective and straightforward solution? Thank you!
We have 15 TB of data on an on-premises server. We want to back up the data from on-premises to Amazon S3.
Can anyone tell me the steps?
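From what I've read so far, the flow seems to be: activate a DataSync agent near the data, create a source and a destination location, then create and run a task — something like this boto3 sketch (all ARNs and hostnames are placeholders), but please correct me if that's wrong:
```
import boto3

ds = boto3.client("datasync")

# All ARNs and hostnames below are placeholders.
src = ds.create_location_nfs(
    ServerHostname="onprem-nfs.example.com",
    Subdirectory="/export/backup",
    OnPremConfig={"AgentArns": ["arn:aws:datasync:us-east-1:111122223333:agent/agent-0abc"]},
)
dst = ds.create_location_s3(
    S3BucketArn="arn:aws:s3:::my-backup-bucket",
    Subdirectory="/daily",
    S3Config={"BucketAccessRoleArn": "arn:aws:iam::111122223333:role/DataSyncS3Role"},
)
task = ds.create_task(
    SourceLocationArn=src["LocationArn"],
    DestinationLocationArn=dst["LocationArn"],
    Name="onprem-to-s3-backup",
)
ds.start_task_execution(TaskArn=task["TaskArn"])
```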
Thanks in Advance.
Hi, we are planning to use DataSync to transfer files from on-prem to Amazon S3 on a daily basis.
We have done the following:
1. Installed the DataSync agent VM on-prem, with all network connectivity to the NFS server and AWS in place
2. Created the agent in the AWS console, and it is showing as active
Now, for our admin to share the required directory from the NFS server (an IBM iSeries application server), I have been asked to provide the directory details on the agent VM for mounting the share. I am not sure how to create a directory on the DataSync agent VM for mounting.
Please can you advise?
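From reading the API documentation, my impression is that no directory needs to be created on the agent VM at all: the NFS export path is passed as Subdirectory when creating the NFS location, and the agent mounts it itself. A boto3 sketch of what I think that looks like (the hostname, path, and ARN are placeholders for ours); please correct me if I've misunderstood:
```
import boto3

ds = boto3.client("datasync")

# Placeholders for our NFS server (IBM iSeries) and activated agent.
location = ds.create_location_nfs(
    ServerHostname="iseries-nfs.example.internal",
    # The path the admin exports from the NFS server; nothing is
    # created manually on the agent VM itself.
    Subdirectory="/exported/shared-folder",
    OnPremConfig={"AgentArns": ["arn:aws:datasync:us-east-1:111122223333:agent/agent-0abc"]},
)
print(location["LocationArn"])
```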
Mounting the destination location, which is an Azure file share, is failing with return code 2:
Task failed to access location loc-0315c941957665151: x40016: mount error(2): No such file or directory Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
I'm transferring 1 TB of data from Google Cloud Storage to Amazon S3. While using the DataSync service, I'm getting the below error:
Failed to read metadata for file /pipeline.yml: S3 Head Object Failed
I have unchecked the copy-tags option while creating the task and am still getting the error.
Hi Everyone,
Testing what should be a simple use case in DataSync.
For the moment, I am just using the console to set up two EFS locations (source and destination) and one task (unscheduled) to copy all data (from the task path) from source to destination. Since these two file systems are in the same Region and the same account, I think (hope) I can use a serverless setup that does not require, for example, NFS-to-EFS with an agent.
I have setup only these resources in DataSync:
- Location: Source EFS in us-east-1 with correct subnet and SG that has "/" as the mount path
- Location: Destination EFS location in us-east-1 with correct subnet and SG that has "/" as the mount path
- Task: in us-east-1 that configures source and destination and mostly uses default settings, though has this notable config:
Data transfer configuration > Specific files and folders > Include patterns > /home/user/subfolder/
Everything looks good and I am able to manually start the task, which moves through the *Launching* status and then shows *Success*, but only 1 file is transferred (there are 122 files on the source file system that I am attempting to copy). Looking at the destination EC2 instance, I see only a new hidden directory named `.aws-datasync` created in `/home/user/subfolder/`.
I thought perhaps my include pattern was slightly off and so tested these:
- /home/user/subfolder/
- /home/user/subfolder
- /home/user/subfolder/*
- /home/user/subfolder*
but the results are always the same: task success but no files (except the hidden dir) transferred ...
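In case it helps, here is what I believe the API equivalent of my console setup is (a boto3 sketch; the location ARNs are placeholders for mine):
```
import boto3

ds = boto3.client("datasync")

# Location ARNs are placeholders for my actual EFS locations.
task = ds.create_task(
    SourceLocationArn="arn:aws:datasync:us-east-1:111122223333:location/loc-src",
    DestinationLocationArn="arn:aws:datasync:us-east-1:111122223333:location/loc-dst",
    Name="efs-to-efs-subfolder",
    # My understanding is the include pattern is relative to the
    # location's mount path ("/" in my case).
    Includes=[{"FilterType": "SIMPLE_PATTERN", "Value": "/home/user/subfolder*"}],
)
ds.start_task_execution(TaskArn=task["TaskArn"])
```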
Any help is much appreciated.
Ben
Hi Guys,
We were transferring terabytes with DataSync and everything was working just fine, but one day I wanted to start a new task and the agent was offline. This is the first time the agent has been in that state. We logged into the agent's console and tried the network connectivity test, and everything went wrong: all the tests failed with the same answer, "SSL Test Failed". I am including a picture of that. I would be grateful if you could help me.
