Questions tagged with AWS DataSync


Browse through the questions and answers listed below or filter and sort to narrow down your results.

We want to transfer files from our data center to S3 using DataSync. Based on the documentation, DataSync apparently does not delete files at the source after a transfer completes successfully. What is the common practice for deleting the files locally once they have been transferred successfully?
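The only approach I can think of is a post-transfer cleanup script that verifies each file actually landed in S3 before deleting it locally. Something like the sketch below is what I have in mind (hypothetical; the bucket name and local path are placeholders, not our actual setup):

```python
# Hypothetical post-transfer cleanup: delete a local file only after
# confirming an object with the same key and size exists in the bucket.
import os
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
BUCKET = "my-destination-bucket"   # placeholder
LOCAL_ROOT = "/data/outgoing"      # placeholder

for dirpath, _, filenames in os.walk(LOCAL_ROOT):
    for name in filenames:
        local_path = os.path.join(dirpath, name)
        key = os.path.relpath(local_path, LOCAL_ROOT)
        try:
            head = s3.head_object(Bucket=BUCKET, Key=key)
        except ClientError:
            continue  # object not found in S3, keep the local copy
        if head["ContentLength"] == os.path.getsize(local_path):
            os.remove(local_path)
```

Is something along these lines the usual practice, or is there a more standard way to handle the source-side cleanup?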
1
answers
0
votes
276
views
asked 4 months ago
I am using AWS DataSync to copy/back up data from bucket A (Singapore region) to bucket B (Ohio region). I did this in steps, via multiple sequential back-to-back executions of the DataSync task, to make it more manageable to monitor. The task was configured with the option "Log basic information such as transfer errors". One execution was run with a particular include filter, say "/folderA*". A few minutes into the execution I realized I wanted to make some changes, so I cancelled it. The execution stopped without any transfer error message in the CloudWatch log stream. I then started the execution again with the same include filter and it succeeded, again with no errors in the log stream.

However, I can now see that some of the files in bucket A were not transferred correctly: they show up as 0-byte files in bucket B. Why did this happen? Why were no errors reported during either of the two task executions? How can I rely on AWS DataSync to transfer reliably if issues like this occur? Note that this was about a 40 TB transfer, and I only found the issue because I ran my own sanity checks on the transferred data.

PS: The source data is in the Standard storage class. The destination location is also Standard, but the destination bucket has a lifecycle rule that transitions incoming data to Intelligent-Tiering immediately.
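For what it's worth, a check along these lines is enough to surface the problem objects (a rough sketch, not my exact script; the bucket name is a placeholder):

```python
# Sketch: list zero-byte objects in the destination bucket.
import boto3

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

empty = []
for page in paginator.paginate(Bucket="bucket-b-ohio"):  # placeholder name
    for obj in page.get("Contents", []):
        if obj["Size"] == 0:
            empty.append(obj["Key"])

print(f"{len(empty)} zero-byte objects found")
```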
1
answers
0
votes
47
views
asked 4 months ago
Hello, I am trying to use AWS DataSync to transfer files from an FSx for Lustre file system in the original default VPC to another FSx for Lustre file system that is already set up in a new VPC. I am copying from "/" on the source to "/" on the destination. When I try to run the task, I get the following error: Task failed to access location loc-sourcelocation:x40017: Mount command timed out.

Both FSx for Lustre file systems are in the same region, and I copied the settings from the first one to the second. Both have security groups with an "All traffic" rule for all protocols and ports, with the source being the security group itself. They also have rules for ports 988 and 1021-1023, again with the security group as the source. VPC peering has been set up between the two VPCs as well. I have been following the steps here: https://docs.aws.amazon.com/fsx/latest/LustreGuide/migrating-fsx-lustre.html and am not sure what I'm missing. The FSx console shows both file systems as available. I've tried searching online but haven't found anyone who has done this before, no videos or anything like that. It feels like I'm missing something, but I'm not sure what it is.
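To make the setup concrete, the port-specific ingress rules I described on both security groups were created roughly like the sketch below (an illustration only, not my exact commands; the group ID is a placeholder):

```python
# Sketch of the ingress rules described above: TCP 988 and 1021-1023,
# with the security group itself as the allowed source.
import boto3

ec2 = boto3.client("ec2")
SG_ID = "sg-0123456789abcdef0"  # placeholder

for from_port, to_port in [(988, 988), (1021, 1023)]:
    ec2.authorize_security_group_ingress(
        GroupId=SG_ID,
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": from_port,
            "ToPort": to_port,
            "UserIdGroupPairs": [{"GroupId": SG_ID}],
        }],
    )
```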
1
answers
0
votes
35
views
asked 4 months ago
What is the easiest/simplest way to set up a sync between an EBS volume attached to an EC2 instance and S3?
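To illustrate what I mean by sync, a naive cron-driven sketch would look something like this (the bucket name and mount point are placeholders); I'm hoping there is a simpler or more managed option than re-uploading everything on each run:

```python
# Sketch: naive one-way upload of a directory on the EBS volume to S3.
import os
import boto3

s3 = boto3.client("s3")
BUCKET = "my-sync-bucket"   # placeholder
SOURCE = "/mnt/data"        # placeholder mount point of the EBS volume

for dirpath, _, filenames in os.walk(SOURCE):
    for name in filenames:
        path = os.path.join(dirpath, name)
        key = os.path.relpath(path, SOURCE)
        s3.upload_file(path, BUCKET, key)
```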
1
answers
0
votes
244
views
asked 4 months ago
DataSync agent reboots to a GRUB prompt and has a corrupted kernel file. I have had 3 different DataSync agents in my VMware cluster reboot unexpectedly, boot to a GRUB prompt, and report 'you need to load a kernel first' when you run 'boot' from the prompt.

**Observations:** On all 3 DataSync VMs it has always been the same kernel version file that is corrupted. Running `linux /boot/vmlinuz-5.4.217-126.408.amzn2.x86_64 root=/dev/sda1` returns the error `"error: ../../grub-core/loader/i386/linux.c:696:invalid magic number."`, which indicates that the hex data at the beginning of the file is not correct. ![Invalid magic number](/media/postImages/original/IMnrLMcXt3SIO8ss5Qg9lLgQ)

**Resolution:** I have been able to get the VM back online by pointing GRUB at an older kernel and initrd image, running the following commands at the GRUB prompt:

1. `set root=(hd0,gpt1)`
2. `linux /boot/vmlinuz-5.4.214-120.368.amzn2.x86_64 root=/dev/sda1` (this tells GRUB to use the 214 kernel rather than the corrupted 217 version)
3. `initrd /boot/initramfs-5.4.214-120.368.amzn2.x86_64.img`
4. `boot`

After the machine boots, you have to reboot it once more to get the network adapter working (VMware Tools doesn't load when booting from the GRUB prompt).

***My questions***

1. Is Amazon aware of this kernel issue, or has anyone else experienced it?
2. Does AWS push updates to the DataSync VMs?
3. If DataSync agents are receiving updates, can we opt out of some or all of them, or control when they are applied? These DataSync agents caused production workflow outages when they rebooted to the GRUB prompt unannounced and unexpectedly.
4. Are we allowed to access the full VM terminal so we can run things like 'update-grub' to investigate and resolve these issues further?
Accepted Answer · AWS DataSync
1
answers
0
votes
104
views
S4TV
asked 5 months ago
Hi guys, I'm trying to copy data from an EFS file system in us-east-1 to an S3 bucket in us-east-1, so I created a new DataSync task to do the job. The mount target is the IP address of the EFS file system, and the task is complaining about a security group issue. Here is the error: Task failed to access location loc-0426eaa718cb13c76: x40016: Failed to connect to EFS mount target with IP: 10.107.196.128. Please ensure that mount target's security group allows 2049 ingress from the DataSync security group or hosts within the mount target's subnet. The DataSync security group should also allow all egress to the EFS mount target and its security group.

I am having a hard time deciphering the two statements.

1) "Please ensure that mount target's security group allows 2049 ingress from the DataSync security group or hosts within the mount target's subnet." Is the mount target the EFS file system itself, or the EC2 instance that EFS is mounted on? If I am not mistaken, EFS does not have security groups but the EC2 instance does, so is this statement asking me to ensure that the EC2 instance's security group allows 2049 inbound from the DataSync security group?

2) "The DataSync security group should also allow all egress to the EFS mount target and its security group." After creating the DataSync security group, do I add this new group to the EC2 instance that EFS is mounted to?

Thank you.
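P.S. To make sure I'm reading statement 1 correctly: if the mount target does have its own security group, is it asking for a rule roughly like this (a sketch only; both group IDs are placeholders)?

```python
# Sketch of my reading of statement 1: the security group on the EFS mount
# target allows NFS (TCP 2049) ingress from the DataSync task's security group.
import boto3

ec2 = boto3.client("ec2")
MOUNT_TARGET_SG = "sg-aaaa1111aaaa1111a"  # placeholder
DATASYNC_SG = "sg-bbbb2222bbbb2222b"      # placeholder

ec2.authorize_security_group_ingress(
    GroupId=MOUNT_TARGET_SG,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 2049,
        "ToPort": 2049,
        "UserIdGroupPairs": [{"GroupId": DATASYNC_SG}],
    }],
)
```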
1
answers
0
votes
140
views
asked 6 months ago
Hi there! Could someone please help me with the voicemail feature? We set up a call center in SF with Amazon Connect. One of our requirements is the ability for customers to leave a voicemail when no agent is available. We set up all the configuration from this doc: https://github.com/amazon-connect/amazon-connect-salesforce-scv/tree/master/Solutions/VMX2-VoicemailExpress . But unfortunately the VMXSFDCTestFlow-callcenter00d3k0000008pe6 contact flow does not create the record in SF.
1
answers
0
votes
194
views
asked 7 months ago
Hello, I created a DataSync task that transferred 8 files within a directory and its subdirectories. Reviewing the logs of the task, I can see that different requests are generated (created, transferred, verified). When I executed the task again, no transfer was made because no new changes were detected, but the task log still shows a request against the root ("verified directory /").

My question is: when DataSync finds no changes to make, does it still issue N requests across the total number of files and directories that are already up to date? For example, if I have 15,000 files already up to date in a bucket and I run the task again without any changes, will AWS still charge me for listing all the files that have already been transferred?

I attach images. I have only 22 objects (directories, files and subdirectories), and 43 task executions were run, of which 40 transferred no files (they only show "verified directory /"). Checking my cost manager, these made about 1,840 requests (PUT, COPY, POST or LIST) to Amazon S3.

![Enter image description here](/media/postImages/original/IMr18LLdAzQ029NZYHaYyj6w) ![Enter image description here](/media/postImages/original/IM6a6FqBFvR3y9M2cc_47f3g) ![Enter image description here](/media/postImages/original/IM5rh_GTLaQaq5fKUyCZolkA)
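Doing rough arithmetic on the numbers above (just a sketch to show my working):

```python
# Rough arithmetic on the numbers from my setup above.
objects = 22                 # files, directories and subdirectories
no_change_executions = 40    # executions that only logged "verified directory /"
observed_requests = 1840     # PUT/COPY/POST/LIST requests seen in my cost manager

per_execution = observed_requests / no_change_executions  # 46.0
per_object = per_execution / objects                       # ~2.1
print(per_execution, per_object)
```

So it looks like each verification-only run still issues roughly two S3 requests per object, which is what I would like confirmed from a billing point of view.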
1
answers
0
votes
105
views
asked 7 months ago
DataSync does not let me configure scheduled tasks with an interval of less than 60 minutes. Is there any alternative if I want my DataSync transfer task to run every 5 minutes? ![Enter image description here](/media/postImages/original/IMogprZSMnToOv1UXUOQyKVA)
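The workaround I'm considering is to skip the built-in schedule and start the task execution myself from a Lambda invoked by an EventBridge rule with a rate(5 minutes) schedule, roughly like the sketch below (the task ARN is a placeholder), but I'd like to know if there is a better option:

```python
# Sketch of a Lambda handler that an EventBridge rule (rate(5 minutes))
# could invoke to start the DataSync task outside the built-in scheduler.
import boto3

datasync = boto3.client("datasync")
TASK_ARN = "arn:aws:datasync:us-east-1:111122223333:task/task-0123456789abcdef0"  # placeholder

def handler(event, context):
    # Note: a DataSync task runs one execution at a time, so this call
    # fails if the previous execution has not finished yet.
    response = datasync.start_task_execution(TaskArn=TASK_ARN)
    return response["TaskExecutionArn"]
```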
1
answers
0
votes
242
views
asked 7 months ago
I am using the aws-amplify API to manipulate DataStore. Everything was going fine and all queries were running successfully until it suddenly stopped working. In my case I am building an NPM package that wraps aws-amplify functionality with Node and TypeScript, and another developer is using the package to build a native app with React Native. When I implement new functions I test them locally with ts-node (things like DataStore.query or DataStore.save, etc.), and the other developer tests with Expo after installing the latest package release I have published. At one point we got an error saying:

```
[WARN] 04:14.549 DataStore, Object { "cause": Object { "error": Object { "errors": Array [ Object { "message": "Connection failed: {\"errors\":{\"errorType\":\"MaxSubscriptionsReachedError\",\"message\":\"Max number of 100 subscriptions reached\"}}", }, ], },
```

When that happened, I tried to run queries locally and they worked fine, with a warning:

```
[WARN] 33:35.743 DataStore - Realtime disabled when in a server-side environment
```

So we thought it was a cache problem or something similar. But now nothing works at all in DataStore. If I try to run code locally with ts-node, the console freezes and never comes back. For example, if I do `await DataStore.query(AccountDetails, "a6603b3e-4ae1-4f6c-9360-bd82fe01dd0d")`, the console freezes with this warning message: ![Enter image description here](https://repost.aws/media/postImages/original/IMdp4AdBO9SAuBC8xX8R_rtg)

We tried to fix AppSync and the subscriptions, but it is not working at all. The Cognito user pool works fine, S3 is also fine, only DataStore is sad :(

```
// How we configure Amplify
this.awsExports = Amplify.configure({ ...awsConfig });

// How we import DataStore
import {DataStore} from "@aws-amplify/datastore";

// Our dependencies
"dependencies": {
    "@aws-amplify/core": "^4.6.0",
    "@aws-amplify/datastore": "^3.12.4",
    "@react-native-async-storage/async-storage": "^1.17.4",
    "@react-native-community/netinfo": "^8.3.0",
    "@types/amplify": "^1.1.25",
    "algoliasearch": "^4.14.1",
    "aws-amplify": "^4.3.29",
    "aws-amplify-react-native": "^6.0.5",
    "aws-sdk": "^2.1142.0",
    "aws-sdk-mock": "^5.7.0",
    "eslint-plugin-jsdoc": "^39.2.9",
    "mustache": "^4.2.0"
}
```

Can anyone please help?
0
answers
0
votes
79
views
asked 8 months ago
Hi, I have been using this documentation ([Build a Cloud Sync Engine](https://docs.microsoft.com/en-us/windows/win32/cfapi/build-a-cloud-file-sync-engine)) to build a cloud file sync engine. Is there any way to sync data from AWS the way OneDrive does? We already have users' data in an S3 bucket, and I'm planning to sync it with this, displaying the folders and files in the user's account inside the cloud sync folder, like this: ![Enter image description here](https://repost.aws/media/postImages/original/IM7q4z89n4TLSJJutnPFNC1Q) Kindly suggest a way to integrate AWS with the cloud sync engine. Any suggestions would be helpful! Thanks for your time.
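To illustrate the S3 side of what I have in mind, this is a sketch of enumerating a user's folders and files, which is the tree I would want mirrored as placeholders in the sync root (the bucket name and prefix are placeholders, not our actual layout):

```python
# Sketch: list a user's "folders" (common prefixes) and files under their
# prefix in S3, as the structure to mirror in the local sync folder.
import boto3

s3 = boto3.client("s3")
BUCKET = "my-users-bucket"   # placeholder
PREFIX = "users/alice/"      # placeholder

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX, Delimiter="/"):
    for folder in page.get("CommonPrefixes", []):
        print("folder:", folder["Prefix"])
    for obj in page.get("Contents", []):
        print("file:", obj["Key"], obj["Size"])
```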
1
answers
0
votes
82
views
asked 8 months ago
Dear experts, we have a Windows file server in our data center with approximately 3.5 TB of data allocated across 4 disks. The file server runs Windows Server 2012. We are migrating this file server to AWS using AWS Application Migration Service (MGN). The business is not ready to use Amazon FSx for Windows File Server as the option here, so we are migrating the file server as-is. After migration, the server will be upgraded to Server 2019 and then the cutover will be done.

Do you foresee any issues with migrating such a large amount of data? For the replication disks we are using gp3, and replication is happening over the internet, so the replication process is quite slow. The main query, though, is whether you foresee any issues with the upgrade and cutover of the instance after replication completes. The business expects all file share permissions to be replicated to the target.

Thanks
1
answers
0
votes
86
views
RiJo
asked 8 months ago