Questions tagged with AWS DataSync

Copy Data from EFS to EFS in Same Account & Region

Hi everyone. I'm testing what should be a simple use case in DataSync. For the moment I'm just using the console to set up two EFS locations (source and destination) and one unscheduled task to copy all data (from the task path) from source to destination. Since these two file systems are in the same region and the same account, I think (hope) I can use a serverless setup that does not require an agent (as, for example, NFS to EFS would). I have set up only these resources in DataSync:

- Location: source EFS in us-east-1 with the correct subnet and security group, and "/" as the mount path
- Location: destination EFS in us-east-1 with the correct subnet and security group, and "/" as the mount path
- Task: in us-east-1, configured with the source and destination and mostly default settings, except for this notable config: Data transfer configuration > Specific files and folders > Include patterns > `/home/user/subfolder/`

Everything looks good and I am able to start the task manually. It moves through the *Launching* task status and then shows *Success*, but I see that only 1 file is transferred (there are 122 files on the source file system that I am attempting to copy). Looking at the destination from an EC2 instance, I see only a new hidden directory named `.aws-datasync` created in `/home/user/subfolder/`. I thought perhaps my include pattern was slightly off, so I tested these:

- `/home/user/subfolder/`
- `/home/user/subfolder`
- `/home/user/subfolder/*`
- `/home/user/subfolder*`

but the results are always the same: the task succeeds, but no files (except the hidden directory) are transferred. Any help is much appreciated. Ben
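One way to take the console out of the equation is to drive the same task from the AWS CLI and read the execution counters directly. A minimal sketch, assuming the task already exists and that the ARN below is a placeholder (DataSync filters use the `SIMPLE_PATTERN` type, and patterns are matched relative to the location's mount path, `/` here):

```bash
# Hypothetical ARN; substitute the real one from the DataSync console.
TASK_ARN="arn:aws:datasync:us-east-1:111122223333:task/task-0123456789abcdef0"

# Start an execution with an explicit include filter; the trailing "*"
# matches everything under the folder rather than the folder entry itself.
EXEC_ARN=$(aws datasync start-task-execution \
  --task-arn "$TASK_ARN" \
  --includes FilterType=SIMPLE_PATTERN,Value="/home/user/subfolder/*" \
  --query TaskExecutionArn --output text)

# Inspect what the execution actually did: how many files DataSync
# estimated it would move versus how many it transferred.
aws datasync describe-task-execution \
  --task-execution-arn "$EXEC_ARN" \
  --query '{Status:Status,Estimated:EstimatedFilesToTransfer,Transferred:FilesTransferred}'
```

If `EstimatedFilesToTransfer` is already 0 or 1 here, the filter is not matching the 122 files at all, which would point at a pattern/mount-path mismatch rather than a permissions problem.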
0 answers · 0 votes · 8 views
bwmills · asked 6 hours ago

AWS DataSync sometimes creates 0-sized S3 objects and doesn't report any error

I am using AWS DataSync to copy/back up data from bucket A (Singapore region) to bucket B (Ohio region). I did this in steps, via multiple sequential back-to-back executions of the DataSync task, to make it more manageable for me to monitor. The task was configured with the option "Log basic information such as transfer errors".

One such task execution was run with a particular include filter, say "/folderA*". A few minutes into the execution I realized I wanted to make some changes and therefore cancelled it. The execution stopped without any transfer error message in the CloudWatch log stream. I subsequently started the execution again with the same include filter, and this task succeeded. There were no errors in the log stream for this task either.

However, I can now see that some of the files in bucket A weren't transferred correctly: they show up as 0-sized objects in bucket B. Why did this happen? Why were no errors reported during either of the two task executions? How can I rely on AWS DataSync to transfer reliably if such issues happen? Note that this was about a 40 TB transfer, and I found the issue only because I ran my own sanity checks on the transferred data.

PS: The source data is in the Standard storage class. The destination location is also Standard, but the destination bucket has a lifecycle rule that transitions incoming data to Intelligent-Tiering immediately.
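For sanity checks like the one described above, a quick way to enumerate the suspect objects is to list the destination bucket and keep only the zero-byte keys. A minimal sketch, assuming `bucket-b` stands in for the real destination bucket name:

```bash
# Print the key of every zero-byte object in the destination bucket.
# The AWS CLI paginates list-objects-v2 automatically, so this works on
# large buckets too (it may just take a while at this scale).
aws s3api list-objects-v2 \
  --bucket bucket-b \
  --query 'Contents[?Size==`0`].Key' \
  --output text
```

Re-running the task with the verification option that checks all data in the destination (rather than only the data transferred) should also surface such objects, since a zero-byte copy of a non-empty source object cannot pass checksum comparison.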
1 answer · 0 votes · 20 views
asked 10 days ago

AWS DataSync Agent VM kernel corruption issue

The DataSync Agent reboots to a GRUB prompt with a corrupted kernel file. I have had 3 different DataSync agents in my VMware cluster reboot unexpectedly, boot to a GRUB prompt, and report 'you need to load a kernel first' if you run `boot` from the prompt.

**Observations:** On all 3 DataSync VMs it has always been the same kernel version file that is corrupted. When you run:

`linux /boot/vmlinuz-5.4.217-126.408.amzn2.x86_64 root=/dev/sda1`

it returns the error `error: ../../grub-core/loader/i386/linux.c:696:invalid magic number.`, which indicates that the hex data at the beginning of the file is not correct.

**Resolution:** I have been able to successfully get the VM back online by setting GRUB to use an older kernel and initrd image file, running the following commands at the GRUB prompt:

1. `set root=(hd0,gpt1)`
2. `linux /boot/vmlinuz-5.4.214-120.368.amzn2.x86_64 root=/dev/sda1` (this tells GRUB to use the 214 kernel, not the corrupted 217 version)
3. `initrd /boot/initramfs-5.4.214-120.368.amzn2.x86_64.img`
4. `boot`

After the machine boots, you have to reboot it once more to get the network adapter to function (VMware Tools doesn't load when you boot from the GRUB prompt).

***My questions:***

1. Is Amazon aware of this kernel issue, or has anyone else experienced it?
2. Does AWS push updates to the DataSync VMs?
3. If DataSync agents are receiving updates, can we opt out of some or all of them, or control when they are applied? These DataSync agents caused production workflow outages when they rebooted to the GRUB prompt unannounced and unexpectedly.
4. Are we allowed to access the full VM terminal so we can work on things like running `update-grub` to further investigate and resolve these issues?
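One note on the 'invalid magic number' observation above: an x86 Linux kernel image carries the ASCII signature `HdrS` at byte offset 0x202 of its boot header, so the corruption can be confirmed from any rescue environment that can mount the agent's disk. A minimal sketch, assuming rescue media is attached to the VM and the agent's root filesystem is `/dev/sda1` as in the GRUB commands above:

```bash
# From a rescue shell: mount the agent's root filesystem read-only.
mkdir -p /mnt/agent
mount -o ro /dev/sda1 /mnt/agent

# A bootable bzImage has the ASCII signature "HdrS" at offset 0x202
# (decimal 514); anything else means the kernel file is damaged.
dd if=/mnt/agent/boot/vmlinuz-5.4.217-126.408.amzn2.x86_64 \
   bs=1 skip=514 count=4 2>/dev/null; echo
# Expected output for a healthy kernel: HdrS

# Compare with the older kernel that still boots.
dd if=/mnt/agent/boot/vmlinuz-5.4.214-120.368.amzn2.x86_64 \
   bs=1 skip=514 count=4 2>/dev/null; echo
```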
1 answer · 0 votes · 80 views
S4TV · asked a month ago

Does AWS DataSync count and charge S3 requests when a task finds no changes to make but still verifies the directory / where N files are located?

Hello, I created a task in DataSync which transferred 8 files within a directory and its subdirectories. Reviewing the logs of that task, I observe that different requests are generated (created, transferred, verified). When I executed a new run of the task, no transfer was made because no new changes were detected, but reviewing the task log again, I observed that requests were still made against the root (verified directory /).

My question is: when DataSync finds no changes to make, does it still make N requests covering the total number of files and directories that are already up to date? For example, if I have 15,000 files already up to date in a bucket and I execute the task again without it making changes, will AWS still charge me for listing all the files that have already been transferred?

In my case I have only 22 objects, counting directories, files and subdirectories. 43 task executions were run, of which 40 transferred no files (they only verified directory /), and checking my cost manager, those runs made about 1,840 requests (PUT, COPY, POST or LIST requests) to Amazon S3.
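The arithmetic in the question can be sanity-checked offline. Even a no-change execution still has to enumerate and verify every object, so the request floor scales with object count times run count; the exact mix of LIST/HEAD requests DataSync issues is not documented here, so treat this as a rough estimate only. A sketch, with `my-datasync-bucket` as a placeholder bucket name:

```bash
# Count the objects DataSync must enumerate on every execution,
# even when nothing has changed.
OBJECTS=$(aws s3api list-objects-v2 \
  --bucket my-datasync-bucket \
  --query 'length(Contents)')

RUNS=43

# Rough floor: at least one request per object per run. With the numbers
# from the post, 22 x 43 = 946; additional per-object metadata checks on
# top of listing would land near the ~1,840 requests observed.
echo "At least $((OBJECTS * RUNS)) S3 requests across $RUNS runs"
```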
1 answer · 0 votes · 53 views
asked 3 months ago

Amplify DataStore stopped syncing

I am using the aws-amplify API to manipulate the DataStore. Everything was going fine and all queries were running successfully until it suddenly stopped working. In my case I am building an NPM package to wrap aws-amplify functionality with Node and TypeScript, and another developer is using the package to build a native app with React Native. So when I implement new functions I test them locally with ts-node (something like DataStore.query or DataStore.save, etc.), and the other developer tests with Expo after installing the latest package release I have published. At one point we got this error:

```
[WARN] 04:14.549 DataStore, Object {
  "cause": Object {
    "error": Object {
      "errors": Array [
        Object {
          "message": "Connection failed: {\"errors\":{\"errorType\":\"MaxSubscriptionsReachedError\",\"message\":\"Max number of 100 subscriptions reached\"}}",
        },
      ],
    },
```

When that happened, I tried to run queries locally and they worked fine, with a warning:

```
[WARN] 33:35.743 DataStore - Realtime disabled when in a server-side environment
```

So we thought it was a cache problem or something. But now nothing in the DataStore works at all. If I try to run code locally with ts-node, the console freezes and never comes back. For example, if I do:

`await DataStore.query(AccountDetails, "a6603b3e-4ae1-4f6c-9360-bd82fe01dd0d")`

the console freezes after printing the warning message above. We tried to fix AppSync and the subscriptions, but it is not working at all. The Cognito user pool works fine, S3 is also fine; only the DataStore is sad :(

```
// How we configure Amplify
this.awsExports = Amplify.configure({ ...awsConfig });

// How we import DataStore
import { DataStore } from "@aws-amplify/datastore";

// Our dependencies
"dependencies": {
    "@aws-amplify/core": "^4.6.0",
    "@aws-amplify/datastore": "^3.12.4",
    "@react-native-async-storage/async-storage": "^1.17.4",
    "@react-native-community/netinfo": "^8.3.0",
    "@types/amplify": "^1.1.25",
    "algoliasearch": "^4.14.1",
    "aws-amplify": "^4.3.29",
    "aws-amplify-react-native": "^6.0.5",
    "aws-sdk": "^2.1142.0",
    "aws-sdk-mock": "^5.7.0",
    "eslint-plugin-jsdoc": "^39.2.9",
    "mustache": "^4.2.0"
}
```

Can anyone help?
0 answers · 0 votes · 62 views
asked 4 months ago