Questions tagged with AWS DataSync


I am consistently unable to start a DataSync task to transfer from a local NAS (Synology) to S3. I've tried dozens of times with slight changes (NFS versions, squash options, agent reboots, etc.) with no success. The tasks all end up in the Unavailable status, with the error message "mount.nfs: access denied by server while mounting...". Setup:
- DataSync agent running as a KVM VM on Ubuntu 20.04 LTS
- DataSync agent connected
- NFS share on a Synology NAS on the local network
- NFS share permissions granted for the agent IP (and even for the whole local network)
- DataSync agent connectivity and NFS connectivity tests all pass
- Mounting the NFS share directly on Ubuntu 20.04 LTS works perfectly
Could you please help? What is going on?
2
answers
0
votes
860
views
asked a year ago
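A note on this one: "access denied by server while mounting" often points at the Synology export not allowing the agent's IP for the exact subdirectory configured in the DataSync location, or at the NFS version being auto-negotiated differently than the manual mount. Below is a minimal boto3 sketch that recreates the NFS location with an explicitly pinned NFS version; the hostname, subdirectory, region, and agent ARN are placeholders, not values from the question.

```python
import boto3

datasync = boto3.client("datasync", region_name="eu-west-1")  # region is an assumption

# Recreate the NFS location with an explicit NFS version instead of AUTOMATIC.
# ServerHostname, Subdirectory, and the agent ARN are placeholders for illustration.
location = datasync.create_location_nfs(
    ServerHostname="192.168.1.50",   # Synology NAS IP
    Subdirectory="/volume1/share",   # must match the exported path exactly
    OnPremConfig={
        "AgentArns": ["arn:aws:datasync:eu-west-1:111122223333:agent/agent-0123456789abcdef0"]
    },
    MountOptions={"Version": "NFS4_0"},  # try NFS3 / NFS4_0 / NFS4_1 explicitly
)
print(location["LocationArn"])
```

If the task becomes available with a pinned version but not with AUTOMATIC, the problem is version negotiation on the NAS side rather than the export permissions.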
I have worked with S3 for a couple of years and was familiar with the old buckets and how to share files publicly. However, recently when I created a new bucket, I have not been able to share a file with the public the way I had done in the past. When setting up the bucket and file, I unchecked "Block public access", but the URL link still does not work; it says Access Denied. Can someone please explain what else I need to do on the file/bucket to allow this?
2
answers
0
votes
745
views
asked a year ago
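Newer buckets disable ACLs by default (Object Ownership: bucket owner enforced), so the old "make public" object ACL path no longer applies; public read has to come from a bucket policy after Block Public Access is turned off. A minimal boto3 sketch with a placeholder bucket name:

```python
import json
import boto3

s3 = boto3.client("s3")
bucket = "example-public-bucket"  # placeholder name

# 1. Turn off the bucket-level Block Public Access settings.
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": False,
        "IgnorePublicAcls": False,
        "BlockPublicPolicy": False,
        "RestrictPublicBuckets": False,
    },
)

# 2. Grant public read on the objects via a bucket policy.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "PublicReadGetObject",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{bucket}/*",
    }],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```

Note that account-level Block Public Access, if enabled, still overrides the bucket-level settings.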
Hi guys, I deployed the DataSync agent on premises (VMware environment) successfully and ran the "Test Network Connectivity" check to AWS DataSync, which also succeeded. I saw the forum thread "https://forums.aws.amazon.com/thread.jspa?threadID=302863&start=25&tstart=0", but I don't see any answer there that fixes it, as all messages were sent by PM. Can anyone help me with this? I am having an issue with the DataSync agent: I deployed the agent machine in an ESXi environment and all connectivity tests pass, but whenever I click Get Key it tries to connect, eventually nothing appears, and the page cannot be displayed. Has anyone faced the same issue? Please help, as I have a data migration scheduled for today/tomorrow.
2
answers
0
votes
495
views
asked a year ago
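The Get Key step requires the machine running your browser to reach the agent on port 80, so a timeout there usually means that path is blocked even though the agent's own outbound tests pass. If you can obtain the activation key another way (for example from the agent VM's local console, if your agent version exposes it), the agent can be activated through the API instead of the browser flow. A hedged boto3 sketch; the region, key, and name are placeholders:

```python
import boto3

datasync = boto3.client("datasync", region_name="ap-southeast-1")  # region is an assumption

# Activate the agent with a key obtained outside the browser flow.
# The ActivationKey below is a placeholder, not a real key.
response = datasync.create_agent(
    ActivationKey="ABCDE-12345-FGHIJ-67890-KLMNO",
    AgentName="esxi-datasync-agent",
)
print(response["AgentArn"])
```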
Hi, We are planning to ingest approximately 200,000 (2 lakh) small PDF files (approx. 40 KB each) in burst mode from on premises to the cloud (S3) on a daily basis through AWS DataSync. We want to keep track of the files ingested each day and keep updating the processing status as each file gets processed. Can someone suggest the best way to achieve this? We initially thought of using S3 events to call Lambda to insert a record into a DynamoDB table, but since the files may arrive as a burst into the same S3 bucket, this might cause throttling on DynamoDB or hit the account's Lambda concurrency limits. Please let me know the best way we can implement this. Thanks in advance. Regards, Dhaval Mehta
4
answers
1
votes
105
views
asked a year ago
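One common pattern for absorbing bursts like this (not the only one) is to send the S3 event notifications to an SQS queue, let Lambda consume the queue in batches, and use DynamoDB on-demand capacity plus batched writes so a spike of 200,000 small files does not translate into 200,000 individual invocations and writes. A minimal sketch, assuming a hypothetical table named "file-ingest-status" with partition key "s3_key" and a queue wired to the bucket's notifications:

```python
import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("file-ingest-status")  # hypothetical table, partition key "s3_key"

def handler(event, context):
    """Lambda triggered by an SQS queue that receives the S3 event notifications.
    SQS batching smooths the burst, and batch_writer() groups the DynamoDB writes."""
    with table.batch_writer() as batch:
        for record in event["Records"]:              # SQS records
            s3_event = json.loads(record["body"])    # each body is an S3 notification
            for s3_record in s3_event.get("Records", []):
                batch.put_item(Item={
                    "s3_key": s3_record["s3"]["object"]["key"],
                    "ingested_at": s3_record["eventTime"],
                    "status": "INGESTED",
                })
```

The queue's batch size and the Lambda reserved concurrency then become the knobs that keep you inside account limits.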
Hello, Where can I find more details on AWS's approach to data models? This would include industry-specific data models that AWS is fully invested in.
1
answers
0
votes
51
views
asked a year ago
Dear Team, We have a source system in our on-premises DC that will upload thousands of PDFs and their metadata XML files to a Windows NAS. We need to upload these to S3 and process them further. To process a document, we need to make sure that both the PDF and its metadata XML file are available on S3. I wanted to check whether there is a way in AWS DataSync to make sure that a PDF and its corresponding metadata XML file (both have the same name but different extensions) are loaded to S3 together, so that we can start processing as soon as the files are uploaded to S3. Thank you in advance. Regards, Dhaval Mehta
1
answers
0
votes
65
views
asked a year ago
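As far as I'm aware, DataSync has no built-in notion of file pairs, so a common workaround is to gate processing on the arrival of the second file rather than on the transfer itself. A minimal Lambda sketch driven by S3 object-created events; the downstream start_processing step is hypothetical (it could be an SQS message, a Step Functions execution, etc.):

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def handler(event, context):
    """Triggered by S3 object-created events. Starts processing only once BOTH
    the .pdf and its same-named .xml are present, regardless of arrival order."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        base, _, ext = key.rpartition(".")
        if ext.lower() not in ("pdf", "xml"):
            continue
        counterpart = f"{base}.xml" if ext.lower() == "pdf" else f"{base}.pdf"
        try:
            s3.head_object(Bucket=bucket, Key=counterpart)
        except ClientError:
            continue  # the other half of the pair has not arrived yet
        start_processing(base)  # hypothetical downstream step

def start_processing(base_name):
    print(f"Both files for {base_name} are in S3; start processing here.")
```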
I am trying to use AWS Backup to back up EFS. In case a client restores a backup, I need to re-send the restored files. I tried restoring the backup into a new EFS file system and then sending it to S3 using DataSync, with no success, because the restored EFS file system has no mount targets. If I restore into the same EFS file system, how can I access only the backup files to send them to S3 and then delete them from EFS? Is there a better way to achieve this?
2
answers
0
votes
297
views
asked a year ago
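If restoring into a new file system is acceptable, the missing piece is just a mount target: DataSync needs at least one mount target it can reach before an EFS location can be used. A hedged boto3 sketch; the file system ID, subnet, security group, region, and account ID are placeholders:

```python
import boto3

efs = boto3.client("efs")
datasync = boto3.client("datasync")

restored_fs_id = "fs-0123456789abcdef0"     # file system created by the restore job (placeholder)
subnet_id = "subnet-0123456789abcdef0"      # placeholder
security_group_id = "sg-0123456789abcdef0"  # must allow NFS (TCP 2049) from DataSync

# 1. Give the restored file system a mount target so DataSync can reach it.
efs.create_mount_target(
    FileSystemId=restored_fs_id,
    SubnetId=subnet_id,
    SecurityGroups=[security_group_id],
)

# 2. Point a DataSync EFS location at it. When restoring into an existing file system,
#    the recovered data lands under a dated aws-backup-restore_* directory instead,
#    which you can target via Subdirectory and delete after the transfer.
location = datasync.create_location_efs(
    EfsFilesystemArn=f"arn:aws:elasticfilesystem:us-east-1:111122223333:file-system/{restored_fs_id}",
    Ec2Config={
        "SubnetArn": f"arn:aws:ec2:us-east-1:111122223333:subnet/{subnet_id}",
        "SecurityGroupArns": [f"arn:aws:ec2:us-east-1:111122223333:security-group/{security_group_id}"],
    },
    Subdirectory="/",
)
print(location["LocationArn"])
```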
I am attempting to deploy a KVM DataSync agent on my CentOS 7 host, and I am stuck on the activation step. After the DataSync agent starts, it is assigned the address 192.168.122.19. I have configured an AWS VPC (CIDR 10.0.0.0/16) and established a site-to-site VPN tunnel to my on-prem private network. I can ping and SSH from my site to an EC2 instance (10.0.1.252) in my VPC, and I can ping from an EC2 instance in my VPC back to various machines in my private network (192.168.1.0/24). I have created a DataSync endpoint in my VPC; it was assigned the address 10.0.1.138. When I run the "Test Network Connectivity" option from within the DataSync agent console, it partially fails:
10.0.1.138:443 FAILED
10.0.1.138:1024-1064
10.0.1.138:1026 FAILED
10.0.1.138:1027 FAILED
10.0.1.138:1029 FAILED
54.201.223.107:22 PASSED
0.amazon.pool.ntp.org:123 PASSED
1.amazon.pool.ntp.org:123 PASSED
2.amazon.pool.ntp.org:123 PASSED
3.amazon.pool.ntp.org:123 PASSED
Any suggestions on what I might have configured incorrectly? Thanks.
2
answers
0
votes
113
views
asked a year ago
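Two things commonly cause exactly this failure pattern: the VPC endpoint's security group not allowing TCP 443 and the 1024-1064 control-port range from the agent, and the agent sitting on the default libvirt NAT network (192.168.122.0/24), which may not be routed or allowed over the site-to-site VPN the way 192.168.1.0/24 is. A boto3 sketch for the security-group side; the group ID, region, and CIDR are placeholders you would adjust to your routed ranges:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")  # region is an assumption

endpoint_sg = "sg-0123456789abcdef0"  # security group attached to the DataSync VPC endpoint (placeholder)
onprem_cidr = "192.168.0.0/16"        # wide enough to cover 192.168.1.0/24 and the libvirt 192.168.122.0/24 network

# The VPC endpoint must accept TCP 443 plus the 1024-1064 control ports from the agent.
ec2.authorize_security_group_ingress(
    GroupId=endpoint_sg,
    IpPermissions=[
        {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
         "IpRanges": [{"CidrIp": onprem_cidr, "Description": "DataSync agent to VPC endpoint"}]},
        {"IpProtocol": "tcp", "FromPort": 1024, "ToPort": 1064,
         "IpRanges": [{"CidrIp": onprem_cidr, "Description": "DataSync agent control ports"}]},
    ],
)
```

If the security group is already open, check that the VPN's route tables and encryption-domain/allowed prefixes include the agent's actual source network.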
Hi, This is regarding the handling of reserved characters and filenames when migrating data between S3 and other locations. How are characters and filenames that are restricted at the destination handled when we transfer data from Amazon S3? For example, Windows does not support certain special characters in file/folder names. So if an object key in Amazon S3 contains these restricted characters, what will the filename be at the destination? Are these files/folders skipped, or is there some kind of character replacement? Thanks, Arti
1
answers
0
votes
174
views
asked 2 years ago
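Independent of how any particular tool handles it, a practical pre-check is to scan the source keys for characters Windows forbids before running the transfer, so affected objects can be renamed or excluded deliberately rather than discovered as failures. A small sketch with a placeholder bucket name:

```python
import boto3

# Characters not allowed in Windows file or folder names
# ("/" is excluded because it acts as the folder delimiter in S3 keys).
WINDOWS_RESERVED = set('<>:"\\|?*')

s3 = boto3.client("s3")
bucket = "example-source-bucket"  # placeholder

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        bad = WINDOWS_RESERVED.intersection(key)
        # Trailing spaces or dots are also invalid in Windows names.
        if bad or key != key.rstrip(" ."):
            print(f"{key} -> problem: {sorted(bad) or 'trailing space/dot'}")
```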
Hi, My agent is active (SMB to FSx), location ca-central-1. My task always stays in the LAUNCHING state and I cannot find out why. Is there a policy I can add so that CloudWatch captures the mount and creation logs when the task starts? Thanks.
2
answers
1
votes
75
views
asked 2 years ago
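To get mount/launch details into CloudWatch, the task needs a log group it is allowed to write to: create the group, attach a resource policy trusting the DataSync service, and point the task at the group with a raised log level. A boto3 sketch; the account ID and task ARN are placeholders:

```python
import json
import boto3

logs = boto3.client("logs", region_name="ca-central-1")
datasync = boto3.client("datasync", region_name="ca-central-1")

log_group = "/aws/datasync"   # name is a convention, not a requirement
account_id = "111122223333"   # placeholder
task_arn = "arn:aws:datasync:ca-central-1:111122223333:task/task-0123456789abcdef0"  # placeholder

# 1. A log group for DataSync to write into.
logs.create_log_group(logGroupName=log_group)

# 2. A resource policy that lets the DataSync service write to it.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DataSyncLogsToCloudWatch",
        "Effect": "Allow",
        "Principal": {"Service": "datasync.amazonaws.com"},
        "Action": ["logs:PutLogEvents", "logs:CreateLogStream"],
        "Resource": f"arn:aws:logs:ca-central-1:{account_id}:log-group:{log_group}:*",
    }],
}
logs.put_resource_policy(policyName="trust-datasync", policyDocument=json.dumps(policy))

# 3. Attach the log group to the task and raise the log level.
datasync.update_task(
    TaskArn=task_arn,
    CloudWatchLogGroupArn=f"arn:aws:logs:ca-central-1:{account_id}:log-group:{log_group}:*",
    Options={"LogLevel": "TRANSFER"},
)
```

A task stuck in LAUNCHING with an SMB source is often a credentials or reachability problem on the share; the log group above is what surfaces the exact mount error.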
I've downloaded the latest version of the Hyper-V agent (datasync-20201019-x86_64.vhdx) and am having trouble getting it to boot. It loads for a minute saying ">>Start PXE over IPv4." and then I get this on the screen in the VM connection:
-------
Microsoft Hyper-V UEFI Virtual Machine Boot Summary
1. SCSI Disk (0,0) The boot loader did not load an operating system.
2. Network Adapter (00155D01AF02) A boot image was not found.
No operating system was loaded. Your virtual machine may be configured incorrectly. Exit and re-configure your VM or click restart to retry the current boot sequence again.
-------
I'm deploying this on Windows 10 Pro with the latest updates, using the "Hyper-V Quick Create" app, and I have the "This VM will run Windows" box unchecked. Physical specs are a Ryzen 7 3700 with 8 cores, 32 GB of RAM, and SSDs, so I think I'm okay on the hardware side. There are no errors when I extract the 85 GB vhdx file and then create the new VM from it. I've tried turning off integration services and Enhanced Session Mode, toggling Secure Boot, verifying the boot order so the hard disk is first, and even completely re-downloading the zip and trying again. Any thoughts on why this won't work and what I can do to troubleshoot further? Edited by: jn1598 on Jan 20, 2021 6:59 PM
1
answers
0
votes
214
views
jn1598
asked 2 years ago
A customer wants to move their IPV installation (with data) from on-prem to AWS. The on-premises storage layer relies on their own object storage solution, which has an S3-compatible API (https://docs.ceph.com/en/latest/radosgw/s3/). The customer wants to move 70 TB of content to AWS together with the whole IPV suite. While the compute-related part has been solved (they have been in contact with IPV and sized according to their needs), the migration of the data is still open. We have discussed both DataSync and Snowball. DataSync can support an [on-prem object store as source][1] and keep the metadata intact, and it could work for a full migration plus scheduled syncs until the cut-over happens, but moving 70 TB takes roughly 8-9 days even with a dedicated 1 Gbit/s connection. I assume they would also need to purchase a Direct Connect to guarantee that speed. My preferred option for this customer, however, would be to use Snowball for the initial bulk migration and then use DataSync to keep the data in sync afterwards. Does Snowball allow copying the on-prem object store contents with their metadata and moving them to S3? [1]: https://aws.amazon.com/blogs/aws/aws-datasync-adds-support-for-on-premises-object-storage/
1
answers
0
votes
343
views
AWS
asked 2 years ago
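For reference, the 8-9 day figure follows from simple arithmetic once you assume a realistic sustained utilization of the 1 Gbit/s link; a small sketch:

```python
# Back-of-envelope transfer time for 70 TB over a 1 Gbit/s link.
data_bits = 70e12 * 8                 # 70 TB expressed in bits
for utilization in (1.0, 0.8, 0.7):   # theoretical maximum vs. realistic sustained throughput
    seconds = data_bits / (1e9 * utilization)
    print(f"{utilization:.0%} of 1 Gbit/s -> {seconds / 86400:.1f} days")
# ~6.5 days at line rate, ~8-9 days at 70-80% sustained utilization.
```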