S3 for backup of a large number of small files on-premises


A customer wants to use S3 as a backup solution for their on-premises files:

  • They have a huge number of small files stored on a NAS (NFS server).
  • These files are customer profiles (JSON files) and are updated from time to time.
  • For performance reasons, they set noatime on their NAS, which means we don't have information about when a file was modified.

Is there any way to help the customer back up these files efficiently? S3 sync may help, but it raises two questions: since the NFS server does not track file modification dates, does S3 sync still work? And does S3 sync generate a large number of S3 requests (resulting in increased billing)?

asked 4 years ago · 393 views
1 Answer
Accepted Answer

The atime timestamp tells you when a file was last read/accessed. Updating atime every time a file is read causes a lot of usually-unnecessary I/O, so setting the noatime filesystem mount option avoids that performance hit. If all you care about is when the file contents last changed, then mtime is the timestamp you should be looking at, and noatime does not affect mtime.
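
To make the distinction concrete, here is a quick check you can run on the NAS mount (a minimal sketch; the file path is hypothetical and GNU coreutils stat is assumed):

    # noatime only suppresses access-time (atime) updates; the modification
    # time (mtime) is still maintained by the filesystem.
    stat -c 'atime: %x' /mnt/nas/profiles/12345.json
    stat -c 'mtime: %y' /mnt/nas/profiles/12345.json

    # Append to the file, then confirm mtime advanced even on a noatime mount:
    echo '{"updated": true}' >> /mnt/nas/profiles/12345.json
    stat -c 'mtime: %y' /mnt/nas/profiles/12345.json

This is also why s3 sync keeps working: the AWS CLI decides whether to upload a file by comparing its size and last-modified time (mtime) against the object already in the bucket, so noatime does not interfere. A minimal sketch with a hypothetical mount point and bucket name, which also speaks to the billing question:

    # Only new or changed files are uploaded; each upload is one PUT request,
    # and sync also issues LIST requests to enumerate the bucket (roughly one
    # per 1,000 objects), so request costs scale with the object count.
    aws s3 sync /mnt/nas/profiles s3://example-backup-bucket/profiles --dryrun

    # Drop --dryrun to transfer for real; --size-only skips the timestamp
    # comparison if mtimes ever become unreliable.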

Do they have a VMware environment on-premises? (The DataSync agent is deployed as a virtual machine close to the source storage.) You may want to take a look at AWS DataSync vs. S3 Sync. There are some advantages over the S3 CLI (from our FAQ; a CLI sketch of the task setup follows the list):

  • AWS DataSync fully automates and accelerates moving large active datasets to AWS, up to 10 times faster than command line tools
  • It is natively integrated with Amazon S3
  • It comes with retry and network resiliency mechanisms, network optimizations, built-in task scheduling, monitoring via the DataSync API and Console, and CloudWatch metrics, events and logs that provide granular visibility into the transfer process
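
If DataSync is a good fit, the task setup can be scripted. A hedged sketch with the AWS CLI, assuming an agent has already been deployed and activated; the hostname, bucket name, and ARNs below are all placeholders:

    # Register the on-premises NFS share as the DataSync source location.
    aws datasync create-location-nfs \
        --server-hostname nas.example.internal \
        --subdirectory /profiles \
        --on-prem-config AgentArns=arn:aws:datasync:us-east-1:111122223333:agent/agent-0123456789abcdef0

    # Register the S3 bucket as the destination (the role must allow DataSync
    # to write to the bucket).
    aws datasync create-location-s3 \
        --s3-bucket-arn arn:aws:s3:::example-backup-bucket \
        --s3-config BucketAccessRoleArn=arn:aws:iam::111122223333:role/DataSyncS3Access

    # Create a task that only transfers files that differ from the destination,
    # then run it.
    aws datasync create-task \
        --source-location-arn <nfs-location-arn> \
        --destination-location-arn <s3-location-arn> \
        --options TransferMode=CHANGED
    aws datasync start-task-execution --task-arn <task-arn>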
AWS · answered 4 years ago
