S3 for backing up a large number of small files on-premises


A customer wants to use S3 as a backup solution for their on-premises files:

  • They have a huge number of small files stored on a NAS (NFS server).
  • These files are customer profiles (JSON files) and are updated from time to time.
  • For performance reasons, they set noatime on their NAS, which means we don't have information about when a file was modified.

Is there any way to help the customer back up these files efficiently? S3 sync may help, but it raises two questions: since the NFS server is not aware of file modification dates, does S3 sync still work? And does S3 sync generate a large number of S3 requests (resulting in increased billing)?

Asked 4 years ago · 401 views
1 Answer
Accepted Answer

The atime timestamp tells you when a file was last read/accessed. Updating atime on every read causes a lot of usually-unnecessary I/O, so setting the noatime filesystem mount option avoids that performance hit. If all you care about is when the file contents last changed, then mtime is the timestamp you should be looking at. noatime does not affect mtime, so aws s3 sync, which compares file size and modification time, will still detect changed files.
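
To illustrate (the mount point, file name, and bucket name below are made up for the example): on a noatime mount, stat still shows an updated modify time after a write, and aws s3 sync compares size and modification time against the objects in the bucket.

    # Update a profile on the noatime-mounted NAS, then inspect timestamps.
    echo '{"plan":"premium"}' >> /mnt/nas/profiles/customer-001.json

    # "Modify:" is refreshed by the write even with noatime;
    # noatime only stops "Access:" from being updated on reads.
    stat /mnt/nas/profiles/customer-001.json

    # Sync uploads only files whose size or mtime differ from the bucket copy,
    # so unchanged profiles do not generate PUT requests.
    aws s3 sync /mnt/nas/profiles/ s3://example-backup-bucket/profiles/

Note that every sync run still lists the bucket (and walks the local tree) to do the comparison, so with a very large number of small objects the LIST requests alone add cost and time; that is one reason DataSync, below, can be a better fit.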

Do they have a VMware environment on-premises? You may want to take a look at AWS DataSync versus S3 sync. There are some advantages over the S3 CLI (from our FAQ; a rough setup sketch follows the list):

  • AWS DataSync fully automates and accelerates moving large active datasets to AWS, up to 10 times faster than command line tools
  • It is natively integrated with Amazon S3
  • It comes with retry and network resiliency mechanisms, network optimizations, built-in task scheduling, monitoring via the DataSync API and Console, and CloudWatch metrics, events and logs that provide granular visibility into the transfer process
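
As a rough sketch of what that setup looks like with the AWS CLI (every hostname, path, key, and ARN below is a placeholder; the DataSync agent is first deployed as a VM, for example into their VMware environment, and activated):

    # Register the on-premises agent using the key from the agent VM.
    aws datasync create-agent --agent-name nas-agent --activation-key EXAMPLE-KEY

    # Source: the NFS export on the NAS, reached through the agent.
    aws datasync create-location-nfs \
        --server-hostname nas.example.internal \
        --subdirectory /profiles \
        --on-prem-config AgentArns=arn:aws:datasync:us-east-1:111122223333:agent/agent-EXAMPLE

    # Destination: the backup bucket, written via an IAM role.
    aws datasync create-location-s3 \
        --s3-bucket-arn arn:aws:s3:::example-backup-bucket \
        --s3-config BucketAccessRoleArn=arn:aws:iam::111122223333:role/datasync-s3-role

    # Tie them together in a task, then run it on demand or on a schedule.
    aws datasync create-task \
        --source-location-arn arn:aws:datasync:us-east-1:111122223333:location/loc-SRC \
        --destination-location-arn arn:aws:datasync:us-east-1:111122223333:location/loc-DST \
        --name nas-profile-backup
    aws datasync start-task-execution \
        --task-arn arn:aws:datasync:us-east-1:111122223333:task/task-EXAMPLE

By default, DataSync does its own incremental comparison of source files and destination objects, so only changed profiles are transferred on each run.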
AWS
Answered 4 years ago
