S3 for backing up a large number of small on-premises files


A customer wants to use S3 as a backup solution for their on-premises files:

  • They have a huge number of small files stored in a NAS (NFS server).
  • These files are their customer profiles (JSON files) and will be updated from time to time.
  • For performance reasons, they set noatime on their NAS, which they believe means there is no information about when a file was modified.

Is there any way to help the customer back up these files efficiently? S3 sync may help, but it raises two questions: if the NFS server is not aware of file modification dates, does S3 sync still work? And does S3 sync generate a large number of S3 requests (resulting in increased billing)?

Asked 4 years ago · 401 views
1 Answer
Accepted Answer

The atime timestamp tells you when a file was last read/accessed. Updating the atime every time a file is read causes a lot of usually-unnecessary IO, so setting the noatime filesystem mount option avoids that performance hit. If all you care about is when the file contents last changed, then mtime is the timestamp you should be looking at, and noatime does not affect mtime at all.
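You can confirm this directly on the NAS mount. The sketch below (the file path and bucket name are hypothetical placeholders) shows that mtime still updates on a noatime mount, and uses `--dryrun` to preview a sync without transferring anything. By default, `aws s3 sync` compares each local file's size and last-modified time (mtime) against the object in S3, so it works fine without atime. Do note that every sync run issues LIST requests to enumerate the bucket (roughly one per 1,000 objects) plus one PUT per changed file, and with a huge number of small files those requests are the main cost driver.

```
# Minimal sketch, assuming a Linux NAS mount; the path and bucket
# below are hypothetical placeholders.

# On a noatime mount only atime updates are suppressed; mtime still
# changes whenever the file is written:
stat -c 'mtime: %y  atime: %x' /mnt/nas/profiles/customer-123.json

# s3 sync compares size and last-modified time, so it only uploads
# files that changed. --dryrun previews the transfer without any PUTs:
aws s3 sync /mnt/nas/profiles s3://example-backup-bucket/profiles/ --dryrun
```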

Do they have a VMware environment on-premises? You may want to take a look at AWS DataSync vs. S3 sync. DataSync has some advantages over the S3 CLI (from our FAQ):

  • AWS DataSync fully automates and accelerates moving large active datasets to AWS, up to 10 times faster than command line tools
  • It is natively integrated with Amazon S3
  • It comes with retry and network resiliency mechanisms, network optimizations, built-in task scheduling, monitoring via the DataSync API and Console, and CloudWatch metrics, events and logs that provide granular visibility into the transfer process
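For reference, a DataSync setup for an NFS source looks roughly like the following. This is a sketch only: the hostname, ARNs, role, and bucket are hypothetical placeholders, and a DataSync agent must first be deployed and activated near the NAS (for example as a VMware VM) before these commands will work.

```
# Hypothetical names/ARNs throughout; an activated DataSync agent
# is assumed to already exist.

# Register the on-premises NFS share as the source location:
aws datasync create-location-nfs \
    --server-hostname nas.example.internal \
    --subdirectory /profiles \
    --on-prem-config AgentArns=arn:aws:datasync:us-east-1:111122223333:agent/agent-0123456789abcdef0

# Register the S3 bucket as the destination location:
aws datasync create-location-s3 \
    --s3-bucket-arn arn:aws:s3:::example-backup-bucket \
    --s3-config BucketAccessRoleArn=arn:aws:iam::111122223333:role/DataSyncS3Role

# Create a task linking the two locations, then run it:
aws datasync create-task \
    --source-location-arn arn:aws:datasync:us-east-1:111122223333:location/loc-src0123456789abc \
    --destination-location-arn arn:aws:datasync:us-east-1:111122223333:location/loc-dst0123456789abc
aws datasync start-task-execution \
    --task-arn arn:aws:datasync:us-east-1:111122223333:task/task-0123456789abcdef0
```

Once the task exists, it can be scheduled to run on a recurring basis, which covers the "backup from time to time" requirement without a cron job on the NAS side.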
AWS
Answered 4 years ago
