S3 for backup of a large number of small files on-premises


A customer wants to use S3 as a backup solution for their on-premises files:

  • They have a huge number of small files stored on a NAS (NFS server).
  • These files are customer profiles (JSON files) and are updated from time to time.
  • For performance reasons, they set noatime on their NAS, which means we don't have information about when a file is modified.

Is there a way to help the customer back up these files efficiently? S3 sync may help, but it raises two questions: if the NFS server is not aware of file modification dates, does S3 sync still work? And does S3 sync generate a large number of S3 requests (resulting in increased billing)?

Asked 4 years ago · 401 views
1 Answer
Accepted Answer

The atime timestamp tells you when a file was last read/accessed. Updating atime every time a file is read causes a lot of usually-unnecessary I/O, so setting the noatime filesystem mount option avoids that performance hit. noatime does not affect mtime: if all you care about is when the file contents last changed, mtime is the timestamp you should be looking at, and S3 sync compares modification time (plus file size), not access time.
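
The customer can verify this directly on the NAS and with the CLI. A minimal sketch, assuming a hypothetical mount at /mnt/nas/profiles and a placeholder bucket named example-backup-bucket:

```bash
# noatime only suppresses atime updates on reads; mtime is still
# updated whenever a file is written. stat shows all three timestamps
# (Access / Modify / Change).
stat /mnt/nas/profiles/customer-123.json

# Preview what sync would transfer without copying anything. By default,
# aws s3 sync uploads a file when its size or last-modified time differs
# from the corresponding object in S3, so unchanged files are skipped.
aws s3 sync /mnt/nas/profiles s3://example-backup-bucket/profiles --dryrun

# Run the real sync. Note that each run lists the destination prefix to
# do the comparison, so request counts scale with object count even when
# nothing has changed.
aws s3 sync /mnt/nas/profiles s3://example-backup-bucket/profiles
```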

Do they have a VMware environment on-premises? If so, you may want to take a look at AWS DataSync versus S3 sync. DataSync has some advantages over the S3 CLI (from our FAQ; a CLI setup sketch follows the list):

  • AWS DataSync fully automates and accelerates moving large active datasets to AWS, up to 10 times faster than command line tools
  • It is natively integrated with Amazon S3
  • It comes with retry and network resiliency mechanisms, network optimizations, built-in task scheduling, monitoring via the DataSync API and Console, and CloudWatch metrics, events and logs that provide granular visibility into the transfer process
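
If they go the DataSync route, the setup can also be scripted with the AWS CLI. A rough sketch, assuming a DataSync agent (e.g. the VMware OVA) has already been deployed and activated; the hostname, account ID, role, and bucket names below are all placeholders:

```bash
# Register the on-premises NFS share as a DataSync source location.
SRC_ARN=$(aws datasync create-location-nfs \
  --server-hostname nas.example.internal \
  --subdirectory /profiles \
  --on-prem-config AgentArns=arn:aws:datasync:us-east-1:111122223333:agent/agent-EXAMPLE \
  --query LocationArn --output text)

# Register the S3 bucket as the destination location. The IAM role
# grants DataSync access to the bucket.
DST_ARN=$(aws datasync create-location-s3 \
  --s3-bucket-arn arn:aws:s3:::example-backup-bucket \
  --s3-config BucketAccessRoleArn=arn:aws:iam::111122223333:role/DataSyncS3Role \
  --query LocationArn --output text)

# Create the transfer task; on subsequent runs DataSync only copies
# changed files, and scheduling is built in.
TASK_ARN=$(aws datasync create-task \
  --source-location-arn "$SRC_ARN" \
  --destination-location-arn "$DST_ARN" \
  --name nas-profiles-backup \
  --query TaskArn --output text)

# Kick off (or schedule) an execution.
aws datasync start-task-execution --task-arn "$TASK_ARN"
```
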
AWS
Answered 4 years ago
