2 Answers
Hi fuka,
In this case, you can create an Amazon FSx file system that is closer to the size you expect to need - 50 TB. As soon as you create the Amazon FSx file system, you can enable Data Deduplication and set the deduplication optimization schedule to run aggressively (see our documentation for details on how to set the schedule: https://docs.aws.amazon.com/fsx/latest/WindowsGuide/using-data-dedup.html). When you then copy your existing file content to your Amazon FSx file system, the data will be continuously deduplicated, so your overall data set can fit in the smaller file system.
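For example, creating the file system at the target size can be scripted with the AWS SDK. The sketch below uses boto3; the subnet, security group, Active Directory ID, and throughput values are placeholders you would replace with values from your own environment.

```python
import boto3

fsx = boto3.client("fsx")

# Create a 50 TiB (51,200 GiB) FSx for Windows File Server file system.
# All resource IDs below are placeholders for illustration only.
response = fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=51200,            # GiB; sized for the deduplicated data set
    StorageType="HDD",                # or "SSD", depending on performance needs
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    WindowsConfiguration={
        "ActiveDirectoryId": "d-0123456789",
        "DeploymentType": "SINGLE_AZ_2",
        "ThroughputCapacity": 32,     # MBps; choose based on your copy workload
    },
)

print(response["FileSystem"]["FileSystemId"])
```

Deduplication itself is then enabled from the file system's remote Windows PowerShell endpoint (for example, with Enable-FSxDedup and Set-FSxDedupSchedule), as described in the documentation linked above.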
Thank you,
Amazon FSx team
answered 4 years ago
Bulk data transfers with deduplication enabled are not recommended.
Per the FSx user guide:
"Warning It is not recommended to run certain Robocopy commands with data deduplication because these commands can impact the data integrity of the Chunk Store. For more information, see the Microsoft Data Deduplication interoperability documentation."