In this case, you can create an Amazon FSx file system that is closer to the size you expect your data to occupy after deduplication, for example 50 TB. As soon as you create the Amazon FSx file system, enable Data Deduplication and set the deduplication optimization schedule to run aggressively (see our documentation for details on how to set the schedule: https://docs.aws.amazon.com/fsx/latest/WindowsGuide/using-data-dedup.html). When you then copy your existing file content to the Amazon FSx file system, the data will be deduplicated continuously as it lands, so your overall data set can fit in the smaller file system.
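To get a feel for whether a given file system size is enough, you can do a quick back-of-the-envelope check. The sketch below is only an illustration: the 50% savings rate is a hypothetical assumption (actual deduplication savings depend heavily on your workload), and the function name is ours, not part of any AWS API.

```python
def fits_after_dedup(logical_tb: float, capacity_tb: float,
                     savings_rate: float = 0.50, headroom: float = 0.10) -> bool:
    """Rough check: does `logical_tb` of source data fit in a file system of
    `capacity_tb` once deduplication reclaims `savings_rate` of the space?

    `headroom` reserves a fraction of capacity as free space, since running a
    file system completely full is not advisable.  All values are assumptions
    for illustration, not AWS-published figures.
    """
    physical_tb = logical_tb * (1.0 - savings_rate)   # estimated on-disk footprint
    usable_tb = capacity_tb * (1.0 - headroom)        # capacity minus free-space buffer
    return physical_tb <= usable_tb

# Example: ~80 TB of logical data, a 50 TB file system, assumed 50% savings.
print(fits_after_dedup(logical_tb=80, capacity_tb=50))  # True: 40 TB fits in 45 TB usable
```

The key point is that the copy and the deduplication jobs overlap: because optimization runs aggressively during the migration, the physical footprint stays below the provisioned capacity even though the logical data set is larger.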
Amazon FSx team
Thank you!