Looking for ways to decrease FSx file system capacity or change to Single-AZ


I'm aware Amazon FSx does not natively allow users to decrease the storage capacity or change the Availability Zone of an FSx file system, either on the live drive or from a backup. However, I'd like to explore alternative ways to reduce the size.

Originally, our drive grew to around 20TB and was then deduplicated down to 4TB, but we're still forced to pay for 20TB because of the originally provisioned size. Is there a service we could use to copy this already-deduplicated data from the live FSx drive to a new FSx drive, so we can set up a new drive that's only 5TB? To my knowledge, Robocopy (what we initially used) would not be able to copy the data in its deduplicated form. I'm aware of AWS DataSync as well, but I could not find any information on whether it retains deduplicated data - I assume it does not.

Additionally, to switch from Multi-AZ to Single-AZ, is the only viable way to create a new FSx file system and copy the files over to it? Is there no way to change it from an FSx backup?

2 Answers

Decrease storage capacity

  1. Take a backup of the source file system for safety.
  2. Create a new 5TB Amazon FSx for Windows File Server file system with a throughput capacity of 64 MBps, which comes with 8GB of RAM (https://docs.aws.amazon.com/fsx/latest/WindowsGuide/performance.html). Microsoft's recommendation is 1GB of RAM per 1TB of data you are trying to dedupe. (A CLI sketch follows this list.)
  3. Set up an aggressive dedupe schedule that runs directly after the data transfer completes.
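
If you prefer to script step 2 rather than use the console, the command below is a minimal sketch of creating the new file system from PowerShell with the AWS CLI. The subnet, security group, and Active Directory IDs are placeholders, and the SINGLE_AZ_2 deployment type is an assumption based on the Single-AZ goal in the question.

# Sketch only - replace the subnet, security group and directory IDs with your own values
# Storage capacity is in GiB (5TB is roughly 5120 GiB)
aws fsx create-file-system `
    --file-system-type WINDOWS `
    --storage-capacity 5120 `
    --storage-type SSD `
    --subnet-ids subnet-0123456789abcdef0 `
    --security-group-ids sg-0123456789abcdef0 `
    --windows-configuration "DeploymentType=SINGLE_AZ_2,ThroughputCapacity=64,ActiveDirectoryId=d-0123456789"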

Optimization schedule

 
# Edit the variables below:
$DestRPSEndpoint = "amznfsxxxxxyyyy.mytestdomain.local"
# This start time is set for 20 seconds from now (note the file server's time zone is UTC)
# The optimization job needs to start first
$StartTime = (Get-Date).AddSeconds(20)
# A DurationHours of 8 makes the server cancel the job after 8 hours if the process has not finished
$DurationHours = 8

Invoke-Command -Authentication Kerberos -ComputerName ${DestRPSEndpoint} -ConfigurationName FSxRemoteAdmin -ScriptBlock {
New-FSxDedupSchedule -Name CustomOptimization -Type Optimization -Start $Using:StartTime -Days Mon, Tues, Wed, Thurs, Fri, Sat, Sun -Cores 80 -DurationHours $Using:DurationHours -Memory 70 -Priority High 
} 

# Get status
Invoke-Command -Authentication Kerberos -ComputerName ${DestRPSEndpoint} -ConfigurationName FSxRemoteAdmin -ScriptBlock {Get-FSxDedupSchedule -Name CustomOptimization -Type Optimization} 

Garbage Collection

# This start time is for the garbage collection job, which is what reclaims free space; it runs after optimization completes
# Microsoft's default is 1 hour after optimization, but it all depends on how quickly optimization completes; if it finishes in under 35 minutes, change this value to whatever works for you.

$StartTime = (Get-Date).AddSeconds(20)
Invoke-Command -Authentication Kerberos -ComputerName ${DestRPSEndpoint} -ConfigurationName FSxRemoteAdmin -ScriptBlock {
New-FSxDedupSchedule -Name "CustomGarbage" -Type GarbageCollection -Start $Using:StartTime -Days Mon, Tues, Wed, Thurs, Fri, Sat, Sun -DurationHours $Using:DurationHours
}

# Get status
Invoke-Command -Authentication Kerberos -ComputerName ${DestRPSEndpoint} -ConfigurationName FSxRemoteAdmin -ScriptBlock {Get-FSxDedupSchedule -Name CustomGarbage -Type GarbageCollection}
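
To confirm the jobs actually ran and to see how much space has been reclaimed (useful for step 5 below), you can query the overall dedup status from the same remote endpoint. A minimal sketch:

# Check overall dedup status, including optimized file counts and saved space
Invoke-Command -Authentication Kerberos -ComputerName ${DestRPSEndpoint} -ConfigurationName FSxRemoteAdmin -ScriptBlock {
Get-FSxDedupStatus
} | Format-List
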
  4. Use AWS DataSync or Robocopy to move the data in small batches of about 100GB per batch, from the source to the newly created 5TB file system.

Note: Exclude the System Volume Information folder, which contains the dedupe chunk store. Robocopy exclusion example:

/XD '$RECYCLE.BIN' "System Volume Information" 

See the Robocopy command used in the CloudFormation template in link [2].
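
If you use Robocopy for the batches instead of the template, a per-batch command could look roughly like the sketch below. The source and destination paths, thread count, and log path are placeholders; adjust them to your environment and permissions.

# Sketch only - paths, thread count and log file are placeholders
# /COPYALL needs sufficient permissions on both file systems to copy security and owner information
robocopy \\sourcefs.mytestdomain.local\share\Batch01 \\amznfsxxxxxyyyy.mytestdomain.local\share\Batch01 `
    /MIR /COPYALL /MT:32 /R:2 /W:5 `
    /XD '$RECYCLE.BIN' "System Volume Information" `
    /LOG:C:\Temp\robocopy-Batch01.log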

  5. Wait for the 100GB data transfer to complete and check the dedupe status to see if it ran and has started deduping this 100GB batch (see the status-check sketch after the garbage collection schedule above). Wait until the dedup garbage collection job finishes and reclaims space, then repeat the process for the next 100GB chunk.
  6. Time how long it took to dedup 100GB, then adjust the batch size accordingly (maybe 200GB) if needed.
  7. Recreate the SMB shares and permissions (a sketch follows this list).
  8. Create any aliases or SPNs.
  9. Terminate the 20TB file system and use the 5TB one.
  10. The full workflow (excluding the dedupe tips) for share creation, SPNs, and DNS aliases can be found in link [1].
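
For the share recreation step, the FSx remote management endpoint also exposes the SMB share cmdlets. A minimal sketch, assuming a hypothetical share named data under D:\share and a hypothetical AD group for access (the target folder must already exist, and you should mirror the permissions from your source file system):

# Sketch only - share name, path and AD group are placeholders
Invoke-Command -Authentication Kerberos -ComputerName ${DestRPSEndpoint} -ConfigurationName FSxRemoteAdmin -ScriptBlock {
New-FSxSmbShare -Name "data" -Path "D:\share\data" -Description "Migrated share"
Grant-FSxSmbShareAccess -Name "data" -AccountName "MYTESTDOMAIN\FileShareUsers" -AccessRight Full -Force
}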

Change Availability Zone

Restoring from a backup allows you to change subnets, which in turn changes the AZ for that file system.
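
At the API level this is a create-file-system-from-backup call in which you pick the new subnet; a sketch with placeholder IDs:

# Sketch only - backup, subnet and security group IDs are placeholders
aws fsx create-file-system-from-backup `
    --backup-id backup-0123456789abcdef0 `
    --subnet-ids subnet-0fedcba9876543210 `
    --security-group-ids sg-0123456789abcdef0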


"Additionally, to switch from a Multi-AZ to a Single-AZ"

You can automate the migration to a smaller file system using link [2]. The title says it upgrades Single-AZ to Multi-AZ, but the same concept applies: it can also be used to move between two Single-AZ systems or to move from Multi-AZ down to Single-AZ. It moves the data, CNAME records, SPNs/aliases, share permission ACLs, etc.

In the CloudFormation example, I created a DataSync agent (EC2 instance) to cater for all migration scenarios, including cross-Region and cross-account migrations, but you can change that part of the code to use a normal DataSync source and destination location if needed.
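
If you swap the agent out, DataSync also has a native FSx for Windows location type that does not need an agent for same-account, same-Region transfers; a rough sketch with placeholder ARNs and credentials (check the current DataSync documentation for cross-Region or cross-account cases):

# Sketch only - ARNs, credentials and security group are placeholders
# Create a location for the source FSx file system (repeat for the destination)
aws datasync create-location-fsx-windows `
    --fsx-filesystem-arn arn:aws:fsx:us-east-1:111122223333:file-system/fs-0123456789abcdef0 `
    --security-group-arns arn:aws:ec2:us-east-1:111122223333:security-group/sg-0123456789abcdef0 `
    --user Admin --domain mytestdomain.local --password 'ExamplePassword'

# Create a task between the two location ARNs returned by the calls above
aws datasync create-task `
    --source-location-arn arn:aws:datasync:us-east-1:111122223333:location/loc-0123456789abcdef0 `
    --destination-location-arn arn:aws:datasync:us-east-1:111122223333:location/loc-0fedcba9876543210 `
    --name fsx-to-fsx-migration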

References:

[1] https://docs.aws.amazon.com/fsx/latest/WindowsGuide/migrate-files-fsx.html

[2] https://aws.amazon.com/blogs/modernizing-with-aws/automate-the-upgrade-of-an-amazon-fsx-for-windows-file-server-to-a-multi-az-deployment/

  • This was great - thank you.

    I have some questions about the process:

    For step 3 "Setup an aggressive dedupe schedule that runs every 5min" - How do you setup a deduplication schedule like this? I see the AWS doc that goes over the schedule, but it states only days of the week and a single time of day to run.

    For Step 5 "Wait for 100GB data transfer to complete and check the dedupe status to see if it ran and has started deduping this 100GB batch" - Do you recommend transferring the chunk of data, THEN have dedup run, WAIT until dedup finishes, then repeat with another chunk? - Or can we keep transferring chunks while it dedups?


My pleasure, glad I could assist.

"How do you setup a deduplication schedule like this?"

I have updated my answer to include an example command. You can work out the timings and rerun those schedules by editing the start time. To edit an existing schedule use:


Set-FSxDedupSchedule

# Or remove the schedule using
Remove-FSxDedupSchedule -Name CustomOptimization
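
For example, to push the existing CustomOptimization schedule's start time out by an hour while keeping the 8-hour cap, something along these lines should work; it reuses the $DestRPSEndpoint variable from the earlier examples, and the parameters mirror the underlying Windows Set-DedupSchedule cmdlet, so verify them against your endpoint:

# Sketch only - adjust the start time of the existing CustomOptimization schedule
$NewStartTime = (Get-Date).AddHours(1)
Invoke-Command -Authentication Kerberos -ComputerName ${DestRPSEndpoint} -ConfigurationName FSxRemoteAdmin -ScriptBlock {
Set-FSxDedupSchedule -Name CustomOptimization -Start $Using:NewStartTime -DurationHours 8
}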

If you decide to remove the schedule instead of editing it, you can recreate it using a different start time.

"For Step 5 "Wait for 100GB data transfer to complete and check the dedupe status to see if it ran and has started deduping this 100GB batch" - Do you recommend transferring the chunk of data, THEN have dedup run, WAIT until dedup finishes, then repeat with another chunk? "

Avoid transferring chunks while dedup is running, because dedup consumes RAM, CPU, and IOPS. The data transfer also consumes IOPS, so it is better to wait for dedup to process the data and then move on to the next chunk.

There is one public success story using the chunk method, where they split the data into chunks and each chunk/folder was represented by a DataSync task. See this blog: How ClearScale overcame data migration hurdles using AWS DataSync.

I have run a test in my lab: Amazon FSx for Windows File Server dedup optimization against 27GB of mixed-size data (128KB, 1GB, and mostly 2MB files) on a 32MBps-throughput FSx took 10 minutes 38 seconds.

On a 1024MBps FSx with 20.8GB of mixed data (2MB average file size), the dedup optimization process started at 8:24:54 PM and ended at 8:35:03 PM, a total of 10 minutes 9 seconds.

Could you please share your FSx throughput capacity, your average file size, and the time it took your FSx to run the optimization and garbage collection jobs?

Have a great day!

