Anyone using EFS for 20+TB?

We are currently serving data over NFS from EC2 instances backed by EBS volumes, and we are looking to replace that setup with EFS (using the Infrequent Access storage class via lifecycle policies). The volumes are a mix of GP2 (active data) and SC1 (less active data).

Most of the AWS examples in the docs about using EFS reference smaller amounts of data than we have (i.e., XXX GB). Clearly EFS is highly scalable well beyond the TB range, supporting up to PB scale.

Can anyone share any real experiences using EFS with...

  • Many TB (i.e., 20-100)...my guess is that at this scale it might make sense to use multiple EFS file systems
  • Running with at-rest encryption
  • Running snapshot/recovery-point backups to the local region every 2 hours (with AWS Backup?)
  • Storing some amount of snapshots/recovery-points out of the region

Thanks!

asked 4 years ago · 296 views
2 Answers
Accepted Answer

Hi, thanks for reaching out. I'm on the EFS service team and can confirm that EFS file systems support PiB scale; many customers use EFS and AWS Backup successfully within and above the 20 TiB-100 TiB range you mentioned.

Based on your current use of both EBS GP2 and SC1, your use case sounds like a great candidate for EFS Lifecycle Management, which moves less active data to the EFS Infrequent Access storage class at a 92% lower cost. On average, customers using this capability achieve a blended storage cost of $0.08/GiB-month or less.
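As a rough sketch (the file system ID and the 30-day threshold here are placeholders, not values from this thread), a lifecycle policy can be attached with the AWS CLI:

```shell
# Transition files that haven't been accessed for 30 days
# to the Infrequent Access (IA) storage class.
# fs-0123456789abcdef0 is a hypothetical file system ID.
aws efs put-lifecycle-configuration \
  --file-system-id fs-0123456789abcdef0 \
  --lifecycle-policies '[{"TransitionToIA":"AFTER_30_DAYS"}]'
```

Shorter thresholds (e.g. AFTER_7_DAYS) move data to IA sooner, which lowers storage cost but adds per-access charges for data that turns out to still be active.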

EFS supports both encryption at rest and encryption in transit. You can enable the former at file system creation time with a single click, and the latter is a per-connection option when using the EFS mount helper utility.
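A minimal sketch of both options from the CLI (the file system ID, tag value, and mount point are placeholders):

```shell
# Create a file system with encryption at rest enabled.
# Without --kms-key-id, the AWS-managed KMS key for EFS is used.
aws efs create-file-system \
  --encrypted \
  --performance-mode generalPurpose \
  --tags Key=Name,Value=shared-data

# Mount with encryption in transit (TLS) via the EFS mount helper
# (amazon-efs-utils must be installed on the instance).
sudo mount -t efs -o tls fs-0123456789abcdef0:/ /mnt/efs
```

Note that encryption at rest cannot be toggled on an existing file system; it has to be chosen at creation time.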

For backups, you can configure your AWS Backup plan to run as often as every hour by using a custom cron expression in your backup rule. Your first backup makes a full copy of your file system, and subsequent backups are incremental. You can also configure your backup plan to copy some or all backups to another AWS Region.
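A sketch of a backup plan matching the 2-hour cadence asked about, with a cross-Region copy. The vault names, account ID, destination Region, and retention periods are all hypothetical:

```shell
# Run a backup every 2 hours and copy each recovery point
# to a vault in another Region for DR.
aws backup create-backup-plan --backup-plan '{
  "BackupPlanName": "efs-every-2h",
  "Rules": [{
    "RuleName": "every-2-hours",
    "TargetBackupVaultName": "efs-local-vault",
    "ScheduleExpression": "cron(0 0/2 * * ? *)",
    "Lifecycle": {"DeleteAfterDays": 35},
    "CopyActions": [{
      "DestinationBackupVaultArn": "arn:aws:backup:us-west-2:111122223333:backup-vault:efs-dr-vault",
      "Lifecycle": {"DeleteAfterDays": 90}
    }]
  }]
}'
```

After creating the plan, you would still assign your EFS file system to it with `aws backup create-backup-selection`, scoping by resource ARN or tags.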

Lastly, it's unlikely that you need to create multiple file systems to store your data: each EFS file system elastically scales to PiB scale and supports up to 35K IOPS and several GB/s of aggregate throughput.

answered 4 years ago

Hi smaybs,
EFS scales to PB, and you can use a lifecycle policy to move older data to the Infrequent Access storage class.
Hope that helps.
Stefan Radtke
AWS Solution Architect Professional

Edited by: StefanRadtke on Jul 23, 2020 2:09 AM

Edited by: StefanRadtke on Jul 23, 2020 8:07 AM
Removed reference to Qumulo.

answered 4 years ago
