Anyone using EFS for 20+TB?

0

We currently use EBS volumes that we serve via NFS from EC2 instances, and we are looking to replace that setup with EFS (using IA via lifecycle policies). These volumes are a combination of GP2 (active data) and SC1 (less active data).

Most of the AWS examples in the docs about using EFS reference smaller amounts of data than we have (i.e., XXX GB). Clearly EFS is highly scalable well beyond TB, supporting up to PB-scale.

Can anyone share any real experiences using EFS with...

  • Many TB (i.e., 20-100)... my guess is that at this scale it might make sense to use multiple EFS volumes
  • Running with at-rest encryption
  • Running snapshot/recovery-point backups to local region every 2 hours (with AWS Backup?)
  • Storing some amount of snapshots/recovery-points out of the region

Thanks!

Asked 4 years ago · 268 views
2 Answers
0
Accepted Answer

Hi, thanks for reaching out. I'm on the EFS service team, and can confirm that EFS file systems support PiB-scale, and many customers are using EFS and Backup successfully within and above the size range of 20TiB-100TiB that you mentioned.

Based on your current usage of both EBS GP2 and SC1, it sounds like your use case would be a great candidate for EFS Lifecycle Management, which moves less active data to our EFS Infrequent Access storage class at a 92% lower cost. On average, customers that use this capability achieve a blended storage cost of $0.08/GiB-mo or less.
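To illustrate how that blended rate falls out, here is a small sketch. The per-GiB prices below are assumptions based on typical us-east-1 list prices at the time; check the current EFS pricing page for your Region.

```python
# Sketch: how EFS Lifecycle Management can yield a ~$0.08/GiB-mo blended cost.
# Prices are assumed us-east-1 list prices (verify against current pricing):
STANDARD_PER_GIB_MO = 0.30   # EFS Standard
IA_PER_GIB_MO = 0.025        # EFS Infrequent Access (~92% lower)

def blended_cost(ia_fraction: float) -> float:
    """Blended $/GiB-mo when ia_fraction of the data sits in IA."""
    return (1 - ia_fraction) * STANDARD_PER_GIB_MO + ia_fraction * IA_PER_GIB_MO

# With ~80% of the data aged into IA, the blended rate lands at $0.08/GiB-mo:
print(round(blended_cost(0.80), 3))  # 0.08
```

The takeaway: the more of your SC1-style cold data that lifecycle policies can age into IA, the closer your blended rate gets to the IA price floor.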

EFS supports both encryption at rest and encryption-in-motion. You can enable the former at file system creation time with a single click, and the latter is a per-connection option when using our EFS mount helper utility.
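As a sketch of how those two options are supplied, the at-rest setting is a parameter of the CreateFileSystem API, while in-transit TLS is a mount-time flag. The token and file system ID below are placeholders; the boto3 call is shown only in comments to keep the snippet self-contained.

```python
# Parameters for an encrypted-at-rest EFS file system. KmsKeyId is optional:
# omit it to use the AWS-managed aws/elasticfilesystem key.
params = {
    "CreationToken": "my-20tb-fs",        # placeholder token
    "Encrypted": True,                     # at-rest encryption, set at creation
    "PerformanceMode": "generalPurpose",
}

# In real use you would run:
#   efs = boto3.client("efs")
#   fs = efs.create_file_system(**params)
#
# Encryption in transit is then a per-connection option with the EFS mount
# helper (fs-12345678 is a placeholder file system ID):
#   sudo mount -t efs -o tls fs-12345678:/ /mnt/efs
print(params["Encrypted"])  # True
```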

For backups, you can configure your AWS Backup policy to run as often as every hour by using a custom cron expression in your backup rule configuration. Your first backup will make a full copy of your file system, and subsequent backups will be incremental. You can also configure your backup policy to store some or all backups in another AWS Region.
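For example, a backup rule matching your 2-hour cadence with a cross-Region copy could be shaped like this. The rule name, vault names, destination ARN, and retention values are all placeholders, not recommendations.

```python
# Sketch of a backup rule for AWS Backup's CreateBackupPlan API.
# ScheduleExpression uses AWS cron syntax: minute 0 of every 2nd hour.
backup_rule = {
    "RuleName": "efs-every-2-hours",          # placeholder name
    "TargetBackupVaultName": "local-vault",   # placeholder in-Region vault
    "ScheduleExpression": "cron(0 */2 * * ? *)",
    "Lifecycle": {"DeleteAfterDays": 35},     # placeholder retention
    "CopyActions": [  # keep some recovery points out of Region
        {
            "DestinationBackupVaultArn":
                "arn:aws:backup:us-west-2:111122223333:backup-vault:dr-vault",
            "Lifecycle": {"DeleteAfterDays": 90},
        }
    ],
}

# In real use: boto3.client("backup").create_backup_plan(
#     BackupPlan={"BackupPlanName": "efs-plan", "Rules": [backup_rule]})
print(backup_rule["ScheduleExpression"])
```

Since backups after the first are incremental, an aggressive 2-hour schedule is mostly paying for changed data rather than repeated full copies.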

Lastly, it's unlikely that you need to create multiple file systems to store your data, since each EFS file system elastically scales to PB-scale and supports up to 35K IOPS and several GB/s of aggregate throughput.

Answered 4 years ago
Expert
Reviewed 1 month ago
0

Hi smaybs,
EFS scales to PB, and you can use a lifecycle policy to move older data to the Infrequent Access storage class.
Hope that helps.
Stefan Radtke
AWS Solution Architect Professional

Edited by: StefanRadtke on Jul 23, 2020 2:09 AM

Edited by: StefanRadtke on Jul 23, 2020 8:07 AM
Removed reference to Qumulo.

Answered 4 years ago
