
EFS performance/cost optimization


We have a relatively small EFS file system of about 20 GB in Bursting mode. It was set up about two months ago and there were no notable performance issues; throughput utilization was always under 2%, even at our maximum load (which only lasts for a very short period of time).

Yesterday, we suddenly noticed that our site was not responding, even though our servers had minimal CPU load. We then saw that the throughput utilization of the EFS file system had jumped to 100%. Digging deeper, it turned out that we had been slowly and consistently consuming the original 2.3T BurstCreditBalance over the past few weeks, and it hit zero yesterday.


  1. The EFS Monitoring tab provided very little useful information and does NOT even show BurstCreditBalance; we had to find that metric in CloudWatch ourselves.
  2. The Throughput utilization chart is misleading: we were actually slowly using up our credits, but there was no indication of that.
  3. We have since switched to Provisioned mode at 10 MB/s in the meantime, as we're not really sure how to determine the correct throughput number for our system. CloudWatch shows 1-second average maximum values of 7.3k for MeteredIOBytes, 770k for DataReadIOBytes, and 780k for DataWriteIOBytes.
  4. We're seeing BurstCreditBalance build up much more quickly (with 10 MB/s Provisioned) than it did previously (in Bursting mode). However, when we switched to 2 MB/s Provisioned, our system was visibly throttled even though there was 1T of BurstCreditBalance. Why?

Main questions

  1. Based on the CloudWatch metrics, how do we choose a Provisioned rate that is not excessive, but also doesn't limit our system when it needs the throughput?
  2. Ideally, we'd like to use Bursting mode, as that fits our usage pattern better, but with just 20 GB of storage we don't seem to accumulate any BurstCreditBalance.
1 Answer


The EFS Monitoring tab can show those metrics: there is a settings icon above the charts on the right side where you can add Permitted throughput and Burst credit balance. If these metrics are critical for you, I'd recommend adding AWS CloudWatch alarms that fire when they cross a specific threshold.

Regarding the burst credits accumulated while in Provisioned mode: they won't be used unless you switch back to Bursting mode.

With 20 GB of storage in Bursting mode, EFS can drive a baseline throughput of 1 MiB/s, and up to 100 MiB/s when burst credits are available. Burst credits accrue at the file system's baseline rate, which is proportional to its size: 50 MiB/s per TiB of storage. Your file system accumulates burst credits whenever it is inactive or driving throughput below its baseline rate. For example, your 20 GiB file system can burst at 100 MiB/s for 1 percent of the time if it's inactive for the remaining 99 percent. Over a 24-hour period, the file system earns 86,400 MiB worth of credits, which can be used to burst at 100 MiB/s for 14.4 minutes.
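As a sketch, the arithmetic above can be reproduced like this (the 1 MiB/s baseline and 100 MiB/s burst rate are the figures quoted above; this is back-of-envelope math, not an AWS API call):

```shell
# Back-of-envelope Burst-mode math for a ~20 GiB file system
BASELINE_MIBPS=1                # ~20 GiB at 50 MiB/s per TiB ~= 1 MiB/s
BURST_MIBPS=100                 # maximum burst rate quoted above
SECONDS_PER_DAY=86400

# Credits earned in one fully idle day, in MiB
CREDITS_MIB=$((BASELINE_MIBPS * SECONDS_PER_DAY))

# How long those credits sustain a full-rate burst, in seconds
BURST_SECONDS=$((CREDITS_MIB / BURST_MIBPS))

echo "Credits earned per idle day: ${CREDITS_MIB} MiB"
echo "Burst duration at ${BURST_MIBPS} MiB/s: ${BURST_SECONDS} s (~14.4 min)"
```

The same proportionality means that doubling the stored data doubles both the baseline rate and the credit accrual rate, which is what makes the dummy-data workaround below effective.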

One workaround is to add dummy data to increase the amount of storage used, so that you accumulate burst credits faster and get a higher baseline throughput.

answered 6 months ago
  • Appreciate your answer! The tip about the settings icon above the charts is definitely helpful.

    Can you share how to get the total/average actual throughput my EFS is using? The CloudWatch metrics seem to suggest that my system uses less than 1.5 MB/s (600k read, 900k write) on a 1-second average, but switching to 2 MB/s Provisioned seems to drag my system to a standstill. How is that possible? Where do I get the actual total throughput?

  • I'd recommend looking at MeteredIOBytes, which includes data reads, data writes, and metadata operations. Also compare it with PermittedThroughput to see whether you are maxing out the provisioned throughput.
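    In case it helps, here is a sketch of pulling that metric with the AWS CLI (assumes the CLI is configured; the file system ID is a placeholder, and the `date -d` syntax is GNU/Linux):

    ```shell
    # Pull 1-minute sums of MeteredIOBytes for the last hour
    # (replace fs-0123456789abcdef0 with your file system ID)
    aws cloudwatch get-metric-statistics \
      --namespace AWS/EFS \
      --metric-name MeteredIOBytes \
      --dimensions Name=FileSystemId,Value=fs-0123456789abcdef0 \
      --start-time "$(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ)" \
      --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
      --period 60 \
      --statistics Sum
    ```

    Dividing each 60-second Sum by 60 gives the average metered throughput in bytes per second; repeating the call with PermittedThroughput (statistic Average) shows the ceiling you are being measured against.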

  • That metric is even more confusing: it shows a peak average of 8k bytes, while read/write combined is 1.5M.

    Additionally, I tried creating a dummy file, but it doesn't get recognized by EFS:

    • fallocate doesn't work
    • xfs_mkfile -n 30g .dummy_file_30g_efs creates the file, but the space isn't recognized

    The reason I want to use an empty dummy file is that I have backups enabled, and I don't want the dummy file to take up a lot of space in the backups.
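    That matches how sparse files behave in general: their apparent size is larger than the blocks actually allocated, and EFS presumably meters actual data stored, which would explain why `fallocate`-style files aren't counted. A quick local illustration (file names are examples):

    ```shell
    # Sparse file: 1 GiB apparent size, almost no blocks allocated
    truncate -s 1G sparse_example

    # Dense file: 16 MiB of real data actually written
    dd if=/dev/zero of=dense_example bs=1M count=16 status=none

    ls -lh sparse_example dense_example   # both report their apparent sizes
    du -h  sparse_example dense_example   # only the dense file uses real space
    ```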

  • Try using sudo dd if=/dev/urandom of=test_large_file bs=1024k count=256 status=progress. I'd advise excluding this path from your backup process.

  • Thanks for the suggestion, but I'm using EFS automatic backups (AWS Backup), which doesn't seem to support that granularity for excluding specific files or folders.
