Hello,
The EFS Monitoring tab can help you look into those metrics: there is a settings icon above the charts on the right side, where you can add Permitted throughput and Burst Credit balance. If these metrics are critical for you, I'd recommend adding AWS CloudWatch alarms (https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/AlarmThatSendsEmail.html) that fire when a specific threshold is reached.
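As a sketch, such an alarm could be created with the AWS CLI like this. The file system ID, threshold, and SNS topic ARN below are placeholders, so substitute your own values:

```shell
# Alarm when the burst credit balance drops below ~1 TiB worth of credits.
# fs-12345678 and the SNS topic ARN are placeholders -- replace with your own.
aws cloudwatch put-metric-alarm \
  --alarm-name efs-burst-credits-low \
  --namespace AWS/EFS \
  --metric-name BurstCreditBalance \
  --dimensions Name=FileSystemId,Value=fs-12345678 \
  --statistic Average \
  --period 300 \
  --evaluation-periods 1 \
  --threshold 1099511627776 \
  --comparison-operator LessThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:my-alerts-topic
```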
Regarding burst credits accumulated in Provisioned mode, they won't be used unless you switch back to Burst mode.
With 20 GiB of storage in use, a Bursting mode file system has a baseline throughput of about 1 MiB/s and can drive up to 100 MiB/s when burst credits are available. Burst credits accrue at the file system's baseline rate, which is proportional to its size: 50 MiB/s of baseline throughput per TiB of storage. Your file system accumulates burst credits whenever it is inactive or driving throughput below its baseline rate. For example, a 20-GiB file system can burst at 100 MiB/s for about 1 percent of the time if it is inactive for the remaining 99 percent: over a 24-hour period, the file system earns 86,400 MiB worth of credit, which can be used to burst at 100 MiB/s for 14.4 minutes.
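The arithmetic above can be sketched directly in the shell:

```shell
# Baseline throughput: 50 MiB/s per TiB of metered storage.
STORAGE_GIB=20
BASELINE_MIBPS=1    # 20 GiB * 50/1024 MiB/s per GiB ~= 0.98, rounded to 1 as above
BURST_MIBPS=100

# Credits earned while idle for 24 hours (86,400 seconds), in MiB
CREDITS=$((BASELINE_MIBPS * 86400))
echo "Credits earned in 24h: ${CREDITS} MiB"

# Minutes of bursting at 100 MiB/s those credits pay for
MINUTES=$(awk -v c="$CREDITS" -v r="$BURST_MIBPS" 'BEGIN { printf "%.1f", c / r / 60 }')
echo "Burst duration: ${MINUTES} minutes"
```

Running this prints 86400 MiB of credits and a burst duration of 14.4 minutes, matching the example above.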
One workaround is to add dummy data to increase the metered storage size, so that the file system accrues burst credits faster and gets a higher baseline throughput.
Appreciate your answer! The tip about the settings icon above the charts is definitely helpful.
Can you share how to get the total/average actual throughput my EFS is using? The CloudWatch metrics seem to suggest my system is using less than 1.5 Mbps (600k read, 900k write) on a 1-second average, but switching to 2 Mbps of provisioned throughput seems to drag my system to a standstill. How is that possible? Where can I get the actual, total throughput?
I'd recommend looking into MeteredIOBytes, which includes data read, data write, and metadata operations. Also compare it with PermittedThroughput to see whether you are maxing out the provisioned throughput.
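Note that when charting MeteredIOBytes, the statistic matters: Average gives the average size of a single metered operation, not a rate. To turn the metric into an actual throughput number, pull the Sum over a period and divide by the period length. A sketch with the AWS CLI, where the file system ID and time window are placeholders:

```shell
# fs-12345678 and the time range are placeholders -- substitute your own.
aws cloudwatch get-metric-statistics \
  --namespace AWS/EFS \
  --metric-name MeteredIOBytes \
  --dimensions Name=FileSystemId,Value=fs-12345678 \
  --start-time 2024-01-01T00:00:00Z \
  --end-time 2024-01-01T01:00:00Z \
  --period 60 \
  --statistics Sum \
  --query 'Datapoints[].Sum'
# Divide each returned Sum by the 60-second period to get bytes/second,
# then compare that number against PermittedThroughput.
```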
That metric is even more confusing: it shows an 8k-byte peak average when reads and writes combined come to 1.5M.
Additionally, I tried creating a dummy file, but it doesn't get recognized by EFS.
The reason I want to use an empty dummy file is that I have backups enabled and don't want the dummy file to eat up a lot of space in the backups.
Try using `sudo dd if=/dev/urandom of=test_large_file bs=1024k count=256 status=progress`. I'd advise excluding this path from your backup process.
Thanks for the suggestion, but I'm using EFS automatic backups (AWS Backup), which doesn't seem to support that granularity of excluding a specific file/folder?