EBS gp3 volumes in Severely Degraded state
We are running 3 i3.xlarge EC2 instances for a database cluster, with a 750 GB gp3 EBS volume attached to each instance. A scheduled process copies our backup snapshots from each i3.xlarge's instance store volume to the attached EBS volume.
We've noticed that when first attached, the EBS volumes deliver the expected throughput and backups complete quickly. After a few hours, however, the volumes enter the warning state, reporting either degraded or severely degraded I/O performance. Throughput drops sharply and backups take far longer to complete.
We are wondering why the volumes start out with expected performance and seemingly degrade over time. Here is how the bandwidth looks once the volumes enter the severely degraded state: https://imgur.com/a/uDmPfnl
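For context on what we're seeing: the degraded / severely degraded wording matches the per-volume status checks that the EC2 `DescribeVolumeStatus` API returns, so we log that check alongside backup timings to pinpoint when a volume flips. A minimal sketch of how we parse the response (the volume ID and the exact response shape shown are illustrative; in practice the response comes from a boto3 `ec2.describe_volume_status(VolumeIds=[...])` call, and the `Details` entries present can vary by volume type):

```python
def io_performance_status(response):
    """Map VolumeId -> value of the io-performance status check
    (e.g. "normal", "degraded", "severely-degraded"), or None
    if the check is absent from the response."""
    out = {}
    for vol in response.get("VolumeStatuses", []):
        details = vol["VolumeStatus"].get("Details", [])
        out[vol["VolumeId"]] = next(
            (d["Status"] for d in details if d["Name"] == "io-performance"),
            None,
        )
    return out

# Abridged sample of a DescribeVolumeStatus response, hand-built here
# for illustration (placeholder volume ID):
sample = {
    "VolumeStatuses": [
        {
            "VolumeId": "vol-0123456789abcdef0",
            "VolumeStatus": {
                "Status": "impaired",
                "Details": [
                    {"Name": "io-enabled", "Status": "passed"},
                    {"Name": "io-performance", "Status": "severely-degraded"},
                ],
            },
        }
    ]
}
print(io_performance_status(sample))  # {'vol-0123456789abcdef0': 'severely-degraded'}
```

Logging this every few minutes during a backup run would at least tell us whether the throughput drop coincides exactly with the status-check transition.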
asked 2 months ago