
EBS gp3 volumes in Severely Degraded state

We are running 3 i3.xlarge EC2 instances as a database cluster, with a 750 GB gp3 EBS volume attached to each instance. A scheduled process copies backup snapshots from each instance's local instance store volume to its attached EBS volume.

We've noticed that when the EBS volumes are first attached, they deliver the expected throughput and backups complete quickly. After a few hours, however, the volumes enter a warning state, reporting either degraded or severely degraded I/O performance. Throughput drops sharply and backups take much longer to complete.

We are wondering why the volumes start out at expected performance and then seemingly degrade over time. Here is how the bandwidth looks once the volumes enter the severely degraded state:
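For anyone trying to reproduce the numbers: per-volume bandwidth can be derived from the CloudWatch `VolumeReadBytes`/`VolumeWriteBytes` metrics (Sum statistic over a period). A minimal sketch of the conversion, using made-up datapoints rather than a live CloudWatch query:

```python
def throughput_mib_s(bytes_sum: float, period_s: int) -> float:
    """Convert a CloudWatch Sum of VolumeReadBytes or VolumeWriteBytes
    collected over period_s seconds into average MiB/s throughput."""
    return bytes_sum / period_s / (1024 * 1024)

# Hypothetical 300-second datapoints (Sum of VolumeWriteBytes),
# illustrating a healthy baseline vs. the degraded state described above.
datapoints = [
    {"Sum": 78_643_200_000, "Period": 300},  # healthy: 250.0 MiB/s
    {"Sum": 3_145_728_000, "Period": 300},   # degraded: 10.0 MiB/s
]
for dp in datapoints:
    print(f"{throughput_mib_s(dp['Sum'], dp['Period']):.1f} MiB/s")
```

The same arithmetic applies to whatever Sum values `get-metric-statistics` returns for your volumes; only the example numbers here are invented.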