EBS gp3 volumes in Severely Degraded state
Hello,
We are running three i3.xlarge EC2 instances as a database cluster, with a 750 GB gp3 EBS volume attached to each instance. A scheduled process copies our backup snapshots from each instance's local instance store volume over to its attached EBS volume (a minimal sketch of that step is below).
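For context, the copy step is just moving snapshot files between the two filesystems. A rough sketch of what it does, assuming hypothetical mount points and a hypothetical `.snap` file extension (the real paths and names differ in our setup):

```python
import shutil
from pathlib import Path

# Hypothetical mount points for illustration only.
SRC = Path("/mnt/nvme/backups")        # instance store (NVMe) snapshot directory
DST = Path("/mnt/ebs-backup/backups")  # attached gp3 EBS volume

def copy_snapshots(src: Path = SRC, dst: Path = DST) -> None:
    """Copy any snapshot files not yet present on the EBS volume."""
    dst.mkdir(parents=True, exist_ok=True)
    for snapshot in sorted(src.glob("*.snap")):
        target = dst / snapshot.name
        if not target.exists():
            # Large sequential writes, so the job is throughput-bound on the gp3 volume.
            shutil.copy2(snapshot, target)

if __name__ == "__main__":
    copy_snapshots()
```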
We've noticed that when a volume is first attached, it delivers the expected throughput and backups complete quickly. After a few hours, however, the volumes enter a warning state and their volume status checks report degraded or severely degraded I/O performance. While in that state, throughput drops sharply and backups take much longer to complete.
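We see this warning through the EBS volume status checks. A small boto3 sketch of how we pull the status for the attached volumes (the volume IDs and region are placeholders):

```python
import boto3

# Placeholder volume IDs; substitute the three attached gp3 volumes.
VOLUME_IDS = ["vol-0aaaaaaaaaaaaaaa1", "vol-0bbbbbbbbbbbbbbb2", "vol-0ccccccccccccccc3"]

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

resp = ec2.describe_volume_status(VolumeIds=VOLUME_IDS)
for vs in resp["VolumeStatuses"]:
    # Per-check details; the degraded / severely degraded warning shows up under
    # the "io-performance" check in the console's volume status.
    details = {d["Name"]: d["Status"] for d in vs["VolumeStatus"]["Details"]}
    print(vs["VolumeId"], vs["VolumeStatus"]["Status"], details.get("io-performance"))
```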
We are wondering why the volumes start out with expected performance and seemingly degrade over time. Here is how the bandwidth looks once the volumes enter the severely degraded state: https://imgur.com/a/uDmPfnl
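The chart linked above comes from our monitoring; roughly the same numbers can be pulled from CloudWatch. A sketch, assuming a placeholder volume ID and region, that converts the VolumeWriteBytes metric into MiB/s over the last few hours:

```python
from datetime import datetime, timedelta, timezone

import boto3

VOLUME_ID = "vol-0aaaaaaaaaaaaaaa1"  # placeholder volume ID
PERIOD = 300                         # seconds per datapoint

cw = boto3.client("cloudwatch", region_name="us-east-1")  # region is an assumption
end = datetime.now(timezone.utc)
start = end - timedelta(hours=6)

resp = cw.get_metric_statistics(
    Namespace="AWS/EBS",
    MetricName="VolumeWriteBytes",
    Dimensions=[{"Name": "VolumeId", "Value": VOLUME_ID}],
    StartTime=start,
    EndTime=end,
    Period=PERIOD,
    Statistics=["Sum"],
)

for dp in sorted(resp["Datapoints"], key=lambda d: d["Timestamp"]):
    # Sum of bytes written per period -> average MiB/s for that period.
    mib_per_s = dp["Sum"] / PERIOD / (1024 * 1024)
    print(dp["Timestamp"].isoformat(), f"{mib_per_s:.1f} MiB/s")
```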