I took a look at your instance, and there is no issue with the storage volumes themselves. Rather, you have 2 TB of gp2 storage allocated, and your baseline performance in this case is 6,000 IOPS.
Your workload is consistently using higher IOPS than baseline.
Currently your burst balance is completely depleted, so you are being throttled at the baseline of 6,000 IOPS.
Here is a blog post with more info about burst versus baseline:
You could increase IOPS by allocating a larger gp2 volume.
In your case, because you have a legacy volume layout, the conversion to larger storage will occur online but will take about 24 hours.
Alternatively, since you are using a lot of read IOPS, you might be able to tune your workload to issue fewer reads.
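To estimate how much larger the volume would need to be, here is a rough sizing sketch based on AWS's documented gp2 formula (3 IOPS per provisioned GiB, with a 100 IOPS floor and a 16,000 IOPS cap); the function name is my own:

```python
def gp2_baseline_iops(size_gib: int) -> int:
    """Baseline IOPS for a gp2 volume, per AWS's documented formula:
    3 IOPS per provisioned GiB, floored at 100 and capped at 16,000."""
    return min(max(100, 3 * size_gib), 16_000)

# Your current ~2 TB (2,000 GiB) allocation:
print(gp2_baseline_iops(2000))   # 6000

# Sizing up raises the baseline linearly until the cap:
print(gp2_baseline_iops(3000))   # 9000
print(gp2_baseline_iops(6000))   # 16000 (cap is reached at ~5,334 GiB)
```

So roughly 3,000 GiB would get you a 9,000 IOPS baseline; beyond ~5,334 GiB, gp2 cannot go past 16,000 IOPS.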