RDS Postgres - Is lower provisioned IOPS better than higher baseline IOPS?

0

We have a Postgres instance running with 5TB of storage, and we have provisioned 5K IOPS.
But after reading the AWS documentation, we found that 5TB of gp2 storage has a baseline of more than 15K IOPS, and is also cheaper.

Now I'm confused.
Is lower provisioned IOPS better than higher baseline IOPS?

(5K P-IOPS vs 5TB gp2 storage)

0dbdbdb
asked 5 years ago · 636 views
3 Answers
0
Accepted Answer

Yes, 5T will have a baseline of 15k IOPS.
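As a rough sanity check, the documented gp2 baseline formula is 3 IOPS per GiB, floored at 100 and capped at 16,000 IOPS, so treating a 5TB volume as 5,120 GiB:

```python
# Sketch of the documented gp2 baseline formula:
# 3 IOPS per GiB, with a floor of 100 and a cap of 16,000 IOPS.
def gp2_baseline_iops(size_gib: int) -> int:
    return min(max(3 * size_gib, 100), 16_000)

print(gp2_baseline_iops(5 * 1024))  # 5 TiB -> 15360 IOPS baseline
```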

The docs explain the difference https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html

"AWS designs gp2 volumes to deliver 90% of the provisioned performance 99% of the time.
... whereas io1 delivers within 10 percent of the provisioned IOPS performance 99.9 percent of the time over a given year."

Also, io1 has higher throughput capacity (gp2 is pretty good, however, topping out at 250 MB/s).

So gp2 might be a much better option. I did not find your instance, so I could not comment on your capacity usage.
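If you want to check that yourself, a sketch like the following (boto3 against CloudWatch; "mydb" is a placeholder for your DB instance identifier) will show the read/write IOPS the instance has actually been consuming, which is what you'd compare against the 5K provisioned and ~15K baseline figures:

```python
# Sketch: pull the last week of read/write IOPS for an RDS instance
# from CloudWatch. "mydb" is a placeholder DB instance identifier.
from datetime import datetime, timedelta, timezone
import boto3

cw = boto3.client("cloudwatch")
end = datetime.now(timezone.utc)
start = end - timedelta(days=7)

for metric in ("ReadIOPS", "WriteIOPS"):
    stats = cw.get_metric_statistics(
        Namespace="AWS/RDS",
        MetricName=metric,
        Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "mydb"}],
        StartTime=start,
        EndTime=end,
        Period=3600,                      # hourly datapoints
        Statistics=["Average", "Maximum"],
    )
    peak = max((p["Maximum"] for p in stats["Datapoints"]), default=0.0)
    print(f"{metric}: peak ~{peak:.0f} IOPS over the last 7 days")
```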

-Phil

AWS
MODERATOR
philaws
answered 5 years ago
0

I think that is answered in the other thread you found, and I did add the link to a video where Grant discusses bursting (he talks about both T2 and GP2, as they were newish capabilities at the time). Thanks for giving me an excuse to talk about this some more...

The main thing I'll re-emphasize is that IO1 has a better SLA (both explicit and implicit) than GP2, which largely explains IO1's higher cost. By way of (not necessarily accurate) explanation, the bursting concept, such as the one used in GP2, is one of over-provisioning resources based on statistical and historical usage patterns. You mix different volumes on a set of resources (SSDs, servers, network adapters, etc.) that can support all of them running at baseline, but not all of them bursting at once. You do this based on both the volumes' historical activity and statistical assumptions about how likely actual IO is to approach the resources' capacity. If usage across multiple volumes changes from historical patterns and you approach or exceed resource capacity, you consider migrating volumes to re-balance the load. During the hot periods, and during any re-balancing, your volume can see increased latency, and potentially less than its targeted IOPS. This is all designed to keep latency and IOPS within a range that most apps would never notice, though an app that is super-sensitive to latency might.
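To make the burst idea a bit more concrete, here is a rough, simplified sketch of the user-visible gp2 credit bucket from the EBS docs (a 5.4 million credit bucket, refilled at the baseline rate of 3 credits per GiB per second and spent per I/O, with bursts allowed up to 3,000 IOPS while credits remain). It only matters for volumes whose baseline sits below the 3,000 IOPS burst ceiling, so a 5TB volume never actually bursts, but it shows the mechanism:

```python
# Simplified sketch of the gp2 I/O credit bucket described in the EBS docs.
BUCKET_MAX = 5_400_000   # maximum credits in the bucket
BURST_IOPS = 3_000       # burst ceiling while credits remain

def remaining_credits(size_gib: int, demand_iops: int, seconds: int) -> int:
    """Credits left after `seconds` of constant I/O demand."""
    baseline = max(100, 3 * size_gib)   # 3 IOPS per GiB, floor of 100
    credits = BUCKET_MAX
    for _ in range(seconds):
        ceiling = BURST_IOPS if credits > 0 else baseline
        delivered = min(demand_iops, ceiling)
        # Credits accrue at the baseline rate and are spent per I/O.
        credits = max(0, min(BUCKET_MAX, credits + baseline - delivered))
    return credits

# A 100 GiB volume (300 IOPS baseline) pushing a constant 3,000 IOPS drains
# the bucket in about 5.4M / (3,000 - 300) = ~2,000 seconds (~33 minutes).
print(remaining_credits(100, 3_000, 1_800))   # 540000 credits left
```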

With IO1 (again, not claiming 100% accuracy in my explanation), volumes are placed on resources that can deliver all of the provisioned IOPS of all the volumes they host. So one volume's performance doesn't suffer as the usage patterns of other volumes assigned to the same resources change. That doesn't mean there isn't some variability in performance, but it is within a much tighter SLA that takes into account all manner of occasional wonkiness found in any complex system.

HalTemp
answered 5 years ago
0

ok. thank you!

0dbdbdb
answered 5 years ago
