Committed throughput charges in DynamoDB table


Hello all,

I noticed a sudden spike in our DynamoDB cost on a particular day, which added $4,300 in a single day. After debugging in Cost Explorer, I found that committed throughput was the charge that spiked the cost. I need to understand why this happened, as I have never seen such a case in the past.

What I suspect:

  • On the previous day, I switched one DynamoDB table of roughly 70 GB from provisioned mode to on-demand mode in order to create a GSI. The reason for doing so is that I was getting a "max limit exceeded" error when trying to create the GSI in provisioned mode.
  • But while checking in CloudWatch, there was no spike on any of the metrics.

I need to understand why this happened.

3 Answers

You can take this further in Cost Explorer: filter on CommittedThroughput in the "API Operation" filter, then group your usage by "Usage Type". If the top usage type looks something like "pay per request", you will know it is a result of your table being in on-demand mode. Throughput pricing for tables in on-demand mode is often higher than for provisioned mode (depending, of course, on your actual read/write patterns throughout the hour or day). With tables in provisioned mode you are limited to the amount of WCU/RCU you provision (you can only use up to what you provision), while tables in on-demand mode can consume much more at a single point in time; the trade-off is that this will often be more expensive (again, depending on your requirements and usage patterns).
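The Cost Explorer drill-down described above can also be done via the API. A minimal sketch of the `GetCostAndUsage` request shape is below; the date range is a hypothetical placeholder, and the actual call is left commented out so nothing runs against a real account:

```python
# Sketch of a Cost Explorer query that isolates DynamoDB CommittedThroughput
# charges and groups them by usage type. Dates are hypothetical placeholders.
request = {
    "TimePeriod": {"Start": "2024-01-01", "End": "2024-01-03"},  # placeholder dates
    "Granularity": "DAILY",
    "Metrics": ["UnblendedCost"],
    "Filter": {
        "And": [
            {"Dimensions": {"Key": "SERVICE", "Values": ["Amazon DynamoDB"]}},
            {"Dimensions": {"Key": "OPERATION", "Values": ["CommittedThroughput"]}},
        ]
    },
    # Grouping by usage type shows whether "pay per request" (on-demand)
    # usage dominates the CommittedThroughput line.
    "GroupBy": [{"Type": "DIMENSION", "Key": "USAGE_TYPE"}],
}

# import boto3
# resp = boto3.client("ce").get_cost_and_usage(**request)
# for group in resp["ResultsByTime"][0]["Groups"]:
#     print(group["Keys"], group["Metrics"]["UnblendedCost"]["Amount"])
```

If the top group key contains something like "PayPerRequest", the on-demand switch is the likely source of the charge.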

It depends on your needs, and whether your read/write patterns are known. If you know how much WCU/RCU you will need at any given time, you should use tables in provisioned mode. A good option is to combine provisioned mode with Auto Scaling for situations where you need more throughput, so you don't run into "not enough capacity" issues. Moreover, for tables in provisioned mode you can actually reserve capacity and save extra (around 50%) over time; check the information about "reserved capacity" here
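To illustrate the Auto Scaling suggestion, here is a sketch of the Application Auto Scaling request shapes for a table's write capacity. The table name and capacity bounds are hypothetical assumptions; the boto3 calls are commented out so only the request shapes are shown:

```python
# Sketch: target-tracking auto scaling for a provisioned table's WCU.
# "table/my-table" and the min/max values are hypothetical examples.
scalable_target = {
    "ServiceNamespace": "dynamodb",
    "ResourceId": "table/my-table",  # hypothetical table name
    "ScalableDimension": "dynamodb:table:WriteCapacityUnits",
    "MinCapacity": 100,
    "MaxCapacity": 4000,
}

scaling_policy = {
    "ServiceNamespace": "dynamodb",
    "ResourceId": "table/my-table",
    "ScalableDimension": "dynamodb:table:WriteCapacityUnits",
    "PolicyName": "keep-write-utilization-at-70pct",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingScalingPolicyConfiguration": {
        "TargetValue": 70.0,  # aim for ~70% consumed/provisioned WCU
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBWriteCapacityUtilization"
        },
    },
}

# import boto3
# aas = boto3.client("application-autoscaling")
# aas.register_scalable_target(**scalable_target)
# aas.put_scaling_policy(**scaling_policy)
```

A matching pair for `dynamodb:table:ReadCapacityUnits` would cover the read side the same way.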

However, if your usage patterns are unpredictable and you often have these spikes in read and write requests, then on-demand mode will suit you better. But you have to consider the cost.
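The cost trade-off described above can be sketched numerically. The prices below are illustrative assumptions (roughly us-east-1-style public pricing); check the DynamoDB pricing page for your region's actual rates, and note the backfill write count is a made-up example:

```python
# Rough cost comparison between on-demand and provisioned billing.
# All prices are assumed/illustrative, not authoritative.
ON_DEMAND_PRICE_PER_MILLION_WRU = 1.25  # $ per 1M write request units (assumed)
ON_DEMAND_PRICE_PER_MILLION_RRU = 0.25  # $ per 1M read request units (assumed)
PROVISIONED_WCU_HOUR = 0.00065          # $ per WCU-hour (assumed)
PROVISIONED_RCU_HOUR = 0.00013          # $ per RCU-hour (assumed)

def on_demand_cost(write_units: float, read_units: float) -> float:
    """Cost of actual requests, billed per request."""
    return (write_units / 1e6) * ON_DEMAND_PRICE_PER_MILLION_WRU \
         + (read_units / 1e6) * ON_DEMAND_PRICE_PER_MILLION_RRU

def provisioned_cost(wcu: int, rcu: int, hours: float) -> float:
    """Cost of capacity provisioned for the whole period, used or not."""
    return wcu * PROVISIONED_WCU_HOUR * hours + rcu * PROVISIONED_RCU_HOUR * hours

# Hypothetical: backfilling a large GSI drives hundreds of millions of writes,
# all billed per-request in on-demand mode.
backfill_writes = 500_000_000
print(f"on-demand:   ${on_demand_cost(backfill_writes, 0):,.2f}")
print(f"provisioned: ${provisioned_cost(1000, 1000, 24):,.2f}")  # 1000 WCU/RCU for a day
```

This is why a one-off bulk operation like a GSI backfill can produce a large single-day charge while the table is in on-demand mode, even if steady-state CloudWatch metrics look unremarkable.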

AWS
answered 9 months ago

It looks like you changed the table's capacity mode from provisioned to on-demand to prevent throttling during the GSI creation. Note that certain changes are made to the table and its partitions during these switches, which can have capacity and cost implications. See here for details:

During the switching period, your table delivers throughput that is consistent with the previously provisioned write capacity unit and read capacity unit amounts. When switching from on-demand capacity mode back to provisioned capacity mode, your table delivers throughput consistent with the previous peak reached when the table was set to on-demand capacity mode.
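For reference, the mode switch itself is a single `UpdateTable` call. A sketch of both directions is below; the table name and throughput values are hypothetical, and the calls are commented out so nothing executes against a real account:

```python
# Sketch: switching a table's billing mode. "my-table" and the
# capacity numbers are hypothetical examples.
to_on_demand = {
    "TableName": "my-table",           # hypothetical table name
    "BillingMode": "PAY_PER_REQUEST",  # on-demand mode
}

# Switching back to provisioned mode requires explicit throughput values.
# Per the quoted documentation, size them against the peak reached while
# the table was in on-demand mode.
to_provisioned = {
    "TableName": "my-table",
    "BillingMode": "PROVISIONED",
    "ProvisionedThroughput": {
        "ReadCapacityUnits": 500,   # hypothetical values
        "WriteCapacityUnits": 500,
    },
}

# import boto3
# ddb = boto3.client("dynamodb")
# ddb.update_table(**to_on_demand)
# ddb.update_table(**to_provisioned)
```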

To avoid running into GSI throttling errors, refer to these articles for insights:

answered 8 months ago


I suggest you create a support ticket for your question (see here for how to do this). One thing to note is that Cost Explorer only shows data up until the previous day; see:

All costs reflect your usage up to the previous day. For example, if today is December 2, the data includes your usage through December 1.

AWS
answered 9 months ago
  • I have already raised a ticket but am not getting a proper answer from the team. Also, I attached the metrics for only one day, but I captured them just yesterday, so all the usage should have been updated by then.
