Write capacity limit on DynamoDB table

0

We have several tables with data being fed into them every 15 seconds or so, and auto scaling is enabled for write capacity, but after reaching the default 40,000 write limit, the table updates have become spotty and unreliable.

Just wondering if we need to change anything else in our settings, or adjust our billing, in order to continue having smooth data uploads to our DynamoDB table.

We created a new table, and using the same data and rule (just changing the table destination), we got back our regular upload cadence of about 15 seconds.

Any help appreciated, thank you.

Pete
Asked 4 months ago · Viewed 188 times
3 Answers
2

You mention that writes became unreliable after reaching the 40,000 limit, but you haven't stated how they are failing.

The throughput limits are just guard rails and can be increased through Service Quotas to any value you deem necessary. There is no additional cost to increasing a limit; however, you are enabling your application to consume more capacity, which can increase costs based on usage.

Once your limits are increased, you may also want to consider pre-warming the table to avoid throttling while you ramp up to more throughput than your tables have handled in the past.
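
If you want to script the quota check and increase request, here is a minimal sketch using boto3's Service Quotas client. The quota is located by matching on its name rather than a hard-coded quota code (codes can vary), and the desired value of 80,000 is illustrative only, not a recommendation:

```python
import boto3

sq = boto3.client("service-quotas")

def find_write_quota():
    """Locate a DynamoDB write throughput quota by name."""
    paginator = sq.get_paginator("list_service_quotas")
    for page in paginator.paginate(ServiceCode="dynamodb"):
        for quota in page["Quotas"]:
            if "write" in quota["QuotaName"].lower():
                return quota
    return None

quota = find_write_quota()
if quota:
    print(f"{quota['QuotaName']}: currently {quota['Value']}")
    # Request an increase; 80000 here is illustrative, not a recommendation.
    sq.request_service_quota_increase(
        ServiceCode="dynamodb",
        QuotaCode=quota["QuotaCode"],
        DesiredValue=80000.0,
    )
```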

AWS
EXPERT
Answered 4 months ago
0

Hi,

A key factor for DDB auto scaling to work well is the choice of the table's partition key. The right one (i.e. one with high cardinality) allows I/Os to be spread across many servers, giving you smooth scaling and good performance. If all I/Os land in the same partition, you'll start to see throttling and other performance bottlenecks.

You should check this post regarding the proper choice of that key (and maybe modify your table design accordingly): https://aws.amazon.com/blogs/database/choosing-the-right-dynamodb-partition-key/
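
For illustration, here is a minimal sketch of a write path with a high-cardinality partition key. The SensorReadings table, device_id partition key, and ts sort key are all hypothetical names, not taken from your setup:

```python
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("SensorReadings")  # hypothetical table name

def put_reading(device_id: str, ts: str, payload: dict) -> None:
    """Write one reading; device_id has many distinct values, so writes
    arriving every ~15 seconds spread across many partitions, not one."""
    table.put_item(
        Item={
            "device_id": device_id,  # partition key: high cardinality
            "ts": ts,                # sort key: orders readings per device
            **payload,               # remaining attributes for the record
        }
    )
```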

Then, I'd suggest reading this post: https://aws.amazon.com/blogs/database/amazon-dynamodb-auto-scaling-performance-and-cost-optimization-at-any-scale/ to see how they achieved 1 million requests per second with good performance. The traffic pattern chosen for that benchmark seems close to yours.

Best,

Didier

AWS
EXPERT
Answered 4 months ago
0

A tip for achieving the best throughput on bulk writes to DynamoDB: randomize the order of the input records (see the sketch below). I don't think this is your issue, but it's a good practice if you're not familiar with the nuances of DynamoDB's partition behavior.
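
As a minimal sketch, assuming the records are plain dicts ready for a hypothetical SensorReadings table: shuffling the input avoids writing long runs of items that share a partition key, which can hot-spot a single partition:

```python
import random
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("SensorReadings")  # hypothetical table name

def bulk_write(records: list[dict]) -> None:
    random.shuffle(records)  # spread consecutive writes across partitions
    with table.batch_writer() as batch:  # batches and retries for us
        for item in records:
            batch.put_item(Item=item)
```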

Answered 3 months ago
