Write capacity limit on DynamoDB table


We have several tables with data being fed into them every 15 seconds or so, and we have auto scaling enabled for write capacity, but after reaching the 40,000 default write limit, table updates have become spotty and unreliable.

Just wondering if we need to do anything else in our settings, or change what we pay for, in order to continue having smooth data uploads to our DynamoDB tables.

We created a new table and, using the same data and rule (just changing the table destination), we got back our regular upload schedule of about 15 seconds.

Any help appreciated, thank you.

Pete
Asked 4 months ago · 189 views
3 Answers

You mention that writes became spotty after reaching the 40,000 limit, but you have not stated how they are unreliable.

The throughput limits are just guard rails and can be increased using Service Quotas to any value you deem necessary. There is no additional cost to increase the limits; however, you are enabling your application to consume more capacity, which can increase costs based on usage.
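If you want to see (or raise) the quota programmatically, here is a rough sketch using boto3's Service Quotas API. The region, the "write" name filter, and the commented-out quota code are placeholders rather than values from this thread, so confirm them against what list_service_quotas returns for your account.

```python
import boto3

# Sketch: list the DynamoDB quotas for this account and pick out the
# write-throughput ones. Region and filter string are assumptions.
quotas = boto3.client("service-quotas", region_name="us-east-1")

paginator = quotas.get_paginator("list_service_quotas")
for page in paginator.paginate(ServiceCode="dynamodb"):
    for quota in page["Quotas"]:
        if "write" in quota["QuotaName"].lower():
            print(quota["QuotaCode"], quota["QuotaName"], quota["Value"])

# With the QuotaCode in hand, an increase can also be requested in code:
# quotas.request_service_quota_increase(
#     ServiceCode="dynamodb",
#     QuotaCode="L-XXXXXXXX",  # placeholder -- use a code printed above
#     DesiredValue=80000.0,
# )
```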

When you have your limits increased, you may also want to consider pre-warming the table to avoid throttling while you push it to more throughput than it has handled in the past.
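As a minimal sketch of one way to pre-warm, assuming a provisioned-mode table and a made-up table name and numbers (if auto scaling manages the table, raise the scaling policy's minimum capacity instead of calling update_table directly):

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Temporarily raise provisioned write capacity so partitions are allocated
# before the heavy traffic arrives. Table name and values are placeholders.
dynamodb.update_table(
    TableName="my-table",
    ProvisionedThroughput={
        "ReadCapacityUnits": 100,     # leave reads roughly where they are
        "WriteCapacityUnits": 40000,  # the peak write rate you expect
    },
)

# Wait for the table to return to ACTIVE before sending the real workload.
dynamodb.get_waiter("table_exists").wait(TableName="my-table")
```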

AWS
Expert
Answered 4 months ago

Hi,

A key decision for DDB auto scaling to work well is the choice of the table's partition key. The right one (i.e. one with high cardinality) will allow I/Os to be spread across many servers, so you get smooth scaling and good performance. If all I/Os land in the same partition, you'll start to see throttling and other performance bottlenecks.

You should check this post regarding the proper choice of that key (and maybe modify your table design accordingly): https://aws.amazon.com/blogs/database/choosing-the-right-dynamodb-partition-key/
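As a rough illustration of what a high-cardinality key can look like, here is a sketch with boto3 and an invented schema (device id as partition key, timestamp as sort key, plus an optional random shard suffix for very hot keys). None of these names come from your actual table design.

```python
import random
import time
from decimal import Decimal

import boto3

# Invented schema: device_id as partition key, epoch-millis sort key, and a
# random shard suffix so one very hot device fans out over several key values.
table = boto3.resource("dynamodb").Table("sensor-data")  # placeholder name

def put_reading(device_id: str, value: float, shards: int = 10) -> None:
    shard = random.randint(0, shards - 1)
    table.put_item(
        Item={
            "pk": f"{device_id}#{shard}",        # high-cardinality partition key
            "sk": str(int(time.time() * 1000)),  # sort key: timestamp in ms
            "value": Decimal(str(value)),        # DynamoDB numbers via Decimal
        }
    )
```

The trade-off with shard suffixes is that reading one device back means querying every shard, so only apply them to keys that are actually hot.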

Then, I'd suggest reading this post: https://aws.amazon.com/blogs/database/amazon-dynamodb-auto-scaling-performance-and-cost-optimization-at-any-scale/ to see how they achieve 1 million requests per second with good performance. The traffic pattern chosen for that benchmark seems close to yours.

Best,

Didier

AWS
Expert
Answered 4 months ago

A tip for achieving the best throughput on bulk writes to DynamoDB: randomize the order of the input records. I don't think this is your issue, but it is a good practice if you're not familiar with the nuances of DynamoDB's partition behavior.
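For example, a minimal sketch with boto3, assuming your records are already DynamoDB-ready item dictionaries and using a placeholder table name:

```python
import random

import boto3

table = boto3.resource("dynamodb").Table("my-table")  # placeholder name

def bulk_load(records: list[dict]) -> None:
    # Shuffling breaks up long runs of items that share a partition key, so
    # consecutive writes land on different partitions instead of one hot one.
    random.shuffle(records)
    with table.batch_writer() as batch:
        for item in records:
            batch.put_item(Item=item)  # batch_writer retries unprocessed items
```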

Answered 3 months ago
