Write capacity limit on DynamoDB table

0

We have several tables with data being fed into them every 15 seconds or so, and we have auto scaling enabled for write capacity, but after reaching the 40,000 default write limit, the table updates have become spotty and unreliable.

Just wondering if we need to change anything else in our settings, or change our payment plan, in order to keep data uploads to our DynamoDB table running smoothly.

We created a new table and, using the same data and rule (just changing the destination table), we got back our regular upload schedule of about 15 seconds.

Any help appreciated, thank you.

Pete
asked 3 months ago · 173 views
3 Answers
2

You mention that writes became unreliable after reaching the 40,000 limit, but you have not stated how so.

The throughput limits are just guard rails and can be increased via Service Quotas to whatever value you need. There is no additional cost to increase the limits; however, you are enabling your application to consume more capacity, which can increase costs based on usage.
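To illustrate, here is a minimal boto3 sketch of looking up that table-level write throughput quota and requesting an increase. The quota-name matching and the desired value are assumptions, so confirm the exact quota code in the Service Quotas console first:

```python
import boto3

# Sketch: find the DynamoDB table-level write throughput quota and request an increase.
# The name filter below is an assumption -- verify the exact quota name/code in the
# Service Quotas console before relying on it.
sq = boto3.client("service-quotas")

quotas = sq.list_service_quotas(ServiceCode="dynamodb")["Quotas"]
write_quota = next(
    (q for q in quotas if "write throughput" in q["QuotaName"].lower()),
    None,
)

if write_quota:
    print(f"Current limit: {write_quota['Value']}")
    # Example desired value only; set whatever your workload actually needs.
    sq.request_service_quota_increase(
        ServiceCode="dynamodb",
        QuotaCode=write_quota["QuotaCode"],
        DesiredValue=80000.0,
    )
```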

Once your limits have been increased, you may also want to consider pre-warming the table to avoid throttling while you push it to higher throughput than it has handled in the past.
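For tables in provisioned mode, one common way to pre-warm is to temporarily raise the provisioned write capacity above the expected peak, then drop it back or let auto scaling take over. A rough sketch, with a placeholder table name and capacity values:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Sketch: temporarily raise provisioned write capacity to "pre-warm" the table's
# partitions ahead of a known traffic increase. Table name and capacity values are
# placeholders -- adjust them to your workload and watch the cost of the higher setting.
dynamodb.update_table(
    TableName="my-metrics-table",
    ProvisionedThroughput={
        "ReadCapacityUnits": 1000,
        "WriteCapacityUnits": 60000,
    },
)
# Once the table has scaled out, lower the setting again (or let your auto scaling
# policy manage it) so you are not billed for unused provisioned capacity.
```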

AWS
EXPERT
answered 3 months ago
0

Hi,

A key decision for DDB auto scaling to work well is the choice of the table's partition key. The right one (i.e. one with high cardinality) allows I/Os to be spread across many servers, so you get smooth scaling and good performance. If all I/Os land in the same partition, you'll start getting throttling and other performance bottlenecks.

You should check this post on choosing that key properly (and maybe modify your table design accordingly): https://aws.amazon.com/blogs/database/choosing-the-right-dynamodb-partition-key/
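As a purely illustrative sketch (the table name, attribute names, and shard count are assumptions, not your schema), a high-cardinality partition key can be built by combining a natural identifier with a shard suffix so writes spread across partitions:

```python
import random
import boto3

# Placeholder table name -- substitute your own.
table = boto3.resource("dynamodb").Table("my-metrics-table")

# Sketch: spread writes across partitions with a high-cardinality partition key.
# Here the key combines a device id with a random shard suffix (write sharding);
# attribute names and the shard count are illustrative assumptions.
NUM_SHARDS = 10

def put_reading(device_id: str, timestamp: str, value: float) -> None:
    shard = random.randint(0, NUM_SHARDS - 1)
    table.put_item(
        Item={
            "pk": f"{device_id}#{shard}",   # partition key
            "sk": timestamp,                # sort key
            "value": value,
        }
    )
```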

Then, I'd suggest reading this post: https://aws.amazon.com/blogs/database/amazon-dynamodb-auto-scaling-performance-and-cost-optimization-at-any-scale/ to see how they achieve 1 million requests per second with good performance. The traffic pattern chosen for the benchmark seems close to yours.

Best,

Didier

AWS
EXPERT
answered 3 months ago
0

A tip for achieving the best throughput on bulk writes to DynamoDB: randomize the order of the input records. I don't think this is your issue, but it's a good practice if you're not familiar with the nuances of DynamoDB's partition behavior.
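A minimal sketch of that idea using boto3's batch writer; the table name and the load_records() loader are placeholders for your own data source:

```python
import random
import boto3

# Placeholder table name -- substitute your own.
table = boto3.resource("dynamodb").Table("my-metrics-table")

# Sketch: shuffle the input before bulk-writing so consecutive requests don't all
# land on the same partition (e.g. when the source data is sorted by partition key).
records = load_records()  # hypothetical loader returning a list of item dicts
random.shuffle(records)

with table.batch_writer() as batch:
    for item in records:
        batch.put_item(Item=item)
```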

answered 3 months ago
