Write capacity limit on DynamoDB table


We have several tables with data being fed into them every 15 seconds or so, and we have auto scaling enabled for write capacity. After reaching the default 40,000 write capacity unit limit, the table updates have become spotty and unreliable.

Just wondering if we need to change anything in our settings, or in our billing, to keep data uploads to our DynamoDB tables running smoothly.

We created a new table and, using the same data and rule (changing only the table destination), we got back our regular upload schedule of about 15 seconds.

Any help appreciated, thank you.

Pete
Asked 4 months ago · 187 views
3 Answers

You mention that writes became spotty after reaching the 40,000 limit, but you haven't said in what way, for example, whether you're seeing throttling exceptions, higher latency, or missing items.

The throughput limits are just guardrails and can be raised through Service Quotas to whatever value you need. There is no additional cost to increase a limit; however, you are enabling your application to consume more capacity, which can increase costs based on usage.

Once your limits are increased, you may also need to pre-warm the table to avoid throttling while you push more throughput than your tables have handled in the past.

AWS
EXPERT
Answered 4 months ago

Hi,

A key decision for DynamoDB auto scaling to work well is the choice of the table's partition key. The right one (i.e., one with high cardinality) allows I/O to be spread across many servers, giving you smooth scaling and good performance. If all I/O lands on the same partition, you'll start to see throttling and other performance bottlenecks.

You should check this post on choosing that key properly (and maybe modify your table design accordingly): https://aws.amazon.com/blogs/database/choosing-the-right-dynamodb-partition-key/

Then I'd suggest reading this post: https://aws.amazon.com/blogs/database/amazon-dynamodb-auto-scaling-performance-and-cost-optimization-at-any-scale/ to see how they achieved 1 million requests per second with good performance. The traffic pattern chosen for that benchmark seems close to yours.

Best,

Didier

AWS
EXPERT
Answered 4 months ago

A tip for achieving the best throughput on bulk writes to DynamoDB: randomize the order of the input records. I don't think this is your issue, but it's a good practice to know if you're not familiar with the nuances of DynamoDB's partition behavior.

Answered 3 months ago
