You're right about the options in this regard. You can use a Scan with a filter expression to fetch records whose update time falls within your range. This option is not very selective or efficient: the filter is applied after items are read, so you consume read capacity for the entire table. But if this is not a frequent query pattern, it might be a reasonable choice.
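As a rough sketch of what that looks like with boto3 (the table name `MyTable` and the `updated_at` attribute are placeholders here, not anything from your schema):

```python
import boto3
from boto3.dynamodb.conditions import Attr

# Assumed names: a table "MyTable" with a string attribute "updated_at"
# holding ISO-8601 timestamps; adjust to your actual schema.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("MyTable")

def scan_by_update_time(start, end):
    """Scan the whole table, filtering on updated_at.

    The filter is applied after items are read, so the scan still
    consumes read capacity for every item in the table.
    """
    items = []
    kwargs = {"FilterExpression": Attr("updated_at").between(start, end)}
    while True:
        resp = table.scan(**kwargs)
        items.extend(resp["Items"])
        if "LastEvaluatedKey" not in resp:
            break
        # Scan pages at 1 MB; continue from where the last page stopped.
        kwargs["ExclusiveStartKey"] = resp["LastEvaluatedKey"]
    return items
```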
The other option, if you want to support this access pattern at higher frequency without that inefficiency, is to collect all the items into a single item collection (same value of the partition key attribute) so that you can Query and use a key condition expression to select your time range within the sort key value. This limits scalability, though, as a single item collection like this can only be expected to support up to 1000 write units per second or 3000 read units per second.

You can improve on this by distributing across a known set of item collections, just as you've done in your second design scenario. This pattern is discussed in the documentation here: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/bp-indexes-gsi-sharding.html In this way, you "scatter" the write traffic across a number of different item collections (so DynamoDB can scale horizontally across multiple partitions), and then you "gather" by making multiple Query calls across all your item collections, as sketched below.
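A minimal sketch of that scatter/gather pattern, again with hypothetical names (partition key `PK` with shard values `LOG#0` through `LOG#9`, sort key `updated_at`):

```python
import zlib

import boto3
from boto3.dynamodb.conditions import Key

# Assumed schema: partition key "PK" holding shard values "LOG#0".."LOG#9"
# and sort key "updated_at" (ISO-8601 string). Adjust names to your table.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("MyTable")
NUM_SHARDS = 10

def write_item(item_id, updated_at, payload):
    # "Scatter": a stable hash spreads writes across shards so no single
    # item collection takes all the traffic.
    shard = zlib.crc32(item_id.encode()) % NUM_SHARDS
    # Note: PK + sort key must be unique; in practice you'd append a
    # unique suffix (e.g. the item id) to updated_at so two writes in
    # the same instant don't overwrite each other.
    table.put_item(Item={
        "PK": f"LOG#{shard}",
        "updated_at": updated_at,
        "item_id": item_id,
        "payload": payload,
    })

def query_by_update_time(start, end):
    # "Gather": one Query per shard, each using a key condition so only
    # items inside the time range are read (unlike the Scan above).
    items = []
    for shard in range(NUM_SHARDS):
        resp = table.query(
            KeyConditionExpression=Key("PK").eq(f"LOG#{shard}")
            & Key("updated_at").between(start, end)
        )
        items.extend(resp["Items"])
    return items
```

The trade-off is exactly as described: writes scale across partitions, but every read of the time range costs you one Query per shard, and the client has to merge the results.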
Then if I have about 10 reads and 70 inserts per minute, the first method (using the same value for the partition key) will work for me, right?
The problem I came across with the second solution (distributing across a known set of item collections) is that you can't use skip so easily; you have to send each request its own skip value...
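To illustrate the bookkeeping that forces (a rough sketch, reusing the hypothetical `PK`/`updated_at` schema from above): DynamoDB has no numeric offset, so each Query returns a `LastEvaluatedKey`, and with sharded collections the client ends up carrying one continuation token per shard instead of a single skip value.

```python
import boto3
from boto3.dynamodb.conditions import Key

# Same assumed schema as above: PK = "LOG#<shard>", sort key "updated_at".
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("MyTable")
NUM_SHARDS = 10

def query_page(start, end, page_size, tokens=None):
    """Fetch one 'page' of the time range across all shards.

    `tokens` maps shard -> LastEvaluatedKey from the previous page;
    this dict is what the client must send back in place of a skip value.
    """
    tokens = tokens or {}
    items, next_tokens = [], {}
    for shard in range(NUM_SHARDS):
        kwargs = {
            "KeyConditionExpression": Key("PK").eq(f"LOG#{shard}")
            & Key("updated_at").between(start, end),
            "Limit": page_size,
        }
        if shard in tokens:
            kwargs["ExclusiveStartKey"] = tokens[shard]
        resp = table.query(**kwargs)
        items.extend(resp["Items"])
        if "LastEvaluatedKey" in resp:
            next_tokens[shard] = resp["LastEvaluatedKey"]
    # Caller merges/sorts the items and passes next_tokens back to get
    # the following page; an empty next_tokens means all shards are done.
    return items, next_tokens
```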