First, I would check if there is a way to increase the limit. Many service limits are soft and can be raised on request.
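If you want to check programmatically whether the quota is adjustable, here is a minimal sketch using the Service Quotas API. The service code "comprehend" and the idea of filtering by name are assumptions; the exact quota names vary, so the sketch just lists them:

```python
# List Comprehend quotas and show which ones are adjustable (soft limits).
# Assumes the Service Quotas service code "comprehend"; verify in your account.
import boto3

quotas = boto3.client("service-quotas")
paginator = quotas.get_paginator("list_service_quotas")

for page in paginator.paginate(ServiceCode="comprehend"):
    for quota in page["Quotas"]:
        # Adjustable quotas can be raised with request_service_quota_increase.
        print(quota["QuotaName"], quota["Value"], quota["Adjustable"])
```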
In either case, you will probably not be able to raise the limit to very high numbers, so you will need to handle the concurrency yourself. One way of doing this would be to use DynamoDB to count how many active jobs you have. Before you start the Comprehend job, you try to increment a counter in DDB with a condition that it is < Limit. If that succeeds, you go to the next step and start the Comprehend job. If it fails, you go into a Wait state and then try again. When the Comprehend job finishes, you decrement the counter, without a condition. To reduce the number of state transitions, the Wait before retrying the increment should be longer than the Wait you use while polling the running job. A sketch of the acquire/release steps follows below.
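Here is a hedged sketch of those two DDB operations, as they might look inside Lambda tasks in the state machine. The table name "ConcurrencyControl", the key "CounterId", the attribute "ActiveJobs", and the limit value are all illustrative assumptions; the item is assumed to be seeded with ActiveJobs = 0, since the condition fails if the attribute does not exist:

```python
# Sketch of the acquire/release counter logic (names are assumptions).
import boto3
from botocore.exceptions import ClientError

LIMIT = 10  # assumed concurrent-jobs limit
ddb = boto3.client("dynamodb")

def try_acquire_slot() -> bool:
    """Atomically increment the counter only while it is below the limit."""
    try:
        ddb.update_item(
            TableName="ConcurrencyControl",
            Key={"CounterId": {"S": "comprehend"}},
            UpdateExpression="ADD ActiveJobs :one",
            ConditionExpression="ActiveJobs < :limit",
            ExpressionAttributeValues={
                ":one": {"N": "1"},
                ":limit": {"N": str(LIMIT)},
            },
        )
        return True  # slot acquired; start the Comprehend job
    except ClientError as err:
        if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return False  # at the limit; go to the Wait state and retry
        raise

def release_slot() -> None:
    """Decrement the counter unconditionally once the job finishes."""
    ddb.update_item(
        TableName="ConcurrencyControl",
        Key={"CounterId": {"S": "comprehend"}},
        UpdateExpression="ADD ActiveJobs :minus_one",
        ExpressionAttributeValues={":minus_one": {"N": "-1"}},
    )
```

The conditional increment is the important part: DynamoDB evaluates the condition and the update atomically, so two executions racing for the last slot cannot both succeed.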
A different, more complex solution with fewer state transitions might be to use the Wait for Callback pattern. Every time the state machine fails to increment the DDB counter, it adds its callback token to the DDB item instead of polling. A Lambda function consuming the table's DDB stream watches for the counter dropping below the limit; when it does, the function takes a token from the item and calls SendTaskSuccess with it, resuming the waiting execution. You can add a filter to the Lambda's event source mapping to reduce the number of invocations. A sketch of such a stream consumer is below.
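A minimal sketch of that stream consumer, under some assumptions: tokens are stored in a "WaitingTokens" list on the same counter item, the stream view type includes NEW_IMAGE, and the table/attribute names match the earlier sketch. The conditional REMOVE guards against consuming the same token twice:

```python
# Sketch of a DDB stream consumer that resumes waiting executions
# (table, key, and attribute names are illustrative assumptions).
import boto3
from botocore.exceptions import ClientError

LIMIT = 10
sfn = boto3.client("stepfunctions")
ddb = boto3.client("dynamodb")

def handler(event, context):
    for record in event["Records"]:
        if record["eventName"] != "MODIFY":
            continue
        new_image = record["dynamodb"]["NewImage"]
        active = int(new_image["ActiveJobs"]["N"])
        tokens = new_image.get("WaitingTokens", {}).get("L", [])
        if active >= LIMIT or not tokens:
            continue  # no free slot, or nobody waiting
        token = tokens[0]["S"]
        try:
            # Remove the token only if it is still first in the list,
            # so a concurrent invocation cannot consume it as well.
            ddb.update_item(
                TableName="ConcurrencyControl",
                Key={"CounterId": {"S": "comprehend"}},
                UpdateExpression="REMOVE WaitingTokens[0]",
                ConditionExpression="WaitingTokens[0] = :tok",
                ExpressionAttributeValues={":tok": {"S": token}},
            )
        except ClientError as err:
            if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
                continue  # someone else already took this token
            raise
        # Resume the execution that parked itself with this token.
        sfn.send_task_success(taskToken=token, output='{"slotAvailable": true}')
```

An event source mapping filter on the stream (for example, matching only MODIFY events) keeps the Lambda from being invoked for irrelevant writes, as the answer suggests.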
On the SFn + DDB concurrency side, I'm aware of this SAM-based sample, which shows a nice pattern. I previously ported it to CDK in this (Python) sample. However, for really big bursts that approach still generates a lot of retried DDB UpdateItem requests.