1 answer
The number of messages you get per receive call depends on the load on the queue: the busier the queue, the larger the batches you will see.
You are charged for each API call; the price is $0.40 per 1 million requests. You can find more pricing information here.
When you use long polling, the time you specify is the max wait time for messages. If there are messages in the queue, the call will return immediately.
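As a back-of-the-envelope sketch of why long polling is cheap, using the $0.40-per-million rate quoted above (the call counts are illustrative, not from the original thread):

```python
# Rough monthly cost of one consumer polling continuously with
# 20-second long polls, at $0.40 per 1M requests (illustrative only).
PRICE_PER_MILLION = 0.40

# An empty long poll returns after at most 20 s, so an idle consumer
# makes at most 3 receive calls per minute.
calls_per_month = 3 * 60 * 24 * 30  # 129,600 calls
monthly_cost = calls_per_month / 1_000_000 * PRICE_PER_MILLION

print(f"~{calls_per_month:,} calls, about ${monthly_cost:.2f}/month")
```

A busy queue makes the call return immediately with messages, so request counts rise with traffic, but each request then carries up to a full batch.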
The visibility timeout that you specify when calling the API takes precedence over the visibility timeout defined on the queue.
Having said all that, why are you consuming the messages this way? Why not trigger the function from the queue and let Lambda handle all of it for you? See the doc.
Appreciate your answer. We are running this way, instead of having the queue trigger the Lambda, because this is a very busy system (we actually did try that in our production system and ran out of our Lambda instance allowance). When there are thousands or millions of messages in the queue, it is very easy to hit the Lambda instance limit, and we did experience that. Is this a valid concern? Any other suggestions?
Just ask for a limit increase; we increase that limit easily. Otherwise, the way you operate now, you have a concurrency of one, which may not be enough to handle all the messages.