1 Answer
Not very familiar with Celery, but my guess is that it reads messages from SQS in batches and then distributes the batches across the different workers without honoring the message group ID. If there is a way, configure the batch size to 1; that should solve the issue. Otherwise, you will need to limit yourself to a single worker, and then you will only be able to process one message at a time.
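If Celery's prefetching is what breaks the ordering, one thing worth trying is to force each worker to fetch and run a single message at a time. This is only a sketch under that assumption (the broker URL and app name below are placeholders, and whether the SQS transport fully respects the prefetch setting is not verified here):

```python
# Sketch: limit a Celery worker to one in-flight message so that FIFO
# ordering within a message group is not broken by batching/parallelism.
# "tasks" and the broker URL are placeholders for your own setup.
from celery import Celery

app = Celery("tasks", broker="sqs://")
app.conf.worker_concurrency = 1          # one worker process
app.conf.worker_prefetch_multiplier = 1  # fetch one message per worker at a time
```

The same effect can be approximated from the command line with `celery -A tasks worker --concurrency=1`, at the cost of overall throughput.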
Thanks for your response. There does not seem to be a direct way to configure the batch size in Celery, and setting the number of workers to 1 will cause performance issues.
As per the Amazon SQS FIFO documentation:
'When receiving messages from a FIFO queue with multiple message group IDs, Amazon SQS first attempts to return as many messages with the same message group ID as possible. This allows other consumers to process messages with a different message group ID. When you receive a message with a message group ID, no more messages for the same message group ID are returned unless you delete the message or it becomes visible.'
According to the above description, I understand that even with multiple consumers (multiple Celery workers in my case) it should still work, i.e. honor message group IDs. Or am I missing something? Any help would be appreciated, thanks.
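The quoted behavior can be illustrated with a small simplified model (an assumption for illustration only, not the real SQS API): once a message from a group is in flight, the whole group is locked until that message is deleted or becomes visible again, while other groups remain available to other consumers.

```python
# Simplified model of FIFO message-group delivery as described in the
# quoted SQS docs. Class and method names are hypothetical, not boto3.
class FifoQueueModel:
    def __init__(self):
        self.messages = []          # (group_id, body) in arrival order
        self.locked_groups = set()  # groups with a message in flight

    def send(self, group_id, body):
        self.messages.append((group_id, body))

    def receive(self):
        """Return the oldest message from any unlocked group, locking that group."""
        for group_id, body in self.messages:
            if group_id not in self.locked_groups:
                self.locked_groups.add(group_id)
                return group_id, body
        return None

    def delete(self, group_id, body):
        """Consumer finished: remove the message and unlock its group."""
        self.messages.remove((group_id, body))
        self.locked_groups.discard(group_id)

q = FifoQueueModel()
q.send("A", "a1"); q.send("A", "a2"); q.send("B", "b1")

first = q.receive()   # ("A", "a1") -- oldest message; group A is now locked
second = q.receive()  # ("B", "b1") -- group A is locked, so group B is served
q.delete("A", "a1")
third = q.receive()   # ("A", "a2") -- group A unlocked after the delete
```

This matches the reading in the comment: multiple consumers can proceed in parallel on different groups, but a second message from the same group is only handed out after the first is deleted or its visibility timeout expires. If Celery pulls and acknowledges messages ahead of actually processing them, that guarantee no longer applies on the consumer side.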