Questions tagged with Amazon Simple Queue Service
Hello,
I'm trying to build an event-driven solution where the SQS queue sends messages to Lambda as soon as it receives them. I have a third-party application that sends messages to SQS and assumes a role that gives it permission to perform the needed actions on the queue.
I am somewhat confused about what I need in order to consume an SQS queue from Lambda. The documentation indicates that different permissions are needed for Lambda to poll for events, and then it gives directions for "event driven" triggers to Lambda from the SQS queue.
I am writing all of this in CDK, so maybe that's where I'm missing something.
So far in my CDK I believe I have made the SQS messages consumable by Lambda by using `sqsQue.grantConsumeMessages(Mylambda)`, which [here](https://docs.aws.amazon.com/cdk/api/v1/docs/@aws-cdk_aws-sqs.Queue.html) is documented as allowing SQS messages to be consumed by the grantee, which is my Lambda. I am not certain whether this adds a resource-based policy to the Lambda or to SQS; I'm assuming it adds one to SQS and I do not need to add anything to the Lambda.
However, for the Lambda in my CDK I just have the default execution policy, and I do not believe I added a resource-based policy. I'm not even sure one is needed for my use case.
So do I need a resource-based policy for this? Or do I need anything in particular in my execution role?
Also, is there a difference between event-driven SQS triggering Lambda versus Lambda polling from SQS? Aren't these two separate implementations?
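If it helps untangle the permission question: as I understand it, `grantConsumeMessages` adds an identity-based statement to the grantee's execution role rather than a resource-based policy on the queue. A sketch of the rough shape of that statement (the queue ARN is a placeholder):

```python
# Rough shape of the IAM policy statement that CDK's
# queue.grantConsumeMessages(fn) attaches to the Lambda's *execution
# role* (identity-based, not a resource-based queue policy).
# The queue ARN below is a placeholder.
def consume_messages_statement(queue_arn):
    return {
        "Effect": "Allow",
        "Action": [
            "sqs:ReceiveMessage",
            "sqs:ChangeMessageVisibility",
            "sqs:GetQueueUrl",
            "sqs:GetQueueAttributes",
            "sqs:DeleteMessage",
        ],
        "Resource": queue_arn,
    }

statement = consume_messages_statement(
    "arn:aws:sqs:us-east-1:123456789012:my-queue"
)
```

These are the actions the Lambda poller needs; the event source mapping itself is what wires the queue to the function as a trigger.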
I have my FIFO queue connected to a Lambda function. When a message is sent to the queue, the function takes it and processes it. My problem occurs when more than one message is sent to the queue. I understood that with a FIFO queue, all messages sharing the same group ID are processed one by one. That is not what happens for me: when I send more than one message (5, for example) to the queue within a few seconds, the first message goes in flight while the other messages wait, but once the first message completes, all the other 4 messages go in flight together! How can this happen if they share the same group ID and they are in a FIFO queue? I expect that when the first completes, the second goes in flight while the other 3 wait, and so on.
I don't think it depends on the queue settings, because I changed all the parameters many times (visibility timeout, content-based deduplication, and so on). In any case I leave below the screenshots of the parameter settings I now have.
My account has a maximum of 10 concurrent executions, and that is exactly the maximum number of messages in flight together (I tried sending many messages, and the first goes in flight alone, then all the others go in flight concurrently, ten by ten, which is very strange to me). I would like only one execution at a time per group; the others must wait for the completion of the one that is processing. I want to manage concurrency by the different group IDs I give to the messages in the queue. I'm sending messages to the queue through the aws-sdk in Node.js.
Can someone help me please?
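One common cause of the "4 go together" behavior is batching rather than concurrency: with the default event source mapping batch size (up to 10), a single Lambda invocation can receive several messages from the same group in one batch, in order. The per-group guarantee is "at most one batch in flight per group", not "one message". A minimal in-process sketch of those semantics (names are illustrative, not an SQS API):

```python
# Simulation of FIFO per-group delivery: at most one message (or batch)
# per MessageGroupId may be in flight at a time. This mirrors the
# ordering the question expects; names are illustrative.
from collections import deque

def next_deliverable(messages, in_flight_groups):
    """Return the first message whose group has nothing in flight."""
    for msg in messages:
        if msg["group_id"] not in in_flight_groups:
            return msg
    return None

queue = deque({"id": i, "group_id": "g1"} for i in range(5))
in_flight = set()

first = next_deliverable(queue, in_flight)
in_flight.add(first["group_id"])  # g1 now has a message in flight

# While g1 is in flight, no further g1 message is deliverable.
blocked = next_deliverable([m for m in queue if m is not first], in_flight)
```

If you want strictly one message per invocation, setting the event source mapping batch size to 1 may get closer to the behavior described.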



Hi,
We have a standard SQS queue (not a FIFO queue) into which the producer publishes thousands to millions of messages, so the live message count is constantly around 5k to 1 million.
We are using Python boto3 `receive_message` to poll messages:
message = sqs.receive_message(
    QueueUrl=queue_url,
    AttributeNames=['SentTimestamp'],
    MessageAttributeNames=['All'],
    VisibilityTimeout=120,
    MaxNumberOfMessages=MAX_RECEIVE_MESSAGES,
    WaitTimeSeconds=MAX_WAIT_TIME_SECS
)
To consume the messages:
* We had a Lambda subscribed to the queue that starts processing the messages. The Lambda does not call other APIs; it just polls messages and, based on the value of a certain field (for instance, `student_last_name="Smith"`), publishes/re-routes to different SNS topics. In this design pattern, with millions of messages flowing, won't we run out of the Lambda concurrent-instance limit very soon? We actually had another similar queue where a similar Lambda takes about 4 seconds to finish (it is more complicated and interacts with other systems' APIs), and we quickly ran out of Lambda instances.
* Another design is to have the Lambda triggered by a CloudWatch Events rule every 1 minute. The Lambda uses the `receive_message` call above with MaxNumberOfMessages=10 (the maximum allowed), calling `receive_message` in a loop. The loop only exits after either running a certain number of times or collecting around 10,000 messages; it then processes those messages and finally routes them to different SNS topics.
Neither of these designs seems perfect to us: the first has its risks, and the second only lets us process every 1 minute. Could an AWS architect or expert provide further guidance? Thank you very much.
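For what it's worth, the routing step in the first design can be kept as a small pure function, which makes it cheap per invocation. A hypothetical sketch (the topic ARNs and field name are placeholders from the question's example):

```python
# Hypothetical routing step: inspect a field on each message body and
# pick an SNS topic ARN. Topic ARNs are placeholders; the field name
# comes from the question's example.
import json

TOPIC_BY_LAST_NAME = {
    "Smith": "arn:aws:sns:us-east-1:123456789012:smith-topic",
}
DEFAULT_TOPIC = "arn:aws:sns:us-east-1:123456789012:default-topic"

def route(message_body):
    """Return the SNS topic ARN a message should be re-published to."""
    record = json.loads(message_body)
    return TOPIC_BY_LAST_NAME.get(record.get("student_last_name"),
                                  DEFAULT_TOPIC)

topic = route('{"student_last_name": "Smith"}')
```

Pairing this with a reserved-concurrency limit on the function is one way to cap how many instances the busy queue can consume, at the cost of slower drain.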
Hi,
we have a standard SQS queue (not FIFO), and we are trying to pull messages from it through the Python boto3 `receive_message` call specified here: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sqs.html#SQS.Client.receive_message
message = sqs.receive_message(
    QueueUrl=queue_url,
    AttributeNames=['SentTimestamp'],
    MessageAttributeNames=['All'],
    VisibilityTimeout=120,
    MaxNumberOfMessages=MAX_RECEIVE_MESSAGES,
    WaitTimeSeconds=MAX_WAIT_TIME_SECS
)
and we have a few questions.
* We understand that WaitTimeSeconds > 0 is called long polling. Even when we have around 10 messages in SQS, with WaitTimeSeconds=2 and MaxNumberOfMessages=10, we still get only 1 message per `receive_message` call. Is that expected behavior?
* Based on the developer documentation, MaxNumberOfMessages does not guarantee receiving the specified number of messages; it just gives the API a chance to scan the servers containing messages. Are we understanding that correctly?
* We programmed a loop to call the above API multiple times (10 times) and do get 1 message each time, for a total of 10 messages. Would such a programming style hit some sort of API limit (only allowing a certain number of `receive_message` calls in a period)? Does such a limit exist?
* Will we be charged for each `receive_message` call? What is the charge per call?
* Is it true that only when the queue has > 5,000 messages will we see long polling return more than 1 message?
* The API allows us to specify VisibilityTimeout. However, the queue also has a visibility timeout specified at creation. Which one takes effect, the one specified in the API call or the one specified when the queue was created?
* In real life we will have a million messages in the queue from the very beginning. If we set WaitTimeSeconds=2, will the call return in 2 seconds, or possibly in less than 2 seconds if a single long poll returns 10 messages within 1 second?
* Our Lambda is woken up by a CloudWatch event every minute (the shortest interval a CloudWatch Events rule allows). Once woken, it runs a loop that keeps calling `receive_message` with MaxNumberOfMessages=10 until either the loop has run N times or it has received M messages. Any issues with this design?
Sorry for so many questions. Thank you!
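The loop from the last bullet can be sketched as below; it is written against any object with the `receive_message`/`delete_message_batch` shape (boto3's SQS client has both), and the loop bounds are the question's N and M. One detail worth making explicit: an empty long-poll response has no `Messages` key, which is a natural exit condition.

```python
# Sketch of the polling loop described above. `sqs` is anything with
# boto3's receive_message / delete_message_batch shape; N and M are the
# question's loop bounds.
def drain(sqs, queue_url, max_rounds, max_messages):
    collected = []
    for _ in range(max_rounds):
        resp = sqs.receive_message(
            QueueUrl=queue_url,
            MaxNumberOfMessages=10,  # per-call API maximum
            WaitTimeSeconds=2,       # long polling
        )
        batch = resp.get("Messages", [])
        if not batch:
            break                    # long poll returned nothing: stop early
        collected.extend(batch)
        sqs.delete_message_batch(
            QueueUrl=queue_url,
            Entries=[{"Id": m["MessageId"], "ReceiptHandle": m["ReceiptHandle"]}
                     for m in batch],
        )
        if len(collected) >= max_messages:
            break
    return collected
```

In real code you would delete only after successful processing, not immediately on receipt as this sketch does for brevity.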
I am running Rekognition video segment detection in a notebook, following the guideline from https://docs.aws.amazon.com/rekognition/latest/dg/video-analyzing-with-sqs.html. However, the SQS queue never receives a JobStatus update, so my task runs forever. I used the AWS CLI to confirm that the video analysis succeeded long ago.
AmazonS3FullAccess, AmazonRekognitionFullAccess, AmazonSNSFullAccess, and AmazonSQSFullAccess have all been granted to the IAM user used to run this script.
What could be the reason the SNS topic did not update the SQS queue when the Rekognition segment analysis completed?
Our SQS queue is consumed through a JMS message listener with the Client ACK session mode. When messages are read, they are delegated to another JVM process to continue processing and then delete the message from the SQS queue. While the messages are deleted successfully, we see two different issues with this:
1. The SQS system message ID for each read message is added to a list in JVM1, where the message is read from the queue.
2. Under load, the number of in-flight messages keeps increasing continuously over time. We have 16 consumers in each JVM process working the queue.
Does SQS support CLIENT_ACK mode across multiple JVM processes, i.e., read from the 1st JVM and delete on the 2nd? How can we stop the SQS library from building up the list of message IDs in the RangeExtender implementation?
And what do we need to do to ensure messages are deleted in a timely manner, to avoid the build-up of in-flight messages? Under normal load we do not see this issue, but when TPS increases beyond a certain point, the in-flight message count keeps increasing.
We could not create an SQS FIFO queue in AWS GovCloud (US) PDT. Can you please help with the following asks?
1/ Please check whether the documentation captures this feature gap in the PDT region: https://docs.aws.amazon.com/govcloud-us/latest/UserGuide/govcloud-sqs.html#govcloud-sqs-diffs
2/ What is the expected timeline for when this feature will be available in AWS GovCloud (US)?
Hi,
We are trying to use SQS to process upload requests to S3, as we receive many simultaneous upload requests during peak hours.
As the application is hosted on Elastic Beanstalk with auto scaling, it manages to an extent, but we need to make the upload much faster and smoother; file sizes typically range from 50 MB to 100 MB.
Any ideas on infrastructure, or references to follow?
We came across the worker environment of Elastic Beanstalk but are not sure whether it will solve the problem.
We are also considering a serverless architecture using Lambda, SQS, and S3.
Any help really appreciated. Thanks
Hi, I'd like to know if there is a way to get ring-buffer functionality from SQS queues.
My infrastructure has multiple use cases that just require the latest data snapshot across a variety of apps. The ideal scenario is to set a maximum queue size of 1 message that all consumers have read access to, while the queue (or some other AWS piece) keeps the queue containing only the latest snapshot.
Currently, the only way I have found to support my use case is to have one queue per consumer, which will become very difficult to manage properly. Ring-buffer functionality would effectively solve this issue.
Thanks in advance!
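As far as I know, SQS has no maximum-queue-size or retain-latest setting, and every consumed message is removed for all consumers. The desired semantics look like this in-process sketch; the usual AWS substitute is a single key that producers overwrite (e.g. one item in a key-value store or one S3 object) with consumers reading on demand:

```python
# The desired "ring buffer of size 1" semantics, sketched in-process:
# writes overwrite, and every reader sees only the latest snapshot.
# SQS offers no native equivalent of this.
from collections import deque

class LatestSnapshot:
    def __init__(self):
        self._buf = deque(maxlen=1)    # capacity 1: newest write wins

    def publish(self, snapshot):
        self._buf.append(snapshot)     # silently evicts the old snapshot

    def read(self):
        return self._buf[0] if self._buf else None

s = LatestSnapshot()
for v in ("v1", "v2", "v3"):
    s.publish(v)
```

The key property, which queues deliberately don't have, is that reads are non-destructive and repeatable across any number of consumers.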
I am planning to build a long-running event processing system. I already have EC2 instances for my application and want to use the same compute to process long-running events. I plan to use SQS with a higher visibility timeout, and if processing goes beyond that limit, I will change the message's visibility while processing. Can this pattern be used for scheduled events from EventBridge? Is there a concept of a visibility timeout for EventBridge events, or do we have to send them to SQS to get the same pattern? I am also curious about ECS as long-running compute for such events in the future (if I want to separate this compute from my existing EC2 compute): would we have to follow the same visibility-timeout changes for ECS as well?
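On the EventBridge part: EventBridge has no visibility-timeout concept, but a scheduled rule can target an SQS queue, after which the same pattern applies. The extend-while-processing idea is often written as a heartbeat; a sketch, written against any object exposing `change_message_visibility` (boto3's SQS client does), with illustrative timings:

```python
# Visibility-timeout "heartbeat": while a long task runs, periodically
# extend the message's visibility so SQS does not redeliver it. The
# 300s/240s timings are illustrative.
import threading

def with_heartbeat(sqs, queue_url, receipt_handle, task,
                   extend_secs=300, every_secs=240):
    stop = threading.Event()

    def beat():
        while not stop.wait(every_secs):
            sqs.change_message_visibility(
                QueueUrl=queue_url,
                ReceiptHandle=receipt_handle,
                VisibilityTimeout=extend_secs,
            )

    t = threading.Thread(target=beat, daemon=True)
    t.start()
    try:
        return task()            # run the long-lived work
    finally:
        stop.set()               # stop heartbeating win or lose
        t.join()
```

The same loop works unchanged on ECS: visibility is a property of the SQS message, not of the compute that processes it.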
I received a notification about the TLS 1.2 update for SQS on June 28.
Does boto 1.9b work with this, please?
(OS Python 2.4)
A typical pattern to execute a Step Function for an SQS event is to create a Lambda with an SQS event source mapping and run an Express Step Function synchronously within the Lambda. If the Step Function fails, the error can be thrown out of the Lambda, which keeps the message in SQS so it will be retried by the poller after the visibility timeout.
This process has one key disadvantage: while the Step Function runs, the Lambda also stays active, so we pay for the execution of both.
Alternatives:
If we execute the Step Function asynchronously,
* We can't use SQS + Lambda retry, since we mark the Lambda invocation as successful and the message is deleted from SQS.
* We also can't use a FIFO queue, since the message will be removed before the event is fully processed by the Step Function, and the next message in the queue with the same group ID might be picked up right away.
Question:
* Are there any alternate designs that don't require writing our own SQS poller?
* If we do need to write a poller, how can we scale it effortlessly, the way the AWS event source poller does?