Questions tagged with Amazon Simple Queue Service


Browse through the questions and answers listed below.

I have built a Python web app that accepts requests through API Gateway and funnels them through Lambda functions and SQS. The app also uses Redis and is deployed on an EC2 instance connected to the Lambda functions via a Load Balancer and Target Group. Currently, the app works exactly as expected. However, when I deploy the same app onto a different EC2 instance with the same specs and connect it to the same Lambda functions through its own Load Balancer and Target Group, it fails to work properly even though it receives the correct requests. The two EC2 instances use the same Redis server, although with different keys. I have debugged every line of my code and still can't find the bug. I am almost sure I am doing something wrong on the AWS side. Could anyone help with where things might have gone wrong?
1 answer · 0 votes · 43 views · asked 5 months ago
I have a use case where I need to read messages from an SQS queue as and when they arrive. Note: I can't use @SqsListener from Spring Cloud AWS; I'm trying to get this done with the AWS Java SDK only.
1 answer · 0 votes · 42 views · asked 5 months ago
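The question above asks for the AWS Java SDK specifically; the underlying pattern in any SDK is a long-polling ReceiveMessage loop (WaitTimeSeconds up to 20), which picks messages up as they arrive without busy-waiting. A minimal sketch, shown here with the JavaScript v3 SDK used elsewhere on this page (the queue URL is a placeholder; the Java SDK's ReceiveMessageRequest exposes the same waitTimeSeconds / maxNumberOfMessages options):

```javascript
// Minimal long-polling consumer sketch. JavaScript v3 SDK shown for illustration;
// the Java SDK offers the same parameters on ReceiveMessageRequest.
// The queue URL below is a placeholder.
import {
  SQSClient,
  ReceiveMessageCommand,
  DeleteMessageCommand,
} from "@aws-sdk/client-sqs";

const sqs = new SQSClient({ region: "us-east-1" });
const queueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"; // placeholder

async function poll() {
  while (true) {
    // Long polling: the call waits up to 20 seconds for a message to arrive,
    // so messages are picked up as they arrive without busy-waiting.
    const { Messages = [] } = await sqs.send(
      new ReceiveMessageCommand({
        QueueUrl: queueUrl,
        MaxNumberOfMessages: 10,
        WaitTimeSeconds: 20,
      })
    );

    for (const message of Messages) {
      console.log("received:", message.Body);
      // Delete only after successful processing so failures become visible again.
      await sqs.send(
        new DeleteMessageCommand({
          QueueUrl: queueUrl,
          ReceiptHandle: message.ReceiptHandle,
        })
      );
    }
  }
}

poll().catch(console.error);
```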
Hello, I might be misunderstanding how this is supposed to work, but I have an EventBridge rule with a Lambda target configured, and everything works perfectly. I also set a DLQ on the EventBridge target. My understanding is that when an event fails to be delivered, after EventBridge has exhausted its own retry policy (configured to a 1-hour maximum event age and 0 retry attempts), it will push the event to the DLQ I've configured. However, when I turn off the Lambda configured as the target (by setting its concurrency throttle to 0), I'm not seeing any events arrive in my DLQ, even after 24 hours. Does a throttled Lambda not count as a failed event send request? How can I debug the issue further? Kind regards
1 answer · 0 votes · 48 views · asked 5 months ago
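A few things worth checking for the question above: the retry policy and DLQ must be attached to the rule's target, and the DLQ's queue policy must allow events.amazonaws.com to call SendMessage on it, otherwise events will not be dead-lettered. A minimal sketch of the target configuration with the JavaScript v3 SDK (rule name and ARNs are placeholders):

```javascript
// Sketch of attaching a retry policy and a DLQ to an EventBridge target
// (rule name and ARNs are placeholders). If the DLQ's queue policy does not
// allow events.amazonaws.com to SendMessage to it, nothing will arrive there.
import {
  EventBridgeClient,
  PutTargetsCommand,
} from "@aws-sdk/client-eventbridge";

const events = new EventBridgeClient({ region: "us-east-1" });

await events.send(
  new PutTargetsCommand({
    Rule: "my-rule", // placeholder
    Targets: [
      {
        Id: "lambda-target",
        Arn: "arn:aws:lambda:us-east-1:123456789012:function:my-fn", // placeholder
        RetryPolicy: {
          MaximumRetryAttempts: 0,
          MaximumEventAgeInSeconds: 3600, // 1 hour, matching the question
        },
        DeadLetterConfig: {
          Arn: "arn:aws:sqs:us-east-1:123456789012:my-dlq", // placeholder
        },
      },
    ],
  })
);
```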
I'd like to publish a message to my SNS topic. I'm using the JS "@aws-sdk/client-sns" package with `snsClient.send(new PublishCommand(params))`. Subscribed to the topic is an SQS queue with a filter subscription policy. The issue is with the message attributes: I want to send a JSON object as a value, as in this example: `MessageAttributes: { filter: { DataType: 'String', StringValue: "filter" }, object: { DataType: 'String', StringValue: JSON.stringify(object) } }`. The message never gets to the queue and is counted under "NumberOfNotificationsFilteredOut" or "NumberOfNotificationsFilteredOut-InvalidAttributes". However, if the queue does not have a filter subscription policy, or if I set the object attribute to `DataType: 'String.Array'`, the message arrives without any problem. Is this a bug? How should I set my message attribute if I want to send an object? Thanks.
1 answer · 0 votes · 58 views · dnt994 · asked 5 months ago
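For the message-attribute question above, one commonly used workaround (a sketch, not a confirmed explanation of the filtering behaviour) is to keep the attribute that the subscription filter policy matches on as a plain string and carry the JSON object in the message body rather than in an attribute; the topic ARN and payload below are placeholders:

```javascript
// Sketch: publish with a simple string attribute for filtering and carry the
// JSON object in the message body (topic ARN and payload are placeholders).
import { SNSClient, PublishCommand } from "@aws-sdk/client-sns";

const sns = new SNSClient({ region: "us-east-1" });
const payload = { id: 42, kind: "example" }; // placeholder object

await sns.send(
  new PublishCommand({
    TopicArn: "arn:aws:sns:us-east-1:123456789012:my-topic", // placeholder
    Message: JSON.stringify(payload), // the object travels in the body
    MessageAttributes: {
      // Keep the attribute the subscription filter policy matches on as a plain string.
      filter: { DataType: "String", StringValue: "filter" },
    },
  })
);
```

The subscription filter policy then only needs to reference the plain attribute, e.g. `{ "filter": ["filter"] }`.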
Hello: I'm trying to add another S3 bucket to an existing SQS queue. Try as I might, I can't find a way to add it to the existing policy. Example: { "Sid": "example-statement-ID", "Effect": "Allow", "Principal": { "Service": "s3.amazonaws.com" }, "Action": "SQS:SendMessage", "Resource": "arn:aws:sqs:us-east-1:0645xxxxxxxx:HammerSQS1", "Condition": { "StringEquals": { "aws:SourceAccount": "064xxxxxxxx" }, "ArnLike": ": {[ "aws:SourceArn": "arn:aws:s3:*:*:contentdisarming-bucket-one" ** "aws:SourceArn": "arn:aws:s3:*:*:contentdisarming-bucket-two"] ** { } } } ] } When I try to add the 2nd bucket (contentdisarming-bucket-two) I get an "Invalid JSON" error. What am I doing wrong here? It is possible to add more than one S3 SourceArn to an SQS queue, correct? Thanks in advance.
2 answers · 0 votes · 265 views · asked 5 months ago
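To the question above: yes, an SQS queue policy can allow several S3 buckets; the usual way is to make `aws:SourceArn` a JSON array inside the `ArnLike` condition rather than repeating the key. A sketch of the statement with the two buckets from the question (account IDs masked as in the original):

```json
{
  "Sid": "example-statement-ID",
  "Effect": "Allow",
  "Principal": { "Service": "s3.amazonaws.com" },
  "Action": "SQS:SendMessage",
  "Resource": "arn:aws:sqs:us-east-1:0645xxxxxxxx:HammerSQS1",
  "Condition": {
    "StringEquals": { "aws:SourceAccount": "064xxxxxxxx" },
    "ArnLike": {
      "aws:SourceArn": [
        "arn:aws:s3:*:*:contentdisarming-bucket-one",
        "arn:aws:s3:*:*:contentdisarming-bucket-two"
      ]
    }
  }
}
```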
When a Lambda function receives an SQS message and fails to process it within the visibility timeout of the queue, which of the following happens?

- The Lambda function retries processing the message **after the visibility timeout ends.**
- The Lambda function retries processing the message **as soon as possible, even before the visibility timeout ends.**

NOTE: Suppose that the SQS queue is a FIFO queue.
2 answers · 0 votes · 65 views · asked 5 months ago
For some reason, from time to time during the last 2 weeks while I was developing a serverless solution, the Lambda gets stuck, with both aws-sdk v2 and aws-sdk v3. It runs the code up to the point where it needs to send an SQS message or upload a JSON file to S3, and then hangs. When I used aws-sdk v2, I added a logger to it for debugging; during a normal invocation it showed that the message was sent (and some results), but when it hangs it doesn't return anything. I tried changing the Lambda timeout to 6 sec, 10 sec, 60 sec; it just runs out of time without any errors. Once I redeploy it, it can start working again (but in some cases it doesn't). I also tried creating the client outside of the handler and inside the handler. When I console.log the client, I can see that it is an object. Syntax: in aws-sdk v2 I used

```
const sqs = new AWS.SQS({apiVersion: '2012-11-05'});
sqs.sendMessage(params).promise
```

in aws-sdk v3 I use

```
sqsClient.send(
  new SendMessageCommand({
    ...param
  })
);
```

Can it be related to a bad ENI or something? Does anyone have any ideas about what could be the reason for this behaviour?

* runtime: nodejs16.x
* architecture: arm64
0 answers · 0 votes · 28 views · Arcuman · asked 6 months ago
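Two things worth ruling out for the hang described above before suspecting a bad ENI: first, if the function is attached to a VPC without a NAT gateway or VPC endpoints for SQS/S3, calls to those services hang until the Lambda timeout with no error, which matches the symptom; second, make sure the SDK call is actually awaited (and, in v2, that `.promise()` is invoked with parentheses). A minimal sketch of the v3 pattern, with a placeholder queue URL:

```javascript
// Sketch (aws-sdk v3, queue URL is a placeholder): create the client once,
// await the send, and log around it so a hang is visible in CloudWatch.
import { SQSClient, SendMessageCommand } from "@aws-sdk/client-sqs";

const sqsClient = new SQSClient({ region: "us-east-1" }); // created once, outside the handler

export const handler = async (event) => {
  const params = {
    QueueUrl: "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue", // placeholder
    MessageBody: JSON.stringify(event),
  };

  console.log("sending SQS message...");
  // Without the await, the handler can return (or the runtime can freeze the
  // sandbox) before the request completes, which looks like a silent hang.
  const result = await sqsClient.send(new SendMessageCommand(params));
  console.log("sent", result.MessageId);

  return { statusCode: 200 };
};
```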
Is it possible to give messages high priority with an SQS Standard Queue or FIFO Queue?
1 answer · 0 votes · 69 views · asked 6 months ago
Hi, I am building a service which needs a queue for sending messages from one service to another. For this I need a FIFO queue, but a FIFO queue has an API call limit of 3000/sec, and with batches of 10 messages I can get a maximum of 30,000 messages per second. But my application needs a higher API call limit, as I need around 50k to 60k messages per second. Is there any way I can achieve this limit with SQS FIFO? Or is there any other solution for this problem?
2 answers · 0 votes · 51 views · asked 6 months ago
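For the throughput question above: SQS FIFO queues have a high-throughput mode whose limits scale with the number of distinct message group IDs, so whether 50k to 60k messages per second is reachable depends on the region's current quota and on spreading messages across many groups (check the current SQS quotas rather than relying on the 3000/30000 figures). A sketch of batched sends with a per-entity group ID; the queue URL and the `entityId`/`uniqueId` fields are placeholder assumptions:

```javascript
// Sketch: batch sends to a FIFO queue with the message group id spread across
// many groups, which is what high-throughput FIFO mode scales on.
// Queue URL and the partitioning fields are placeholders/assumptions.
import { SQSClient, SendMessageBatchCommand } from "@aws-sdk/client-sqs";

const sqs = new SQSClient({ region: "us-east-1" });
const queueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue.fifo"; // placeholder

async function sendBatch(messages) {
  // Up to 10 messages per SendMessageBatch call.
  await sqs.send(
    new SendMessageBatchCommand({
      QueueUrl: queueUrl,
      Entries: messages.map((msg, i) => ({
        Id: String(i),
        MessageBody: JSON.stringify(msg),
        // Ordering is only guaranteed within a group, so pick a group id that is
        // as fine-grained as the ordering requirement allows (e.g. per entity).
        MessageGroupId: msg.entityId,          // placeholder partitioning key
        MessageDeduplicationId: msg.uniqueId,  // or enable content-based deduplication
      })),
    })
  );
}
```

If strict ordering is not actually needed, a standard queue avoids the per-queue FIFO throughput limits entirely.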
I am currently using Amplify to set up my backend because AppSync is a key part of my stack. However, I also use SQS to publish certain messages onto a queue. This is easily done using Lambda's in-console code editor, but now that I'm running the Lambda functions locally, I am unable to send messages to the queue and get the following error: `InvalidParameterValue: The request has a 'X-Amzn-Trace-Id' HTTP header which is reserved for AWS X-Ray trace header and has an invalid value 'amplify-mock-x-amzn-trace-id'`. This is my code to send messages to the queue: ``response = await sqs.sendMessageBatch(slackParams).promise().catch(async (err) => { console.log(`response from sqs: ${err}`); });``. Once I do `amplify push`, the code works fine in the cloud Lambda, but the issue comes only when I do `amplify mock api` (during local testing, I am unable to add to the queue). Is there any way I can mock my SQS setup locally as well? Any help is appreciated!
1 answer · 0 votes · 66 views · asked 6 months ago
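For the `amplify mock` error above, a heavily hedged workaround sketch: the SDK appears to pick the trace header up from the `_X_AMZN_TRACE_ID` environment variable, which the mock environment fills with the invalid placeholder shown in the error, so clearing that variable before constructing the client is one thing to try during local testing. This is an assumption about where the header comes from, not a confirmed Amplify fix:

```javascript
// Hedged workaround sketch: the aws-sdk appears to propagate the X-Ray trace header
// from the _X_AMZN_TRACE_ID environment variable; under `amplify mock` that variable
// holds the invalid placeholder seen in the error, so clearing it before constructing
// the client is one way to test locally. Verify this assumption in your setup.
const AWS = require("aws-sdk");

if (process.env._X_AMZN_TRACE_ID && process.env._X_AMZN_TRACE_ID.includes("amplify-mock")) {
  delete process.env._X_AMZN_TRACE_ID;
}

const sqs = new AWS.SQS({ apiVersion: "2012-11-05" });
```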
Hi, is there a way to set up an email notification when an inactivity timeout is exceeded while processing a message from SQS for an Elastic Beanstalk worker application? I think I have to use SNS along with a configuration file editing the `aws:elasticbeanstalk:sns:topics` namespace, but I wasn't sure exactly how to set up the configuration file or what the topic ARN and name are for the inactivity timeout.
0 answers · 0 votes · 22 views · asked 6 months ago
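For the Elastic Beanstalk question above, a starting-point sketch (not a confirmed answer to whether inactivity-timeout events are covered): the `aws:elasticbeanstalk:sns:topics` namespace can be set from an `.ebextensions` config file so that environment notifications go to an email endpoint; the file name and address below are placeholders:

```yaml
# .ebextensions/notifications.config  (file name and email address are placeholders)
# Sets the environment's notification endpoint via the aws:elasticbeanstalk:sns:topics
# namespace; Elastic Beanstalk then emails environment events to that address.
option_settings:
  aws:elasticbeanstalk:sns:topics:
    Notification Endpoint: ops-team@example.com
    Notification Protocol: email
```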
Hi, I have had an API Gateway running as a webhook receiver -> SQS for a little over a month with no issues. As of 9/26, two days ago, I started receiving an error on all requests to the only endpoint for this API:

```
{
  "Error": {
    "Code": "MalformedQueryString",
    "Message": "Keys may not contain ",
    "Type": "Sender"
  },
  "RequestId": "XXXXX"
}
```

This is the endpoint's definition in the template:

```
responses:
  default:
    statusCode: "200"
requestParameters:
  integration.request.header.Content-Type: "'application/x-www-form-urlencoded'"
requestTemplates:
  application/json: "Action=SendMessage&MessageBody=$input.body"
passthroughBehavior: "never"
type: "aws"
```

The issue I am having is that the message body to the API is always truncated, with no discernible difference in the first 1024kb of the data between historical messages and the new messages that are failing. I can only assume that after the first 1024kb of data there must be a difference in the message being received? How do I troubleshoot this further?
1 answer · 0 votes · 50 views · asked 6 months ago
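A likely angle on the `MalformedQueryString` error above: `MessageBody=$input.body` places the raw request body into a form-encoded string, so any `&`, `=`, or bracket characters inside the payload get parsed as additional query-string keys. The usual fix is to URL-encode the body in the mapping template; a sketch of the changed line (the rest of the integration stays as in the question):

```yaml
# Only the mapping template changes; $util.urlEncode() escapes characters that
# would otherwise be parsed as extra query-string keys.
requestTemplates:
  application/json: "Action=SendMessage&MessageBody=$util.urlEncode($input.body)"
```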