Questions tagged with Amazon Simple Queue Service

I am using a webhook endpoint to push messages to SQS. The pipeline looks like this:
```
gateway -> integration request -> SQS -> integration response
```
Everything works fine, but the third-party integration now sends an occasional validation request, and I need to return a calculated HMAC-SHA256 hash in the response. I'm not sure the response template supports all the utils, but I am trying the following integration response template:
```
#set($token = $context.responseOverride.header.RequestBody)
#if(!$token || $token == '')
{"status": "ok"}
#else
#set($secretKey = "my-secret-key")
#set($hmac = $util.cryptoHmac("HmacSHA256", "$token", $secretKey))
{
  "token": "$token",
  "hmac": "$util.base64Encode($hmac)"
}
#end
```
However, `$util.cryptoHmac("HmacSHA256", "$token", $secretKey)` does not seem to work; the call returns null. Could somebody help me resolve this case?
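If the template util turns out not to be available, one fallback is to compute the digest outside the mapping template, for example in a small Lambda placed between the gateway and the queue. A minimal Node.js sketch of that calculation only; the handler shape, the `WEBHOOK_SECRET` environment variable, and the response format are assumptions, not part of the original setup:
```typescript
import { createHmac } from "node:crypto";

// Hypothetical handler: echoes the validation token together with its
// base64-encoded HMAC-SHA256 digest, keyed by a secret from the environment.
export const handler = async (event: { body?: string }) => {
  const token = event.body ?? "";
  if (token === "") {
    return { statusCode: 200, body: JSON.stringify({ status: "ok" }) };
  }

  const secretKey = process.env.WEBHOOK_SECRET ?? "my-secret-key"; // assumed env var
  const hmac = createHmac("sha256", secretKey).update(token).digest("base64");

  return { statusCode: 200, body: JSON.stringify({ token, hmac }) };
};
```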
0
answers
0
votes
15
views
asked 9 hours ago
I am currently working on a Lambda function that has to send a message to an SQS queue. The function sits inside a VPC so it can reach a peered network that it makes requests to. Whenever I try to send the message to SQS, however, code execution seems to time out consistently. I had the same issue when I was trying to send commands to DynamoDB.
```
import { SQSClient, SendMessageCommand } from "@aws-sdk/client-sqs";

const sqsClient = new SQSClient({ region: 'us-east-1' });

export const handler = async (event, context, callback) => {
  // messageParams (QueueUrl, MessageBody, ...) is built earlier and omitted here
  const response = await sqsClient.send(new SendMessageCommand(messageParams));
  console.log(response); // <----- Doesn't reach here
  return callback(null, 'OK');
};
```
IAM permissions are all correct and the security group allows all traffic (when the function is attached to the VPC). To target the timeout problem specifically, I've tried putting the function in a private subnet, in a public subnet, in no VPC at all, and replacing SDK v3 with aws-sdk v2 via a layer. None of these had any impact on the issue. I haven't used VPC endpoints yet, but I assume they shouldn't be necessary when the function is not attached to a VPC or sits in a public subnet?
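A Lambda function attached to a VPC has no route to the public SQS endpoint unless its subnet has a NAT gateway or the VPC has an SQS interface endpoint, and a hanging `send()` is the typical symptom. A minimal sketch of creating such an endpoint with the SDK, assuming placeholder VPC, subnet, and security-group IDs:
```typescript
import { EC2Client, CreateVpcEndpointCommand } from "@aws-sdk/client-ec2";

const ec2 = new EC2Client({ region: "us-east-1" });

// Interface endpoint so that the regional SQS hostname resolves to a private
// IP inside the VPC. All IDs below are placeholders, not real resources.
const response = await ec2.send(new CreateVpcEndpointCommand({
  VpcId: "vpc-0123456789abcdef0",
  ServiceName: "com.amazonaws.us-east-1.sqs",
  VpcEndpointType: "Interface",
  SubnetIds: ["subnet-0123456789abcdef0"],
  SecurityGroupIds: ["sg-0123456789abcdef0"],
  PrivateDnsEnabled: true, // keeps the default SQS hostname working from inside the VPC
}));

console.log(response.VpcEndpoint?.VpcEndpointId);
```
The endpoint's security group must allow inbound HTTPS (443) from the Lambda function's security group for the call to get through.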
1
answers
0
votes
23
views
asked 7 days ago
I have granted the following permissions for the IAM role on the dead-letter queue:
```
sqs:DeleteMessage
sqs:GetQueueAttributes
sqs:ListDeadLetterSourceQueues
sqs:PurgeQueue
sqs:ReceiveMessage
sqs:SendMessage
```
And I have granted the following permission for the IAM role on the source queue (the redrive destination):
```
sqs:SendMessage
```
However, when trying to start the DLQ redrive via the AWS console UI, it shows an error:
```
Failed to create redrive task. Error code: AccessDenied
```
Looking at the browser developer console, the SQS API POST call gets `403 Forbidden` on `Action=CreateMoveTask`. There is no `sqs:CreateMoveTask` permission to grant to the IAM role, so I am confused about which permissions need to be granted to allow a DLQ redrive.
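For comparison, recent SDK versions expose redrive programmatically through the message-move-task APIs, which take the DLQ ARN as the source; running the same redrive that way makes the required action (`sqs:StartMessageMoveTask` and friends) visible in the AccessDenied message instead of the console-internal `CreateMoveTask` name. A minimal sketch, assuming a placeholder DLQ ARN and a reasonably new `@aws-sdk/client-sqs`:
```typescript
import { SQSClient, StartMessageMoveTaskCommand } from "@aws-sdk/client-sqs";

const sqs = new SQSClient({ region: "us-east-1" });

// Start a redrive from the DLQ back to its original source queue(s).
// Omitting DestinationArn redrives to the queue(s) the messages came from.
const task = await sqs.send(new StartMessageMoveTaskCommand({
  SourceArn: "arn:aws:sqs:us-east-1:111122223333:my-dlq", // placeholder DLQ ARN
  MaxNumberOfMessagesPerSecond: 10, // optional throttle
}));

console.log(task.TaskHandle);
```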
1
answers
0
votes
45
views
asked 8 days ago
Hello, can somebody tell me the right option for the question below? A customer has an existing application running on a fleet of EC2 Spot Instances. The application processes millions of messages from SQS and generates CSV files to be stored in S3. The customer is considering moving to ECS Fargate. What changes will be needed?
a. Continue to use Spot Instances and bundle the app as a Docker image.
b. Moving to containers is not recommended because containers will add overhead.
c. Consider using the self-launch mode to migrate the existing EC2 instances to the Fargate launch mode.
d. Use the DMS service to port the application.
1
answers
0
votes
21
views
Monica
asked 8 days ago
I have a Lambda function that processes messages from SQS. The input queue has a redrive policy that moves messages to a DLQ if the Lambda fails to process them after repeated attempts. This arrangement works and, if there are messages in the DLQ, I can send them back to the source queue using the AWS console "Start DLQ redrive" button with the "Redrive to source queue(s)" option. For some messages, however, the Lambda function decides to push them directly to the DLQ. When I try a DLQ redrive for those messages using the "Redrive to source queue(s)" option, it fails with "Failed: CouldNotDetermineMessageSource". Is there any way to avoid this error, or does the "Redrive to source queue(s)" option only work for messages put in the DLQ by the AWS runtime?
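Messages the function sends to the DLQ itself carry no record of an originating queue, so one workaround is to move them back explicitly rather than relying on source-queue detection. A minimal sketch with the v3 SDK, assuming placeholder queue URLs and a standard (non-FIFO) source queue:
```typescript
import {
  SQSClient,
  ReceiveMessageCommand,
  SendMessageCommand,
  DeleteMessageCommand,
} from "@aws-sdk/client-sqs";

const sqs = new SQSClient({ region: "us-east-1" });
const dlqUrl = "https://sqs.us-east-1.amazonaws.com/111122223333/my-dlq";      // placeholder
const sourceUrl = "https://sqs.us-east-1.amazonaws.com/111122223333/my-queue"; // placeholder

// Drain the DLQ: copy each message to the source queue, then delete it
// from the DLQ only after the copy has succeeded.
for (;;) {
  const { Messages } = await sqs.send(new ReceiveMessageCommand({
    QueueUrl: dlqUrl,
    MaxNumberOfMessages: 10,
    WaitTimeSeconds: 5,
  }));
  if (!Messages || Messages.length === 0) break;

  for (const msg of Messages) {
    await sqs.send(new SendMessageCommand({ QueueUrl: sourceUrl, MessageBody: msg.Body! }));
    await sqs.send(new DeleteMessageCommand({ QueueUrl: dlqUrl, ReceiptHandle: msg.ReceiptHandle! }));
  }
}
```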
1
answers
0
votes
32
views
asked 8 days ago
I get the following error when I add 3 lines to my policy: `Value of property PolicyDocument must be an object` (the lines marked with #JustAdded). If I remove those 3 lines it works fine. What's wrong?
```
Policies:
  - PolicyName: !Sub 'X-${AWS::Region}'
    PolicyDocument:
      - Effect: Allow
        Action: 'ssm:GetParametersByPath'
        Resource: !Sub 'arn:aws:ssm:${AWS::Region}:${AWS::AccountId}:parameter/X'
      - Effect: Allow
        Action: 'ssm:GetParameters'
        Resource: !Sub 'arn:aws:ssm:${AWS::Region}:${AWS::AccountId}:parameter/X/*'
      - Effect: Allow
        Action: 's3:*'
        Resource: '*'
      - Effect: Allow
        Action:
          - secretsmanager:GetSecretValue
        Resource:
          - !Sub 'arn:aws:secretsmanager:${AWS::Region}:${AWS::AccountId}:secret:C*'
      - Effect: Allow
        Action:
          - 'ec2:DescribeNetworkInterfaces'
          - 'ec2:CreateNetworkInterface'
          - 'ec2:DeleteNetworkInterface'
          - 'ec2:DescribeInstances'
          - 'ec2:AttachNetworkInterface'
        Resource: '*'
      - Effect: Allow
        Action: 'kms:Decrypt'
        Resource: '*'
      - Effect: Allow                                           #JustAdded
        Action: sqs:*                                           #JustAdded
        Resource: 'arn:aws:sqs:us-east-1:000000000000:Q.fifo'   #JustAdded
RoleName: !Sub 'X-${AWS::Region}'
```
2
answers
0
votes
26
views
asked 10 days ago
Hello AWS team, I have an application that sends data to SQS, which triggers a Lambda function. I noticed alarms indicating that messages are going to my SQS dead-letter queue, and I am trying to determine what is causing the application to fail based on the dead-letter queue. I'm wondering whether there is somewhere I can see the detailed messages and their bodies to determine why the messages are going to the dead-letter queue. I'm not sure if it's a client error or a server error. I also have an alarm for Lambda, so I believe it has something to do with the server side, but I am not sure how to troubleshoot or what steps to take to determine the cause in AWS.
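One way to inspect what actually landed in the DLQ is to poll it without deleting anything, so the message bodies and attributes can be read and the messages become visible again once their visibility timeout expires. A minimal sketch with the v3 SDK, assuming a placeholder DLQ URL:
```typescript
import { SQSClient, ReceiveMessageCommand } from "@aws-sdk/client-sqs";

const sqs = new SQSClient({ region: "us-east-1" });
const dlqUrl = "https://sqs.us-east-1.amazonaws.com/111122223333/my-dlq"; // placeholder

// Read up to 10 messages, including system attributes such as the receive
// count, but do NOT delete them -- they return to the queue after the
// visibility timeout, so this is a non-destructive peek.
const { Messages } = await sqs.send(new ReceiveMessageCommand({
  QueueUrl: dlqUrl,
  MaxNumberOfMessages: 10,
  WaitTimeSeconds: 5,
  AttributeNames: ["All"],
  MessageAttributeNames: ["All"],
}));

for (const msg of Messages ?? []) {
  console.log(msg.MessageId, msg.Attributes?.ApproximateReceiveCount, msg.Body);
}
```
The Lambda function's own CloudWatch Logs are the complementary place to look, since the error that caused the retries is usually logged there alongside the failing payload.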
1
answers
0
votes
33
views
Mjr
asked 17 days ago
I have a bunch of SQS services and S3 backup services that all use a single IP address (NAT). As of this morning, I've lost complete connectivity to any and all AWS services. No TCP connection proceeds beyond the first SYN packet. Has anyone ever heard of AWS perm-banning an IP address? I've got a bunch of business-critical transactions stuck in SQS queues because of this :(
```
sudo tcpdump -i eth0 host 18.133.45.123 -n & curl -v https://eu-west-2.queue.amazonaws.com/
*   Trying 18.133.45.123...
* TCP_NODELAY set
16:20:47.610811 IP 197.248.216.154.33256 > 18.133.45.123.443: Flags [S], seq 2128825396, win 29200, options [mss 1460,sackOK,TS val 480045 ecr 0,nop,wscale 7], length 0
16:20:48.611248 IP 197.248.216.154.33256 > 18.133.45.123.443: Flags [S], seq 2128825396, win 29200, options [mss 1460,sackOK,TS val 480296 ecr 0,nop,wscale 7], length 0
16:20:50.627280 IP 197.248.216.154.33256 > 18.133.45.123.443: Flags [S], seq 2128825396, win 29200, options [mss 1460,sackOK,TS val 480800 ecr 0,nop,wscale 7], length 0
16:20:54.851253 IP 197.248.216.154.33256 > 18.133.45.123.443: Flags [S], seq 2128825396, win 29200, options [mss 1460,sackOK,TS val 481856 ecr 0,nop,wscale 7], length 0
16:21:01.934970 IP 197.248.216.154.42816 > 18.133.45.123.443: Flags [S], seq 3361955245, win 29200, options [mss 1460,sackOK,TS val 158275010 ecr 0,nop,wscale 7], length 0
16:21:02.960332 IP 197.248.216.154.42816 > 18.133.45.123.443: Flags [S], seq 3361955245, win 29200, options [mss 1460,sackOK,TS val 158275264 ecr 0,nop,wscale 7], length 0
16:21:03.043229 IP 197.248.216.154.33256 > 18.133.45.123.443: Flags [S], seq 2128825396, win 29200, options [mss 1460,sackOK,TS val 483904 ecr 0,nop,wscale 7], length 0
16:21:04.965428 IP 197.248.216.154.42816 > 18.133.45.123.443: Flags [S], seq 3361955245, win 29200, options [mss 1460,sackOK,TS val 158275768 ecr 0,nop,wscale 7], length 0
16:21:07.625705 IP 197.248.216.154.52394 > 18.133.45.123.443: Flags [S], seq 3840675465, win 29200, options [mss 1460,sackOK,TS val 3898989 ecr 0,nop,wscale 7], length 0
16:21:08.629690 IP 197.248.216.154.52394 > 18.133.45.123.443: Flags [S], seq 3840675465, win 29200, options [mss 1460,sackOK,TS val 3899240 ecr 0,nop,wscale 7], length 0
16:21:09.093703 IP 197.248.216.154.42816 > 18.133.45.123.443: Flags [S], seq 3361955245, win 29200, options [mss 1460,sackOK,TS val 158276800 ecr 0,nop,wscale 7], length 0
16:21:10.645819 IP 197.248.216.154.52394 > 18.133.45.123.443: Flags [S], seq 3840675465, win 29200, options [mss 1460,sackOK,TS val 3899744 ecr 0,nop,wscale 7], length 0
```
The console is not accessible either:
```
sudo tcpdump -i eth0 host 99.83.252.222 -n & curl -v http://console.aws.amazon.com/
*   Trying 99.83.252.222...
* TCP_NODELAY set
16:21:46.099953 IP 197.248.216.154.36516 > 99.83.252.222.80: Flags [S], seq 773244091, win 29200, options [mss 1460,sackOK,TS val 494668 ecr 0,nop,wscale 7], length 0
16:21:47.107267 IP 197.248.216.154.36516 > 99.83.252.222.80: Flags [S], seq 773244091, win 29200, options [mss 1460,sackOK,TS val 494920 ecr 0,nop,wscale 7], length 0
16:21:49.123236 IP 197.248.216.154.36516 > 99.83.252.222.80: Flags [S], seq 773244091, win 29200, options [mss 1460,sackOK,TS val 495424 ecr 0,nop,wscale 7], length 0
16:21:53.219258 IP 197.248.216.154.36516 > 99.83.252.222.80: Flags [S], seq 773244091, win 29200, options [mss 1460,sackOK,TS val 496448 ecr 0,nop,wscale 7], length 0
```
1
answers
0
votes
41
views
asked 17 days ago
We need to understand how the SNS message body filter functionality works. Our use case is that our project's SQS consumer will provide an SNS filter policy, after which we need to subscribe to SNS with that filter policy in order to test it. Is there any logic available in the AWS SDK to test an SNS filter policy without subscribing to SNS? Basically, we are asking whether there is any functionality to validate an SNS filter policy before subscribing, similar to regular-expression validation.
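I'm not aware of an SDK call that evaluates a filter policy locally, but for simple exact-match policies a rough approximation can be hand-rolled, which at least lets a candidate policy be sanity-checked against sample message bodies before any subscription exists. A minimal sketch covering only exact value matching, not the full filter-policy grammar (prefix, anything-but, numeric ranges, exists, and so on are ignored):
```typescript
// Tiny subset of the SNS filter-policy grammar: every key in the policy must
// match at least one of its listed values in the message body.
type FilterPolicy = Record<string, (string | number)[]>;

function matchesFilterPolicy(policy: FilterPolicy, body: Record<string, unknown>): boolean {
  return Object.entries(policy).every(([key, allowed]) => {
    const value = body[key];
    return allowed.some((candidate) => candidate === value);
  });
}

// Example: check sample payloads against a candidate policy locally.
const policy: FilterPolicy = { eventType: ["order_created", "order_updated"] };
console.log(matchesFilterPolicy(policy, { eventType: "order_created" })); // true
console.log(matchesFilterPolicy(policy, { eventType: "order_deleted" })); // false
```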
1
answers
0
votes
51
views
asked 19 days ago
Hi, I'm using [SQS-Consumer](https://github.com/bbc/sqs-consumer) to receive messages from an AWS SQS queue (the app runs on an EC2 instance under `pm2`). Once a message is received, I grab the relevant data and call an external function, `messageHandler`, which does some work on the message (scrapes the URL from the message using Puppeteer) and then updates the DB accordingly:
```
const app = Consumer.create({
  // ...
  handleMessage: async (message) => {
    const { id, protocol, hostname, pathname } = JSON.parse(message.Body) as QueueMessageBody;
    const res = await puppeteerSinglePageCrawl(id, protocol, hostname, pathname, logStreamName);
    return Promise.resolve();
  },
});
```
My problem is that when a message is read from the queue, I sometimes get timeout errors when opening the page with Puppeteer:
```
await this.page.goto(`${protocol}//${hostname}${pathname}`, {
  waitUntil: 'networkidle2',
  referer: 'https://www.google.com/',
  timeout: 20000,
});
```
However, when I connect to the EC2 instance via SSH and call the same function manually with `ts-node messageHandler.ts`, I don't get the timeout error. My first thought was that the issue might be with `waitUntil`, but when the function is called manually the page opens correctly. My second thought was that the network on the EC2 instance might be overloaded while the consumer is running, but I've run the manual calls while the consumer was running separately and still got different (successful) results from the manual execution. What might be the reason for this?
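One thing worth ruling out is the queue redelivering a message while a slow Puppeteer run is still in progress, which makes several heavy crawls overlap on the same instance. A sketch of the consumer options that control this, assuming the `visibilityTimeout`, `heartbeatInterval`, and `handleMessageTimeout` option names from the sqs-consumer README apply to the installed version (worth verifying):
```typescript
import { Consumer } from "sqs-consumer";

// Option names below are assumptions taken from the library's README;
// check them against the installed sqs-consumer version.
const app = Consumer.create({
  queueUrl: "https://sqs.us-east-1.amazonaws.com/111122223333/crawl-queue", // placeholder
  visibilityTimeout: 120,       // keep the message hidden while a crawl runs
  heartbeatInterval: 60,        // extend visibility for long-running handlers
  handleMessageTimeout: 90_000, // fail the handler instead of hanging forever (ms)
  handleMessage: async (message) => {
    // crawl logic goes here; throwing leaves the message in the queue for retry
    console.log(message.Body);
  },
});

app.on("error", (err) => console.error(err));
app.on("processing_error", (err) => console.error(err));
app.start();
```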
0
answers
0
votes
22
views
asked 24 days ago
Hi, I have an [sqs-consumer](https://github.com/bbc/sqs-consumer) process running on an EC2 machine (t2.medium). Only one instance of this process runs at a time, yet when I open my SQS dashboard I see more than 30 messages in flight. It's a FIFO queue with deduplication based on MessageGroupId. I'm not using any of the batching the library offers, just a simple consumer handling one message at a time as the library tutorial shows. What am I missing here? The reason for concern is that I'm getting a LOT of timeout errors when processing messages (I'm using Puppeteer to open websites and check for links on them), and I'm trying to narrow down the cause. Overloading the t2 machine's network bandwidth might be one explanation (I think).
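For context, "in flight" counts every message that has been received but not yet deleted or returned by a visibility timeout, so handlers that fail (or never delete the message) inflate the number even with a single consumer. A small sketch for watching the relevant queue attributes with the v3 SDK, assuming a placeholder queue URL:
```typescript
import { SQSClient, GetQueueAttributesCommand } from "@aws-sdk/client-sqs";

const sqs = new SQSClient({ region: "us-east-1" });

// ApproximateNumberOfMessagesNotVisible is the console's "in flight" figure:
// messages that were received but not yet deleted or timed out.
const { Attributes } = await sqs.send(new GetQueueAttributesCommand({
  QueueUrl: "https://sqs.us-east-1.amazonaws.com/111122223333/crawl-queue.fifo", // placeholder
  AttributeNames: [
    "ApproximateNumberOfMessages",
    "ApproximateNumberOfMessagesNotVisible",
    "ApproximateNumberOfMessagesDelayed",
  ],
}));

console.log(Attributes);
```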
1
answers
0
votes
44
views
asked 25 days ago
I have a Django app that uses Celery Beat to scan the DB and trigger tasks accordingly. I want to deploy this to Elastic Beanstalk, but simply applying `leader_only` to my Celery Beat invocation won't be enough: we need a way to ensure the beat instance is not killed during autoscaling events. So far I've found the following options online:
1. Run a separate EC2 instance that runs Celery Beat. Not ideal, but I could make this a cheap instance since the functionality required is so simple and lightweight. I assume that if I point it at an SQS queue and have my workers pull from that queue, everything will work fine. However, it's not clear to me how to have this instance discover the tasks from my Django app, short of deploying the app again to the second instance and having that beat instance interact with my queue.
2. Use some sort of leader-election Lambda, as described here (https://ajbrown.org/2017/02/10/leader-election-with-aws-auto-scaling-groups.html), for my EB autoscaling group. This seems over-complicated; to implement it, I'm guessing the idea is to have a script in my container commands that checks whether the instance is the leader (as assigned by the leader tag in the tutorial) and only executes Celery Beat if so.
3. Ditch SQS and use an ElastiCache Redis instance as my broker, then install the RedBeat scheduler (https://github.com/sibson/redbeat) to prevent multiple instances of a beat service from running. I assume this wouldn't affect the tasks it spawns, correct? My beat tasks spawn several tasks of the same 'type' with different arguments (a sanity check on this would be appreciated).
My question is: can anyone help me assess the pros and cons of these options in terms of cost and functionality? Is there a better, more seamless way to ensure that Celery Beat runs on exactly one instance while my Celery workers scale with my autoscaling infrastructure? I'm an AWS newbie, so I would greatly appreciate any help!
0
answers
0
votes
13
views
asked a month ago