Questions tagged with Amazon Simple Queue Service


I get the following error when I add three lines to my policy (the lines marked `#JustAdded`): `Value of property PolicyDocument must be an object`. If I remove those three lines it works fine. What's wrong?

```
Policies:
  - PolicyName: !Sub 'X-${AWS::Region}'
    PolicyDocument:
      - Effect: Allow
        Action: 'ssm:GetParametersByPath'
        Resource: !Sub 'arn:aws:ssm:${AWS::Region}:${AWS::AccountId}:parameter/X'
      - Effect: Allow
        Action: 'ssm:GetParameters'
        Resource: !Sub 'arn:aws:ssm:${AWS::Region}:${AWS::AccountId}:parameter/X/*'
      - Effect: Allow
        Action: 's3:*'
        Resource: '*'
      - Effect: Allow
        Action:
          - secretsmanager:GetSecretValue
        Resource:
          - !Sub 'arn:aws:secretsmanager:${AWS::Region}:${AWS::AccountId}:secret:C*'
      - Effect: Allow
        Action:
          - 'ec2:DescribeNetworkInterfaces'
          - 'ec2:CreateNetworkInterface'
          - 'ec2:DeleteNetworkInterface'
          - 'ec2:DescribeInstances'
          - 'ec2:AttachNetworkInterface'
        Resource: '*'
      - Effect: Allow
        Action: 'kms:Decrypt'
        Resource: '*'
      - Effect: Allow                                          #JustAdded
        Action: 'sqs:*'                                        #JustAdded
        Resource: 'arn:aws:sqs:us-east-1:000000000000:Q.fifo'  #JustAdded
RoleName: !Sub 'X-${AWS::Region}'
```
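For reference, an IAM policy document is normally a mapping with `Version` and `Statement` keys rather than a bare list of statements; whether or not that is the root cause here, a minimal sketch of that shape, reusing the question's added SQS statement:

```
Policies:
  - PolicyName: !Sub 'X-${AWS::Region}'
    PolicyDocument:
      Version: '2012-10-17'  # PolicyDocument itself is an object, not a list
      Statement:
        - Effect: Allow
          Action: 'sqs:*'
          Resource: 'arn:aws:sqs:us-east-1:000000000000:Q.fifo'
```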
1 answer · 0 votes · 19 views · asked a day ago
Hello AWS team, I have an application that sends data to SQS, which triggers a Lambda function. I noticed alarms indicating I have messages going to my SQS dead-letter queue, and I am trying to determine what's causing the application to fail based on the dead-letter queue. I'm wondering if there's a location where I can see the detailed messages and their bodies to determine why the messages are going to the dead-letter queue. I'm not sure if it's a client error or a server error. I also have an alarm for Lambda alerts, so I believe it has something to do with the server side, but I am not sure how to troubleshoot this or what steps to take to determine the cause in AWS.
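One way to see the message bodies is to read them directly from the dead-letter queue. A minimal sketch with the AWS CLI (the queue URL is a placeholder; a visibility timeout of 0 returns the messages without hiding them from other consumers):

```
aws sqs receive-message \
  --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/my-dlq \
  --max-number-of-messages 10 \
  --visibility-timeout 0 \
  --attribute-names All \
  --message-attribute-names All
```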
1 answer · 0 votes · 29 views · Mjr · asked 8 days ago
I have a bunch of SQS services and S3 backup services that use a single IP address (NAT). As of this morning, I've lost all connectivity to any and all AWS services. No TCP connection proceeds beyond the first SYN packet. Has anyone ever heard of AWS perm-banning an IP address? I've got a bunch of business-critical transactions stuck in SQS queues because of this :(

```
sudo tcpdump -i eth0 host 18.133.45.123 -n & curl -v https://eu-west-2.queue.amazonaws.com/
*   Trying 18.133.45.123...
* TCP_NODELAY set
16:20:47.610811 IP 197.248.216.154.33256 > 18.133.45.123.443: Flags [S], seq 2128825396, win 29200, options [mss 1460,sackOK,TS val 480045 ecr 0,nop,wscale 7], length 0
16:20:48.611248 IP 197.248.216.154.33256 > 18.133.45.123.443: Flags [S], seq 2128825396, win 29200, options [mss 1460,sackOK,TS val 480296 ecr 0,nop,wscale 7], length 0
16:20:50.627280 IP 197.248.216.154.33256 > 18.133.45.123.443: Flags [S], seq 2128825396, win 29200, options [mss 1460,sackOK,TS val 480800 ecr 0,nop,wscale 7], length 0
16:20:54.851253 IP 197.248.216.154.33256 > 18.133.45.123.443: Flags [S], seq 2128825396, win 29200, options [mss 1460,sackOK,TS val 481856 ecr 0,nop,wscale 7], length 0
16:21:01.934970 IP 197.248.216.154.42816 > 18.133.45.123.443: Flags [S], seq 3361955245, win 29200, options [mss 1460,sackOK,TS val 158275010 ecr 0,nop,wscale 7], length 0
16:21:02.960332 IP 197.248.216.154.42816 > 18.133.45.123.443: Flags [S], seq 3361955245, win 29200, options [mss 1460,sackOK,TS val 158275264 ecr 0,nop,wscale 7], length 0
16:21:03.043229 IP 197.248.216.154.33256 > 18.133.45.123.443: Flags [S], seq 2128825396, win 29200, options [mss 1460,sackOK,TS val 483904 ecr 0,nop,wscale 7], length 0
16:21:04.965428 IP 197.248.216.154.42816 > 18.133.45.123.443: Flags [S], seq 3361955245, win 29200, options [mss 1460,sackOK,TS val 158275768 ecr 0,nop,wscale 7], length 0
16:21:07.625705 IP 197.248.216.154.52394 > 18.133.45.123.443: Flags [S], seq 3840675465, win 29200, options [mss 1460,sackOK,TS val 3898989 ecr 0,nop,wscale 7], length 0
16:21:08.629690 IP 197.248.216.154.52394 > 18.133.45.123.443: Flags [S], seq 3840675465, win 29200, options [mss 1460,sackOK,TS val 3899240 ecr 0,nop,wscale 7], length 0
16:21:09.093703 IP 197.248.216.154.42816 > 18.133.45.123.443: Flags [S], seq 3361955245, win 29200, options [mss 1460,sackOK,TS val 158276800 ecr 0,nop,wscale 7], length 0
16:21:10.645819 IP 197.248.216.154.52394 > 18.133.45.123.443: Flags [S], seq 3840675465, win 29200, options [mss 1460,sackOK,TS val 3899744 ecr 0,nop,wscale 7], length 0
```

The console is not accessible either:

```
sudo tcpdump -i eth0 host 99.83.252.222 -n & curl -v http://console.aws.amazon.com/
*   Trying 99.83.252.222...
* TCP_NODELAY set
16:21:46.099953 IP 197.248.216.154.36516 > 99.83.252.222.80: Flags [S], seq 773244091, win 29200, options [mss 1460,sackOK,TS val 494668 ecr 0,nop,wscale 7], length 0
16:21:47.107267 IP 197.248.216.154.36516 > 99.83.252.222.80: Flags [S], seq 773244091, win 29200, options [mss 1460,sackOK,TS val 494920 ecr 0,nop,wscale 7], length 0
16:21:49.123236 IP 197.248.216.154.36516 > 99.83.252.222.80: Flags [S], seq 773244091, win 29200, options [mss 1460,sackOK,TS val 495424 ecr 0,nop,wscale 7], length 0
16:21:53.219258 IP 197.248.216.154.36516 > 99.83.252.222.80: Flags [S], seq 773244091, win 29200, options [mss 1460,sackOK,TS val 496448 ecr 0,nop,wscale 7], length 0
```
1 answer · 0 votes · 40 views · asked 9 days ago
We need to understand how the SNS message body filter functionality works. Our use case is that our project's SQS consumer will provide the SNS filter policy, after which we need to subscribe to SNS with that filter policy to test it. Is there any logic available in the AWS SDK to test an SNS filter policy without subscribing to SNS? Basically, we are asking whether there is any functionality to validate an SNS filter policy before subscribing, similar to regular-expression validation.
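As a local stopgap, the exact-string case of a filter policy is easy to approximate in code. This is a hand-rolled sketch, not an AWS SDK API, and it ignores the prefix, anything-but, numeric, and `exists` operators that real filter policies also support:

```
// Hypothetical local matcher for exact-string attribute filter policies.
type FilterPolicy = Record<string, string[]>;
type Attributes = Record<string, string>;

function matchesFilterPolicy(policy: FilterPolicy, attrs: Attributes): boolean {
  // Every key in the policy must be present on the message and
  // its value must be one of the allowed values.
  return Object.entries(policy).every(
    ([key, allowed]) => key in attrs && allowed.includes(attrs[key])
  );
}

// Example: accepts eventType "order_placed" or "order_cancelled".
console.log(matchesFilterPolicy(
  { eventType: ['order_placed', 'order_cancelled'] },
  { eventType: 'order_placed' },
)); // true
```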
1 answer · 0 votes · 50 views · asked 10 days ago
Hi, I'm using [sqs-consumer](https://github.com/bbc/sqs-consumer) to get messages from an AWS SQS queue (on an EC2 instance I'm running a simple app under `pm2`). Once a message is received, I grab the relevant data and call an external function called `messageHandler`, which does some operations on the message (scraping the URL from the message using Puppeteer) and then updates the DB accordingly:

```
const app = Consumer.create({
  // ...
  handleMessage: async (message) => {
    const { id, protocol, hostname, pathname } = JSON.parse(message.Body) as QueueMessageBody;
    const res = await puppeteerSinglePageCrawl(id, protocol, hostname, pathname, logStreamName);
  },
});
```

My problem is that when a message is read from the queue, I sometimes get timeout errors when opening the page with Puppeteer:

```
await this.page.goto(`${protocol}//${hostname}${pathname}`, {
  waitUntil: 'networkidle2',
  referer: 'https://www.google.com/',
  timeout: 20000,
});
```

However, when I connect to my EC2 instance via SSH and call the same function manually (`ts-node messageHandler.ts`), I don't get the timeout error. My initial thought was that the issue might be with `waitUntil`, but clearly when called manually I don't get this error and the page opens correctly. My second thought was that the network might be overloaded on the EC2 instance while the consumer is running, but I've been testing the manual calls while the consumer was running separately, and I still got different (successful) results on manual execution. What might be the reason for this?
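Not a diagnosis, but one way to narrow this down is to log timings and retry with a cheaper wait condition, which separates "the page never responds from this host" from "the page is just slow to go network-idle". A sketch under that assumption (`gotoWithFallback` is a hypothetical helper, not part of the question's code):

```
import { Page } from 'puppeteer';

// Hypothetical helper: try the strict wait first, fall back to a looser one.
async function gotoWithFallback(page: Page, url: string): Promise<void> {
  const started = Date.now();
  try {
    await page.goto(url, { waitUntil: 'networkidle2', timeout: 20000 });
  } catch (err) {
    console.warn(`networkidle2 timed out after ${Date.now() - started}ms, retrying`);
    // 'domcontentloaded' fires much earlier than networkidle2, so if this
    // succeeds the host is reachable and only the idle heuristic is failing.
    await page.goto(url, { waitUntil: 'domcontentloaded', timeout: 20000 });
  }
}
```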
0 answers · 0 votes · 20 views · asked 15 days ago
Hi, I have an [sqs-consumer](https://github.com/bbc/sqs-consumer) process running on an EC2 machine (t2.medium). Only one instance of this process is running at a time. However, when I open my SQS dashboard I see more than 30 messages in flight. It's a FIFO queue, with deduplication based on MessageGroupId. I'm not using any of the batching the library offers, just a simple consumer handling one message at a time, as the library tutorial shows. What am I missing here? The reason for concern is that I'm getting a LOT of timeout errors when processing messages (I'm using Puppeteer to open websites and check for links on them), and I'm trying to narrow down the cause; overloading the t2 machine's network bandwidth might be one of them (I think).
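For reference, how many messages sqs-consumer keeps in flight is governed by its options; a minimal sketch pinning it to one message at a time (option names as in the sqs-consumer README; the queue URL is a placeholder):

```
import { Consumer } from 'sqs-consumer';

const app = Consumer.create({
  queueUrl: 'https://sqs.us-east-1.amazonaws.com/123456789012/my-queue.fifo',
  batchSize: 1,           // receive at most one message per poll
  visibilityTimeout: 120, // keep this longer than the slowest Puppeteer run
  handleMessage: async (message) => {
    // a message stays "in flight" until this promise settles,
    // so slow or crashed handlers inflate the in-flight count
  },
});

app.start();
```

Note also that a message whose handler fails stays in flight until its visibility timeout expires, which can make the dashboard count far exceed the number of messages actually being worked on.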
1 answer · 0 votes · 41 views · asked 16 days ago
I have a Django app which uses Celery Beat to scan the DB and trigger tasks accordingly. I want to deploy this to Elastic Beanstalk, but simply applying `leader_only` to my Celery Beat invocation won't be enough, as we need a way to ensure the beat instance is not killed during autoscaling events. So far I've found the following options online:

1. Run a separate EC2 instance that runs Celery Beat. Not ideal, but I could make this a cheap instance since the functionality required is so simple and lightweight. I assume that if I point this at an SQS queue and have my workers pulling from that queue, everything will work fine. However, it's not clear to me how to have this instance discover the tasks from my Django app, short of deploying the app again to the second instance and having that beat instance interact with my queue.
2. Use some sort of leader-election Lambda, as described [here](https://ajbrown.org/2017/02/10/leader-election-with-aws-auto-scaling-groups.html), for my EB autoscaling group. This seems overly complicated; to implement it, I'm guessing the idea is to have a script in my container commands that checks whether the instance is the leader (as assigned by the leader tag in the tutorial above) and only executes Celery Beat if so.
3. Ditch SQS and use an ElastiCache Redis instance as my broker, then install the [RedBeat scheduler](https://github.com/sibson/redbeat) to prevent multiple instances of a beat service from running (a minimal configuration sketch for this option follows below). I assume this wouldn't affect the tasks it spawns, though, correct? My beat tasks spawn several tasks of the same 'type' with different arguments (I would appreciate a sanity check on this if possible).

My question is: can anyone help me assess the pros and cons of these implementations in terms of cost and functionality? Is there a better, more seamless way to ensure that Celery Beat runs on exactly one instance while my Celery workers scale with my autoscaling infrastructure? I'm an AWS newbie, so I would greatly appreciate any help!
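Regarding option 3, a minimal sketch of the RedBeat wiring, assuming a standard Celery app module (option names per the sibson/redbeat README; the Redis URL and project name are placeholders):

```
# celery.py -- hypothetical Celery app for the Django project
from celery import Celery

app = Celery("myproject")

# RedBeat stores the schedule in Redis and holds a distributed lock,
# so a second beat process elsewhere will not double-schedule tasks.
app.conf.redbeat_redis_url = "redis://my-elasticache-host:6379/1"
app.conf.redbeat_lock_timeout = 300  # seconds before a dead leader's lock expires

# Start the scheduler with:
#   celery -A myproject beat -S redbeat.RedBeatScheduler
```

The lock only guards the scheduler itself; the tasks beat spawns still go to the broker and are consumed by however many workers are running, so spawning several tasks of the same type with different arguments should be unaffected.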
0 answers · 0 votes · 11 views · asked 19 days ago
I currently have an Account A that calls Account B (my current account) and assumes a role that gives it permission to perform operations on SQS, such as encrypting and publishing messages. I had to manually modify the SQS permissions to get this to work and allow root access for the current account that the assumed role is in. The issue is that these permissions are overly permissive, and I do not want root account access to the SQS queue if I can avoid it. I'm wondering if I can change the policy to accept only the role that's being assumed, as opposed to a root credential. Is it possible to put a ROLE in place of the USER, or will I need to create another user with the role that's being assumed for this to work? For example, here's my policy:

```
{
   "Version": "2012-10-17",
   "Id": "Queue1_Policy_UUID",
   "Statement": [
      {
         "Sid": "Que",
         "Effect": "Allow",
         "Principal": {
            "AWS": "arn:aws:iam::<account_id>:root"
         },
         "Action": "sqs:*",
         "Resource": "arn:aws:sqs:us-east-1:<account_id>:<service>"
      }
   ]
}
```
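For what it's worth, an IAM role ARN is a valid `Principal` in an SQS queue policy, so the statement can name the assumed role directly instead of the account root. A sketch of the same policy scoped down (the role name is a placeholder, and `sqs:*` is narrowed to `sqs:SendMessage` as an example; widen it to whatever the role actually needs):

```
{
   "Version": "2012-10-17",
   "Id": "Queue1_Policy_UUID",
   "Statement": [
      {
         "Sid": "Que",
         "Effect": "Allow",
         "Principal": {
            "AWS": "arn:aws:iam::<account_id>:role/<assumed_role_name>"
         },
         "Action": "sqs:SendMessage",
         "Resource": "arn:aws:sqs:us-east-1:<account_id>:<service>"
      }
   ]
}
```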
1 answer · 0 votes · 32 views · Mjr · asked 22 days ago
Say I have a Lambda handler that can process both SQS queue invocations and Lambda-to-Lambda invocations. The Lambda has a maximum concurrency limit of 10. Suppose there is a period of time when the Lambda's concurrency is maxed out due to the high volume of SQS queue messages being processed. What happens when a Lambda-to-Lambda invocation arrives while SQS queue messages are being processed and the concurrency limit is maxed out? Is the AWS CLI invocation handled after all the messages in the queue are processed, or does the Lambda try to process that invocation at the next available instance?
1 answer · 0 votes · 28 views · alduiin · asked 22 days ago
1. SNS topic with "Enable Raw Message Delivery" enabled.
2. Pre-created SQS resource with the (I think) correct policy.
3. Pre-created SNS topic with the (I think) correct policy.
4. Pre-created subscription between SQS/SNS with the (I think) correct policy.

My Golang service publishes to the SNS topic with one Message Attribute, or I use the web client to publish to the SNS topic with the same Message Attribute. Either way, the end result is the same. The service code (Golang) contains the following block for receiving messages:

```
var (
    all = "All"
)

output, err := c.sqs.ReceiveMessage(&sqs.ReceiveMessageInput{
    QueueUrl:              &c.QueueURL,
    MaxNumberOfMessages:   &maxMessages,
    AttributeNames:        []*string{aws.String(sqs.MessageSystemAttributeNameApproximateFirstReceiveTimestamp)},
    MessageAttributeNames: []*string{&all},
})
```

If I receive messages on the AWS SQS web page and review each message there, I see the Message Attributes on each message. However, if I run my Golang application, `MessageAttributes` is always nil. I see the "regular" Attributes but not the Message Attributes. Next, I tried `aws sqs receive-message --queue-url https://sqs.us-east-1.amazonaws.com/my-queue-url --attribute-name SenderId SentTimestamp --message-attribute-name ALL --max-number-of-messages 2`. This too did NOT have Message Attributes. In both cases, the rest of the data is correct. What would prevent the AWS CLI and my service from receiving the Message Attributes?
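One thing worth ruling out, offered as a guess rather than a diagnosis: if raw message delivery is not actually in effect on the subscription, SNS wraps the payload in a JSON envelope, and the attributes then appear inside the SQS message body rather than in the SQS `MessageAttributes` field. A sketch that checks for that case (struct fields follow the published SNS notification JSON format):

```
package main

import (
	"encoding/json"
	"fmt"
)

// snsEnvelope models the subset of the SNS notification JSON we care about.
type snsEnvelope struct {
	Message           string `json:"Message"`
	MessageAttributes map[string]struct {
		Type  string `json:"Type"`
		Value string `json:"Value"`
	} `json:"MessageAttributes"`
}

// checkEnvelope reports whether an SQS body is an SNS envelope carrying
// the attributes that were expected on the SQS message itself.
func checkEnvelope(body string) {
	var env snsEnvelope
	if err := json.Unmarshal([]byte(body), &env); err == nil && len(env.MessageAttributes) > 0 {
		fmt.Println("attributes arrived inside the SNS envelope:", env.MessageAttributes)
	}
}

func main() {
	checkEnvelope(`{"Message":"hi","MessageAttributes":{"k":{"Type":"String","Value":"v"}}}`)
}
```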
1 answer · 0 votes · 29 views · asked 22 days ago
I have configured a queue with a Lambda consumer. The Lambda trigger is configured with a filter to process only certain messages from the queue:

```
"FilterCriteria": {
    "Filters": [
        { "Pattern": "{\"body\":{\"action\":[\"sample1\"]}}" }
    ]
}
```

When sending a message matching the filter to the queue, no problem: the message gets consumed by the Lambda function and is removed from the queue. When sending a message not matching the filter, e.g. `{"action":"testing"}`, the message isn't consumed by the Lambda function (this is expected), but the message is deleted from the queue and is no longer available for any other consumer. This gets even worse when we configure a maxConcurrency for the Lambda function: Lambda will consume some of the messages, and some messages (matching the filter) won't be consumed yet will still be deleted from SQS. Did I stumble upon a bug, or did I miss something in how the filter is supposed to work? Thanks, Daniel
1 answer · 1 vote · 26 views · asked 23 days ago
Hello, I'm using the Prisma Cloud app service integration feature to send messages to an SQS queue I created in AWS. However, the app is not able to call the SQS queue. I would like to confirm everything is right on the AWS side. I have created a role with the necessary permissions and actions to perform on the SQS queue for the account to assume using the root principal. I'm wondering if there's some sort of access I must also explicitly allow on the SQS queue. Note this is cross-account access: the third-party app is in another account. The permissions on the SQS queue are the default. The permissions for the assumed role are listed below.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "sqs:GetQueueAttributes",
                "sqs:ListQueues",
                "sqs:SendMessage",
                "tag:GetResources",
                "iam:GetRole",
                "kms:GenerateDataKey"
            ],
            "Resource": "arn:aws:sqs:us-east-1:<account_of_sqs(current_account)>:prisma-que",
            "Effect": "Allow"
        }
    ]
}
```

And the trust policy:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::<account_id>:root"
            },
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringEquals": {
                    "sts:ExternalId": "<external_id>"
                }
            }
        }
    ]
}
```

Is there anything I'm missing?
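If access still fails, one thing to try is an explicit resource policy on the queue naming the assumed role. With the default queue policy, same-account access is normally governed by IAM alone, so this is a belt-and-braces check rather than a known fix; the role name is a placeholder and the account IDs are as in the question:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::<account_of_sqs(current_account)>:role/<prisma_integration_role>"
            },
            "Action": "sqs:SendMessage",
            "Resource": "arn:aws:sqs:us-east-1:<account_of_sqs(current_account)>:prisma-que"
        }
    ]
}
```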
2 answers · 0 votes · 33 views · asked 25 days ago