Questions tagged with Amazon Simple Queue Service


I currently have an Account A that calls Account B (my current account) and assumes a role that grants it permission to perform SQS operations such as encrypting and publishing messages. To get this working, I had to manually modify the SQS queue policy to allow root access for the account the assumed role lives in. The issue is that these permissions are overly permissive, and I do not want to grant root account access to the queue if I can avoid it. I'm wondering if I can change the policy to accept only the role that's being assumed, as opposed to a root user credential. Is it possible to add a ROLE in place of the USER, or will I need to create another user with the role that's being assumed for this to work? For example, here's my policy:

```
{
    "Version": "2012-10-17",
    "Id": "Queue1_Policy_UUID",
    "Statement": [
        {
            "Sid": "Que",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::<account_id>:root"
            },
            "Action": "sqs:*",
            "Resource": "arn:aws:sqs:us-east-1:<account_id>:<service>"
        }
    ]
}
```
1
answers
0
votes
33
views
Mjr
asked a month ago
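For reference, a queue policy can name an IAM role ARN directly as the principal instead of the account root. A minimal sketch of what that could look like (the role name is a placeholder, and `sqs:*` is narrowed to `sqs:SendMessage` as an example of tightening the grant):

```json
{
    "Version": "2012-10-17",
    "Id": "Queue1_Policy_UUID",
    "Statement": [
        {
            "Sid": "Que",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::<account_id>:role/<assumed_role_name>"
            },
            "Action": "sqs:SendMessage",
            "Resource": "arn:aws:sqs:us-east-1:<account_id>:<service>"
        }
    ]
}
```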
Say I have a Lambda handler that can process both SQS queue invocations and Lambda-to-Lambda invocations. The Lambda has a max concurrency limit of 10. Let's say there is a period of time when the Lambda's concurrency is maxed out due to the high volume of SQS queue messages being processed. What happens when a Lambda-to-Lambda invocation arrives in the middle of the SQS queue messages being processed and maxing out the concurrency limit? Is the AWS CLI invocation handled after all the messages in the queue are processed, or does the Lambda try to process that invocation at the next available instance?
1
answers
0
votes
31
views
alduiin
asked a month ago
1. SNS topic with "Enable Raw Message Delivery" enabled.
2. Pre-created SQS resource with the (I think) correct policy.
3. Pre-created SNS topic with the (I think) correct policy.
4. Pre-created SQS/SNS subscription with the (I think) correct policy.

My Golang service publishes to the SNS topic with one message attribute, or I use the web client to publish to the SNS topic with the same message attribute. Either way, the end result is the same. The service code (Golang) contains the following block for receiving messages:

```
var (
    all = "All"
)

output, err := c.sqs.ReceiveMessage(&sqs.ReceiveMessageInput{
    QueueUrl:              &c.QueueURL,
    MaxNumberOfMessages:   &maxMessages,
    AttributeNames:        []*string{aws.String(sqs.MessageSystemAttributeNameApproximateFirstReceiveTimestamp)},
    MessageAttributeNames: []*string{&all},
})
```

If I receive messages on the AWS SQS web page and then review each message there, I see the message attributes on each message. However, if I run my Golang application, `MessageAttributes` is always nil. I see the "regular" attributes but not the message attributes. Next, I tried `aws sqs receive-message --queue-url https://sqs.us-east-1.amazonaws.com/my-queue-url --attribute-name SenderId SentTimestamp --message-attribute-name ALL --max-number-of-messages 2`. This too DID NOT have message attributes. In both cases, the rest of the data is correct. What would exclude the AWS CLI and my service from receiving the message attributes?
1
answers
0
votes
30
views
asked a month ago
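One thing worth checking in cases like the above: when SNS raw message delivery is *disabled*, SNS wraps the payload in a JSON envelope and the original message attributes appear inside the SQS message *body* (under `MessageAttributes`), not as SQS-level message attributes. A hedged helper sketch in Python — the dictionary shapes below are assumptions based on the standard SNS notification envelope, not taken from the question:

```python
import json

def extract_attributes(sqs_message: dict) -> dict:
    """Return message attributes whether or not raw delivery was used."""
    # Raw delivery ON: attributes arrive as SQS message attributes.
    if sqs_message.get("MessageAttributes"):
        return sqs_message["MessageAttributes"]
    # Raw delivery OFF: the body is an SNS envelope; attributes live inside it.
    try:
        envelope = json.loads(sqs_message.get("Body", ""))
    except json.JSONDecodeError:
        return {}
    return envelope.get("MessageAttributes", {})
```

If the attributes turn up in the envelope rather than at the top level, the raw-delivery setting on the subscription (not the receive call) is the first place to look.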
I have configured a queue with a Lambda consumer. The Lambda trigger is configured with a filter to process only certain messages from the queue:

```
"FilterCriteria": {
    "Filters": [
        {
            "Pattern": "{\"body\":{\"action\":[\"sample1\"]}}"
        }
    ]
}
```

When sending a message matching the filter to the queue, no problem: the message gets consumed by the Lambda function and is removed from the queue. When sending a message not matching the filter, `{"action":"testing"}`, the message isn't consumed by the Lambda function (this is expected), but the message is deleted from the queue and no longer available for any other consumer. This gets even worse when we configure a maxConcurrency for the Lambda function: Lambda will consume some of the messages, and some messages (matching the filter) won't be consumed and will still be deleted in SQS. Did I stumble upon a bug, or did I miss something in how the filter is supposed to work? Thanks, Daniel
1
answers
1
votes
38
views
asked a month ago
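For context on filters like the one above: the Lambda docs describe SQS messages that fail the event-source-mapping filter as being dropped and deleted from the queue, so deletion of non-matching messages is documented behavior rather than a bug. A toy approximation of a small subset of the filter grammar (exact-value matching on body fields only; the real grammar supports more operators):

```python
import json

def matches_filter(pattern: dict, message_body: str) -> bool:
    """Tiny subset of Lambda event-filtering semantics: a 'body' pattern
    matches when every key maps to a list containing the body's value."""
    body_pattern = pattern.get("body", {})
    try:
        body = json.loads(message_body)
    except json.JSONDecodeError:
        return False  # non-JSON bodies cannot match a body pattern
    return all(
        isinstance(allowed, list) and body.get(key) in allowed
        for key, allowed in body_pattern.items()
    )
```

Under this model, `{"action": "testing"}` fails the pattern `{"body": {"action": ["sample1"]}}` and the event source mapping removes it from the queue without invoking the function.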
Hello, I'm using the Prisma Cloud app service integration feature to send messages to an SQS queue I created in AWS. However, the app is not able to call the SQS queue. I would like to confirm everything is right on the AWS side. I have created a role with the necessary permissions and actions to perform on the SQS queue, for the other account to assume using the root principal. I'm wondering if there's some sort of access I must also explicitly allow on the SQS queue itself? Note this is cross-account access: the third-party app is in another account. The permissions on the SQS queue are the default. The permissions for the assumed role are listed below.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "sqs:GetQueueAttributes",
                "sqs:ListQueues",
                "sqs:SendMessage",
                "tag:GetResources",
                "iam:GetRole",
                "kms:GenerateDataKey"
            ],
            "Resource": "arn:aws:sqs:us-east-1:<account_of_sqs(current_account)>:prisma-que",
            "Effect": "Allow"
        }
    ]
}
```

And the trust policy:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::<account_id>:root"
            },
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringEquals": {
                    "sts:ExternalId": "<external_id>"
                }
            }
        }
    ]
}
```

Is there anything I'm missing?
2
answers
0
votes
35
views
asked a month ago
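For cross-account setups like the one above, the identity-based role policy alone is often not enough: the queue itself generally needs a resource policy naming the caller. A hedged sketch of such a queue policy (the role name and account ID are placeholders; the queue name is taken from the question):

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::<current_account_id>:role/<assumed_role_name>"
            },
            "Action": [
                "sqs:SendMessage",
                "sqs:GetQueueAttributes"
            ],
            "Resource": "arn:aws:sqs:us-east-1:<current_account_id>:prisma-que"
        }
    ]
}
```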
Error: **botocore.exceptions.ClientError: An error occurred (InvalidArgument) when calling the PutBucketNotificationConfiguration operation** Hello AWS, I am currently working on a project with a third-party team. The team has an SQS queue that all of our buckets have an event notification for. I added a new bucket and am receiving this error when I try to deploy it via CDK. The team does not seem to be too familiar with AWS, but I asked if I have permissions to call the SQS queue and they said yes. Is there a way to confirm this on my end? Or is there documentation on the configuration the team needs to set up for their SQS queue? If so, are there any other problems that could cause this error message? I'm confident it's on the third-party team's end, because this is done through our CDK stack and everything else works fine. One thing worth noting: I am updating an existing stack; before this change our bucket existed but did not send event notifications. Any solutions or troubleshooting will help. One source I found on Stack Overflow, except it's for Lambda: https://stackoverflow.com/questions/36973134/cant-add-s3-notification-for-lambda-using-boto3
1
answers
0
votes
40
views
asked a month ago
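One common cause of `InvalidArgument` on `PutBucketNotificationConfiguration` is that S3 validates it can publish to the destination, and the destination queue's resource policy does not allow the S3 service principal for the new bucket. A hedged sketch of the queue policy the owning team could check for (region, account, queue, and bucket names are placeholders):

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": { "Service": "s3.amazonaws.com" },
            "Action": "sqs:SendMessage",
            "Resource": "arn:aws:sqs:<region>:<queue_account_id>:<queue_name>",
            "Condition": {
                "ArnLike": { "aws:SourceArn": "arn:aws:s3:::<new_bucket_name>" }
            }
        }
    ]
}
```

If the existing policy pins `aws:SourceArn` to specific bucket ARNs, a newly added bucket would fail validation until it is added to that list.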
What are the options available for handling messages greater than 256 KB on SQS using Node.js?
1
answers
0
votes
34
views
asked a month ago
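The usual approach here is the claim-check pattern: store the oversized payload in S3 and send only a pointer through SQS (the Amazon SQS Extended Client Library implements this for Java; for Node.js you can roll the same shape by hand). A language-agnostic sketch, written in Python for brevity — `upload_to_s3` is a hypothetical stand-in for a real S3 put, and the envelope keys are illustrative, not a standard:

```python
import json
import uuid

MAX_SQS_BYTES = 256 * 1024  # SQS maximum message size

def prepare_message(payload: str, upload_to_s3) -> str:
    """Claim-check pattern: small payloads go inline; oversized payloads
    are stored via upload_to_s3(key, data) and the queue message carries
    only the pointer."""
    if len(payload.encode("utf-8")) <= MAX_SQS_BYTES:
        return json.dumps({"inline": payload})
    key = f"sqs-overflow/{uuid.uuid4()}"
    upload_to_s3(key, payload)
    return json.dumps({"s3_key": key})
```

The consumer mirrors this: if the message carries an `s3_key`, it fetches the object from S3 (and typically deletes it after successful processing).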
Hi Team, messages are available in my FIFO SQS queue, but I am not able to see the received messages after polling from the console. Note: sometimes I get messages and sometimes not; the issue is intermittent. Can somebody help me debug this or point out anything I have missed? Polling settings: duration 20 sec, count 5.
1
answers
0
votes
87
views
aj-1993
asked a month ago
FlexMatch sends various notifications described [here](https://docs.aws.amazon.com/gamelift/latest/flexmatchguide/match-events.html). I have a service getting these messages via SQS polling. The event payloads in the messages are serialized to JSON. Are there models defined for these events in the Java AWS SDK 2.0? I sure cannot find them. Are you supposed to roll your own models to deserialize and work with these events?
1
answers
0
votes
30
views
RyanO
asked a month ago
I confirmed that the order is ensured for the same group ID in the FIFO queue. When messages with different group IDs come in, I wonder if the order is also guaranteed across messages with different group IDs.

**Example**

Send message order:

1. MessageBody: 1, group id: 1
2. MessageBody: 2, group id: 2
3. MessageBody: 3, group id: 3
4. MessageBody: 4, group id: 4

Sequence when calling the ReceiveMessage API:

1. MessageBody: 1, group id: 1
2. MessageBody: 2, group id: 2
3. MessageBody: 3, group id: 3
4. MessageBody: 4, group id: 4
1
answers
0
votes
42
views
asked a month ago
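For questions like the one above: FIFO queues guarantee ordering only *within* a message group; across different group IDs, delivery order is not guaranteed. A toy in-memory model of that contract (an illustration of the semantics, not of the SQS API):

```python
from collections import defaultdict, deque

class ToyFifoQueue:
    """Toy model of FIFO-queue semantics: strict send order *within* each
    message group, no ordering promise *across* groups."""
    def __init__(self):
        self.groups = defaultdict(deque)

    def send(self, body, group_id):
        self.groups[group_id].append(body)

    def receive_group(self, group_id):
        # Within one group, messages always come back in send order.
        return self.groups[group_id].popleft()
```

In the real service, a receive call may interleave groups in any order, but pulling two messages of the same group always preserves their send order.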
I have an EventBridge rule with an SQS target, and the Lambda function that puts the event on the bus is configured to use X-Ray (traces lead up to EventBridge in X-Ray, so this is working fine). In the SQS messages (received with a ReceiveMessageCommand) there is no AWSTraceHeader attribute, so I cannot continue the trace downstream. I added an identical rule with a Lambda target with tracing to test whether the trace is propagated correctly, and it is: I have a Lambda node linked after the Events node in the service map. I read that EventBridge should propagate trace headers to SQS targets, mentioned here: https://aws.amazon.com/about-aws/whats-new/2021/03/amazon-eventbridge-now-supports-propagation-of-x-ray-trace-context/?nc1=h_ls Is this actually the case? If so, is there anything I am missing for this to work?
2
answers
0
votes
55
views
asked a month ago
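One detail worth checking for the question above: `AWSTraceHeader` is a message *system* attribute, so ReceiveMessage only returns it if the call asks for it (e.g. by passing `AttributeNames` including `AWSTraceHeader` or `All`; in the JS SDK this is the `AttributeNames` field of `ReceiveMessageCommand`). A small sketch, in Python for brevity, of where the header lands in the response:

```python
from typing import Optional

def get_trace_header(message: dict) -> Optional[str]:
    """AWSTraceHeader arrives under the message's system 'Attributes' map,
    and only if the ReceiveMessage call requested that attribute name."""
    return message.get("Attributes", {}).get("AWSTraceHeader")
```

If the header is absent even when requested, then propagation from EventBridge is genuinely not happening; if it appears once requested, the receive call was simply not asking for it.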
Dear Community, please imagine the following scenario:

* I have multiple long-running computation tasks. I'm planning to package them as container images and use ECS tasks to run them.
* I'm planning to have a serverless part for administrating the tasks.

Once a computation task starts, it takes its input data from an SQS queue and can start its computation. All results also end up in an SQS queue for storage. So far, so good. Now the tricky bit: the computation task needs some human input in the middle of its computation, based on intermediate results. Simplified, the task says "I have the intermediate result of 42, should I resume with route A or route B?". Saving the state and resuming in a different container (based on A or B) is not an option; it just takes too long. Instead I would like to have a serverless input form which sends the human input (A or B) to this specific container. What is the best way of doing it? My idea so far: each container creates its own SQS queue and includes the URL in its intermediate result message. But this might result in many queues, and potentially abandoned queues should a container crash. There must be a better way to communicate with a single container. I have seen ECS Exec, but this seems more built for debugging purposes.
3
answers
0
votes
42
views
stefan
asked 2 months ago