Questions tagged with Amazon Simple Queue Service
Hi everyone,
We are using JMS to connect to our queues, with a .bindings file to configure the JNDI context for the connection. We are still using non-AWS queues, and we are trying to migrate them to AWS SQS.
Is it possible to use a .bindings file to configure the connection to AWS Simple Queue Service, with the region and a credentials profile specified as parameters inside the .bindings file?
If yes, do you have any example of how to do that?
For various reasons, the number of our SQS queues has grown tremendously. We'd like to get a list of all the existing queues. Using the AWS CLI, we are bounded by the upper limit of 1,000 results. Is there a way we can retrieve a complete list of ALL SQS queues? Also, is there a way to do a regex or "negative" match with --queue-name-prefix? Could you please provide some examples? Thank you so much!
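For context, this is roughly the workaround we have in mind with boto3, assuming the SDK's `list_queues` paginator handles the paging, with a client-side filter for the "negative" match (the filter string is made up); a pure CLI equivalent would be even better:
```
import boto3

sqs = boto3.client("sqs")

# Page through every queue instead of stopping at the first 1,000 results.
paginator = sqs.get_paginator("list_queues")
all_queue_urls = []
for page in paginator.paginate():
    all_queue_urls.extend(page.get("QueueUrls", []))

# "Negative" match done client-side: keep queues whose name does NOT contain
# a given substring (the substring here is just an example).
filtered = [
    url for url in all_queue_urls
    if "dead-letter" not in url.rsplit("/", 1)[-1]
]
print(len(all_queue_urls), "total,", len(filtered), "after filtering")
```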
I have prepared an SQS queue subscribed to the SNS topic for Entitlement Notifications for SaaS Contract Integration. The AWS documentation says that we will receive only an Entitlement-Updated message in the queue for any subscription changes made by the customer, and that we then need to call GetEntitlements to retrieve the updated entitlements. How do we know for which customer we need to call GetEntitlements, given that we only have the Entitlement-Updated message in the queue? Can anybody please provide a sample Entitlement-Updated message as it is received in the SQS queue?
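For reference, this is roughly how I expect the lookup to work once the customer is known; my assumption (please correct me if it is wrong) is that the entitlement-updated payload inside the SNS envelope carries a product code and a customer identifier that can be passed to GetEntitlements (the field names below are my guess):
```
import json
import boto3

# The Marketplace Entitlement Service endpoint lives in us-east-1.
entitlement = boto3.client("marketplace-entitlement", region_name="us-east-1")

def handle_sqs_message(message_body):
    # The SQS message wraps the SNS notification; the real payload is in "Message".
    notification = json.loads(message_body)
    payload = json.loads(notification["Message"])

    # Assumption: the entitlement-updated payload exposes these two fields.
    product_code = payload["product-code"]
    customer_id = payload["customer-identifier"]

    # Fetch the current entitlements for just that customer.
    response = entitlement.get_entitlements(
        ProductCode=product_code,
        Filter={"CUSTOMER_IDENTIFIER": [customer_id]},
    )
    return response["Entitlements"]
```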
Hello, I have two SQS queues: one in us-east-1, and one in us-west-1. Currently my Step Function state machine is located in us-west-1, and has no trouble calling SQS API actions on the us-west-1 queue; however, any actions on the us-east-1 queue fail saying the queue cannot be found. Do Step Functions support using a queue outside of the state machine's region? If so, where is the parameter to set the queue region?
Good morning everyone,
I am just starting in the AWS world and I have a challenge that I need to solve with the most appropriate tools that AWS offers me.
The use case is the following: I have to process some PDF documents, add some images to them, and send them back.
Currently I am doing this with a microservice that receives a PDF and returns it modified.
When I run load tests, the queue receives 50 requests, the bash task gets blocked with 9 PDFs at the same time, and the ECS service crashes.
One solution is to increase the capacity of the ECS service so that the microservice can process more documents. But I have read that SQS can help me solve this, so I want to be sure I am applying the right architecture:
- I have a .NET Core microservice in Docker that produces requests and sends them to the queue.
- I have an SQS queue that receives the requests and keeps them in order of arrival.
- I have a Lambda that listens to the SQS queue; when a new request arrives it fires the event to the consuming microservice (the Lambda "fires" up to 10 times simultaneously and each "firing" lets only 1 document through, or is the recommendation that each "firing" lets 10 documents through? A rough sketch of this piece is below, after the overview).
- The consuming microservice receives a message from the Lambda and keeps processing the SQS requests until all of them are finished.
- When processing finishes and the queue is empty, the Lambda goes back to waiting for the SQS queue to have a new message, and the cycle starts again.
Overview:
- The producing microservice is the publisher.
- The processing microservice is the consumer.
- The Lambda is the trigger.
- SQS is the queue.
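To make the question concrete, this is roughly how I picture the Lambda piece: SQS invokes it with a batch of records (1 to 10 depending on the batch size setting) and it forwards each request to the consuming microservice. The endpoint below is made up; is this the right shape?
```
import urllib.request

# Hypothetical internal endpoint of the consuming microservice.
CONSUMER_URL = "http://pdf-consumer.internal:8080/process"

def lambda_handler(event, context):
    # SQS delivers up to "batch size" messages per invocation in event["Records"].
    for record in event["Records"]:
        body = record["body"]  # the PDF-processing request produced upstream
        req = urllib.request.Request(
            CONSUMER_URL,
            data=body.encode("utf-8"),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        # If this raises, the messages go back to the queue and are retried.
        with urllib.request.urlopen(req, timeout=30) as resp:
            resp.read()
    return {"processed": len(event["Records"])}
```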
I am trying to set up cross-account communication from an SQS queue to a Lambda function. Both resources are in the `eu-central-1` region but in 2 different AWS accounts.
My setup is below
`ACCOUNT_A` has the Lambda function
`ACCOUNT_B` has the SQS queue
I have created an IAM role in ACCOUNT_A and attached it to the Lambda function (ACCOUNT_A_LAMBDA_EXECUTION_ROLE). The role has the `AWSLambdaSQSQueueExecutionRole` managed policy attached.
The SQS queue in `ACCOUNT_B` has the following access policy:
```
{
  "Version": "2008-10-17",
  "Id": "__default_policy_ID",
  "Statement": [
    {
      "Sid": "__owner_statement",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::ACCOUNT_B:root"
      },
      "Action": "SQS:*",
      "Resource": "arn:aws:sqs:eu-central-1:ACCOUNT_B:"
    },
    {
      "Sid": "__receiver_statement",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::ACCOUNT_A:role/LAMBDA_EXECUTION_ROLE"
      },
      "Action": [
        "SQS:ChangeMessageVisibility",
        "SQS:DeleteMessage",
        "SQS:ReceiveMessage",
        "SQS:GetQueueAttributes"
      ],
      "Resource": "arn:aws:sqs:eu-central-1:ACCOUNT_B:"
    }
  ]
}
```
I am using the AWS CLI to add the Lambda trigger, so that ACCOUNT_B_SQS_QUEUE can be added as a trigger for ACCOUNT_A_LAMBDA_FUNCTION. The following is the AWS CLI command:
```
aws lambda create-event-source-mapping --function-name ACCOUNT_A_LAMBDA_FUNCTION --event-source-arn ACCOUNT_B_SQS_QUEUE-arn --profile ACCOUNT_A-aws-profile --region eu-central-1
```
But this command failed with an error
```
An error occurred (InvalidParameterValueException) when calling the CreateEventSourceMapping operation: The provided execution role does not have permissions to call ReceiveMessage on SQS
```
I tried to add the Lambda trigger manually as well, and it also fails. I'd appreciate it if you can help me with this.
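To rule out the queue policy, my next step is to run something like this under the same execution role (for example from a small test Lambda in ACCOUNT_A), since CreateEventSourceMapping appears to check that the role can actually call ReceiveMessage on the queue. The queue URL below is a placeholder:
```
import boto3

sqs = boto3.client("sqs", region_name="eu-central-1")

# Placeholder URL for the cross-account queue in ACCOUNT_B.
QUEUE_URL = "https://sqs.eu-central-1.amazonaws.com/ACCOUNT_B/QUEUE_NAME"

def lambda_handler(event, context):
    # These are the calls the Lambda poller needs; if the queue policy or the
    # execution role is missing something, the same access error shows up here.
    attrs = sqs.get_queue_attributes(QueueUrl=QUEUE_URL, AttributeNames=["QueueArn"])
    messages = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1)
    return {
        "queue_arn": attrs["Attributes"]["QueueArn"],
        "received": len(messages.get("Messages", [])),
    }
```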
I am trying to send messages to an SQS queue. For the local environment I am setting up the AmazonSQS client as below:
```
amazonSQS =
AmazonSQSClientBuilder.standard()
.withCredentials(
new AWSStaticCredentialsProvider(new BasicAWSCredentials(accessKey, secretKey)))
.withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(endpoint, region))
.build();
```
For my development environment I am setting it up like this:
```
amazonSQS = AmazonSQSClientBuilder.defaultClient();
```
The tests run fine locally but in my dev environment, I am getting the below error:
```
com.amazonaws.services.sqs.model.AmazonSQSException: Access to the resource https://sqs.us-east-1.amazonaws.com/2092384092384/error-handler-queue is denied. (Service: AmazonSQS; Status Code: 403; Error Code: AccessDenied; Request ID: 7bc7e074-a36a-544d-9489-bbe66133f0a8; Proxy: null)
```
What could be the issue? Any help is appreciated.
1. Two S3 buckets are created in the N. Virginia region.
2. A Lambda function is created using Python.
3. The file data in bucket 1 has to be checked and compared with the file available in bucket 2; if changes are found, the file should be transferred from bucket 1 to bucket 2.
4. If there are no modifications to the file, or a new file is added, the Lambda should trigger and that new file should be transferred to bucket 2.
While trying to implement the above scenario I am getting the error below. Any ideas about this error, please?
Response:
```
{
  "errorMessage": "'Record'",
  "errorType": "KeyError",
  "requestId": "6be014f1-c78c-4a9b-9728-5873b1080812",
  "stackTrace": [
    " File \"/var/task/lambda_function.py\", line 9, in lambda_handler\n file_obj = event[\"Record\"][0][\"s3bucket001forlambda01\"]\n"
  ]
}
```
Function Logs:
```
START RequestId: 6be014f1-c78c-4a9b-9728-5873b1080812 Version: $LATEST
Event : {'key1': 'value1', 'key2': 'value2', 'key3': 'value3'}
[ERROR] KeyError: 'Record'
```
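Reading the stack trace again, two things stand out to me: the logged event looks like the console's default test event ({'key1': 'value1', ...}) rather than a real S3 notification, and line 9 reads event["Record"] while S3 notifications put their records under "Records" with the bucket and key nested inside each record. A corrected handler along these lines is what I have in mind (the destination bucket name is a placeholder); does this look right?
```
import urllib.parse
import boto3

s3 = boto3.client("s3")
DEST_BUCKET = "s3bucket002forlambda01"  # placeholder destination bucket

def lambda_handler(event, context):
    # S3 notifications arrive under "Records" (plural), one record per object.
    for record in event["Records"]:
        src_bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        # Copy the new/updated object from bucket 1 to bucket 2.
        s3.copy_object(
            Bucket=DEST_BUCKET,
            Key=key,
            CopySource={"Bucket": src_bucket, "Key": key},
        )
    return {"copied": len(event["Records"])}
```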
I am trying to integrate our SaaS product with AWS Marketplace. Our product has been published in the limited state. We have also created an SQS queue and subscribed it to the Subscription Notification topic. However, after making a test purchase, I am not receiving any subscribe-success message in the queue.
Hi all,
I'm getting stuck on a problem of how to properly handle a POST request from a 3rd party (webhook) which expects an immediate 200 OK response upon receiving the message, otherwise it resends.
I have handled issues like this in the past by including a Lambda that the API points to, which pushes the message to a queue system (SQS) and returns 200 OK once that is complete, while whatever backend processes need to run can handle their business in peace. Is there no way, at the API Gateway level, or directly in the Lambda, to force a status response without waiting for the Lambda to return a response once it has completed its run? Or is this just the standard way to decouple the response message from the backend?
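For context, this is the shape of the Lambda I have been using: it just drops the payload onto SQS and returns 200 right away, so the sender gets its response before any backend work starts (the queue URL is a placeholder). I have also read that API Gateway can integrate directly with SQS SendMessage without a Lambda in between, which might be the cleaner option here?
```
import json
import boto3

sqs = boto3.client("sqs")
# Placeholder queue URL for the intake queue.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/webhook-intake"

def lambda_handler(event, context):
    # With an API Gateway proxy integration the webhook body arrives in event["body"].
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=event.get("body") or "{}")
    # Return immediately; the backend consumers work the queue on their own time.
    return {"statusCode": 200, "body": json.dumps({"received": True})}
```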
Hi all,
I'm writing a Lambda function in Python to create SQS queues when specific events occur via EventBridge. The function is packaged as a Docker image. When I try to create the queue using the `create_queue` client method
```
import boto3
sqs = boto3.client("sqs")
# sqs = boto3.client("sqs", endpoint_url="https://sqs.us-east-1.amazonaws.com")
sqs.create_queue(QueueName="my-test-queue")
```
I receive either
```
An error occurred (AccessDenied) when calling the CreateQueue operation: Access to the resource https://sqs.us-east-1.amazonaws.com/ is denied.
```
or
```
An error occurred (AccessDenied) when calling the CreateQueue operation: Access to the resource https://sqs.amazonaws.com/ is denied.
```
even though the Lambda function has the correct `sqs:CreateQueue` policy attached to its role.
```
{
  "Statement": [
    {
      "Action": [
        "sqs:CreateQueue"
      ],
      "Resource": [
        "*"
      ],
      "Effect": "Allow"
    }
  ]
}
```
The lambda **IS NOT** attached to any VPC.
I tried ZIP-based and console-created functions, and the error does not occur with those.
**Does anybody have any idea about why I receive the error when the function is packaged as Docker image?**
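For what it is worth, I am planning to add a quick identity check at the top of the handler to confirm which credentials the container image actually resolves at runtime (just a debugging sketch):
```
import boto3

def lambda_handler(event, context):
    # Print the identity the SDK resolves inside the container; for a Lambda this
    # should be the assumed execution role, not a baked-in user or profile.
    sts = boto3.client("sts")
    print(sts.get_caller_identity())

    sqs = boto3.client("sqs")
    sqs.create_queue(QueueName="my-test-queue")
```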
Many thanks!
If some messages fail and others succeed in SendMessageBatch, which of the following happens?
* No messages will be retried automatically.
* Only failed messages will be retried automatically.
* All the messages, including the succeeded ones, will be retried automatically.
Assumptions:
- All the attributes of the client API instance are set to the default.
- e.g. `retry_limit: 3`
- No error handling on the client side except the default retry mechanics.
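Regardless of the answer, my current plan is to inspect the Failed list in the SendMessageBatch response and resend only those entries myself, roughly like this (the queue URL is a placeholder); is that redundant with what the SDK's default retry mechanics already do?
```
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"  # placeholder

def send_batch_with_resend(entries, max_attempts=3):
    # entries: up to 10 dicts of the form {"Id": ..., "MessageBody": ...}.
    pending = entries
    for _ in range(max_attempts):
        response = sqs.send_message_batch(QueueUrl=QUEUE_URL, Entries=pending)
        failed = response.get("Failed", [])
        if not failed:
            return []
        failed_ids = {f["Id"] for f in failed}
        # Re-send only the entries that came back in the Failed list.
        pending = [e for e in pending if e["Id"] in failed_ids]
    return pending  # entries that still failed after all attempts
```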