Questions tagged with Amazon Simple Queue Service
API Gateway as Reverse HTTP Proxy to SQS
I am trying to use AWS API Gateway as a reverse (forwarding) proxy to AWS SQS. Essentially, I want to send a REST request to API Gateway, have it forwarded directly to the SQS REST API, and get the response back. When I send a request to the gateway, I immediately get back:

```xml
<?xml version="1.0"?>
<ErrorResponse xmlns="http://queue.amazonaws.com/doc/2012-11-05/">
  <Error>
    <Type>Sender</Type>
    <Code>AccessDenied</Code>
    <Message>Access to the resource https://sqs.us-east-1.amazonaws.com/ is denied.</Message>
    <Detail/>
  </Error>
  <RequestId>51c903b2-4da3-5d5e-a3b8-589ee72167de</RequestId>
</ErrorResponse>
```

However, when I switch the request URL to SQS directly (`https://sqs.us-east-1.amazonaws.com`), the request succeeds. What am I missing?

```shell
curl --request POST 'https://my-api-gateway.com/sqs' \
  --header 'X-Amz-Date: <date>' \
  --header 'X-Amz-Security-Token: <token>' \
  --header 'Authorization: <auth>' \
  --header 'Amz-Sdk-Invocation-Id: <invocation>' \
  --header 'Amz-Sdk-Request: attempt=1; max=10' \
  --header 'User-Agent: aws-sdk-go-v2/1.16.5 os/macos lang/go/1.18.3 md/GOOS/darwin md/GOARCH/arm64 api/sqs/1.18.6' \
  --header 'Content-Length: 206' \
  --data-urlencode 'Action=ReceiveMessage' \
  --data-urlencode 'MaxNumberOfMessages=10' \
  --data-urlencode 'QueueUrl=<my-queue-url>' \
  --data-urlencode 'Version=2012-11-05' \
  --data-urlencode 'WaitTimeSeconds=20'
```

Configuration:

1. [Integrations](https://i.stack.imgur.com/geLqx.png)
2. [Routes](https://i.stack.imgur.com/Lk3QQ.png)
3. [Parameter Mappings](https://i.stack.imgur.com/LtCO4.png)
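One thing worth checking in a setup like this: the SigV4 signature covers the `Host` header, so a signature computed by the SDK for `sqs.us-east-1.amazonaws.com` will not validate when the same request is sent to an API Gateway hostname. A minimal stdlib sketch of the signing steps (illustrative dates and a dummy secret, not the poster's real request) shows the host changing the signature:

```python
import hashlib
import hmac

def sigv4_signature(secret_key, host, region="us-east-1", service="sqs",
                    payload="Action=ReceiveMessage&Version=2012-11-05"):
    """Compute a SigV4 signature for a POST to the given host."""
    amz_date = "20240101T000000Z"
    date_stamp = "20240101"
    # Canonical request: note the Host header is part of the signed content.
    canonical_headers = f"host:{host}\nx-amz-date:{amz_date}\n"
    signed_headers = "host;x-amz-date"
    payload_hash = hashlib.sha256(payload.encode()).hexdigest()
    canonical_request = "\n".join(
        ["POST", "/", "", canonical_headers, signed_headers, payload_hash])
    # String to sign, scoped to date/region/service.
    scope = f"{date_stamp}/{region}/{service}/aws4_request"
    string_to_sign = "\n".join([
        "AWS4-HMAC-SHA256", amz_date, scope,
        hashlib.sha256(canonical_request.encode()).hexdigest()])
    # Derive the signing key and sign.
    def _hmac(key, msg):
        return hmac.new(key, msg.encode(), hashlib.sha256).digest()
    key = _hmac(("AWS4" + secret_key).encode(), date_stamp)
    key = _hmac(key, region)
    key = _hmac(key, service)
    key = _hmac(key, "aws4_request")
    return hmac.new(key, string_to_sign.encode(), hashlib.sha256).hexdigest()

# Identical credentials and payload, different hosts: different signatures.
# A request signed for the SQS endpoint therefore fails validation when
# replayed against an API Gateway domain.
sig_sqs = sigv4_signature("dummy-secret", "sqs.us-east-1.amazonaws.com")
sig_apigw = sigv4_signature("dummy-secret", "my-api-gateway.com")
print(sig_sqs != sig_apigw)  # True
```

This is only one possible cause; the `AccessDenied` could also come from the integration's execution role lacking `sqs:ReceiveMessage` on the queue.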
Multiple SQS messages of the same group in one batch
Hello, I am getting multiple messages from the same SQS FIFO message group in a single receive request with MaxNumberOfMessages set to > 2. Is this correct behavior? I don't understand; the API contract says:

*When receiving messages from a FIFO queue with multiple message group IDs, Amazon SQS first attempts to return as many messages with the same message group ID as possible. This allows other consumers to process messages with a different message group ID. When you receive a message with a message group ID, **no more messages for the same message group ID are returned unless you delete the message** or it becomes visible.*

https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/FIFO-queues-understanding-logic.html

My code dispatches the messages directly to the workers without any group-checking logic. It looks like I either need to implement such a feature, or make all SQS receive requests with MaxNumberOfMessages set to 1, which sounds like a waste of resources.

I am confused because the very same documentation page also contains this box:

*It is possible to receive **up to 10 messages in a single call** using the MaxNumberOfMessages request parameter of the ReceiveMessage action. These messages retain their FIFO order and **can have the same message group ID**.*

That seems to break the contract above, which literally says "unless you delete the message or it becomes visible". In my testing, I see those messages returned immediately in a single receive call, yet their visibility timeout is set to expire in 30 seconds. Is there some kind of log that could tell me why a message was delivered that fast? I am probably missing something here. Thanks for the help.
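For what it's worth, both documented behaviors can hold at once: a single ReceiveMessage call may return several in-order messages from one group, and while that batch is in flight no further messages for the group go to other consumers. A consumer that fans messages out to parallel workers therefore needs to serialize per group itself. A hedged boto3-style sketch (the queue URL is a placeholder) of bucketing a received batch by MessageGroupId before dispatch:

```python
from collections import defaultdict

def group_by_message_group(messages):
    """Bucket a ReceiveMessage batch so each group's messages stay in
    FIFO order and can be processed serially, one worker per group."""
    groups = defaultdict(list)
    for msg in messages:
        group_id = msg["Attributes"]["MessageGroupId"]
        groups[group_id].append(msg)
    return dict(groups)

# Example batch shaped like an SQS ReceiveMessage response.
batch = [
    {"MessageId": "1", "Body": "a", "Attributes": {"MessageGroupId": "g1"}},
    {"MessageId": "2", "Body": "b", "Attributes": {"MessageGroupId": "g1"}},
    {"MessageId": "3", "Body": "c", "Attributes": {"MessageGroupId": "g2"}},
]
grouped = group_by_message_group(batch)
print(sorted(grouped))                   # ['g1', 'g2']
print(grouped["g1"][0]["MessageId"])     # '1' - order preserved per group

# Against real SQS the attribute must be requested explicitly, e.g.:
# import boto3
# sqs = boto3.client("sqs")
# resp = sqs.receive_message(
#     QueueUrl=queue_url,                  # placeholder
#     MaxNumberOfMessages=10,
#     AttributeNames=["MessageGroupId"],
#     WaitTimeSeconds=20,
# )
# grouped = group_by_message_group(resp.get("Messages", []))
```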
CpuUtilization target for AWS CDK QueueProcessingFargateService
Dear colleagues, I've created an SQS processing service using `QueueProcessingFargateService`. I can adjust scaling using `scalingSteps` based on the SQS `ApproximateNumberOfMessagesVisible` metric. However, from what I can see, the CpuUtilization target is set to 50% by default. My processing is very CPU-intensive, and it is totally OK to run it at 100% load. Unfortunately, I can't find a way to change the CpuUtilization target from 50% to 100%, and this causes permanent scale-up to `maxScalingCapacity`. I would appreciate any ideas.
While trying to create an SNS subscription, after choosing a topic, all protocols except SQS disappear
I am attempting to create an SNS subscription to a topic I have just created. In the protocol dropdown, I see all protocols before choosing a topic, but after I choose the topic, all protocols except SQS disappear from the dropdown. I tried deleting the topic and creating a new one, but the same issue persists.
Looking for an easy way to bulk delete SQS queues and SNS topics
When we started out with Rekognition, a ton of SQS queues and SNS topics were created automatically. I'm looking for the best and/or easiest way to bulk delete these resources. Hoping someone here has an answer or at least some insight.
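One way to approach requests like this is a short boto3 script that lists queues and topics by name prefix and deletes only the matches. The `AmazonRekognition` prefix below is an assumption about how the auto-created resources are named; dry-run the filters first and adjust the prefix before uncommenting the deletes.

```python
def filter_queue_urls(queue_urls, prefix):
    """Keep only queue URLs whose queue name starts with the prefix."""
    return [u for u in queue_urls if u.rsplit("/", 1)[-1].startswith(prefix)]

def filter_topic_arns(topic_arns, prefix):
    """Keep only topic ARNs whose topic name starts with the prefix."""
    return [a for a in topic_arns if a.rsplit(":", 1)[-1].startswith(prefix)]

# Dry run against sample names to sanity-check the prefix match.
urls = ["https://sqs.us-east-1.amazonaws.com/123/AmazonRekognitionQueue1",
        "https://sqs.us-east-1.amazonaws.com/123/prod-orders"]
matched = filter_queue_urls(urls, "AmazonRekognition")
print(matched)
# ['https://sqs.us-east-1.amazonaws.com/123/AmazonRekognitionQueue1']

# Actual deletion (uncomment once the dry run looks right):
# import boto3
# sqs, sns = boto3.client("sqs"), boto3.client("sns")
# for url in filter_queue_urls(
#         sqs.list_queues().get("QueueUrls", []), "AmazonRekognition"):
#     sqs.delete_queue(QueueUrl=url)
# topics = [t["TopicArn"] for t in sns.list_topics()["Topics"]]
# for arn in filter_topic_arns(topics, "AmazonRekognition"):
#     sns.delete_topic(TopicArn=arn)
```

Note that `list_queues` and `list_topics` paginate, so accounts with very many resources would need the paginator variants.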
AWS STS client connection timeout while using AWS JAVA SDK
My use case requires testing connections to different AWS resources (an S3 bucket, an SNS topic ARN, and an SQS queue URL) while assuming a role through the STS client. My Java application runs fine most of the time, but it sometimes runs into an STS client timeout error and the API call ends; when I make the API call again, it returns the output. I wanted to know whether there is any way (or any STS property) to set a custom timeout for this purpose. I am using my STS client to assume a role in a third-party AWS account, and it is while testing the role ARN to be assumed that I get this timeout error. Please let me know if there is any way around this. Thanks.
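For intermittent connect timeouts like this, the usual answers are the SDK's client-level timeout settings (in the AWS SDK for Java, `ClientConfiguration` timeouts in v1 or `ClientOverrideConfiguration.apiCallTimeout` in v2) plus retries with backoff around the assume-role probe. A minimal language-agnostic sketch of the retry pattern (shown in Python for consistency with the other examples here; the simulated call stands in for the real STS request):

```python
import time

def call_with_retries(fn, attempts=3, backoff=0.5):
    """Retry a flaky call with exponential backoff - a common way to
    smooth over the occasional connect timeout described above."""
    for attempt in range(attempts):
        try:
            return fn()
        except TimeoutError:
            if attempt == attempts - 1:
                raise  # out of attempts, surface the error
            time.sleep(backoff * (2 ** attempt))

# Simulated AssumeRole call that times out twice, then succeeds,
# mimicking the "works on the second API call" behavior in the question.
calls = {"n": 0}
def flaky_assume_role():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("STS connect timed out")
    return {"ok": True}

result = call_with_retries(flaky_assume_role)
print(calls["n"])  # 3 - succeeded on the third attempt
```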
Config for lambda internal queue batch size
I have a query about Lambda trigger notifications. The sources I am referring to:

1. https://docs.aws.amazon.com/lambda/latest/dg/with-s3-example.html
2. https://docs.aws.amazon.com/lambda/latest/dg/with-sqs-example.html

Observations: although the trigger payload from S3 has a `Records` list, it contains only one record, while SQS payloads contain a number of events according to the batch size.

Query: I couldn't find whether any aggregation happens in Lambda's internal queue, and if some operation does happen there, how can we configure the batch size for that internal queue? Also, is the batch size always 1 in the case of S3?
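For context on the two shapes: S3 sends one notification per object event (the `Records` list effectively holds a single entry), while for SQS the batching is done by the event source mapping, configured via its `BatchSize`, not by any tunable internal Lambda queue. A small sketch contrasting the payloads a handler would see (field values are illustrative, not full AWS event documents):

```python
import json

# Abridged, illustrative payloads.
s3_event = {"Records": [
    {"eventSource": "aws:s3",
     "s3": {"object": {"key": "upload.csv"}}},
]}
sqs_event = {"Records": [
    {"eventSource": "aws:sqs", "body": json.dumps({"id": i})}
    for i in range(10)  # up to BatchSize messages per invocation
]}

def handler(event, context=None):
    """Count records the way a Lambda handler would see them."""
    return len(event["Records"])

print(handler(s3_event))   # 1  - S3 notifies per object event
print(handler(sqs_event))  # 10 - SQS delivers up to BatchSize

# The SQS batch size lives on the event source mapping, e.g.:
# import boto3
# boto3.client("lambda").create_event_source_mapping(
#     EventSourceArn=queue_arn,        # placeholder
#     FunctionName="my-function",      # placeholder
#     BatchSize=10,
# )
```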
Exploded costs declared as No-Resource - sometimes 10 times higher than the day before
Hi! We use a lot of services, such as EC2, Lambda, SQS, CloudWatch, CloudFront, etc. Usually we spend about $60-70 per day, but on a few days we reach peaks of about $600-800 per day! Cost Explorer declares it as "No-Resource". It seems it could be SQS, but I cannot explain it. We use Laravel Vapor for managing the Lambda environment, and its backend reports something like 6 million queue invocations per month. According to the AWS price list, that shouldn't cost hundreds of dollars, or even more than a few dollars. Does anybody know how I can check which service causes these excessive costs? Thank you in advance! Best, Michael
SES Status Notification with SQS FIFO Queue
I use an SQS FIFO queue to buffer email status notifications from SES. To increase processing speed, I would like to have multiple consumers processing the queue, which requires me to add a MessageGroupId to each item. How can I specify the MessageGroupId for the email status notifications?
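One common pattern, if the notifications pass through your own code on the way into the FIFO queue, is to derive the MessageGroupId from the notification itself, e.g. from `mail.messageId`, so every status event for one email lands in the same group. A hedged boto3-style sketch (the notification below is abridged; the queue URL is a placeholder):

```python
import json

def message_group_for(notification_json):
    """Derive a MessageGroupId from an SES status notification so all
    status events for one email share a FIFO message group."""
    notification = json.loads(notification_json)
    return notification["mail"]["messageId"]

# Illustrative SES-style notification (abridged fields).
notification = json.dumps({
    "notificationType": "Delivery",
    "mail": {"messageId": "0100017f-abc"},
})
group_id = message_group_for(notification)
print(group_id)  # 0100017f-abc

# Forwarding into the FIFO queue with that group id:
# import boto3
# sqs = boto3.client("sqs")
# sqs.send_message(
#     QueueUrl=fifo_queue_url,                        # placeholder
#     MessageBody=notification,
#     MessageGroupId=group_id,
#     MessageDeduplicationId=group_id + "-Delivery",  # or enable
# )                                                   # content-based dedup
```

Grouping per email keeps each email's status events ordered while still letting multiple consumers work on different emails in parallel.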
Amazon Linux 2 on Beanstalk isn't installing SQSD and prevents cron.yml from working
We're on solution stack "64bit Amazon Linux 2 v3.3.13 running PHP 7.4". The worker server spins up and unpacks "platform-engine.zip", but when it comes to setting up SQSD:

```
May 23 12:45:01 ip-172-31-12-195 su: (to sqsd) root on none
May 23 12:45:10 ip-172-31-12-195 aws-sqsd-monitor: restarting aws-sqsd...
May 23 12:45:10 ip-172-31-12-195 systemd: Starting (null)...
May 23 12:45:10 ip-172-31-12-195 su: (to sqsd) root on none
May 23 12:45:10 ip-172-31-12-195 systemd: Created slice User Slice of sqsd.
May 23 12:45:10 ip-172-31-12-195 systemd: Started Session c2 of user sqsd.
May 23 12:45:10 ip-172-31-12-195 aws-sqsd: Version 2 of the Ruby SDK will enter maintenance mode as of November 20, 2020. To continue receiving service updates and new features, please upgrade to Version 3. More information can be found here: https://aws.amazon.com/blogs/developer/deprecation-schedule-for-aws-sdk-for-ruby-v2/
May 23 12:45:13 ip-172-31-12-195 aws-sqsd: Cannot load config file. No such file or directory: "/etc/aws-sqsd.d/default.yaml" - (AWS::EB::SQSD::FatalError)
May 23 12:45:13 ip-172-31-12-195 systemd: aws-sqsd.service: control process exited, code=exited status=1
May 23 12:45:13 ip-172-31-12-195 systemd: Failed to start (null).
May 23 12:45:13 ip-172-31-12-195 systemd: Unit aws-sqsd.service entered failed state.
May 23 12:45:13 ip-172-31-12-195 systemd: aws-sqsd.service failed.
May 23 12:45:13 ip-172-31-12-195 systemd: Removed slice User Slice of sqsd.
```

I can't find anything online about this, so some help would be greatly appreciated.