
Questions tagged with Amazon Simple Queue Service

Sort by most recent
  • 1
  • 90 / page

Browse through the questions and answers listed below or filter and sort to narrow down your results.

Lambda Events not triggering EventBridge destination

I am using the Amazon Selling Partner API (SP-API) and am trying to set up a Pub/Sub-like system for receiving customer orders etc. The Notifications API in SP-API delivers notifications in two different ways depending on which event you are using: some go directly to EventBridge and others are sent to SQS. https://developer-docs.amazon.com/sp-api/docs/notifications-api-v1-use-case-guide#section-notification-workflows

I have correctly set up the notifications that are sent directly to EventBridge, but am struggling to get the SQS notifications working. I want all notifications to be sent to my own endpoint. For the SQS model, I am receiving notifications in SQS, which is set as a trigger for a Lambda function (this part works). The destination for this function is set as another EventBridge bus (this is the part that doesn't work). That gives the architecture: `SQS => Lambda => EventBridge => my endpoint`. Why is Lambda not triggering my EventBridge destination in order to send the notifications?

**Execution Role Policies:**

* Lambda
  1. AWSLambdaBasicExecutionRole
  2. AmazonSQSFullAccess
  3. AmazonEventBridgeFullAccess
  4. AWSLambda_FullAccess
* EventBridge
  1. Amazon_EventBridge_Invoke_Api_Destination
  2. AmazonEventBridgeFullAccess
  3. AWSLambda_FullAccess

**EventBridge Event Pattern:** `{"source": ["aws.lambda"]}`

**Execution Role Trusted Entities:**

* EventBridge Role: `"Service": [ "events.amazonaws.com", "lambda.amazonaws.com", "sqs.amazonaws.com" ]`
* Lambda Role: `"Service": [ "lambda.amazonaws.com", "events.amazonaws.com", "sqs.amazonaws.com" ]`

**Lambda Code:**

```
exports.handler = function(event, context, callback) {
    console.log("Received event: ", event);
    context.callbackWaitsForEmptyEventLoop = false;
    callback(null, event);
    return { statusCode: 200 };
};
```
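One detail worth noting: Lambda destinations fire only for asynchronous invocations, while an SQS trigger invokes the function synchronously through an event source mapping, so an on-success EventBridge destination will not be called on this path. A possible workaround is to publish to EventBridge directly from the handler. Below is a minimal sketch, written in Python/boto3 rather than Node.js, with placeholder bus name, source and detail-type; an EventBridge rule would need an event pattern that matches what is sent here.

```
import json
import boto3

# Sketch only: bus name, source and detail-type are placeholders.
events = boto3.client("events")

def handler(event, context):
    # An SQS trigger delivers a batch of records; forward each body to EventBridge.
    entries = [
        {
            "EventBusName": "sp-api-notifications",   # placeholder bus name
            "Source": "my.sp-api.sqs-forwarder",      # placeholder source
            "DetailType": "SpApiNotification",        # placeholder detail-type
            "Detail": record["body"],                 # the SQS message body (a JSON string)
        }
        for record in event.get("Records", [])
    ]
    for i in range(0, len(entries), 10):              # PutEvents accepts at most 10 entries per call
        resp = events.put_events(Entries=entries[i:i + 10])
        print("FailedEntryCount:", resp["FailedEntryCount"])
    return {"statusCode": 200}
```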
1 answer · 0 votes · 5 views · asked 15 days ago

Property Topics cannot be empty - Cloudformation CREATE_FAILED on SNS TopicPolicy

I'm trying to build a Cloudformation template using examples I find online. I ran into a dependency issue between S3 and SNS resources which led me to this AWS article: > [How do I avoid the "Unable to validate the following destination configurations" error in AWS CloudFormation](https://aws.amazon.com/premiumsupport/knowledge-center/unable-validate-destination-s3/) Using that as an example I created a parameterized S3 bucket name and SNS TopicPolicy, however on creating the Cloudformation stack I see `CREATE_FAILED` for the TopicPolicy with status `Property Topics cannot be empty.` My only attempt at a resolution was to add `DependsOn` to the TopicPolicy, a property that did not appear in the linked article above. My best guess is that `{"Ref": "TransactionUploadTopic"}` in the `Topics` array is not resolving to the ARN for `TransactionUploadTopic`, despite it being successfully created in the CF stack (so I don't know why this would be the case.) My template is below, I'm learning from the [AMediaManager Tutorial](https://aws.amazon.com/blogs/devops/part-1-develop-deploy-and-manage-for-scale-with-elastic-beanstalk-and-cloudformation-series/) ([GitHub Repo](https://github.com/amazon-archives/amediamanager)) and other online resources (since my architecture is quite different from the AMM tutorial.) ``` { "AWSTemplateFormatVersion": "2010-09-09", "Description": "Provision resource dependencies for the app (e.g., RDS, S3, DynamoDB, etc..).", "Parameters": { "AppBucketNameSuffix": { "Description": "The S3 bucket for user uploads", "Type": "String" } }, "Resources": { "InstanceSecurityGroup": { "Type": "AWS::EC2::SecurityGroup", "Properties": { "GroupDescription": "RDS allows ingress from EC2 instances in this group.", "SecurityGroupIngress": [] } }, "TransactionUploadQueue": { "Type": "AWS::SQS::Queue" }, "TransactionUploadTopic": { "Type": "AWS::SNS::Topic", "Properties": { "Subscription": [{ "Endpoint": { "Fn::GetAtt": ["TransactionUploadQueue", "Arn"] }, "Protocol": "sqs" }] } }, "AppBucket2SNSPolicy": { "Type": "AWS::SNS::TopicPolicy", "DependsOn": ["TransactionUploadTopic"], "Properties": { "PolicyDocument": { "Id": "S3NotificationPolicy", "Version": "2012-10-17", "Statement": [ { "Sid": "Statement-id", "Effect": "Allow", "Principal": {"Service": "s3.amazonaws.com"}, "Action": "sns:Publish", "Resource": {"Ref": "TransactionUploadTopic"}, "Condition": { "ArnLike": { "aws:SourceArn": {"Fn::Join": [ "", [ "arn:aws:s3:::", {"Ref": "AWS::StackName"}, "-", {"Ref": "AppBucketNameSuffix"} ]]} } } } ], "Topics": [ {"Ref": "TransactionUploadTopic"} ] } } }, "AppBucket": { "Type": "AWS::S3::Bucket", "DependsOn": ["AppBucket2SNSPolicy"], "Properties": { "BucketName": {"Fn::Join": ["-", [{"Ref": "AWS::StackName"}, {"Ref": "AppBucketNameSuffix"}]]}, "NotificationConfiguration": { "TopicConfigurations": [ { "Event": "s3:ObjectCreated:*", "Topic": {"Ref": "TransactionUploadTopic"} } ] } } }, "TransactionUploadTopic2QueuePolicy": { "Type": "AWS::SQS::QueuePolicy", "Properties": { "Queues": [{ "Ref": "TransactionUploadQueue" }], "PolicyDocument": { "Version": "2012-10-17", "Id": "PublicationPolicy", "Statement": [{ "Sid": "Allow-SNS-SendMessage", "Effect": "Allow", "Principal": { "AWS": "*" }, "Action": ["sqs:SendMessage"], "Resource": { "Fn::GetAtt": ["TransactionUploadQueue", "Arn"] }, "Condition": { "ArnEquals": { "aws:SourceArn": { "Ref": "TransactionUploadTopic" } } } }] } } }, "TransactionUploadRole": { "Type": "AWS::IAM::Role", "Properties": { "AssumeRolePolicyDocument": { "Version": 
"2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": [ "ec2.amazonaws.com" ] }, "Action": [ "sts:AssumeRole" ] } ] }, "Path": "/", "Policies": [{ "PolicyName": "TransactionUploadPolicy", "PolicyDocument": { "Version": "2012-10-17", "Statement": [{ "Sid": "1", "Effect": "Allow", "Action": [ "s3:Get*", "s3:ListBucket", "s3:Put*", "s3:*MultipartUpload*" ], "Resource": [{ "Fn::Join": ["", ["arn:aws:s3:::", { "Ref": "AppBucket" }, "/*"]] }, { "Fn::Join": ["", ["arn:aws:s3:::", { "Ref": "AppBucket" }]] }] }, { "Sid": "2", "Effect": "Allow", "Action": "sns:Publish", "Resource": { "Ref": "TransactionUploadTopic" } }, { "Sid": "3", "Effect": "Deny", "Action": [ "sns:*Permission*", "sns:*Delete*", "sns:*Remove*", "s3:*Policy*", "s3:*Delete*" ], "Resource": "*" }] } }] } } }, "Outputs": { "InstanceSecurityGroup": { "Value": {"Ref": "InstanceSecurityGroup"} }, "AppBucket": { "Value": { "Ref" : "AppBucket"} }, "TransactionUploadTopic": { "Value": { "Ref" : "TransactionUploadTopic" } }, "TransactionUploadQueue": { "Value": { "Ref" : "TransactionUploadQueue" } }, "TransactionUploadRoleArn": { "Value": { "Fn::GetAtt": ["TransactionUploadRole", "Arn"]} } } } ```
1 answer · 0 votes · 6 views · asked a month ago

AWS SDK SQS get number of messages in a dead letter queue

Hello community, I somehow can't find the right information. I have the following simple task to solve: create a Lambda that checks whether a dead-letter queue has messages and, if it does, reads how many.

Before this I had an alarm set on an SQS metric. I chose the `ApproximateNumberOfMessagesVisible` metric, since `NumberOfMessagesSent` (which was my first choice) does not work for dead-letter queues. I have read this article: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-dead-letter-queues.html.

> The NumberOfMessagesSent and NumberOfMessagesReceived for a dead-letter queue don't match
>
> If you send a message to a dead-letter queue manually, it is captured by the NumberOfMessagesSent metric. However, if a message is sent to a dead-letter queue as a result of a failed processing attempt, it isn't captured by this metric. Thus, it is possible for the values of **NumberOfMessagesSent** and NumberOfMessagesReceived to be different.

That is nice to know, but I was missing the information: which metric should I use if **NumberOfMessagesSent** won't work? I was being pragmatic here, so I created an error and a message was sent to the DLQ as a result of a failed processing attempt. I then looked at the queue in the AWS console under the monitoring tab and checked which metric spiked. It was **ApproximateNumberOfMessagesVisible**, which sounded suitable, so I used it.

Now I wanted to get alerted more often, so I chose to build a Lambda function that checks how many messages are in the DLQ. I use JavaScript / TypeScript, so I found this: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_GetQueueAttributes.html. The code looked something like this:

```
const params = {
    QueueUrl: url,
    AttributeNames: ['ApproximateNumberOfMessagesVisible']
}
const resp = await SQS.getQueueAttributes(params).promise()
```

It was kind of a bummer that the attribute I wanted was not in there, or rather: it was not valid.

> Valid Values: All | Policy | VisibilityTimeout | MaximumMessageSize | MessageRetentionPeriod | ApproximateNumberOfMessages | ApproximateNumberOfMessagesNotVisible | CreatedTimestamp | LastModifiedTimestamp | QueueArn | ApproximateNumberOfMessagesDelayed | DelaySeconds | ReceiveMessageWaitTimeSeconds | RedrivePolicy | FifoQueue | ContentBasedDeduplication | ...

My next attempt was to use CloudWatch metrics, so I tried this: https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/cloudwatch-examples-getting-metrics.html

```
var params = {
    Dimensions: [
        {
            Name: 'LogGroupName', /* required */
        },
    ],
    MetricName: 'IncomingLogEvents',
    Namespace: 'AWS/Logs'
};
cw.listMetrics(params, function(err, data) {
    if (err) {
        console.log("Error", err);
    } else {
        console.log("Metrics", JSON.stringify(data.Metrics));
    }
});
```

but I could not get this working, since I did not know what to put in Dimensions / Name to make it work. Please note that I have not been working with AWS for very long (only 6 months); maybe I am on totally the wrong track. Summarized: I want my Lambda to get the number of messages in a DLQ. I hope someone can help me.

Cheers, Aleks
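For what it's worth, `ApproximateNumberOfMessagesVisible` is the CloudWatch metric name; the corresponding `GetQueueAttributes` attribute is `ApproximateNumberOfMessages` (with `ApproximateNumberOfMessagesNotVisible` and `ApproximateNumberOfMessagesDelayed` as companions). A small sketch of reading the count straight from the queue, shown in Python/boto3 with a placeholder queue name:

```
import boto3

sqs = boto3.client("sqs")

def dlq_message_count(queue_name: str) -> int:
    """Return the approximate number of visible messages in the queue."""
    queue_url = sqs.get_queue_url(QueueName=queue_name)["QueueUrl"]
    attrs = sqs.get_queue_attributes(
        QueueUrl=queue_url,
        AttributeNames=["ApproximateNumberOfMessages"],
    )
    return int(attrs["Attributes"]["ApproximateNumberOfMessages"])

print(dlq_message_count("my-dead-letter-queue"))  # placeholder queue name
```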
1 answer · 0 votes · 5 views · asked a month ago

synchronous queue implementation on AWS

I have a queue in which producers are adding data and consumers want to read and process it. In the diagram below, producers add data to a queue as (Px, Tx, X), for example (P3, T3, 10): here P3 is the producer ID, T3 means three packets are required to process, and 10 is the data. For (P3, T3, 10) a consumer needs to read 3 packets from producer P3, so in the image below one of the consumers needs to pick up (P3, T3, 10), (P3, T3, 15) and (P3, T3, 5) and perform a function on the data that just adds all the numbers, i.e. 10 + 15 + 5 = 30, and saves 30 to the DB. Similarly for producer P1: (P1, T2, 1) and (P1, T2, 10), sum = 1 + 10 = 11, saved to the DB.

I have read about AWS Kinesis, but it has an issue for me: all consumers read the same data, which doesn't fit my case. The major issue is how we can constrain consumers so that:

1. The data queue is read synchronously.
2. If one of the consumers has read (P1, T2, 1), then only this consumer can read the next packet from producer P1 (this point is the major issue for me, as the consumer needs to add those two numbers).
3. This can also cause deadlock, as some consumers will be forced to read data from a particular producer only, because they have already read one packet from that producer and now have to wait for the next packet to perform the addition.

I have also read about SQS and MQ, but the above challenges still exist for them too.

![Image](https://i.stack.imgur.com/7b3Mm.png) [https://i.stack.imgur.com/7b3Mm.png](https://i.stack.imgur.com/7b3Mm.png)

My current approach: for N producers I have started N EC2 instances; producers send data to EC2 through WebSocket (WebSocket is not a requirement) and I can process it there easily. As you can see, having N EC2 instances to process N producers will cause budget issues. How can I improve on this solution?
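One pattern that may fit the per-producer ordering requirement is an SQS FIFO queue with the producer ID as the `MessageGroupId`: messages within a group are delivered in order, and while a group's messages are in flight they are not handed to another consumer, so whichever consumer picked up P3's first packet keeps receiving P3's packets. A rough sketch in Python/boto3 (queue name and payloads are illustrative):

```
import json
import uuid
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.get_queue_url(QueueName="packets.fifo")["QueueUrl"]  # placeholder queue name

def send_packet(producer_id: str, packets_required: int, data: int) -> None:
    # All packets from one producer share a MessageGroupId, so they are
    # delivered in order and effectively to one consumer at a time.
    sqs.send_message(
        QueueUrl=queue_url,
        MessageBody=json.dumps({"producer": producer_id,
                                "packets": packets_required,
                                "data": data}),
        MessageGroupId=producer_id,
        MessageDeduplicationId=str(uuid.uuid4()),
    )

send_packet("P3", 3, 10)
send_packet("P3", 3, 15)
send_packet("P3", 3, 5)
```

The consumer side would then accumulate packets per group until the expected count has arrived and write the sum to the database.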
1 answer · 0 votes · 12 views · asked 2 months ago

AWS::SQS::Queue - template fail to update queue RedriveAllowPolicy

I'm using a CloudFormation YAML template to create two queues, one of which is a DLQ. On the DLQ I'm attempting to add a RedriveAllowPolicy with either of the JSON lines below; however, it fails with the message quoted after the examples. I'm also embedding the full template (YAML) for reproduction purposes. **FYI,** we're using region us-west-2 (Oregon).

**RedriveAllowPolicy JSON examples:**

> RedriveAllowPolicy: !Sub '{"redrivePermission":"byQueue","sourceQueueArns":"arn:aws:sqs:${AWS::Region}:${AWS::AccountId}:${EnvironmentType}-${MlResultsQueueName}"}'

or

> RedriveAllowPolicy: !Join ['',['{"redrivePermission":"byQueue","sourceQueueArns":"', !GetAtt NevaDscvrMlResultsQueue.Arn, '"}']]

**CloudFormation stack failure message on queue update:**

> Resource handler returned message: "Invalid value for the parameter RedriveAllowPolicy. Reason: Amazon SQS can't create the redrive allow policy, as it's in an unsupported format. (Service: Sqs, Status Code: 400, Request ID: edb2977c-9656-5c9b-a7bf-cdde4dd26492, Extended Request ID: null)" (RequestToken: 76823ad0-286b-ff35-ba3e-de300bac4c88, HandlerErrorCode: GeneralServiceException)

**Full template:**

```
AWSTemplateFormatVersion: '2010-09-09'
Parameters:
  EnvironmentType:
    Type: String
    Default: dev
    Description: 'The name of the release environment.'
  MlResultsQueueName:
    Description: MlResultsQueue name
    Type: String
    Default: 'nevadscvr-ml-results'
  MlResultsQueueDLQName:
    Description: MlResultsQueueDLQ name
    Type: String
    Default: 'nevadscvr-ml-results-DLQ'
Resources:
  NevaDscvrMlResultsQueue:
    Type: AWS::SQS::Queue
    Properties:
      QueueName: !Sub '${EnvironmentType}-${MlResultsQueueName}'
      VisibilityTimeout: 60
      MessageRetentionPeriod: 345600
      DelaySeconds: 0
      MaximumMessageSize: 262144
      ReceiveMessageWaitTimeSeconds: 20
      RedrivePolicy:
        deadLetterTargetArn: !Sub 'arn:aws:sqs:${AWS::Region}:${AWS::AccountId}:${EnvironmentType}-${MlResultsQueueDLQName}'
        maxReceiveCount: 3
  NevaDscvrMlResultsQueueDLQ:
    Type: AWS::SQS::Queue
    Properties:
      QueueName: !Sub '${EnvironmentType}-${MlResultsQueueDLQName}'
      VisibilityTimeout: 30
      MessageRetentionPeriod: 345600
      DelaySeconds: 0
      MaximumMessageSize: 262144
      ReceiveMessageWaitTimeSeconds: 0
      RedriveAllowPolicy: !Sub '{"redrivePermission":"byQueue","sourceQueueArns":"arn:aws:sqs:${AWS::Region}:${AWS::AccountId}:${EnvironmentType}-${MlResultsQueueName}"}'
```
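One detail that may be relevant: in the documented RedriveAllowPolicy format, `sourceQueueArns` is a JSON array of ARN strings (up to 10), not a single string. A small Python sketch of building the policy in that shape; the ARN is a placeholder following the naming in the template:

```
import json

# sourceQueueArns is a JSON array (up to 10 ARNs), not a plain string.
redrive_allow_policy = json.dumps({
    "redrivePermission": "byQueue",
    "sourceQueueArns": [
        "arn:aws:sqs:us-west-2:123456789012:dev-nevadscvr-ml-results"  # placeholder ARN
    ],
})

print(redrive_allow_policy)
# In the template, this string would be the value of the DLQ's RedriveAllowPolicy property.
```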
1 answer · 0 votes · 22 views · asked 2 months ago

Why am I getting SQS messages with different MessageGroupId's in a single boto3 receive_messages call?

I understand from the docs that if I am setting `MessageGroupId` when sending messages to an SQS FIFO queue, any and all messages returned in a single `queue.receive_messages()` boto3 call should have the same `MessageGroupId`. But I am getting messages with different `MessageGroupId` values in a single `queue.receive_messages()` call.

Here is how I am calling `send_messages()` using the Python `boto3` library:

```
client = boto3.resource(
    "sqs",
    region_name=utils.AWS_SQS_REGION,
    aws_access_key_id=utils.AWS_SQS_ACCESS_KEY,
    aws_secret_access_key=utils.AWS_SQS_SECRET_KEY,
)
queue = client.get_queue_by_name(QueueName=aws_queue_name)

for chunk in chunked(queue_bulk_kwargs, 10):  # Amazon SQS batch size limit is 10
    Entries = [
        {
            "Id": str(uuid.uuid4()),
            "MessageBody": json.dumps(args, cls=EnhancedJSONEncoder),
            "MessageGroupId": str(args["calsync_account_id"]),
            "MessageDeduplicationId": str(uuid.uuid4()).replace("-", ""),
        }
        for args in chunk
    ]
    response = queue.send_messages(Entries=Entries)
```

Here is how I am calling `receive_messages()`:

```
client = boto3.resource(
    "sqs",
    region_name=utils.AWS_SQS_REGION,
    aws_access_key_id=utils.AWS_SQS_ACCESS_KEY,
    aws_secret_access_key=utils.AWS_SQS_SECRET_KEY,
)
self._queue = client.Queue(QueueURL)

messages = self._queue.receive_messages(
    MaxNumberOfMessages=self.AWS_SQS_QUEUE_CHUNK_SIZE,
    AttributeNames=["MessageGroupId"],
)
```

Have I misunderstood how `MessageGroupId` works? Or is there another way to send and/or receive messages so that I only receive messages from one `MessageGroupId` at a time? Otherwise I have to manage a local ordered message cache, which amounts to implementing a local queue and defeats the purpose of using SQS in the first place. Thanks!
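As far as I can tell from the FIFO documentation, a single `ReceiveMessage` call can return messages from more than one message group; the guarantees are ordering within each group and that, while a group's messages are in flight, further messages from that group aren't delivered to other consumers. If per-group handling is needed on the consumer side, one option is to bucket the returned batch by `MessageGroupId`. A rough sketch (queue URL is a placeholder, boto3 resource API as in the question):

```
from collections import defaultdict

import boto3

sqs = boto3.resource("sqs")
queue = sqs.Queue("https://sqs.us-east-1.amazonaws.com/123456789012/example.fifo")  # placeholder URL

messages = queue.receive_messages(
    MaxNumberOfMessages=10,
    AttributeNames=["MessageGroupId"],
)

# Group the received batch by MessageGroupId and process each group in order.
by_group = defaultdict(list)
for msg in messages:
    by_group[msg.attributes["MessageGroupId"]].append(msg)

for group_id, group_msgs in by_group.items():
    for msg in group_msgs:          # already in FIFO order within the group
        print(group_id, msg.body)
        msg.delete()
```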
1 answer · 0 votes · 5 views · asked 3 months ago

CloudFormation Nonsense: Seven (7) SNS Subscriptions created for SQS Queues by a Template

Greetings! I am seeing some nonsensical CloudFormation behaviour. The problem is in the section where I create an SNS topic and two SQS queues in a typical fan-out pattern:

```
Resources:
  ForwardMessageQueue:
    Type: AWS::SQS::Queue
    Properties:
      QueueName: !Sub 'ForwardMessageQueue${EnvironmentPrm}'
      Tags:
        - Key: 'Environment'
          Value: !Ref EnvironmentPrm
  AuditMessageQueue:
    Type: AWS::SQS::Queue
    Properties:
      QueueName: !Sub 'AuditMessageQueue${EnvironmentPrm}'
      Tags:
        - Key: 'Environment'
          Value: !Ref EnvironmentPrm
  CallbackNotificationTopic:
    Type: AWS::SNS::Topic
    Properties:
      TopicName: !Sub 'CallbackNotificationTopic${EnvironmentPrm}'
      Subscription:
        - Endpoint: !GetAtt ForwardMessageQueue.Arn
          Protocol: "sqs"
        - Endpoint: !GetAtt AuditMessageQueue.Arn
          Protocol: "sqs"
      Tags:
        - Key: 'Environment'
          Value: !Ref EnvironmentPrm
```

The template executes successfully, and once deployed the solution does what it has to do. The problem is that when I check the configuration of the queues through the AWS web console, I find seven SNS subscriptions like:

| Subscription ARN | Topic ARN |
| --- | --- |
| arn:aws:sns:us-east-1:012345678901:CallbackNotificationTopicDev:cef23e99-12c6-4120-9219-bb4b7ac05ef4 | arn:aws:sns:us-east-1:012345678901:CallbackNotificationTopicDev |
| arn:aws:sns:us-east-1:012345678901:CallbackNotificationTopicDev:afdfc906-5b0d-41f0-9185-572b51fecaf5 | arn:aws:sns:us-east-1:012345678901:CallbackNotificationTopicDev |
| arn:aws:sns:us-east-1:012345678901:CallbackNotificationTopicDev:ab1ebde3-91b9-4567-a2b4-9613afefbbd7 | arn:aws:sns:us-east-1:012345678901:CallbackNotificationTopicDev |
| arn:aws:sns:us-east-1:012345678901:CallbackNotificationTopicDev:930edf3c-ccb4-4184-b263-32ee64136387 | arn:aws:sns:us-east-1:012345678901:CallbackNotificationTopicDev |

...and another 3 rows, seven rows in total. Any idea of the cause of this misbehaviour? Thanks in advance!
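A sketch of one way to audit what is actually subscribed, listing the topic's subscriptions and flagging any duplicates that point at the same queue endpoint; Python/boto3, with the topic ARN as a placeholder taken from the table above:

```
import boto3

sns = boto3.client("sns")
topic_arn = "arn:aws:sns:us-east-1:012345678901:CallbackNotificationTopicDev"  # placeholder ARN

seen_endpoints = set()
paginator = sns.get_paginator("list_subscriptions_by_topic")
for page in paginator.paginate(TopicArn=topic_arn):
    for sub in page["Subscriptions"]:
        print(sub["SubscriptionArn"], sub["Protocol"], sub["Endpoint"])
        if sub["Endpoint"] in seen_endpoints:
            # A second subscription pointing at the same queue endpoint; once
            # confirmed it could be removed, e.g.:
            # sns.unsubscribe(SubscriptionArn=sub["SubscriptionArn"])
            print("  ^ duplicate endpoint")
        seen_endpoints.add(sub["Endpoint"])
```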
1 answer · 0 votes · 7 views · asked 3 months ago

How to link AWS SQS queue to push for API integration and IoT.

Hi everyone, I am trying to link a switchboard that pushes its gathered data to AWS SQS, to an API platform from where users will be able to monitor the data in real time (e.g. a digital twin). Previously, I was able to link these devices via MQTT with the RabbitMQ MQTT broker installed. Now I am trying to do the same with the switchboard linked as above and edit the script accordingly for this case study. I appreciate your advice and help! Thanks in advance! The previous script was as follows:

1. Create the container:

   ```
   sudo docker run --hostname navvis-rabbitmq --name navvis-rabbitmq -p 5672:5672 -p 15672:15672 -p 1883:1883 -p 15675:15675 -td rabbitmq:3
   ```

2. Enter the container terminal:

   ```
   sudo docker exec -it navvis-rabbitmq bash
   ```

3. Enable the required plugins:

   ```
   rabbitmq-plugins enable rabbitmq_management
   rabbitmq-plugins enable rabbitmq_web_mqtt
   ```

4. Exit the container terminal:

   ```
   exit
   ```

5. Create a new file named rabbitmq.conf:

   ```
   loopback_users.guest = false
   listeners.tcp.default = 5672
   management.tcp.port = 15672
   mqtt.listeners.tcp.default = 1883
   ## Default MQTT with TLS port is 8883
   # mqtt.listeners.ssl.default = 8883
   # anonymous connections, if allowed, will use the default
   # credentials specified here
   mqtt.allow_anonymous = true
   mqtt.default_user = guest
   mqtt.default_pass = guest
   mqtt.vhost = /
   mqtt.exchange = amq.topic
   # 24 hours by default
   mqtt.subscription_ttl = 86400000
   mqtt.prefetch = 10
   ```

6. Replace the config file in the container with the new config file:

   ```
   sudo docker cp rabbitmq.conf navvis-rabbitmq:/etc/rabbitmq/
   ```
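If the goal is to push whatever lands in the SQS queue on to an HTTP API in near real time, one simple bridge is a small worker that long-polls the queue and forwards each message body to the platform's endpoint. A rough sketch, assuming Python with boto3 and the `requests` library, with placeholder queue and endpoint URLs:

```
import boto3
import requests

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.eu-west-1.amazonaws.com/123456789012/switchboard-data"  # placeholder queue
API_ENDPOINT = "https://example.com/api/telemetry"                               # placeholder endpoint

while True:
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL,
        MaxNumberOfMessages=10,
        WaitTimeSeconds=20,        # long polling
    )
    for msg in resp.get("Messages", []):
        # Forward the raw payload to the monitoring platform; only delete the
        # message once the platform has accepted it.
        r = requests.post(API_ENDPOINT, data=msg["Body"], timeout=10)
        if r.ok:
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```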
0 answers · 0 votes · 7 views · asked 4 months ago

InvalidClientTokenId sending message to SQS that works for SES

I'm having trouble sending a message to a new SQS queue and receive this error:

```
[Type] => Sender
[Code] => InvalidClientTokenId
[Message] => The AWS Access Key Id you provided does not exist in our records.
```

Any suggestions on why SQS is not recognizing the Key ID for SQS SendMessage, but does accept it for SES calls?

- The Key ID is the identical key used for successfully sending SES mail
- Same Elastic Beanstalk instance, application, AWS SDK
- PHP 7.4 64-bit
- Elastic Beanstalk instance on Amazon Linux 2/3.3.9
- AWS SDK 1.5.14 (tried 3.x, same results)

PHP code:

```
require_once('<path>/aws-sdk/sdk.class.php');
require_once('<path>/aws-sdk/services/sqs.class.php');

$options = array('key' => 'AKIAblahblahblah', 'secret' => 'blahblahblahblahblahblahblahblahblahblahblahblah');
$sqs = new AmazonSQS($options);
$sqs->set_region('sqs.us-east-2.amazonaws.com');
$sqs_queue = 'https://sqs.us-east-2.amazonaws.com/111112345678/my-app-sa';
$message = 'test';
$r = $sqs->send_message($sqs_queue, $message);
```

Elastic Beanstalk:

- IAM instance profile: aws-elasticbeanstalk-ec2-role
- Service role: arn:aws:iam::111112345678:role/aws-elasticbeanstalk-service-role

IAM User:

- Name=my-app-sa
- User ARN=arn:aws:iam::111112345678:user/my-app-sa
- Permissions: Policy=AmazonSQSFullAccess
- Created AccessKey: keyID=AKIAblahblahblah

SQS Queue: Name=my-sqs-queue

Access Policy:

```
{
  "Version": "2008-10-17",
  "Id": "__default_policy_ID",
  "Statement": [
    {
      "Sid": "__owner_statement",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111112345678:root" },
      "Action": "SQS:*",
      "Resource": "arn:aws:sqs:us-east-2:111112345678:my-sqs-queue"
    },
    {
      "Sid": "__sender_statement",
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::111112345678:role/aws-elasticbeanstalk-ec2-role",
          "arn:aws:iam::111112345678:user/my-app-sa",
          "arn:aws:iam::111112345678:role/aws-elasticbeanstalk-service-role"
        ]
      },
      "Action": "SQS:SendMessage",
      "Resource": "arn:aws:sqs:us-east-2:111112345678:my-sqs-queue"
    },
    {
      "Sid": "__receiver_statement",
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::111112345678:role/aws-elasticbeanstalk-ec2-role",
          "arn:aws:iam::111112345678:user/my-app-sa",
          "arn:aws:iam::111112345678:role/aws-elasticbeanstalk-service-role"
        ]
      },
      "Action": [
        "SQS:ChangeMessageVisibility",
        "SQS:DeleteMessage",
        "SQS:ReceiveMessage"
      ],
      "Resource": "arn:aws:sqs:us-east-2:111112345678:my-sqs-queue"
    }
  ]
}
```
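One quick diagnostic that may help is confirming which identity those keys actually resolve to (and that they are active) before looking further at the queue policy, for example with an STS call. Sketched here in Python/boto3 since the key pair is portable across SDKs; the key values are obviously placeholders:

```
import boto3

# Placeholders: use the same access key / secret the PHP application is configured with.
sts = boto3.client(
    "sts",
    aws_access_key_id="AKIA................",
    aws_secret_access_key="........................................",
    region_name="us-east-2",
)

# If this call also fails with InvalidClientTokenId, the key itself is the problem
# (wrong or rotated key, or the SDK is signing with different credentials than expected).
print(sts.get_caller_identity())   # returns Account, Arn, UserId
```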
2 answers · 0 votes · 12 views · asked 4 months ago

CloudFormation breaks on AWS::SQS::Queue with RedriveAllowPolicy property

We are specifying a RedriveAllowPolicy on our AWS::SQS::Queue in CloudFormation and are, again, receiving errors in CloudFormation without making any changes to our templates. This happened a few weeks ago too, so it is the second breaking change for this property we're seeing, which is unfortunate. The old thread was: https://forums.aws.amazon.com/thread.jspa?messageID=1000934&tstart=0

In accordance with that thread, we changed our template definition to:

```
  TestQueue:
    Type: AWS::SQS::Queue
    Properties:
      VisibilityTimeout: 450
      RedriveAllowPolicy: '{"redrivePermission":"denyAll"}'
      RedrivePolicy:
        deadLetterTargetArn: !GetAtt TestDeadLetterQueue.Arn
        maxReceiveCount: 5
  TestDeadLetterQueue:
    Type: AWS::SQS::Queue
    Properties:
      MessageRetentionPeriod: 1209600
```

This worked for a few weeks, but now CloudFormation is throwing the following error for this exact template:

> 2021-12-14 10:33:14 UTC+0100 TestQueue CREATE_FAILED
>
> Properties validation failed for resource TestQueue with message: #: extraneous key [RedriveAllowPolicy] is not permitted

Removing `RedriveAllowPolicy: '{"redrivePermission":"denyAll"}'` from the template solves the issue, but we obviously want to set this policy. I hope we're following the documentation at https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sqs-queues.html#aws-sqs-queue-redriveallowpolicy precisely. Any help appreciated; this is quite a big blocker in our process right now.

Full template file to reproduce the error:

```
AWSTemplateFormatVersion: '2010-09-09'
Description: A prototype stack to test out CloudFormation definitions.
Metadata: {}
Transform: AWS::Serverless-2016-10-31
Resources:
  TestQueue:
    Type: AWS::SQS::Queue
    Properties:
      VisibilityTimeout: 450
      RedriveAllowPolicy: '{"redrivePermission":"denyAll"}'
      RedrivePolicy:
        deadLetterTargetArn: !GetAtt TestDeadLetterQueue.Arn
        maxReceiveCount: 5
  TestDeadLetterQueue:
    Type: AWS::SQS::Queue
    Properties:
      MessageRetentionPeriod: 1209600
```
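As a stopgap while CloudFormation rejects the property, the same policy can be applied out of band through the SQS API once the queue exists; a sketch in Python/boto3 with the queue name as a placeholder:

```
import json
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.get_queue_url(QueueName="TestQueue")["QueueUrl"]  # placeholder queue name

# Apply the same redrive-allow policy the template tries to set.
sqs.set_queue_attributes(
    QueueUrl=queue_url,
    Attributes={"RedriveAllowPolicy": json.dumps({"redrivePermission": "denyAll"})},
)

print(sqs.get_queue_attributes(
    QueueUrl=queue_url,
    AttributeNames=["RedriveAllowPolicy"],
)["Attributes"])
```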
1 answer · 0 votes · 38 views · asked 5 months ago

AWS SQS ActiveJob job_data argument is not being received by the ActiveJob

I have a Rails app and I am struggling to integrate Aws::Rails::SqsActiveJob so that I can pull events from an AWS SQS queue. I keep getting the error 'wrong number of arguments (given 0, expected 1)' when ActiveJob.perform_later is called. Is this a serialization issue? Is the message not being serialized for ActiveJob.perform_later? Do I need a custom serializer?

Here are some of the things I have tried:

- various function argument signatures, such as a keyword `job_data:`
- inheriting from ActiveJob::Base and ApplicationJob
- using :amazon_sqs, :amazon_sqs_async and :shoryuken for config.active_job.queue_adapter
- using both FIFO and standard SQS queues

Using these gems:

```
gem 'aws-sdk-rails'
ruby '2.6.6'
gem 'rails', '~> 5.2.3'
```

Here is the ActiveJob class:

```
class CartsUpdateJob < ApplicationJob
  queue_as :default

  def perform(job_data)
    Rails.logger.info "data: " + job_data.inspect
  end
end
```

I know that the message payload is arriving from the SQS queue successfully, because the event's JSON hash value of ['job_class'] is being received by the JobRunner, since JobRunner knows which ActiveJob class to use to invoke perform():

```
# File 'lib/aws/rails/sqs_active_job/job_runner.rb', line 10
def initialize(message)
  @job_data = Aws::Json.load(message.data.body)
  @class_name = @job_data['job_class'].constantize
  @id = @job_data['job_id']
end
```

and it looks like that same payload @job_data JSON hash is then being sent as the argument to the perform function:

```
# File 'lib/aws/rails/sqs_active_job/job_runner.rb', line 16
def run
  ActiveJob::Base.execute @job_data
end
```

However I keep getting the error 'wrong number of arguments (given 0, expected 1)':

```
[Aws::SQS::Client 200 7.777424 0 retries] receive_message(wait_time_seconds:20,max_number_of_messages:1,visibility_timeout:120,attribute_names:["All"],message_attribute_names:["All"],queue_url:"https://sqs.us-west-2.amazonaws.com/1111111111111/theapp-dev-queue.fifo")
[ActiveJob] [CheckoutsUpdateJob] Performing CheckoutsUpdateJob (Job ID: ) from AmazonSqsAsync()
[ActiveJob] [CheckoutsUpdateJob] Error performing CheckoutsUpdateJob (Job ID: ) from AmazonSqsAsync() in 33.78ms: ArgumentError (wrong number of arguments (given 0, expected 1)):
C:/Users/KG/theapp/app/jobs/checkouts_update_job.rb:2:in `perform'
C:/Ruby26-x64/lib/ruby/gems/2.6.0/gems/activejob-5.2.5/lib/active_job/execution.rb:39:in `block in perform_now'
...
C:/Ruby26-x64/lib/ruby/gems/2.6.0/gems/activejob-5.2.5/lib/active_job/execution.rb:22:in `execute'
C:/Ruby26-x64/lib/ruby/gems/2.6.0/gems/aws-sdk-rails-3.6.0/lib/aws/rails/sqs_active_job/job_runner.rb:17:in `run'
C:/Ruby26-x64/lib/ruby/gems/2.6.0/gems/aws-sdk-rails-3.6.0/lib/aws/rails/sqs_active_job/executor.rb:30:in `block in execute'
C:/Ruby26-x64/lib/ruby/gems/2.6.0/gems/concurrent-ruby-1.1.8/lib/concurrent-ruby/concurrent/executor/ruby_thread_pool_executor.rb:363:in `run_task'
```

The test message is 20.56 KB and here are some pertinent parts of the JSON, taken from the SQS 'send and receive messages' utility:

```
{
  "version": "0",
  ...
  "region": "us-west-2",
  "resources": [],
  "detail": {
    "metadata": { ... }
  },
  "webhook": {
    "line_items": [ ... ],
    "note": null,
    "updated_at": "2021-04-16T00:03:52.020Z",
    "created_at": "2021-04-10T00:05:43.173Z"
  },
  "job_class": "CartsUpdateJob"
}
```

I am able to simulate a successful ActiveJob handoff and processing, with my business logic added back into the CartsUpdateJob perform_now() function, using the same test event via the console:

```
client = Aws::SQS::Client.new
queue_url = client.get_queue_url(queue_name: "theapp-dev-aws-queue")
resp = client.receive_message(queue_url: queue_url.queue_url)
job_data = JSON.parse(resp.messages[0].body)
CartsUpdateJob.perform_now(job_data)
```

but when I run that same code in the console with .perform_later:

```
Enqueued CartsUpdateJob (Job ID: e84635b8-75bd-462b-bd83-4606fb9cfa54) to AmazonSqs(default) with arguments: ...
```

I can see the same argument error in the worker process output:

```
8:18:51 PM aws.1 | Running job: e84635b8-75bd-462b-bd83-4606fb9cfa54[CartsUpdateJob]
8:18:51 PM aws.1 | Running job: [CartsUpdateJob]
8:18:51 PM aws.1 | Error processing job [CartsUpdateJob]: wrong number of arguments (given 0, expected 1)
8:18:51 PM aws.1 | C:/Users/KG/theapp/app/jobs/carts_update_job.rb:5:in `perform'
```

I would greatly appreciate some help. Thank you.
1 answer · 0 votes · 0 views · asked a year ago

ChangeMessageVisibility - existing receipt handle reported as expired

In my JMS listener connected to an SQS queue I call setVisibilityTimeout to extend the timeout and allow for a long-running to finish as suggested in https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-visibility-timeout.html#configuring-visibility-timeout. Unfortunately, most of the time I get an error response (Reason: The receipt handle has expired). visibilityTimeout for the queue is set to 300s and extending the visibility is the first thing that is done after receiving a message so there is no way that the handle is already expired. What is more interesting, after receiving such an error response, JMS calls setVisibilityTimeout again on the same receipt handle to change the timeout to 0 and make the message available for other workers to process. This second call is successful. It looks like the seemingly expired message handle is working again after just few milliseconds. I turned on debug logs for com.amazonaws.request and it can be clearly seen below: DEBUG 2020-08-12T17:40:19,981 \\[ConsumerPrefetchThread-1\] \\[\] \\[com.amazonaws.request\] "Sending Request: POST https://sqs.eu-central-1.amazonaws.com /************/queue.fifo Parameters: ({"Action":\\["ReceiveMessage"\],"Version":\\["2012-11-05"\],"AttributeName.1":\\["All"\],"MessageAttributeName.1":\\["All"\],"MessageAttributeName.2":\\["SQSLargePayloadSize"\],"MaxNumberOfMessages":\\["1"\],"WaitTimeSeconds":\\["20"\],"ReceiveRequestAttemptId":\\["0bbb2c42-94d5-4a0b-8aca-4f2dce9058f2"\]}Headers: (User-Agent: aws-sdk-java/1.11.228 Linux/5.4.0-42-generic OpenJDK_64-Bit_Server_VM/11.0.8+10-LTS java/11.0.8 /SQS Java Messaging Client v1.0 AmazonSQSExtendedClient/1.11.228, amz-sdk-invocation-id: 7593448a-2867-a2e7-907b-d893015d58cf, ) " DEBUG 2020-08-12T17:40:23,708 \\[asyncExecutor-1\] \\[e76d55a4-0664-43c1-9c0c-b1458df62b4f\] \\[com.amazonaws.request\] "Sending Request: POST https://sqs.eu-central-1.amazonaws.com /************/queue.fifo Parameters: ({"Action":\\["ChangeMessageVisibility"\],"Version":\\["2012-11-05"\],"ReceiptHandle":\\["AQEBGqvUSuWJKtF4l1k9G2cC46jhcQQsLZJlQNCxIU2m8c_Iqo5fii/0u8W4tnk58golmCN0OynwsOFCEWw3pjs_fHL8/81X81360H/2qQB8/SQqoe6qCfXbxD8/9ATRP32Go3DHpMy93nb9RQ2_1Wqe9nLoxV2iIZxLChlhLcIYs6Le2z6gSXcfQykoj9N17sAY0lQ6d7qzZ3S3mmsc5QFlSlxY4_KrQp93guMbIS51gbyNrj_F_N2e9nM4lf5LWrelE8BhL6WDwIBtSGNk_DN4eLI8shTX1L1T/1DDpErysBblDFtJQ8iQxWC5ayufedCD"\],"VisibilityTimeout":\\["600"\]}Headers: (User-Agent: aws-sdk-java/1.11.228 Linux/5.4.0-42-generic OpenJDK_64-Bit_Server_VM/11.0.8_10-LTS java/11.0.8, amz-sdk-invocation-id: ca2d4cbb-f6b4-e896-d7b2-5d41d01a5eef, ) " EBUG 2020-08-12T17:40:23,764 \\[asyncExecutor-1\] \\[e76d55a4-0664-43c1-9c0c-b1458df62b4f\] \\[com.amazonaws.request\] "Received error response: com.amazonaws.services.sqs.model.AmazonSQSException: Value AQEBGqvUSuWJKtF4l1k9G2cC46jhcQQsLZJlQNCxIU2m8c_Iqo5fii/0u8W4tnk58golmCN0OynwsOFCEWw3pjs_fHL8/81X81360H/2qQB8/SQqoe6qCfXbxD8/9ATRP32Go3DHpMy93nb9RQ2_1Wqe9nLoxV2iIZxLChlhLcIYs6Le2z6gSXcfQykoj9N17sAY0lQ6d7qzZ3S3mmsc5QFlSlxY4_KrQp93guMbIS51gbyNrj_F_N2e9nM4lf5LWrelE8BhL6WDwIBtSGNk+DN4eLI8shTX1L1T/1DDpErysBblDFtJQ8iQxWC5ayufedCD for parameter ReceiptHandle is invalid. Reason: The receipt handle has expired. 
(Service: AmazonSQS; Status Code: 400; Error Code: InvalidParameterValue; Request ID: 560fb9f9-f79c-5f44-a21f-ee57dc5b5832)" DEBUG 2020-08-12T17:40:23,766 \\[asyncExecutor-1\] \\[\] \\[com.amazonaws.request\] "Sending Request: POST https://sqs.eu-central-1.amazonaws.com /************/queue.fifo Parameters: ({"Action":\\["ChangeMessageVisibilityBatch"\],"Version":\\["2012-11-05"\],"ChangeMessageVisibilityBatchRequestEntry.1.Id":\\["0"\],"ChangeMessageVisibilityBatchRequestEntry.1.ReceiptHandle":\\["AQEBGqvUSuWJKtF4l1k9G2cC46jhcQQsLZJlQNCxIU2m8c_Iqo5fii/0u8W4tnk58golmCN0OynwsOFCEWw3pjs_fHL8/81X81360H/2qQB8/SQqoe6qCfXbxD8/9ATRP32Go3DHpMy93nb9RQ2_1Wqe9nLoxV2iIZxLChlhLcIYs6Le2z6gSXcfQykoj9N17sAY0lQ6d7qzZ3S3mmsc5QFlSlxY4_KrQp93guMbIS51gbyNrj_F_N2e9nM4lf5LWrelE8BhL6WDwIBtSGNk_DN4eLI8shTX1L1T/1DDpErysBblDFtJQ8iQxWC5ayufedCD"\],"ChangeMessageVisibilityBatchRequestEntry.1.VisibilityTimeout":\\["0"\]}Headers: (User-Agent: aws-sdk-java/1.11.228 Linux/5.4.0-42-generic OpenJDK_64-Bit_Server_VM/11.0.8_10-LTS java/11.0.8 /SQS Java Messaging Client v1.0, amz-sdk-invocation-id: 9ddb4853-99e1-bda6-218e-e6646ec23a61, ) " DEBUG 2020-08-12T17:40:23,788 \\[StartThread\] \\[e76d55a4-0664-43c1-9c0c-b1458df62b4f\] \\[com.amazonaws.request\] "Received successful response: 200, AWS Request ID: 532e3d49-5764-5176-b2fa-6ae77ca370e5" Did anyone encounter similar problems? For me it seems like the SQS API is quite unreliable even with a really small load (less than one message per second).
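For reference, the bare API-level sequence the listener performs, receiving a message and immediately extending its visibility on the fresh receipt handle, sketched in Python/boto3 against a placeholder queue URL. Reproducing the behaviour with a raw client like this, outside the JMS and extended-client prefetch layers, can help narrow down whether the queue itself or the client stack is producing the expired-handle error:

```
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.eu-central-1.amazonaws.com/123456789012/queue.fifo"  # placeholder URL

resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=20)
for msg in resp.get("Messages", []):
    # Extend the visibility timeout right away, using the handle from this receive.
    sqs.change_message_visibility(
        QueueUrl=QUEUE_URL,
        ReceiptHandle=msg["ReceiptHandle"],
        VisibilityTimeout=600,
    )
```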
2 answers · 0 votes · 16 views · asked 2 years ago

AWS setting how many long-polling concurrent requests Node.js can handle?

I'm setting up a new application for long-polling messages with interval of 10 sec from AWS sqs. I've tried to test it. And after 50 users that waiting their requests latency start growing and reach 15 seconds and reached 30 second with 150 users. Is it something wrong with my code or aws/node have some type of setting for it? ``` const port = process.env.PORT || 3001; const express = require('express'); const app = express(); const AWS = require('aws-sdk'); AWS.config.update({region: 'eu-west-1'}); const MD5 = function(d){<md5function>} const sleep = (waitTimeInMs) => new Promise(resolve => setTimeout(resolve, waitTimeInMs)); const SQS = new AWS.SQS({ region: 'eu-west-1' }); const LONG_POLL_TIMEOUT = 10; async function checkQueue(req, res) { const {version, token} = req.params; const auth = req.query.auth; if (!isTokenValid(token, auth)) { await sleep(LONG_POLL_TIMEOUT * 1000); res.send() } else { getUpdateMessage(version, token, res); } } function getUpdateMessage(version, token, res) { const urlParams = { QueueName: `_version-queue-${version}-${token}` }; SQS.getQueueUrl(urlParams, (urlErr, urlData) => { if (urlErr) { res.status(204).send(); } else { const messageParams = { QueueUrl: urlData.QueueUrl, WaitTimeSeconds: LONG_POLL_TIMEOUT, }; SQS.receiveMessage(messageParams, (err, data) => { if (err) { res.status(204).send(); } else { if (data.Messages) { res.send(data.Messages[0].Body); SQS.deleteMessage({ QueueUrl: urlData.QueueUrl, ReceiptHandle: data.Messages[0].ReceiptHandle }, (err1, data) => { if (err1) { } }); } else { res.send(); } } }); } }); } function isTokenValid(token, auth) { // check against tokens for last 14 days let dayNumber = Math.ceil(Date.now() / (24 * 3600 * 1000)); for (let i = 0; i < 14; i++) { const stringToHash = `<string>`; if (MD5(stringToHash) == auth) { return true; } dayNumber--; } return false; } app.use(function(req, res, next) { res.header("Access-Control-Allow-Origin", "*"); next(); }); app.get('/versions/:version/long_poll_updates/:token', function (req, res) { checkQueue(req, res); }); app.get('/check', function (req, res) { res.send('I\'m ok!'); }); app.use((req, res) => { res.status(404).send("Sorry, that route doesn't exist. Have a nice day :)"); }); app.listen(port, () => { console.log('Server running at http://127.0.0.1:' + port + '/'); }); ``` CPU Utilisation was less then 10 percent.
2 answers · 0 votes · 24 views · asked 3 years ago

SQS long polling: messages delayed

Hi, I use SQS standard queues with long polling via a Python client. Recently I've been seeing delays between when the message is successfully sent to the queue and when the message is received by the consumer, where the delay is approximately equal to the visibility timeout. Please see the simple example below, keeping in mind the following: • There are no other consumers for this queue in this example • When a message is ultimately received with delay, ApproximateReceiveCount=2 and ApproximateFirstReceiveTimestamp ~=SentTimestamp • Debug-level logs in boto3 do not indicate that the delayed message was ever received in any way other than in the ultimate delayed receipt observed by my code • A delay will be observed in ~30% of runs, and is always for one of the first few messages sent (have never seen it later than the 5th message after running for a long time) ``` #!/usr/bin/env python3 import os import sys import time import boto3 import logging import threading # Logging setup omitted times = {} prefix = os.urandom(4).hex() + "_" def sender(client, url): # Send a message every 2 seconds, recording the send time per message global times i = 0 while True: body = prefix + str(i) times[body] = time.monotonic() client.send_message(QueueUrl=url, MessageBody=body) i += 1 time.sleep(2) def recver(client ,url): # Loop over receive_message forever on a long poll global times, total while True: resp = client.receive_message(QueueUrl=url, MaxNumberOfMessages=10, VisibilityTimeout=5, WaitTimeSeconds=20, AttributeNames=["All"]) print("Got resp") for msg in resp["Messages"]: # Delete the received messages print(f"Msg attrs: {msg.get('Attributes')}") handle = msg["ReceiptHandle"] client.delete_message(QueueUrl=url, ReceiptHandle=handle) # See how long it took using the time set in the sender body = msg["Body"] try: start = times.pop(body) except KeyError: print(f"ERROR: key missing: {body}") continue delta = time.monotonic() - start print(f"{body} took {delta}") if delta > 1: print(f"LONG DELAY: body={body} delay={delta}") os._exit(1) if __name__ == "__main__": # Start up a sender and receiver thread cli = boto3.client("sqs", region_name='us-east-1') url = cli.get_queue_url(QueueName="test_py")["QueueUrl"] threading.Thread(target=recver, args=(cli, url), daemon=True).start() threading.Thread(target=sender, args=(cli, url), daemon=True).start() time.sleep(24 * 60 * 60) ``` Please let me know if I can provide any more info. Thanks in advance for your help!
2 answers · 0 votes · 9 views · asked 3 years ago

Guarantee response to specific node? (Microservices intercommunication)

Hi, I'm trying to solve the problem of getting a guaranteed response back to a specific service node with SQS intercommunication; maybe some of you can hint at a solution.

Description: Imagine we have an API Gateway service which is scaled across multiple nodes under a load balancer. During a REST request from the front-end, one random API Gateway node (AGN from here on) is hit with a GET request. To fulfil this request the AGN needs to request additional information from one of the internal microservices, so the AGN drops the request onto the queue of the specific microservice and requires a response in return. Obviously the whole intercommunication is stateless, all services are scaled, and any random node of this microservice can read the request from the queue. However, the initial AGN still holds an active HTTP connection which requires a response, so the microservice will drop its response onto the API Gateway queue, where potentially any API Gateway node can pick it up; but we need this response to be delivered to the specific node which still has the active connection from the front-end and is waiting for the response. See the attached diagram: https://www.screencast.com/t/H1B1eLJex

Question: How do I guarantee that only a specific node will pick up the response message from the queue and not another node? I would imagine SQS alone is not enough; what other AWS service can I use to somehow subscribe for the response and get notified when the data is ready to pick up?

Edited by: VSBryksin on May 17, 2019 7:52 AM
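One common way to get a response back to the exact node holding the HTTP connection is a reply-queue-per-node pattern: each API Gateway node owns its own SQS queue, puts its reply queue URL and a correlation ID on the request message, and the microservice sends the response to that queue, so no other node can pick it up. A rough sketch of both sides in Python/boto3 with placeholder queue names:

```
import json
import uuid
import boto3

sqs = boto3.client("sqs")

# --- requesting node (AGN): owns a private reply queue created at startup ---
reply_queue_url = sqs.create_queue(QueueName=f"agn-replies-{uuid.uuid4().hex}")["QueueUrl"]
request_queue_url = sqs.get_queue_url(QueueName="user-service-requests")["QueueUrl"]  # placeholder

correlation_id = str(uuid.uuid4())
sqs.send_message(
    QueueUrl=request_queue_url,
    MessageBody=json.dumps({"action": "get_user", "user_id": 42}),   # placeholder request
    MessageAttributes={
        "ReplyQueueUrl": {"DataType": "String", "StringValue": reply_queue_url},
        "CorrelationId": {"DataType": "String", "StringValue": correlation_id},
    },
)

# --- microservice side: reply to whichever queue the request names -----------
resp = sqs.receive_message(QueueUrl=request_queue_url, WaitTimeSeconds=20,
                           MessageAttributeNames=["All"])
for msg in resp.get("Messages", []):
    attrs = msg["MessageAttributes"]
    sqs.send_message(
        QueueUrl=attrs["ReplyQueueUrl"]["StringValue"],
        MessageBody=json.dumps({"result": "..."}),                   # placeholder response
        MessageAttributes={
            "CorrelationId": {"DataType": "String",
                              "StringValue": attrs["CorrelationId"]["StringValue"]},
        },
    )
    sqs.delete_message(QueueUrl=request_queue_url, ReceiptHandle=msg["ReceiptHandle"])
```

The AGN then long-polls only its own reply queue and matches responses by correlation ID, which keeps the other nodes out of the picture entirely.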
1 answer · 0 votes · 0 views · asked 3 years ago

Error when installing libcurl with nss backend (Python 3.6 64bit A. Linux)

I have an instance running Python 3.6 in Amazon Linux where I need to install **pycurl with the NSS ssl backend**. It needs to be with NSS because it's a **Django** app that runs backend processes with **celery** and **SQS**. When I specified, in requirements.txt:

```
pycurl==7.43.0 --global-option="--with-nss"
```

I got some errors, so I ended up installing it through an .ebextensions file, as suggested in a Stack Overflow post:

```
container_commands:
  09_pycurl_reinstall:
    # the upgrade option is because it will run after PIP installs the requirements.txt file.
    # and it needs to be done with the virtual-env activated
    command: 'source /opt/python/run/venv/bin/activate && PYCURL_SSL_LIBRARY=nss pip3 install pycurl --global-option="--with-nss" --upgrade --no-cache-dir --compile --ignore-installed'
```

This gave me the error:

```
__main__.ConfigurationError: Could not run curl-config: [Errno 2] No such file or directory
```

I found in a Stack Overflow thread (https://stackoverflow.com/questions/23937933/could-not-run-curl-config-errno-2-no-such-file-or-directory-when-installing) that the problem is that I needed to first install **libcurl**. As I need it to run with the NSS backend, I wanted to do:

```
sudo apt install libcurl4-nss-dev
```

but I can't, because the instance is running Amazon Linux. So, as suggested in an answer in the same thread, I instead did:

```
yum install libcurl-devel
```

The problem with this, if I'm understanding it correctly, is that it installs libcurl with OpenSSL instead of NSS (I have already set the environment variable PYCURL_SSL_LIBRARY=nss, but it seems to do nothing), so I get the following error:

```
ImportError: pycurl: libcurl link-time ssl backend (openssl) is different from compile-time ssl backend (nss)
```

I know it has to be possible, because I myself did it some months ago (after struggling for 3 whole weeks). My questions are:

1) Am I right in my diagnosis? Is the problem that libcurl is being installed with the OpenSSL backend and pycurl with the NSS backend, when they should both be NSS?

2) If so, how can I install libcurl with the NSS backend? This seems straightforward with apt (install libcurl4-nss-dev), but I don't know how to do it with yum, where the only available packages are:

```
libcurl-devel.x86_64 : Files needed for building applications with libcurl
libcurl.i686 : A library for getting files from web servers
libcurl.x86_64 : A library for getting files from web servers
```

so there's no package that says "nss". I don't remember how I solved it the last time; maybe I didn't use yum (I'm pretty new to all of this). I've been trying different things for the past 2 weeks with no luck, and it's driving me crazy. Any help would be appreciated. Thank you very much.

TECHNICAL INFO: Environment running on Python 3.6 64bit Amazon Linux/2.8.3. It is a Django application using celery to execute tasks with an SQS queue. Celery tasks are correctly being sent to the SQS queue, but the celery worker doesn't work because of the problem I just described, so the messages stay in the queue (celery-beat seems to be working fine).

Edited by: jaumeF on May 8, 2019 4:02 AM (grammar)
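One way to check which SSL backend the installed libcurl actually provides, without importing pycurl (which is what fails here), is to ask libcurl itself for its version string. A small sketch in Python, assuming the shared library is installed under the usual soname libcurl.so.4:

```
import ctypes

# Load the system libcurl and read its version string; the string names the SSL
# backend, e.g. "libcurl/7.61.1 OpenSSL/1.0.2k ..." or "libcurl/7.61.1 NSS/3.36 ...".
libcurl = ctypes.CDLL("libcurl.so.4")
libcurl.curl_version.restype = ctypes.c_char_p
print(libcurl.curl_version().decode())
```

If that prints OpenSSL, the yum-installed libcurl is indeed OpenSSL-linked, and rebuilding pycurl with --with-nss against it would be expected to keep producing the link-time/compile-time mismatch.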
3 answers · 0 votes · 1 view · asked 3 years ago

Can't send messages over Amazon SQS

- Using Tomcat server version 8.0.53, JDK 1.8 for the server and to compile the application. Using Amazon SQS to send messages to another application, stopped working after upgrading to JDK 1.8. - Tried to simulate the environments with exact versions of the server and JDK, SQS working fine on my local server and working on one of the remote servers as well. But still not working on the server in question, despite simulating the environment exactly. - Including the produced exception stack trace: com.amazonaws.SdkClientException: Unable to execute HTTP request: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleRetryableException(AmazonHttpClient.java:1114) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1064) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667) at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649) at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513) at com.amazonaws.services.sqs.AmazonSQSClient.doInvoke(AmazonSQSClient.java:2013) at com.amazonaws.services.sqs.AmazonSQSClient.invoke(AmazonSQSClient.java:1989) at com.amazonaws.services.sqs.AmazonSQSClient.executeSendMessageBatch(AmazonSQSClient.java:1698) at com.amazonaws.services.sqs.AmazonSQSClient.sendMessageBatch(AmazonSQSClient.java:1674) at com.syngenta.sqc_is.integration.util.SQSQueue.generateAndSendBatch(SQSQueue.java:178) at com.syngenta.sqc_is.integration.util.SQSQueue.SendMessagesBatch(SQSQueue.java:146) at com.syngenta.sqc_is.web.utils.core.AmsUtils.callSQSService(AmsUtils.java:105) at com.syngenta.sqc_is.service.results.impl.ResultServiceImpl.saveResults(ResultServiceImpl.java:408) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317) at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150) at org.springframework.transaction.interceptor.TransactionInterceptor$1.proceedWithInvocation(TransactionInterceptor.java:96) at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:260) at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:94) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172) at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204) at com.sun.proxy.$Proxy34.saveResults(Unknown Source) at 
com.syngenta.sqc_is.web.controller.results.UploadResultsController.saveResults(UploadResultsController.java:287) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.springframework.web.servlet.mvc.multiaction.MultiActionController.invokeNamedMethod(MultiActionController.java:471) at org.springframework.web.servlet.mvc.multiaction.MultiActionController.handleRequestInternal(MultiActionController.java:408) at org.springframework.web.servlet.mvc.AbstractController.handleRequest(AbstractController.java:153) at org.springframework.web.servlet.mvc.SimpleControllerHandlerAdapter.handle(SimpleControllerHandlerAdapter.java:48) at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:919) at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:851) at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:953) at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:855) at javax.servlet.http.HttpServlet.service(HttpServlet.java:648) at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:829) at javax.servlet.http.HttpServlet.service(HttpServlet.java:729) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:292) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207) at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:240) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207) at org.acegisecurity.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:264) at org.acegisecurity.intercept.web.FilterSecurityInterceptor.invoke(FilterSecurityInterceptor.java:107) at org.acegisecurity.intercept.web.FilterSecurityInterceptor.doFilter(FilterSecurityInterceptor.java:72) at org.acegisecurity.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:274) at org.acegisecurity.ui.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:110) at org.acegisecurity.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:274) at org.acegisecurity.ui.AbstractProcessingFilter.doFilter(AbstractProcessingFilter.java:217) at org.acegisecurity.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:274) at org.acegisecurity.context.HttpSessionContextIntegrationFilter.doFilter(HttpSessionContextIntegrationFilter.java:191) at org.acegisecurity.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:274) at org.acegisecurity.util.FilterChainProxy.doFilter(FilterChainProxy.java:148) at org.acegisecurity.util.FilterToBeanProxy.doFilter(FilterToBeanProxy.java:90) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:240) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:212) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:94) at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:492) at 
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:141) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:80) at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:620) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:88) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:502) at org.apache.coyote.ajp.AbstractAjpProcessor.process(AbstractAjpProcessor.java:870) at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:684) at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1539) at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1495) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) at java.lang.Thread.run(Thread.java:748) Caused by: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target at sun.security.ssl.Alerts.getSSLException(Alerts.java:192) at sun.security.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1964) at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:328) at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:322) at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1614) at sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:216) at sun.security.ssl.Handshaker.processLoop(Handshaker.java:1052) at sun.security.ssl.Handshaker.process_record(Handshaker.java:987) at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1072) at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1385) at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1413) at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1397) at org.apache.http.conn.ssl.SSLConnectionSocketFactory.createLayeredSocket(SSLConnectionSocketFactory.java:396) at org.apache.http.conn.ssl.SSLConnectionSocketFactory.connectSocket(SSLConnectionSocketFactory.java:355) at com.amazonaws.http.conn.ssl.SdkTLSSocketFactory.connectSocket(SdkTLSSocketFactory.java:132) at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:142) at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:359) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.amazonaws.http.conn.ClientConnectionManagerFactory$Handler.invoke(ClientConnectionManagerFactory.java:76) at com.amazonaws.http.conn.$Proxy31.connect(Unknown Source) at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:381) at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:237) at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:185) at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56) at com.amazonaws.http.apache.client.impl.SdkHttpClient.execute(SdkHttpClient.java:72) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1236) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1056) ... 78 more Caused by: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target at sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:397) at sun.security.validator.PKIXValidator.engineValidate(PKIXValidator.java:302) at sun.security.validator.Validator.validate(Validator.java:262) at sun.security.ssl.X509TrustManagerImpl.validate(X509TrustManagerImpl.java:324) at sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:229) at sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:124) at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1596) ... 105 more Caused by: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target at sun.security.provider.certpath.SunCertPathBuilder.build(SunCertPathBuilder.java:141) at sun.security.provider.certpath.SunCertPathBuilder.engineBuild(SunCertPathBuilder.java:126) at java.security.cert.CertPathBuilder.build(CertPathBuilder.java:280) at sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:392) ... 111 more
1 answer · 0 votes · 42 views · asked 4 years ago

SQS FIFO with JMS fail to add message group ID

I am trying to set up a FIFO queue and integrate it with our application using the Amazon SQS Java Messaging Library. I implemented it using the tutorial from Amazon: <https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/getting-started.html> (Getting Started with the Amazon SQS Java Messaging Library). For sending a message to an SQS FIFO queue, it says to set the message group ID (mandatory), which I did as follows:

```
var textMessage: TextMessage = session.createTextMessage(message)
textMessage.setStringProperty("JMSXGroupID", "Default")
LOG.info("Message group ID set to " + textMessage.getStringProperty("JMSXGroupID"))
producer.send(textMessage)
```

But when I run the code, it throws an AmazonServiceException with the following trace:

```
from controllers.util.MyClass$ in application-myJob-90 : Caught while putting messages in the queue:
AmazonServiceException: sendMessage. RequestId: 12c4b38d-d274-57b2-8f2b-e142d0bec9b3
HTTPStatusCode: 400 AmazonErrorCode: MissingParameter
javax.jms.JMSException: AmazonServiceException: sendMessage.
RequestId: 12c4b38d-d274-57b2-8f2b-e142d0bec9b3
HTTPStatusCode: 400 AmazonErrorCode: MissingParameter
...class trace...
Caused by: com.amazonaws.AmazonServiceException: The request must contain the parameter MessageGroupId.
(Service: AmazonSQS; Status Code: 400; Error Code: MissingParameter; Request ID: 12c4b38d-d274-57b2-8f2b-e142d0bec9b3)
```

I tried to look for a solution but couldn't find any. Any idea what I am missing here? Feel free to ask for more details if needed. Any help is highly appreciated.

Edited by: pranav8494 on Jan 10, 2018 2:36 AM
1 answer · 0 votes · 1 view · asked 4 years ago

Lambda to DynamoDB throughput question

IHAC who sent me the following email:

> I'm working to use Lambda as our primary computation environment. So far, that amounts to funneling data ingested via the API Gateway to various endpoints (often similar in effect to the AWS IoT rules engine) and using DynamoDB to store configuration data.
>
> The obstacle I'm currently grappling with is the throughput limits on DynamoDB. In standard operation, we have a slow, steady stream of requests that don't begin to approach our limits. However, on rare occasions, I'll need to add a large data store. As things are set up, that translates to a large number of near-simultaneous requests into DynamoDB. However, we don't have a latency requirement. Within reason, I don't care when this operation completes, just that it does. If I could space these requests to stay below our limits, the problem would be solved.
>
> In essence, I want our burst response to distribute the load over time as opposed to scaling up our systems.
>
> Initially, I was trying to set up a scheduler, a function I could call to simply say "Try this lambda function again in X.Y minutes" with CloudWatch Events. However, I ran into a different limitation there of only being able to make 5 CloudWatch API requests per second. I didn't solve the throughput issue so much as move it to a different service.
>
> I have a couple different ways of solving this specific problem, but the overall scheduling design pattern was one I'm really interested in.

My initial thought is to introduce SQS between the API Gateway-fronted Lambda and DynamoDB. That Lambda would write the payload to SQS, then use CloudWatch metrics to kick off an additional Lambda to process messages from the queue when the queue depth is greater than zero. If there is an issue writing to DynamoDB, the message would simply not be removed from the queue and could be processed later. Does that make sense, or is there a better suggestion for the customer?
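For the consumer side of that suggestion, here is a rough sketch of a queue-draining Lambda that writes to DynamoDB and leaves anything it cannot write on the queue for a later run (Python/boto3, with placeholder queue URL and table name):

```
import json
import boto3
from botocore.exceptions import ClientError

sqs = boto3.client("sqs")
table = boto3.resource("dynamodb").Table("config-items")                       # placeholder table
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/ingest-buffer"  # placeholder queue

def handler(event, context):
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10,
                               WaitTimeSeconds=1)
    for msg in resp.get("Messages", []):
        item = json.loads(msg["Body"])
        try:
            table.put_item(Item=item)
        except ClientError:
            # Leave the message on the queue; it becomes visible again after the
            # visibility timeout and can be retried by a later invocation.
            continue
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```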
1 answer · 0 votes · 5 views · asked 6 years ago