The SQS Extended Client is used to facilitate sending larger payloads through SQS. SQS supports messages up to 256 KB; if you need to send a larger payload, we recommend storing it in S3 and sending the object key in the SQS message. The Extended Client does this for you.
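To make the mechanism concrete, here is a minimal sketch of the "claim check" pattern the Extended Client automates. This is not the client's actual code: an in-memory dict stands in for the S3 bucket, and `prepare_message` is a hypothetical helper name.

```python
# Hedged sketch of the pattern: payloads over the 256 KB SQS limit are
# stored out of band, and the SQS message carries only a pointer to them.
import json
import uuid

SQS_LIMIT_BYTES = 256 * 1024  # SQS maximum message size

def prepare_message(payload: bytes, bucket: str, store: dict) -> str:
    """Return the SQS message body. Large payloads are written to `store`
    (standing in for S3; real code would call s3.put_object) and replaced
    by a JSON pointer containing the bucket and a generated key."""
    if len(payload) <= SQS_LIMIT_BYTES:
        return payload.decode()
    key = str(uuid.uuid4())
    store[key] = payload  # real code: upload to S3 here
    return json.dumps({"s3Bucket": bucket, "s3Key": key})

store = {}
body = prepare_message(b"x" * (300 * 1024), "my-bucket", store)
pointer = json.loads(body)  # this is what the consumer receives
```

On the consuming side, the Extended Client reads the pointer and fetches the object from S3 before handing the payload to your code, which is the step that is failing here.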
It is difficult to say from the information provided what the issue is, as we do not know why it failed to get the object from S3. One possibility is that the consumer's role lacks the policy it needs to access S3. If that were the case, all messages containing large payloads would fail.
I would recommend looking at the role first, and if that is not the issue, checking CloudTrail to see what the errors are.
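For reference, a minimal policy statement of the kind the consumer's role would need to read the offloaded payloads (the bucket name is a placeholder; adjust it to the bucket your Extended Client is configured with):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::my-extended-client-bucket/*"
    }
  ]
}
```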
Hi Uri, thanks for responding. I should have made clear that not all messages have this problem, so it's not anything to do with the bucket policy.
The exception is thrown when the Extended Client fails to find the content on S3, i.e. the key doesn't exist. I've no idea why the key doesn't exist; I'm not getting any exceptions thrown on the publishing side of the queue.
Enabling CloudTrail just confirms that the Extended Client code can't find the key on S3.
Hi
Behind the scenes, you are pushing a file into S3, and S3 was historically not strongly consistent for all operations. The old documentation said: "Amazon S3 provides read-after-write consistency for PUTS of new objects in your S3 bucket in all regions with one caveat." Ignoring the caveat, this meant a client issuing a GET following a PUT of a new object was guaranteed to get the correct result.
https://aws.amazon.com/blogs/aws/amazon-s3-update-strong-read-after-write-consistency/
Per the announcement above, since the end of 2020 S3 is strongly consistent, so you should get the object as soon as the write completes.
Still, my thought is that you are consuming the message too early, before the S3 key is available.
Try adding some delay to your messages when pushing to SQS, using DelaySeconds: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-delay-queues.html
That gives the S3 write time to land before the consumer sees the message.
Also check whether, after the write, the object actually persists in S3.
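A sketch of the DelaySeconds suggestion above, assuming a boto3 producer. To keep this self-contained, the helper only builds the `send_message` arguments; the hypothetical `build_send_args` name and the queue URL are placeholders, and the actual call is shown commented out.

```python
# Hedged sketch: publish with a per-message delay so the consumer does not
# receive the message before the S3 upload has settled.

def build_send_args(queue_url: str, message_body: str, delay_seconds: int = 5) -> dict:
    """Build kwargs for sqs.send_message with DelaySeconds set.
    SQS accepts DelaySeconds values from 0 to 900 (15 minutes)."""
    if not 0 <= delay_seconds <= 900:
        raise ValueError("DelaySeconds must be between 0 and 900")
    return {
        "QueueUrl": queue_url,
        "MessageBody": message_body,
        "DelaySeconds": delay_seconds,
    }

# Real usage (requires AWS credentials):
# import boto3
# sqs = boto3.client("sqs")
# sqs.send_message(**build_send_args("https://sqs.eu-west-1.amazonaws.com/123456789012/my-queue", "payload", 5))
```

Note that DelaySeconds on `send_message` has no effect on FIFO queues, where the delay must be set at queue level instead.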
Could it be that you are processing messages twice? The first receive deletes the object and the second one doesn't find it. Did you check in CloudTrail whether the objects get deleted? Do you see the objects in the bucket later?
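To illustrate the failure mode suggested above: SQS is at-least-once, so a message can be delivered twice. This hypothetical sketch uses an in-memory dict in place of the S3 bucket and a consumer that deletes the payload as soon as it reads it.

```python
# Hedged sketch: if the first delivery deletes the S3 payload but the SQS
# message itself is redelivered (e.g. processing exceeded the visibility
# timeout), the second receive cannot find the key.
store = {"payload-key": b"large payload"}  # stands in for the S3 bucket

def consume(key: str, store: dict):
    """Fetch the payload and delete it immediately, mimicking a consumer
    that removes the S3 object on first read. Returns None when the key
    is missing, modeling a NoSuchKey error."""
    return store.pop(key, None)

first = consume("payload-key", store)   # first delivery: payload found
second = consume("payload-key", store)  # redelivery: key already gone
```

If this matches what CloudTrail shows (a DeleteObject before the failing GetObject), deferring the payload deletion until the SQS message itself is deleted would avoid the error.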