SQS - Extended Client - Too many ERROR [S3Dao] Failed to get the S3 object which contains the payload.


I'm getting lots (hundreds) of the following error logged by the SQS Extended Client:

ERROR [S3Dao] Failed to get the S3 object which contains the payload.

I'm not seeing a corresponding error on the publisher side, so what is happening on the consumer side that causes this, and how do I go about fixing it?

GraemeW
asked 2 years ago · 946 views
3 Answers

The SQS Extended Client is used to facilitate sending larger payloads over SQS. SQS supports messages up to 256 KB; if you need to send larger payloads, we recommend storing them in S3 and sending the object key in the SQS message. The extended client does this for you.
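For context, here is a minimal sketch of how the extended client is typically wired up on the sending side. It assumes the AWS SDK for Java v1 and the amazon-sqs-java-extended-client-lib; the bucket and queue names are placeholders.

    import com.amazon.sqs.javamessaging.AmazonSQSExtendedClient;
    import com.amazon.sqs.javamessaging.ExtendedClientConfiguration;
    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;
    import com.amazonaws.services.sqs.AmazonSQS;
    import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
    import com.amazonaws.services.sqs.model.SendMessageRequest;

    public class LargePayloadSender {
        public static void main(String[] args) {
            AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

            // Payloads above the threshold (256 KB by default) are written to this
            // bucket; the SQS message then carries only a pointer to the S3 object.
            ExtendedClientConfiguration config = new ExtendedClientConfiguration()
                    .withPayloadSupportEnabled(s3, "my-sqs-payload-bucket");

            AmazonSQS sqsExtended = new AmazonSQSExtendedClient(
                    AmazonSQSClientBuilder.defaultClient(), config);

            // A body larger than 256 KB, so it is offloaded to S3 automatically.
            String largeBody = new String(new char[300 * 1024]).replace('\0', 'x');

            sqsExtended.sendMessage(new SendMessageRequest()
                    .withQueueUrl("https://sqs.us-east-1.amazonaws.com/123456789012/my-queue")
                    .withMessageBody(largeBody));
        }
    }

The consumer should use the same extended client (configured with the same bucket) so that receiveMessage can transparently download the payload from S3.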

It is difficult to say from the information provided what the issue is, as we do not know why it failed to get the object from S3. One possibility is that the consumer's role does not have the right policy to access S3. If that is the case, all messages containing large payloads will fail.

I would recommend looking at the role first and, if that is not the issue, checking CloudTrail to see what the errors are.
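If permissions are the suspect, the consumer's role needs at least s3:GetObject on the payload bucket. A sketch of such a statement (the bucket name is a placeholder; add s3:DeleteObject if the client is configured to clean up payload objects when messages are deleted):

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "ReadSqsExtendedClientPayloads",
          "Effect": "Allow",
          "Action": ["s3:GetObject"],
          "Resource": "arn:aws:s3:::my-sqs-payload-bucket/*"
        }
      ]
    }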

AWS EXPERT
Uri
answered 2 years ago
  • Could it be that you are processing messages twice? The first time deletes the object and the second one doesn't find it? Did you check in CloudTrail whether the objects get deleted? Do you see the objects in the bucket later?
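If the double-processing theory above is what is happening, the usual mechanism is the client's clean-up behaviour: with payload support enabled, deleting the SQS message through the extended client also deletes the payload object in S3 (clean-up on delete is the default in the library versions I have seen). A rough consumer sketch showing where that bites, with placeholder names:

    import com.amazon.sqs.javamessaging.AmazonSQSExtendedClient;
    import com.amazon.sqs.javamessaging.ExtendedClientConfiguration;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;
    import com.amazonaws.services.sqs.AmazonSQS;
    import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
    import com.amazonaws.services.sqs.model.Message;
    import com.amazonaws.services.sqs.model.ReceiveMessageRequest;

    public class LargePayloadConsumer {
        public static void main(String[] args) {
            String queueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue";

            ExtendedClientConfiguration config = new ExtendedClientConfiguration()
                    .withPayloadSupportEnabled(AmazonS3ClientBuilder.defaultClient(),
                            "my-sqs-payload-bucket");

            AmazonSQS sqs = new AmazonSQSExtendedClient(
                    AmazonSQSClientBuilder.defaultClient(), config);

            for (Message m : sqs.receiveMessage(new ReceiveMessageRequest(queueUrl)
                    .withWaitTimeSeconds(20)).getMessages()) {
                System.out.println("Payload size: " + m.getBody().length());

                // If processing outlives the visibility timeout, SQS redelivers the
                // message to another consumer. Whichever consumer deletes it first
                // also removes the S3 object, so the other consumer's later GET
                // fails with the S3Dao error in the question.
                sqs.deleteMessage(queueUrl, m.getReceiptHandle());
            }
        }
    }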


Hi Uri, thanks for responding. I should have made clear that not all messages have this problem, so it's nothing to do with the bucket policy.

The exception is thrown when the extended client fails to find the content in S3, i.e. the key doesn't exist. I've no idea why the key doesn't exist; I'm not getting any exceptions thrown on the publishing side of the queue.

Enabling CloudTrail just confirms that the extended client code can't find the key in S3.
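One way to narrow it down is to take the bucket and key from the CloudTrail GetObject event (or from the S3 pointer embedded in the raw message body) and inspect the object directly. A small diagnostic sketch, with placeholder bucket and key names:

    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;
    import com.amazonaws.services.s3.model.ObjectMetadata;

    public class PayloadCheck {
        public static void main(String[] args) {
            String bucket = "my-sqs-payload-bucket";
            String key = args.length > 0 ? args[0] : "example-payload-key";

            AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

            if (s3.doesObjectExist(bucket, key)) {
                ObjectMetadata meta = s3.getObjectMetadata(bucket, key);
                System.out.println(key + " exists, last modified " + meta.getLastModified());
            } else {
                // The key is gone: look in CloudTrail for a DeleteObject event to see
                // what removed it (another consumer's deleteMessage, a lifecycle rule
                // on the payload bucket, etc.).
                System.out.println(key + " does not exist in " + bucket);
            }
        }
    }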

GraemeW
answered 2 years ago

Hi

Behind the scenes, you are pushing a file into S3, and historically S3 was not strongly consistent for every operation. The documentation used to say: "Amazon S3 provides read-after-write consistency for PUTS of new objects in your S3 bucket in all regions with one caveat." Ignoring the caveat, this means that a client issuing a GET following a PUT for a new object is guaranteed to get the correct result.

https://aws.amazon.com/blogs/aws/amazon-s3-update-strong-read-after-write-consistency/

According to that announcement, since the end of 2020 S3 is strongly consistent, so you should get the object as soon as the write completes.

But my thought is that you are consuming the message too early, before the S3 key is available to the consumer.

Try adding some delay to your message when pushing to SQS, using DelaySeconds: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-delay-queues.html

This gives the S3 object time to be fully available before the consumer tries to read it (see the sketch below).
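A minimal sketch of that suggestion, reusing an extended client configured as in the first answer (the 30-second delay is an arbitrary example value):

    import com.amazonaws.services.sqs.AmazonSQS;
    import com.amazonaws.services.sqs.model.SendMessageRequest;

    public class DelayedLargePayloadSend {
        // sqsExtended is an AmazonSQSExtendedClient; queueUrl and body come from the caller.
        static void sendWithDelay(AmazonSQS sqsExtended, String queueUrl, String body) {
            sqsExtended.sendMessage(new SendMessageRequest()
                    .withQueueUrl(queueUrl)
                    .withMessageBody(body)
                    // The payload is written to S3 during sendMessage; DelaySeconds only
                    // keeps the message invisible to consumers for an extra 30 seconds.
                    .withDelaySeconds(30));
        }
    }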

Also try to check whether, after the write, the object really persists in S3.

answered 2 years ago
