JSON occasionally truncated when sent to SageMaker inference endpoint


I set up a real-time inference endpoint which expects a JSON payload. It generally works well, but intermittently it returns an error that the payload contains an unterminated string, which prevents the payload from being decoded when it reaches the endpoint.
To test this, I took a single sample payload and sent it to the endpoint repeatedly (simulating the load the endpoint would receive in production). The same thing happened: the unterminated-string exception occurred intermittently, at different character locations each time (see the CloudWatch logs).
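Roughly, the test loop looked like the following (a sketch; the endpoint name and payload here are placeholders, and the real payload is larger):

```python
import json
import boto3

runtime = boto3.client("sagemaker-runtime")
payload = json.dumps({"feature_1": 1.0, "feature_2": "abc"})  # placeholder payload

# Send the identical payload many times to reproduce the intermittent failure.
for i in range(200):
    try:
        response = runtime.invoke_endpoint(
            EndpointName="my-endpoint",  # placeholder endpoint name
            ContentType="application/json",
            Body=payload,
        )
        response["Body"].read()
    except Exception as exc:  # the intermittent "unterminated string" error
        print(f"request {i} failed: {exc}")
```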

The payload is identical in all of these errors. Other times, it works fine.

Why is this happening and how do I fix it?

asked 4 months ago · 152 views
1 Answer

Hello,

I understand that you are concerned about JSON occasionally getting truncated when sent to your SageMaker inference endpoint and would like to understand why this is happening.

Firstly, I would like to mention that this behavior usually surfaces as a model error, which indicates that the failure is coming from the container itself rather than from the network.
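If you are invoking the endpoint with boto3, this distinction is visible in the exception type: a failure inside the container is raised as a ModelError on the SageMaker runtime client, while other API-level problems are raised as generic client errors. A minimal sketch (the endpoint name and payload are placeholders):

```python
import json
import boto3
import botocore.exceptions

runtime = boto3.client("sagemaker-runtime")

try:
    response = runtime.invoke_endpoint(
        EndpointName="my-endpoint",            # placeholder name
        ContentType="application/json",
        Body=json.dumps({"example": "data"}),  # placeholder payload
    )
    print(response["Body"].read())
except runtime.exceptions.ModelError as exc:
    # Raised when the container itself returns an error response,
    # e.g. its JSON decoder hitting an unterminated string.
    print("Container-side failure:", exc)
except botocore.exceptions.ClientError as exc:
    # Any other API-level failure (validation, throttling, and so on).
    print("API-level failure:", exc)
```

Note that ModelError is caught first because it is a subclass of the generic ClientError.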

Further, if you are using custom code in a Docker container, here is what I suggest. When SageMaker receives a request, your code or Docker container most likely has some sort of "input_fn" function that preprocesses the input received from the InvokeEndpoint API call.

What we need to do is add debugging statements to that function that print to the logs: log the raw input at the start of the function to confirm whether the full payload is actually arriving, and log the parsed result at the end to confirm that the input is processed and returned as you expect.
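As a minimal sketch, assuming a Python container that follows the SageMaker inference toolkit convention of an input_fn handler (adapt the names to your own code):

```python
import json
import logging

logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)  # log output ends up in CloudWatch Logs


def input_fn(request_body, request_content_type="application/json"):
    """Deserialize the request payload, logging it on entry and exit."""
    # Log what actually arrived, including its length, so a truncated
    # payload is visible in the logs before json.loads() fails.
    logger.info("input_fn received %d bytes: %r", len(request_body), request_body)

    if request_content_type != "application/json":
        raise ValueError(f"Unsupported content type: {request_content_type}")

    data = json.loads(request_body)  # raises on an unterminated string

    logger.info("input_fn parsed the payload successfully: %s", data)
    return data
```

If the logged byte count is already short at the top of input_fn, the truncation happened before your code ran; if the full payload is logged but parsing still fails, the problem lies in the handler itself.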

By adding debugging statements like this throughout your code, you can more easily pinpoint where something fails and have more detailed logs for debugging in the future.

Moreover, please note that AWS Premium Support doesn't cover custom code, Docker container development, or debugging, and that all support is provided on a best-effort basis. If you have any other inquiries regarding this, feel free to reach out.

Please refer to the documentation above, and if you have difficulty verifying any of these points or still run into issues, reach out to AWS Support [4] (SageMaker) with your detailed use case, and we would be happy to assist you further.

References:

[1] Deploy models for inference - https://docs.aws.amazon.com/sagemaker/latest/dg/deploy-model.html

[2] https://github.com/pytube/pytube/issues/815

[3] AWS Premium Support - https://aws.amazon.com/premiumsupport/

[4] Creating support cases and case management - https://docs.aws.amazon.com/awssupport/latest/user/case-management.html#creating-a-support-case

[5] AWS Premium Support FAQs - https://aws.amazon.com/premiumsupport/faqs/

AWS
answered 4 months ago
