SageMaker Inference for TensorFlow Base64 Input Error through API Gateway


When I try to call my SageMaker TF endpoint through API Gateway -> Lambda function, passing a Base64 string (an image), I get an unsupported media type error. I also tried application/json, but I still get an error. Suggestions needed.

In the notebook instance, this is how my input looks:

input = {'instances': [{"b64": "iV"}]}

In the Lambda function I am doing this:

instance = [{"b64": "b64string"}]
pleasework = json.dumps({"instances": instance})
response = runtime.invoke_endpoint(
    EndpointName=ENDPOINT_NAME_BASE64,
    ContentType='string',
    Accept='string',
    Body=pleasework,
)

ERROR: Inference Error: An error occurred (ModelError) when calling the InvokeEndpoint operation: Received client error (415) from primary with message "{"error": "Unsupported Media Type: string"}".

If I pass application/json instead, I get this error:

Received client error (400) from primary with message "{ "error": "Failed to process element: 0 of 'instances' list. Error: INVALID_ARGUMENT: JSON Value: {\n "b64": "iV"\n} Type: Object is not of expected type: uint8"}"
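For context on the two errors: the 415 comes from passing ContentType='string', which is not a real MIME type, so TensorFlow Serving rejects the request outright. A hedged sketch of the corrected call is below; the `build_b64_payload` and `invoke` helpers are hypothetical names, and the `{"b64": ...}` wrapper is assumed to be valid only if the model's serving signature actually accepts an encoded-string input.

```python
import base64
import json


def build_b64_payload(image_bytes: bytes) -> str:
    # Wrap raw image bytes in the TF Serving base64 JSON format
    # ({"instances": [{"b64": "..."}]}), as used in the question.
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return json.dumps({"instances": [{"b64": encoded}]})


def invoke(runtime_client, endpoint_name: str, image_bytes: bytes):
    # ContentType must be a real MIME type; 'string' is what
    # triggers the 415 "Unsupported Media Type" error.
    return runtime_client.invoke_endpoint(
        EndpointName=endpoint_name,
        ContentType="application/json",
        Accept="application/json",
        Body=build_b64_payload(image_bytes),
    )
```

The 400 error that remains under application/json is a separate problem: the model's signature expects uint8 data, not a JSON object, which the answer below addresses.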

1 Answer

I would suggest first invoking your model locally and confirming what input your model expects using the saved_model CLI. See this link: https://www.tensorflow.org/guide/saved_model#the_savedmodel_format_on_disk
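As a sketch of that inspection step (the model directory path here is a hypothetical example, not from the question):

```shell
# Print the SavedModel's serving signature. If the input dtype is
# DT_STRING, the {"b64": ...} wrapper is valid; if it is DT_UINT8,
# the endpoint expects raw pixel arrays instead.
saved_model_cli show --dir /opt/ml/model/export/Servo/1/ --all
```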

Then, when invoking the model, confirm that each instance matches the input format and shape your model expects.
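In this case the 400 message ("Object is not of expected type: uint8") suggests the signature takes a uint8 image tensor rather than an encoded string, so the pixels would need to be sent as plain JSON arrays. A minimal sketch, assuming the image has already been decoded to a pixel array (the helper name is hypothetical):

```python
import json

import numpy as np


def build_pixel_payload(pixels) -> str:
    # Serialize an already-decoded HxWxC pixel array as plain JSON
    # numbers, matching a signature that declares a uint8 input
    # (the "expected type: uint8" in the 400 error).
    arr = np.asarray(pixels, dtype=np.uint8)
    return json.dumps({"instances": [arr.tolist()]})
```

A library such as Pillow could decode the incoming image bytes into the pixel array before calling this helper; the `{"b64": ...}` form only works when the signature input is a string tensor.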

AWS
Marc
answered 2 years ago
