Is there any error in your CloudWatch Logs that could point to the issue?
I see you are sending the string "FILE0032.JPG". The .predict function will send the literal string "FILE0032.JPG" to the endpoint, not the serialized contents of the file "FILE0032.JPG".
Kindly see how a YOLOv4 model is invoked here.
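To illustrate the distinction above: the file should be opened and its raw bytes sent as the request body, rather than passing the filename string to .predict. A minimal sketch, assuming the endpoint's handler accepts application/x-image (build_image_request is a hypothetical helper, not part of the SDK):

```python
def build_image_request(path):
    """Read the raw bytes of an image file so the endpoint receives
    the serialized image, not the filename string."""
    with open(path, "rb") as f:
        return {"ContentType": "application/x-image", "Body": f.read()}
```

The returned dict can then be unpacked into the runtime call, e.g. boto3.client("sagemaker-runtime").invoke_endpoint(EndpointName=..., **build_image_request("FILE0032.JPG")).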
Thanks for the reply. There is no error in the CloudWatch logs (pasted below). Sorry for the long description; I thought detailed info would be helpful.
2022-06-15T11:15:21.349+05:30 Warning: MMS is using non-default JVM parameters: -XX:-UseContainerSupport AllTraffic/i-0ed6739cdaf7cf56a
2022-06-15T11:15:21.349+05:30 log4j:WARN Continuable parsing error 2 and column 16 AllTraffic/i-0ed6739cdaf7cf56a
2022-06-15T11:15:21.349+05:30 log4j:WARN Document root element "Configuration", must match DOCTYPE root "null". AllTraffic/i-0ed6739cdaf7cf56a
2022-06-15T11:15:21.349+05:30 log4j:WARN Continuable parsing error 2 and column 16 AllTraffic/i-0ed6739cdaf7cf56a
2022-06-15T11:15:21.349+05:30 log4j:WARN Document is invalid: no grammar found. AllTraffic/i-0ed6739cdaf7cf56a
2022-06-15T11:15:21.349+05:30 log4j:ERROR DOM element is - not a <log4j:configuration> element. AllTraffic/i-0ed6739cdaf7cf56a
2022-06-15T11:15:21.349+05:30 log4j:WARN No appenders could be found for logger (io.netty.util.internal.PlatformDependent0). AllTraffic/i-0ed6739cdaf7cf56a
2022-06-15T11:15:21.349+05:30 log4j:WARN Please initialize the log4j system properly. AllTraffic/i-0ed6739cdaf7cf56a
2022-06-15T11:15:21.599+05:30 log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info. AllTraffic/i-0ed6739cdaf7cf56a
2022-06-15T11:15:27.349+05:30 Model server started.
I tried this example. The tutorial says "An entry_point script isn't necessary and can be a blank file. The environment variables in the env parameter are also optional", but when I tried it, it threw this error:
---------------------------------------------------------------------------
ModelError Traceback (most recent call last)
<ipython-input-25-b706a4fea979> in <module>
13 for i in range(iters):
14 t0 = time.time()
---> 15 response = client.invoke_endpoint(EndpointName=optimized_predictor.endpoint_name, Body=body, ContentType=content_type)
16 t1 = time.time()
17 #convert to millis
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/botocore/client.py in _api_call(self, *args, **kwargs)
399 "%s() only accepts keyword arguments." % py_operation_name)
400 # The "self" in this scope is referring to the BaseClient.
--> 401 return self._make_api_call(operation_name, kwargs)
402
403 _api_call.__name__ = str(py_operation_name)
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/botocore/client.py in _make_api_call(self, operation_name, api_params)
729 error_code = parsed_response.get("Error", {}).get("Code")
730 error_class = self.exceptions.from_code(error_code)
--> 731 raise error_class(parsed_response, operation_name)
732 else:
733 return parsed_response
ModelError: An error occurred (ModelError) when calling the InvokeEndpoint operation: Received server error (500) from primary with message "Content type applicatoin/x-image is not supported by this framework.
Please implement input_fn to to deserialize the request data or an output_fn to
serialize the response. For more information, see the SageMaker Python SDK README.
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/sagemaker_inference/decoder.py", line 106, in decode
decoder = _decoder_map[content_type]
KeyError: 'applicatoin/x-image'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/sagemaker_inference/transformer.py", line 128, in transform
result = self._transform_fn(self._model, input_data, content_type, accept)
File "/usr/local/lib/python3.6/site-packages/sagemaker_inference/transformer.py", line 233, in _default_transform_fn
data = self._input_fn(input_data, content_type)
File "/usr/local/lib/python3.6/site-packages/sagemaker_pytorch_serving_container/default_inference_handler.py", line 111, in default_input_fn
np_array = decoder.decode(input_data, content_type)
File "/usr/local/lib/python3.6/site-packages/sagemaker_inference/decoder.py", line 109, in decode
raise errors.UnsupportedFormatError(content_type)
sagemaker_inference.errors.UnsupportedFormatError: Content type applicatoin/x-image is not supported by this framework.
Please implement input_fn to to deserialize the request data or an output_fn to
serialize the response. For more information, see the SageMaker Python SDK README.
". See https://ap-south-1.console.aws.amazon.com/cloudwatch/home?region=ap-south-1#logEventViewer:group=/aws/sagemaker/Endpoints/sagemaker-inference-pytorch-ml-c5-2022-06-15-05-44-12-970 in account 772044684908 for more information.
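Two things stand out in the traceback: the KeyError is on 'applicatoin/x-image', so the content_type string in the calling notebook appears to be misspelled, and the server explicitly asks for an input_fn. If correcting the spelling is not enough (whether the default decoder accepts application/x-image depends on the container version), a custom input_fn in the entry-point script can deserialize the request. A minimal sketch, with the model-specific preprocessing left as a comment (names and branches are illustrative, not the tutorial's exact code):

```python
import json

def input_fn(request_body, content_type):
    """Custom deserializer for a PyTorch inference script (inference.py).
    The container calls this instead of the default decoder, so an
    unregistered MIME type no longer raises UnsupportedFormatError."""
    if content_type == "application/x-image":
        # request_body already holds the raw image bytes; pass them on to
        # model-specific preprocessing (e.g. PIL + torchvision transforms).
        return request_body
    if content_type == "application/json":
        return json.loads(request_body)
    raise ValueError(f"Unsupported content type: {content_type}")
```

With this in place, the client-side call should also use the corrected spelling, e.g. content_type = "application/x-image".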
FYI:
torch.__version__: 1.6.0
kernel: conda_pytorch_p36 (same steps followed as mentioned in the tutorial)
I'm very confused about how to proceed from here. Why is SageMaker this complex? Any kind of help would be appreciated. Thanks, Marc.
Thanks Marc, please refer to the next answer column for my comment.