If you are using the built-in SageMaker algorithm for Semantic Segmentation, the content type must be "image/jpeg" for inference to accept images. For more details, see https://docs.aws.amazon.com/sagemaker/latest/dg/semantic-segmentation.html
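For reference, a minimal sketch of what such an invocation could look like (the function name, endpoint name, and image path are placeholders for illustration, not from this thread):

```python
def invoke_segmentation(endpoint_name: str, image_path: str) -> bytes:
    """Send a JPEG to a SageMaker endpoint with the content type the
    built-in Semantic Segmentation algorithm expects."""
    # Imported here so the sketch stays self-contained.
    import boto3

    runtime = boto3.Session().client("sagemaker-runtime")
    with open(image_path, "rb") as f:
        payload = f.read()
    response = runtime.invoke_endpoint(
        EndpointName=endpoint_name,
        ContentType="image/jpeg",  # required for JPEG input
        Body=payload,
    )
    return response["Body"].read()
```

Calling this requires a deployed endpoint and AWS credentials; the sketch only shows how ContentType is passed.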
Hi Aparna, yes, I went through the link before and tried using invoke_endpoint with ContentType set to image/jpeg, but it gave me the same error as before. I'll post the logs here:

```
Exception on /invocations [POST]
  raise MXNetError(py_str(_LIB.MXGetLastError()))
Check failed: size < (1 << 29U) RecordIO only accept record less than 2^29 bytes
Stack trace returned 10 entries:
```
Thanks. In that case, I would make sure that the content of the image file is valid. You can add a print(len(imbytes)) to ensure that the file is not empty or otherwise malformed.
Hi, I did what you said and printed the length of imbytes with the following code:

```python
import json
import boto3

s3r = boto3.resource('s3')

def lambda_handler(event, context):
    # TODO implement
    bucket = event["body"]
    key = 'image.jpg'
    local_file_name = '/tmp/' + key
    s3r.Bucket(bucket).download_file(key, local_file_name)
    runtime = boto3.Session().client('sagemaker-runtime')
    with open('/tmp/image.jpg', 'rb') as imfile:
        imbytes = imfile.read()
    return str(len(imbytes))
```

The response is "1047212".
Hi Aparna, it turns out this was an error related to size. I was sending too large an image, 2000 by 2000 pixels. Once I decreased the size of the input, it worked fine. But I have run into a new problem: I can't decode response['Body'].read(). When I call response['Body'].read().decode(), it throws an error: Invalid byte at position 2
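The decode error is consistent with the response body being image bytes rather than UTF-8 text: the built-in Semantic Segmentation algorithm returns the predicted mask as a PNG by default, and calling .decode() on PNG bytes fails. A hedged sketch of parsing the body as an image instead, using Pillow and NumPy (the dummy 660x700 mask below is only a local stand-in for response['Body'].read()):

```python
import io

import numpy as np
from PIL import Image

def mask_from_body(body_bytes: bytes) -> np.ndarray:
    # Parse the response bytes as a PNG image, not as text.
    return np.array(Image.open(io.BytesIO(body_bytes)))

# Local stand-in for response['Body'].read(): a dummy 660x700 mask as PNG.
buf = io.BytesIO()
Image.fromarray(np.zeros((660, 700), dtype=np.uint8)).save(buf, format="PNG")
mask = mask_from_body(buf.getvalue())
print(mask.shape)  # (660, 700)
```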
Is the image really large? Or is the result really large? The error seems to suggest either the input or the output exceeds the maximum size allowed by RecordIO.
Not really, @Steven_W. The image is 1 MB and the endpoint returns a 2D numpy array with dimensions (660, 700).
Hi, can you post the code that works in your SageMaker notebook, and tell us which server-side algorithm you are using? Is it the built-in SageMaker semantic segmentation algorithm? Is your notebook based on the existing sample notebook https://github.com/aws/amazon-sagemaker-examples/blob/master/introduction_to_amazon_algorithms/semantic_segmentation_pascalvoc/semantic_segmentation_pascalvoc.ipynb ?
Hi, yes I did use that particular notebook.