Unable to access models built in SageMaker from Docker

Hello, I have built a machine learning model within SageMaker using a built-in algorithm (BlazingText). I am using a batch inference strategy in my inference code, as in:

from sagemaker.transformer import Transformer

# Run a batch transform job against the model registered in SageMaker
transformer = Transformer(model_name=my_model_name,
                          instance_count=1,
                          instance_type='ml.m4.xlarge',
                          output_path=output_pred_dir)
transformer.transform(data=X_batch_location,
                      data_type='S3Prefix',
                      content_type='application/jsonlines',
                      split_type='Line')

Everything works perfectly when the code is run from within SageMaker. But now I need to run the inference from Lambda, so I packaged the inference code in a Docker image, pushed it to ECR, and deployed it to Lambda. However, it is unable to do the batch transformation and gives the error:

[ERROR] ValueError: Failed to fetch model information for (my_model_name). Please ensure that the model exists. Local instance types require locally created models.

Please note that the model is saved and available in SageMaker; I can see it in the SageMaker management console.

How can I give the Dockerised code access to the model in SageMaker?

Thank you.

1 Answer

Hi,

When you use SageMaker batch transform, the inference runs on a separate, SageMaker-managed instance, not on the environment running your code; the container running your Lambda function therefore has no direct access to the model stored in SageMaker. To make the model available to your Lambda function, you need to download the model artifacts from S3 to the function's local file system.

To locate the model artifacts, you can call the SageMaker DescribeModel API, which returns the S3 location of the artifacts (the ModelDataUrl of the model's primary container), and then download them with the S3 client. Here is an example:

import boto3

sagemaker_client = boto3.client('sagemaker')
s3_client = boto3.client('s3')

# Look up where SageMaker stored the model artifacts in S3
model_data_url = sagemaker_client.describe_model(
    ModelName='your-model-name'
)['PrimaryContainer']['ModelDataUrl']

# Parse the s3://bucket/key URI and download the artifacts to /tmp
bucket, _, key = model_data_url.removeprefix('s3://').partition('/')
s3_client.download_file(bucket, key, '/tmp/model.tar.gz')
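For a model trained with a built-in algorithm such as BlazingText, ModelDataUrl points at the model.tar.gz archive that the training job wrote to S3.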

Once you have downloaded the model artifacts, extract them and point your inference code at that local path. Keep in mind that on Lambda only /tmp is writable at runtime; alternatively, you can bake the artifacts into the Docker image at build time and reference that location in your code.
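Here is a minimal sketch of that step, assuming the artifacts were downloaded to /tmp/model.tar.gz as above (ensure_model is a hypothetical helper name, and how you load the extracted files depends on your algorithm):

import os
import tarfile

MODEL_DIR = '/tmp/model'  # /tmp is the only writable path in Lambda

def ensure_model():
    # Extract once per container; warm invocations reuse the extracted files
    if not os.path.isdir(MODEL_DIR):
        with tarfile.open('/tmp/model.tar.gz') as tar:
            tar.extractall(MODEL_DIR)
    return MODEL_DIR

def lambda_handler(event, context):
    model_dir = ensure_model()
    # ... load the model from model_dir and run inference on the event payload
    return {'statusCode': 200}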

Note that you will also need to include the AWS SDK for Python (boto3) in your Docker image to be able to use the above code, plus the SageMaker Python SDK if your code still calls it (for example, pip install boto3 sagemaker).

Hope it helps.
