Hi,
When you use SageMaker batch transform, inference runs on SageMaker-managed instances that are separate from the environment running your Lambda function. This means your Lambda function does not have direct access to the model stored in SageMaker. To make the model available to your Lambda function, you need to download the model artifacts from S3 to the function's local file system.
To download the model artifacts, you can call the SageMaker DescribeModel API to look up the artifact location in S3 (the ModelDataUrl of the model's primary container), and then fetch the archive with the boto3 S3 client. Here is an example:
import boto3

sagemaker_client = boto3.client('sagemaker')
s3_client = boto3.client('s3')

# Look up where the model artifacts live in S3
model_data_url = sagemaker_client.describe_model(
    ModelName='your-model-name'
)['PrimaryContainer']['ModelDataUrl']

# ModelDataUrl has the form s3://bucket/prefix/model.tar.gz
bucket, _, key = model_data_url.removeprefix('s3://').partition('/')

# Lambda functions can only write to /tmp
s3_client.download_file(bucket, key, '/tmp/model.tar.gz')
Once you have downloaded the model artifacts, extract the archive and use that local path (for example, /tmp/model) to access the model in your Lambda function. If you instead bake the artifacts into your Docker image, make sure to include them at a known path and reference that location in your code.
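SageMaker packages model artifacts as a gzipped tarball (model.tar.gz), so you need to extract it before loading the model. A minimal sketch of the extraction step (the function name and paths here are just placeholders):

```python
import tarfile

def extract_model(archive_path: str, model_dir: str) -> None:
    """Extract a SageMaker model.tar.gz archive into model_dir."""
    with tarfile.open(archive_path, 'r:gz') as tar:
        tar.extractall(path=model_dir)

# In a Lambda function you would typically call, e.g.:
# extract_model('/tmp/model.tar.gz', '/tmp/model')
```

Doing this once at module load time (outside the handler) lets warm invocations reuse the extracted files in /tmp.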
Note that you will also need to include the AWS SDK for Python (boto3) and any other dependencies your code uses in your Docker image; the managed Lambda Python runtimes bundle boto3, but custom container images may not.
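As a rough sketch, a Lambda container image that installs those dependencies might look like this (the base image tag and file names are illustrative; adjust them to your setup, and list boto3 in requirements.txt):

```dockerfile
FROM public.ecr.aws/lambda/python:3.12

# Install boto3 and any other dependencies your code needs
COPY requirements.txt .
RUN pip install -r requirements.txt

# Copy your function code into the task root
COPY app.py ${LAMBDA_TASK_ROOT}

# Point Lambda at the handler function (module.function)
CMD ["app.handler"]
```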
Hope it helps.