How does inference work in a multi-model endpoint in SageMaker?


Based on the docs here, https://docs.aws.amazon.com/sagemaker/latest/dg/create-multi-model-endpoint.html, I created a multi-model endpoint and invoked it as documented here: https://docs.aws.amazon.com/sagemaker/latest/dg/invoke-multi-model-endpoint.html. I'm getting an "Invalid model exception" with the message "model version is not defined". My setup: I have created two models, say modelOne.tar.gz and modelTwo.tar.gz, and both models have their own custom script (inference.py) with the following directory structure.

When we send a request to a multi-model endpoint, does SageMaker uncompress the tar.gz file specified in the request? In my case both models have the same directory structure and the same model.pth file name inside their tar files. Could they be getting mixed up, so SageMaker is not sure which one to invoke?
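For reference, here is roughly how I defined the model behind the endpoint, following the first linked doc. This is a sketch; the role ARN, image URI, and bucket are placeholders:

import boto3

sm_client = boto3.client("sagemaker")

# Mode="MultiModel" and an S3 prefix (ending in "/") that holds both
# modelOne.tar.gz and modelTwo.tar.gz are what make this a multi-model setup.
sm_client.create_model(
    ModelName="my-multi-model",
    ExecutionRoleArn="arn:aws:iam::111122223333:role/my-sagemaker-role",
    PrimaryContainer={
        "Image": "<pytorch-inference-image-uri>",
        "Mode": "MultiModel",
        "ModelDataUrl": "s3://my-bucket/models/"})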

inference.py

import torch
import os

def model_fn(model_dir, context):
    # Your_Model is a placeholder for the model class defined in this script.
    model = Your_Model()
    # model_dir is the directory SageMaker extracts the requested tar.gz into.
    with open(os.path.join(model_dir, 'model.pth'), 'rb') as f:
        model.load_state_dict(torch.load(f))
    return model
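For completeness, the rest of my inference.py follows the SageMaker PyTorch serving convention of input_fn/predict_fn/output_fn. This is a rough sketch; the CSV parsing and output format are placeholders for my real feature layout:

import torch

def input_fn(request_body, request_content_type):
    # Parse the text/csv payload into a float tensor (placeholder layout).
    if request_content_type == 'text/csv':
        if isinstance(request_body, (bytes, bytearray)):
            request_body = request_body.decode('utf-8')
        values = [float(v) for v in request_body.strip().split(',')]
        return torch.tensor([values])
    raise ValueError('Unsupported content type: ' + request_content_type)

def predict_fn(input_data, model):
    # Run the loaded model without tracking gradients.
    model.eval()
    with torch.no_grad():
        return model(input_data)

def output_fn(prediction, accept):
    # Serialize the prediction back to CSV text.
    values = prediction.squeeze().tolist()
    if not isinstance(values, list):
        values = [values]
    return ','.join(str(v) for v in values)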

directory structure

model.tar.gz/
|- model.pth
|- code/
  |- inference.py
  |- requirements.txt  
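Each tarball is packaged with that layout; for reference, a sketch using Python's tarfile, with placeholder paths:

import tarfile

# Bundle model.pth plus the code/ directory into the layout shown above.
with tarfile.open('modelOne.tar.gz', 'w:gz') as tar:
    tar.add('model.pth', arcname='model.pth')
    tar.add('code', arcname='code')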
invocation

response = runtime_sagemaker_client.invoke_endpoint(
    EndpointName="my-multi-model-endpoint",
    ContentType="text/csv",
    TargetModel="modelOne.tar.gz",
    Body=body)
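Only TargetModel changes between requests; for reference, a sketch of calling both models the same way, with the same client and body as above:

# Each request names the tarball to route to; SageMaker downloads and
# extracts it on first use, then caches the loaded model.
for target in ("modelOne.tar.gz", "modelTwo.tar.gz"):
    response = runtime_sagemaker_client.invoke_endpoint(
        EndpointName="my-multi-model-endpoint",
        ContentType="text/csv",
        TargetModel=target,
        Body=body)
    print(target, response["Body"].read())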