How do I serve a pretrained Hugging Face model in SageMaker without a custom inference script?


I have been working from an example where I write my own custom inference script (sample below), overriding predict_fn and the other handler functions. That works, but I have not tested my model without the custom script. When we don't provide a custom script, how is the model called? What does the default predict_fn look like when I don't override it?

import json
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

def model_fn(model_dir):
    # model_dir is the directory SageMaker extracts model.tar.gz into;
    # it should not be overwritten with a file path like './pytorch_model.bin'
    tokenizer = AutoTokenizer.from_pretrained(model_dir)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_dir).to(device).eval()
    return {'model': model, 'tokenizer': tokenizer}

def predict_fn(input_data, model_dict):
    inputs = input_data.pop('inputs')
    tokenizer = model_dict['tokenizer']
    model = model_dict['model']

    input_ids = tokenizer(inputs, truncation=True, return_tensors="pt").input_ids.to(device)
    with torch.no_grad():
        output_ids = model.generate(input_ids)
    return tokenizer.batch_decode(output_ids, skip_special_tokens=True)

def input_fn(request_body, request_content_type):
    return json.loads(request_body)
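
For context, I deploy the endpoint roughly like this with the SageMaker Python SDK (the S3 path, role ARN, framework versions, and instance type below are all placeholders, not values from my actual setup):

```python
from sagemaker.huggingface import HuggingFaceModel

# All values below are placeholders for illustration.
huggingface_model = HuggingFaceModel(
    model_data="s3://my-bucket/model.tar.gz",  # tarball with the model weights and inference.py
    role="arn:aws:iam::111122223333:role/my-sagemaker-role",
    transformers_version="4.26",
    pytorch_version="1.13",
    py_version="py39",
)

predictor = huggingface_model.deploy(
    initial_instance_count=1,
    instance_type="ml.g4dn.xlarge",
)

print(predictor.predict({"inputs": "translate English to German: Hello"}))
```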
asked 3 months ago
1 Answer

Usually you only have to write your own inference code when you need custom behavior. If you deploy with the Hugging Face Deep Learning Container and don't provide an inference script, the SageMaker Hugging Face Inference Toolkit supplies default handlers: the default model_fn builds a transformers pipeline from your model artifacts (the task is taken from the HF_TASK environment variable), and the default predict_fn runs that pipeline on the deserialized request and returns its output. You may want to check the sagemaker-huggingface-inference-toolkit documentation on GitHub.
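
A simplified sketch of what those default handlers do, assuming the Hugging Face inference toolkit (this illustrates the behavior, it is not the toolkit's exact source):

```python
# Simplified sketch of the default handlers used when no inference.py
# is provided to the Hugging Face Deep Learning Container.
# Illustration only -- not the toolkit's exact source code.
import json

def default_model_fn(model_dir, task="text2text-generation"):
    # The real toolkit reads the task from the HF_TASK environment
    # variable (or the model's config) and builds a transformers
    # pipeline over the artifacts extracted from model.tar.gz.
    from transformers import pipeline  # deferred import; requires transformers
    return pipeline(task=task, model=model_dir)

def default_input_fn(request_body, request_content_type="application/json"):
    # JSON bodies are deserialized into a dict like
    # {"inputs": ..., "parameters": {...}}
    return json.loads(request_body)

def default_predict_fn(data, model):
    # The deserialized payload is passed straight to the pipeline, so the
    # default predict_fn runs inference rather than returning the model.
    inputs = data.pop("inputs", data)
    parameters = data.pop("parameters", None)
    if parameters is not None:
        return model(inputs, **parameters)
    return model(inputs)

def default_output_fn(prediction, accept="application/json"):
    # The pipeline output is serialized back to JSON.
    return json.dumps(prediction)
```

So with the defaults, a JSON body like {"inputs": "...", "parameters": {...}} is deserialized, fed to the pipeline, and the pipeline's output is serialized back to JSON.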

answered 3 months ago
